Binary prefix
A binary prefix is a unit prefix for multiples of units in data processing, data transmission, and digital information, notably the bit and the byte, to indicate multiplication by a power of 2.
The computer industry has historically used the units "kilobyte", "megabyte", and "gigabyte", and the corresponding symbols KB, MB, and GB, in at least two slightly different measurement systems. In citations of main memory (RAM) capacity, "gigabyte" customarily means 1,073,741,824 bytes. As this is a power of 1024, and 1024 is a power of two (2^10), this usage is referred to as a binary measurement.
In most other contexts, the industry uses the multipliers "kilo", "mega", "giga", etc., in a manner consistent with their meaning in the International System of Units (SI), namely as powers of 1000. For example, a 500 gigabyte hard disk holds 500,000,000,000 bytes, and a 1 Gbit/s (gigabit per second) Ethernet connection transfers data at a nominal speed of 1,000,000,000 bit/s. In contrast with the "binary prefix" usage, this use is described as a "decimal prefix", as 1000 is a power of 10 (10^3).
The use of the same unit prefixes with two different meanings has caused confusion. Starting around 1998, the International Electrotechnical Commission (IEC) and several other standards and trade organizations addressed the ambiguity by publishing standards and recommendations for a set of binary prefixes that refer exclusively to powers of 1024. Accordingly, the US National Institute of Standards and Technology (NIST) requires that SI prefixes be used only in the decimal sense: kilobyte and megabyte denote one thousand bytes and one million bytes respectively (consistent with SI), while new terms such as kibibyte, mebibyte and gibibyte, having the symbols KiB, MiB, and GiB, denote 1024 bytes, 1,048,576 bytes, and 1,073,741,824 bytes, respectively. In 2008, the IEC prefixes were incorporated into the international standard system of units used alongside the International System of Quantities (see ISO/IEC 80000).
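The two systems can be laid out side by side. A minimal Python sketch (the dictionary names here are illustrative, not from any standard library):

```python
# Decimal (SI) prefixes are powers of 1000; binary (IEC) prefixes are powers of 1024.
SI = {"kB": 1000**1, "MB": 1000**2, "GB": 1000**3, "TB": 1000**4}
IEC = {"KiB": 1024**1, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}

for (si, dec), (iec, binv) in zip(SI.items(), IEC.items()):
    # e.g. 1 MB = 1,000,000 bytes vs 1 MiB = 1,048,576 bytes
    print(f"1 {si} = {dec:>16,} bytes    1 {iec} = {binv:>16,} bytes")
```

The gap widens with each step: a mebibyte is about 4.9% larger than a megabyte, and a gibibyte about 7.4% larger than a gigabyte.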
Early computers used one of two addressing methods to access the system memory: binary (base 2) or decimal (base 10).
For example, the IBM 701 (1952) used binary and could address 2048 words of 36 bits each, while the IBM 702 (1953) used decimal and could address ten thousand 7-bit words.
By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. This is the most natural configuration for memory, as all combinations of the address lines map to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses.
Early computer system documentation would specify the memory size with an exact number such as 4096, 8192, or 16384 words of storage. These are all powers of two, and furthermore are small multiples of 210, or 1024. As storage capacities increased, several different methods were developed to abbreviate these quantities.
The method most commonly used today uses prefixes such as kilo, mega, giga, and corresponding symbols K, M, and G, which the computer industry originally adopted from the metric system. The prefixes "kilo-" and "mega-", meaning 1000 and 1,000,000 respectively, were commonly used in the electronics industry before World War II.
Along with "giga-" or G-, meaning 1,000,000,000, they are now known as SI prefixes after the International System of Units (SI), introduced in 1960 to formalize aspects of the metric system.
The International System of Units does not define units for digital information but notes that the SI prefixes may be applied outside the contexts where base units or derived units would be used. But as computer main memory in a binary-addressed system is manufactured in sizes that are easily expressed as multiples of 1024, "kilobyte", when applied to computer memory, came to be used to mean 1024 bytes instead of 1000. This usage is not consistent with the SI. Compliance with the SI requires that the prefixes take their 1000-based meaning, and that they not be used as placeholders for other numbers, like 1024.
The use of K in the binary sense, as in a "32K core" meaning 32 × 1024 words, i.e., 32,768 words, can be found as early as 1959.
Gene Amdahl's seminal 1964 article on IBM System/360 used "1K" to mean 1024.
This style was used by other computer vendors; the CDC 7600 "System Description" (1968), for example, made extensive use of K as 1024.
Thus the first binary prefix was born.
Another style was to truncate the last three digits and append K, essentially using K as a decimal prefix similar to SI, but always truncating to the next lower whole number instead of rounding to the nearest. The exact values 32,768 words, 65,536 words and 131,072 words would then be described as "32K", "65K" and "131K".
This style was used from about 1965 to 1975.
These two styles (K = 1024 and truncation) were used loosely around the same time, sometimes by the same company. In discussions of binary-addressed memories, the exact size was evident from context. (For memory sizes of "41K" and below, there is no difference between the two styles.) The HP 21MX real-time computer (1974) denoted 196,608 (which is 192×1024) as "196K" and 1,048,576 as "1M",
while the HP 3000 business computer (1973) could have "64K", "96K", or "128K" bytes of memory.
The "truncation" method gradually waned. Capitalization of the letter K became the "de facto" standard for binary notation, although this could not be extended to higher powers, and use of the lowercase k did persist. Nevertheless, the practice of using the SI-inspired "kilo" to indicate 1024 was later extended to "megabyte" meaning 1024^2 (1,048,576) bytes, and later "gigabyte" for 1024^3 (1,073,741,824) bytes. For example, a "512 megabyte" RAM module is 512×1024^2 bytes (512 × 1,048,576, or 536,870,912 bytes), rather than 512,000,000 bytes.
The symbols Kbit, Kbyte, Mbit and Mbyte started to be used as "binary units"—"bit" or "byte" with a multiplier that is a power of 1024—in the early 1970s.
For a time, memory capacities were often expressed in K, even when M could have been used: The IBM System/370 Model 158 brochure (1972) had the following: "Real storage capacity is available in 512K increments ranging from 512K to 2,048K bytes."
Megabyte was used to describe the 22-bit addressing of DEC PDP-11/70 (1975)
and gigabyte the 30-bit addressing of the DEC VAX-11/780 (1977).
In 1998, the International Electrotechnical Commission (IEC) introduced the binary prefixes kibi, mebi, gibi, etc., to mean 1024, 1024^2, 1024^3, and so on, so that 1,048,576 bytes could be referred to unambiguously as 1 mebibyte. The IEC prefixes were defined for use alongside the International System of Quantities (ISQ) in 2009.
The disk drive industry has followed a different pattern. Disk drive capacity is generally specified with unit prefixes with decimal meaning, in accordance with SI practice. Unlike computer main memory, disk architecture or construction does not mandate or make it convenient to use binary multiples. Drives can have any practical number of platters or surfaces, and the counts of tracks and of sectors per track may vary greatly between designs.
The first commercially sold disk drive, the IBM 350, had fifty physical disk platters containing a total of 50,000 sectors of 100 characters each, for a total quoted capacity of 5 million characters. It was introduced in September 1956.
In the 1960s most disk drives used IBM's variable block length format, called Count Key Data (CKD).
Any block size could be specified up to the maximum track length. Since the block headers occupied space, the usable capacity of the drive was dependent on the block size. Blocks ("records" in IBM's terminology) of 88, 96, 880 and 960 characters were often used because they related to the fixed record lengths of 80- and 96-character punch cards. The drive capacity was usually stated under conditions of full track record blocking. For example, the 100-megabyte 3336 disk pack only achieved that capacity with a full track block size of 13,030 bytes.
Floppy disks for the IBM PC and compatibles quickly standardized on 512-byte sectors, so two sectors were easily referred to as "1K". The 3.5-inch "360 KB" and "720 KB" disks had 720 (single-sided) and 1440 (double-sided) sectors respectively. When the High Density "1.44 MB" floppies came along, with 2880 of these 512-byte sectors, that terminology represented a hybrid binary-decimal definition of "1 MB" = 2^10 × 10^3 = 1,024,000 bytes.
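The hybrid arithmetic is easy to verify; a quick sketch (variable names are illustrative):

```python
# A "1.44 MB" high-density floppy: 2880 sectors of 512 bytes each.
sectors, sector_size = 2880, 512
total = sectors * sector_size        # 1,474,560 bytes

print(total / (2**10 * 10**3))       # 1.44  -- the hybrid "MB" of 1,024,000 bytes
print(total / 10**6)                 # 1.47456 decimal megabytes
print(total / 2**20)                 # 1.40625 binary megabytes
```

Note that the capacity is 1.44 "MB" only under the hybrid definition: it is neither 1.44 decimal megabytes nor 1.44 binary megabytes.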
In contrast, hard disk drive manufacturers used "megabytes" or "MB", meaning 106 bytes, to characterize their products as early as 1974. By 1977, in its first edition, Disk/Trend, a leading hard disk drive industry marketing consultancy segmented the industry according to MBs (decimal sense) of capacity.
One of the earliest hard disk drives in personal computing history, the Seagate ST-412, was specified as "Formatted: 10.0 Megabytes". The drive has four heads and active surfaces (tracks per cylinder) and 306 cylinders. When formatted with a sector size of 256 bytes and 32 sectors/track, it has a capacity of 10,027,008 bytes. This drive was one of several types installed in the IBM PC/XT and extensively advertised and reported as a "10 MB" (formatted) hard disk drive.
The cylinder count of 306 is not conveniently close to any power of 1024; operating systems and programs using the customary binary prefixes show this as 9.5625 MB. Many later drives in the personal computer market used 17 sectors per track; still later, zone bit recording was introduced, causing the number of sectors per track to vary from the outer track to the inner.
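The ST-412 figures can be checked directly from the drive geometry given above; a sketch:

```python
# Seagate ST-412 formatted geometry, per the specification cited in the text.
heads, cylinders, sectors_per_track, bytes_per_sector = 4, 306, 32, 256

capacity = heads * cylinders * sectors_per_track * bytes_per_sector
print(capacity)              # 10027008 bytes
print(capacity / 10**6)      # 10.027008 -- the marketed "10.0 Megabytes"
print(capacity / 2**20)      # 9.5625    -- the binary-prefix display
```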
The hard drive industry continues to use decimal prefixes for drive capacity, as well as for transfer rate. For example, a "300 GB" hard drive offers slightly more than 300×10^9, or 300,000,000,000, bytes, not 300×1024^3 (which would be about 322 billion, or 322,122,547,200, bytes). Operating systems such as Microsoft Windows that display hard drive sizes using the customary binary prefix "GB" (as it is used for RAM) would display this as "279.4 GB" (meaning 300,000,000,000 bytes expressed in units of 1024^3 bytes). On the other hand, macOS has since version 10.6 shown hard drive size using decimal prefixes (thus matching the drive makers' packaging). (Previous versions of Mac OS X used binary prefixes.)
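The "300 GB" example works out as follows (a sketch; the one-decimal rounding mimics, but is not taken from, any particular operating system):

```python
marketed = 300 * 10**9                 # 300,000,000,000 bytes as sold
binary_gb = marketed / 1024**3         # what a binary-prefix display divides by
print(f"{binary_gb:.1f} GB")           # 279.4 GB
print(300 * 1024**3)                   # 322122547200 -- what "300 GB" would mean in binary terms
```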
However, other usages still occur. Seagate has specified data transfer rates in select manuals of some hard drives in "both" IEC and decimal units.
"Advanced Format" drives using 4096-byte sectors are described as having "4K sectors."
Computer clock frequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the original IBM PC was 4.77 MHz, that is, 4.77×10^6 Hz.
Similarly, digital information transfer rates are quoted using decimal prefixes; a "56k" modem, for example, transfers 56,000 bit/s.
By the mid-1970s it was common to see K meaning 1024 and the occasional M meaning 1,048,576 for words or bytes of main memory (RAM), while K and M were commonly used with their decimal meaning for disk storage. In the 1980s, as capacities of both types of devices increased, the SI prefix G, with SI meaning, was commonly applied to disk storage, while M, in its binary meaning, became common for computer memory. In the 1990s, the prefix G, in its binary meaning, became commonly used for computer memory capacity. The first terabyte (SI prefix, 1,000,000,000,000 bytes) hard disk drive was introduced in 2007.
The dual usage of the kilo (K), mega (M), and giga (G) prefixes as both powers of 1000 and powers of 1024 has been recorded in standards and dictionaries. For example, the 1986 ANSI/IEEE Std 1084-1986
defined dual uses for kilo and mega.
Many dictionaries have noted the practice of using traditional prefixes to indicate binary multiples.
Oxford online dictionary defines, for example, megabyte as: "Computing: a unit of information equal to one million or (strictly) 1,048,576 bytes."
The units Kbyte, Mbyte, and Gbyte are found in the trade press and in IEEE journals. Gigabyte was formally defined in IEEE Std 610.10-1994 as either 10^9 or 2^30 bytes.
Kilobyte, Kbyte, and KB are equivalent units and all are defined in the obsolete standard, IEEE 100–2000.
The hardware industry measures system memory (RAM) using the binary meaning while magnetic disk storage uses the SI definition. However, many exceptions exist. Labeling of diskettes uses the megabyte to denote 1024×1000 bytes. In the optical disc market, compact discs use "MB" to mean 1024^2 bytes, while DVDs use "GB" to mean 1000^3 bytes.
Computer storage has become cheaper per unit and thereby larger, by many orders of magnitude since "K" was first used to mean 1024.
Because the SI and "binary" meanings of kilo, mega, etc., diverge as powers of 1000 and 1024 respectively rather than by a fixed offset, the difference between 1M "binary" and 1M "decimal" is proportionally larger than that between 1K "binary" and 1k "decimal", and so on up the scale.
The relative difference between the binary and decimal interpretations, taking the SI value as the base, increases from 2.4% for the kilo prefix to nearly 21% for the yotta prefix.
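This growth can be computed directly; a sketch (the prefix list is ordered so that n = 1 is kilo and n = 8 is yotta):

```python
# Relative drift of the binary interpretation above the decimal one,
# taking the SI (decimal) value as the base.
names = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
drift = {}
for n, name in enumerate(names, start=1):
    drift[name] = (1024**n - 1000**n) / 1000**n
    print(f"{name:<6} {drift[name]:6.1%}")   # kilo 2.4% ... yotta 20.9%
```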
In the early days of computers (roughly, prior to the advent of personal computers) there was little or no consumer confusion because of the technical sophistication of the buyers and their familiarity with the products. In addition, it was common for computer manufacturers to specify their products with capacities in full precision.
In the personal computing era, one source of consumer confusion is the difference in the way many operating systems display hard drive sizes, compared to the way hard drive manufacturers describe them. Hard drives are specified and sold using "GB" and "TB" in their decimal meaning: one billion and one trillion bytes. Many operating systems and other software, however, display hard drive and file sizes using "MB", "GB" or other SI-looking prefixes in their binary sense, just as they do for displays of RAM capacity. For example, many such systems display a hard drive marketed as "160 GB" as "149.05 GB". The earliest known presentation of hard disk drive capacity by an operating system using "KB" or "MB" in a binary sense is 1984; earlier operating systems generally presented the hard disk drive capacity as an exact number of bytes, with no prefix of any sort, for example, in the output of the MS-DOS or PC DOS CHKDSK command.
The different interpretations of disk size prefixes have led to class action lawsuits against digital storage manufacturers.
These cases involved both flash memory and hard disk drives.
The most recent cases (2019+) did not settle and are currently on appeal. Notably, the defendant persuaded the district court of the Northern District of California to enter judgment in its favor by citing a 1998 publication from the National Institute of Standards and Technology (NIST), published at a time when USB drives did not exist and memory storage in gigabytes was not commercially feasible for the average consumer. However, the 1998 NIST publication was superseded by a 2008 NIST publication, which does not maintain the same positions regarding the definitions of gigabyte and megabyte. Additionally, NIST's 2008 Guide for the Use of the International System of Units (SI) makes clear that confusion over the use of units is to be avoided, even where traditional units are used (Guide, p. 2). Thus, the litigation has not ended in favor of the manufacturers, and will not end until the appeals conclude, along with any other suits that may be filed.
Earlier cases (2004-2007) were settled prior to any court ruling with the manufacturers admitting no wrongdoing but agreeing to clarify the storage capacity of their products on the consumer packaging.
Accordingly, many flash memory and hard disk manufacturers have disclosures on their packaging and web sites clarifying the formatted capacity of the devices
or defining MB as 1 million bytes and 1 GB as 1 billion bytes.
On 20 February 2004, Willem Vroegh filed a lawsuit against Lexar Media, Dane–Elec Memory, Fuji Photo Film USA, Eastman Kodak Company, Kingston Technology Company, Inc., Memorex Products, Inc., PNY Technologies Inc., SanDisk Corporation, Verbatim Corporation, and Viking Interworks alleging that their descriptions of the capacity of their flash memory cards were false and misleading.
Vroegh claimed that a 256 MB Flash Memory Device had only 244 MB of accessible memory. "Plaintiffs allege that Defendants marketed the memory capacity of their products by assuming that one megabyte equals one million bytes and one gigabyte equals one billion bytes."
The plaintiffs wanted the defendants to use the traditional values of 1024^2 for megabyte and 1024^3 for gigabyte.
The plaintiffs acknowledged that the IEC and IEEE standards define a MB as one million bytes but stated that the industry has largely ignored the IEC standards.
The parties agreed that manufacturers could continue to use the decimal definition so long as the definition was added to the packaging and web sites. The consumers could apply for "a discount of ten percent off a future online purchase from Defendants' Online Stores Flash Memory Device".
On 7 July 2005, an action entitled "Orin Safier v. Western Digital Corporation, et al." was filed in the Superior Court for the City and County of San Francisco, Case No. CGC-05-442812.
The case was subsequently moved to the Northern District of California, Case No. 05-03353 BZ.
Although Western Digital maintained that their usage of units is consistent with "the indisputably correct industry standard for measuring and describing storage capacity", and that they "cannot be expected to reform the software industry", they agreed to settle in March 2006 with 14 June 2006 as the Final Approval hearing date.
Western Digital offered to compensate customers with a free download of backup and recovery software valued at US$30. They also paid $500,000 in fees and expenses to San Francisco lawyers Adam Gutride and Seth Safier, who filed the suit.
The settlement called for Western Digital to add a disclaimer to their later packaging and advertising.
A lawsuit (Cho v. Seagate Technology (US) Holdings, Inc., San Francisco Superior Court, Case No. CGC-06-453195) was filed against Seagate Technology, alleging that Seagate overrepresented the amount of usable storage by 7% on hard drives sold between March 22, 2001 and September 26, 2007. The case was settled without Seagate admitting wrongdoing, but agreeing to supply those purchasers with free backup software or a 5% refund on the cost of the drives.
While early computer scientists typically used k to mean 1000, some recognized the convenience that would result from working with multiples of 1024 and the confusion that resulted from using the same prefixes for two different meanings.
Several proposals for unique binary prefixes were made in 1968. Donald Morrison proposed to use the Greek letter kappa (κ) to denote 1024, κ² to denote 1024^2, and so on.
Wallace Givens responded with a proposal to use bK as an abbreviation for 1024 and bK2 for 1024^2, though he noted that neither the Greek letter nor the lowercase letter b would be easy to reproduce on computer printers of the day.
Bruce Alan Martin of Brookhaven National Laboratory further proposed that the prefixes be abandoned altogether and the letter B be used for base-2 exponents, similar to E in decimal scientific notation, creating shorthands like 3B20 for 3×2^20, a convention still used on some calculators to present binary floating-point numbers today.
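Martin's notation is mechanical enough to parse; a hypothetical helper (not from any real library) illustrates the scheme:

```python
def parse_b_notation(s: str) -> int:
    """Interpret Martin's proposed shorthand: 'mBn' means m * 2**n."""
    mantissa, exponent = s.split("B")
    return int(mantissa) * 2**int(exponent)

print(parse_b_notation("3B20"))   # 3145728, i.e. 3 * 2**20
```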
None of these gained much acceptance, and capitalization of the letter K became the "de facto" standard for indicating a factor of 1024 instead of 1000, although this could not be extended to higher powers.
As the discrepancy between the two systems increased in the higher-order powers, more proposals for unique prefixes were made.
In 1996, Markus Kuhn proposed a system with "di" prefixes, like the "dikilobyte" (K₂B or K2B). Donald Knuth, who uses decimal notation like 1 MB = 1000 kB, expressed "astonishment" that the IEC proposal was adopted, calling them "funny-sounding" and opining that proponents were assuming "that standards are automatically adopted just because they are there." Knuth proposed that the powers of 1024 be designated as "large kilobytes" and "large megabytes" (abbreviated KKB and MMB, as "doubling the letter connotes both binary-ness and large-ness"). Double prefixes were already abolished from SI, however, having a multiplicative meaning ("MMB" would be equivalent to "TB"), and this proposed usage never gained any traction.
The set of binary prefixes that were eventually adopted, now referred to as the "IEC prefixes", were first proposed by the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) in 1995. At that time, it was proposed that the terms kilobyte and megabyte be used only for 10^3 bytes and 10^6 bytes, respectively. The new prefixes "kibi" (kilobinary), "mebi" (megabinary), "gibi" (gigabinary) and "tebi" (terabinary) were also proposed at the time, and the proposed symbols for the prefixes were kb, Mb, Gb and Tb respectively, rather than Ki, Mi, Gi and Ti. The proposal was not accepted at the time.
The Institute of Electrical and Electronics Engineers (IEEE) began to collaborate with the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) to find acceptable names for binary prefixes. IEC proposed "kibi", "mebi", "gibi" and "tebi", with the symbols Ki, Mi, Gi and Ti respectively, in 1996.
The names for the new prefixes are derived from the original SI prefixes combined with the term "binary", but contracted, by taking the first two letters of the SI prefix and "bi" from binary. The first letter of each such prefix is therefore identical to the corresponding SI prefixes, except for "K", which is used interchangeably with "k", whereas in SI, only the lower-case k represents 1000.
The IEEE decided that their standards would use the prefixes "kilo", etc. with their metric definitions, but allowed the binary definitions to be used in an interim period as long as such usage was explicitly pointed out on a case-by-case basis.
In January 1999, the IEC published the first international standard (IEC 60027-2 Amendment 2) with the new prefixes, extended up to "pebi" (Pi) and "exbi" (Ei).
The IEC 60027-2 Amendment 2 also states that the IEC position is the same as that of BIPM (the body that regulates the SI system); the SI prefixes retain their definitions in powers of 1000 and are never used to mean a power of 1024.
In usage, products and concepts typically described using powers of 1024 would continue to be, but with the new IEC prefixes. For example, a memory module of 536,870,912 bytes (512×1024^2) would be referred to as 512 MiB or 512 mebibytes instead of 512 MB or 512 megabytes. Conversely, since hard drives have historically been marketed using the SI convention that "giga" means 1,000,000,000, a "500 GB" hard drive would still be labeled as such. According to these recommendations, operating systems and other software would also use binary and SI prefixes in the same way, so the purchaser of a "500 GB" hard drive would find the operating system reporting either "500 GB" or "466 GiB", while 536,870,912 bytes of RAM would be displayed as "512 MiB".
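Software following the IEC recommendation would repeatedly divide by 1024 and switch prefixes. A minimal sketch (the function name and one-decimal rounding policy are illustrative assumptions, not from any standard):

```python
def iec_format(num_bytes: float) -> str:
    """Render a byte count with IEC binary prefixes (KiB, MiB, ...)."""
    for prefix in ("", "Ki", "Mi", "Gi", "Ti", "Pi"):
        if abs(num_bytes) < 1024 or prefix == "Pi":
            return f"{num_bytes:.1f} {prefix}B"
        num_bytes /= 1024

print(iec_format(512 * 1024**2))   # 512.0 MiB -- RAM sized in powers of 1024
print(iec_format(500 * 10**9))     # 465.7 GiB -- a "500 GB" drive, IEC-displayed
```

Rounded to a whole number, 500×10^9 bytes is the "466 GiB" figure quoted above.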
The second edition of the standard, published in 2000, defined them only up to "exbi", but in 2005, the third edition added prefixes "zebi" and "yobi", thus matching all SI prefixes with binary counterparts.
The harmonized ISO/IEC 80000-13:2008 standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005 (those defining prefixes for binary multiples). The only significant change is the addition of explicit definitions for some quantities. In 2009, the prefixes kibi-, mebi-, etc. were defined by ISO 80000-1 in their own right, independently of the kibibyte, mebibyte, and so on.
The BIPM standard JCGM 200:2012 "International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition" lists the IEC binary prefixes and states "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2. For example, 1 kilobit should not be used to represent 1,024 bits (2^10 bits), which is 1 kibibit."
The IEC standard binary prefixes are now supported by other standardization bodies and technical organizations.
The United States National Institute of Standards and Technology (NIST) supports the ISO/IEC standards for
"Prefixes for binary multiples" and has a web site documenting them, describing and justifying their use. NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as "bee". NIST has stated the SI prefixes "refer strictly to powers of 10" and that the binary definitions "should not be used" for them.
The microelectronics industry standards body JEDEC describes the IEC prefixes in its online dictionary. The JEDEC standards for semiconductor memory use the customary prefix symbols K, M and G in the binary sense.
On 19 March 2005, the IEEE standard IEEE 1541-2002 ("Prefixes for Binary Multiples") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period. However, the IEEE Publications division does not require the use of IEC prefixes in its major magazines such as "Spectrum" or "Computer".
The International Bureau of Weights and Measures (BIPM), which maintains the International System of Units (SI), expressly prohibits the use of SI prefixes to denote binary multiples, and recommends the use of the IEC prefixes as an alternative since units of information are not included in SI.
The Society of Automotive Engineers (SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not recommend or otherwise cite the IEC binary prefixes.
The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03.
The European Union (EU) has required the use of the IEC binary prefixes since 2007.
Most computer hardware uses SI prefixes to state capacity and define other performance parameters such as data rate. Main and cache memories are notable exceptions.
Capacities of main memory and cache memory are usually expressed with customary binary prefixes.
On the other hand, flash memory, like that found in solid state drives, mostly uses SI prefixes to state capacity.
Some operating systems and other software continue to use the customary binary prefixes in displays of memory, disk storage capacity, and file size, but SI prefixes in other areas such as network communication speeds and processor speeds.
In the following subsections, unless otherwise noted, examples are first given using the common prefixes used in each case, and then followed by interpretation using other notation where appropriate.
Prior to the release of Macintosh System Software (1984), file sizes were typically reported by the operating system without any prefixes. Today, most operating systems report file sizes with prefixes.
Most software does not distinguish symbols for binary and decimal prefixes.
The IEC binary naming convention has been adopted by a few, but this is not used universally.
One of the stated goals of the introduction of the IEC prefixes was "to preserve the SI prefixes as unambiguous decimal multipliers." Programs such as fdisk/cfdisk, parted, and apt-get use SI prefixes with their decimal meaning.
Example of the use of IEC binary prefixes in the Linux operating system displaying traffic volume on a network interface in kibibytes (KiB) and mebibytes (MiB), as obtained with the ifconfig utility:
eth0      Link encap:Ethernet  HWaddr 00:14:A0:B0:7A:42
          RX bytes:254172 (248.2 KiB)  TX bytes:430254172 (410.3 MiB)
Software that uses standard SI prefixes for powers of 1000, but "not" IEC binary prefixes for powers of 1024, includes:
Software that supports decimal prefixes for powers of 1000 "and" binary prefixes for powers of 1024 (but does not follow SI or IEC nomenclature for this) includes:
Software that uses IEC binary prefixes for powers of 1024 "and" uses standard SI prefixes for powers of 1000 includes:
Hardware types that use powers-of-1024 multipliers, such as memory, continue to be marketed with customary binary prefixes.
Measurements of most types of electronic memory such as RAM and ROM are given using customary binary prefixes (kilo, mega, and giga). This includes some flash memory, like EEPROMs. For example, a "512-megabyte" memory module is 512×2^20 bytes (512 × 1,048,576, or 536,870,912 bytes).
JEDEC Solid State Technology Association, the semiconductor engineering standardization body of the Electronic Industries Alliance (EIA), continues to include the customary binary definitions of kilo, mega and giga in their "Terms, Definitions, and Letter Symbols" document.
National Baseball Hall of Fame and Museum
The National Baseball Hall of Fame and Museum is an American history museum and hall of fame in Cooperstown, New York, operated by private interests. It serves as the central point for the study of the history of baseball in the United States and beyond, and displays baseball-related artifacts and exhibits honoring those who have excelled in playing, managing, and serving the sport. The Hall's motto is "Preserving History, Honoring Excellence, Connecting Generations". Cooperstown is often used as shorthand (or a metonym) for the National Baseball Hall of Fame and Museum, similar to "Canton" for the Pro Football Hall of Fame in Canton, Ohio.
The Hall of Fame was established in 1939 by Stephen Carlton Clark, an heir to the Singer Sewing Machine fortune. Clark sought to bring tourists to a city hurt by the Great Depression, which reduced the local tourist trade, and Prohibition, which devastated the local hops industry. Clark constructed the Hall of Fame's building, and it was dedicated on June 12, 1939. (His granddaughter, Jane Forbes Clark, is the current chairman of the Board of Directors.) The erroneous claim that Civil War hero Abner Doubleday invented baseball in Cooperstown was instrumental in the early marketing of the Hall.
An expanded library and research facility opened in 1994. Dale Petroskey became the organization's president in 1999. In 2002, the Hall launched "Baseball As America", a traveling exhibit that toured ten American museums over six years. The Hall of Fame has since also sponsored educational programming on the Internet to bring the Hall of Fame to schoolchildren who might not visit. The Hall and Museum completed a series of renovations in spring 2005. The Hall of Fame also presents an annual exhibit at FanFest at the Major League Baseball All-Star Game.
Among baseball fans, "Hall of Fame" means not only the museum and facility in Cooperstown, New York, but the pantheon of players, managers, umpires, executives, and pioneers who have been enshrined in the Hall. The first five men elected were Ty Cobb, Babe Ruth, Honus Wagner, Christy Mathewson and Walter Johnson, chosen in 1936; roughly 20 more were selected before the entire group was inducted at the Hall's 1939 opening. To date, 333 people have been elected to the Hall of Fame, including 234 former Major League Baseball players, 35 Negro league baseball players and executives, 23 managers, 10 umpires, and 36 pioneers, executives, and organizers. 114 members of the Hall of Fame have been inducted posthumously, including four who died after their selection was announced. Of the 35 Negro league members, 29 were inducted posthumously, including all 24 selected since the 1990s. The Hall of Fame includes one female member, Effa Manley.
The newest members inducted on July 21, 2019, are players Harold Baines, Roy Halladay, Edgar Martínez, Mike Mussina, Mariano Rivera, and Lee Smith. Rivera was the first player ever to be elected unanimously.
Players are currently inducted into the Hall of Fame through election by either the Baseball Writers' Association of America (or BBWAA), or the Veterans Committee, which now consists of four subcommittees, each of which considers and votes for candidates from a separate era of baseball. Five years after retirement, any player with 10 years of major league experience who passes a screening committee (which removes from consideration players of clearly lesser qualification) is eligible to be elected by BBWAA members with 10 years' membership or more who also have been actively covering MLB at any time in the 10 years preceding the election (the latter requirement was added for the 2016 election). From a final ballot typically including 25–40 candidates, each writer may vote for up to 10 players; until the late 1950s, voters were advised to cast votes for the maximum 10 candidates. Any player named on 75% or more of all ballots cast is elected. A player who is named on fewer than 5% of ballots is dropped from future elections. In some instances, the screening committee had restored their names to later ballots, but in the mid-1990s, dropped players were made permanently ineligible for Hall of Fame consideration, even by the Veterans Committee. A 2001 change in the election procedures restored the eligibility of these dropped players; while their names will not appear on future BBWAA ballots, they may be considered by the Veterans Committee. Players receiving 5% or more of the votes but fewer than 75% are reconsidered annually, up to a maximum of ten years of eligibility (lowered from fifteen years for the 2015 election).
Under special circumstances, certain players may be deemed eligible for induction even though they have not met all requirements. Addie Joss was elected in 1978, despite only playing nine seasons before he died of meningitis. Additionally, if an otherwise eligible player dies before his fifth year of retirement, then that player may be placed on the ballot at the first election at least six months after his death. Roberto Clemente's induction in 1973 set the precedent when the writers chose to put him up for consideration after his death on New Year's Eve, 1972.
The five-year waiting period was established in 1954 after an evolutionary process. In 1936 all players were eligible, including active ones. From the 1937 election until the 1945 election, there was no waiting period, so any retired player was eligible, but writers were discouraged from voting for current major leaguers. Since there was no formal rule preventing a writer from casting a ballot for an active player, the scribes did not always comply with the informal guideline; Joe DiMaggio received a vote in 1945, for example. From the 1946 election until the 1954 election, an official one-year waiting period was in effect. (DiMaggio, for example, retired after the 1951 season and was first eligible in the 1953 election.) The modern rule establishing a wait of five years was passed in 1954, although an exception was made for Joe DiMaggio because of his high level of previous support, thus permitting him to be elected within four years of his retirement.
Contrary to popular belief, no formal exception was made for Lou Gehrig (other than to hold a special one-man election for him): there was no waiting period at that time, and Gehrig met all other qualifications, so he would have been eligible for the next regular election after he retired during the 1939 season. However, the BBWAA decided to hold a special election at the 1939 Winter Meetings in Cincinnati, specifically to elect Gehrig (most likely because it was known that he was terminally ill, making it uncertain that he would live long enough to see another election). Nobody else was on that ballot, and the numerical results have never been made public. Since no elections were held in 1940 or 1941, the special election permitted Gehrig to enter the Hall while still alive.
If a player fails to be elected by the BBWAA within 10 years of his retirement from active play, he may be selected by the Veterans Committee. Following changes to the election process for that body made in 2010 and 2016, it is now responsible for electing all otherwise eligible candidates who are not eligible for the BBWAA ballot — both long-retired players and non-playing personnel (managers, umpires, and executives). From 2011 to 2016, each candidate could be considered once every three years; now, the frequency depends on the era in which an individual made his greatest contributions. A more complete discussion of the new process is available below.
From 2008 to 2010, following changes made by the Hall in July 2007, the main Veterans Committee, then made up of living Hall of Famers, voted only on players whose careers began in 1943 or later. These changes also established three separate committees to select other figures:
Players of the Negro Leagues have also been considered at various times, beginning in 1971. In 2005, the Hall completed a study on African American players between the late 19th century and the integration of the major leagues in 1947, and conducted a special election for such players in February 2006; seventeen figures from the Negro Leagues were chosen in that election, in addition to the eighteen previously selected. Following the 2010 changes, Negro Leagues figures were primarily considered for induction alongside other figures from the 1871–1946 era, called the "Pre-Integration Era" by the Hall; since 2016, Negro Leagues figures are primarily considered alongside other figures from what the Hall calls the "Early Baseball" era (1871–1949).
Predictably, the selection process catalyzes endless debate among baseball fans over the merits of various candidates. Even players elected years ago remain the subjects of discussions as to whether they deserved election. For example, Bill James' 1994 book "Whatever Happened to the Hall of Fame?" goes into detail about who he believes does and does not belong in the Hall of Fame.
Following the banning of Pete Rose from MLB, the selection rules for the Baseball Hall of Fame were modified to prevent the induction of anyone on Baseball's "permanently ineligible" list, such as Rose or "Shoeless Joe" Jackson. Many others have been barred from participation in MLB, but none have Hall of Fame qualifications on the level of Jackson or Rose.
Jackson and Rose were both banned from MLB for life for actions related to gambling on their own teams—Jackson was determined to have cooperated with those who conspired to intentionally lose the 1919 World Series and to have accepted payment for losing, and Rose voluntarily accepted a permanent spot on the ineligible list in return for MLB's promise to make no official finding in relation to alleged betting on the Cincinnati Reds when he was their manager in the 1980s. (Baseball's Rule 21, prominently posted in every clubhouse locker room, mandates permanent banishment from MLB for having a gambling interest of any sort on a game in which a player or manager is directly involved.) In his 2004 autobiography, Rose admitted that he had bet on the Reds. Baseball fans are deeply split on the issue of whether these two should remain banned or have their punishment revoked. Writer Bill James, though he advocates Rose eventually making it into the Hall of Fame, compared the people who want to put Jackson in the Hall of Fame to "those women who show up at murder trials wanting to marry the cute murderer".
The actions and composition of the Veterans Committee have been at times controversial, with occasional selections of contemporaries and teammates of the committee members over seemingly more worthy candidates.
In 2001, the Veterans Committee was reformed to comprise the living Hall of Fame members and other honorees. The revamped Committee held three elections: in 2003 and 2007 for both players and non-players, and in 2005 for players only. No individual was elected in that time, sparking criticism among some observers who expressed doubt that the new Veterans Committee would ever elect a player. The Committee members, most of whom were Hall members, were accused of being reluctant to elect new candidates in the hope of heightening the value of their own selection. After no one was selected for the third consecutive election in 2007, Hall of Famer Mike Schmidt noted, "The same thing happens every year. The current members want to preserve the prestige as much as possible, and are unwilling to open the doors." In 2007, the committee and its selection processes were again reorganized; the main committee then included all living members of the Hall, and voted on a reduced number of candidates from among players whose careers began in 1943 or later. Separate committees, including sportswriters and broadcasters, would select umpires, managers and executives, as well as players from earlier eras.
In the first election to be held under the 2007 revisions, two managers and three executives were elected in December 2007 as part of the 2008 election process. The next Veterans Committee elections for players were held in December 2008 as part of the 2009 election process; the main committee did not select a player, while the panel for pre–World War II players elected Joe Gordon in its first and ultimately only vote. The main committee voted as part of the election process for inductions in odd-numbered years, while the pre-World War II panel would vote every five years, and the panel for umpires, managers, and executives voted as part of the election process for inductions in even-numbered years.
Further changes to the Veterans Committee process were announced by the Hall on July 26, 2010, effective with the 2011 election.
All individuals eligible for induction but not eligible for BBWAA consideration were considered on a single ballot, grouped by the following eras in which they made their greatest contributions:
The Hall used the BBWAA's Historical Overview Committee to formulate the ballots for each era, consisting of 12 individuals for the Expansion Era and 10 for the other eras. The Hall's board of directors selected a committee of 16 voters for each era, made up of Hall of Famers, executives, baseball historians, and media members. Each committee met and voted at the Baseball Winter Meetings once every three years. The Expansion Era committee held its first vote in 2010 for 2011 induction, with longtime general manager Pat Gillick becoming the first individual elected under the new procedure. The Golden Era committee voted in 2011 for the induction class of 2012, with Ron Santo becoming the first player elected under the new procedure. The Pre-Integration Era committee voted in 2012 for the induction class of 2013, electing three figures. Subsequent elections rotated among the three committees in that order through the 2016 election.
In July 2016, however, the Hall of Fame announced a restructuring of the timeframes to be considered, with a much greater emphasis on modern eras. Four new committees were established:
All committees' ballots now include 10 candidates. At least one committee convenes each December as part of the election process for the following calendar year's induction ceremony. The Early Baseball committee convenes only in years ending in 0 (2020, 2030). The Golden Days committee convenes only in years ending in 0 and 5 (2020, 2025). The remaining two committees convene twice every 5 years. More specifically, the Today's Game and Modern Baseball committees alternate their meetings in that order, skipping years in which either the Early Baseball or Golden Days committee meets. This means that the Today's Game committee (having first met in 2016) will meet in 2021, 2023 and 2026, while the Modern Baseball committee (which first met in 2017) will meet in 2019, 2022 and 2024.
The eligibility criteria for Era Committee consideration differ between players, managers, and executives.
While the text on a player's or manager's plaque lists all teams for which the inductee was a member in that specific role, inductees are usually depicted wearing the cap of a specific team, though in a few cases, like umpires, they wear caps without logos. (Executives are not depicted wearing caps.) Additionally, as of 2015, inductee biographies on the Hall's website for all players and managers, as well as executives who were associated with specific teams, list a "primary team", which does not necessarily match the cap logo. The Hall selects the logo "based on where that player makes his most indelible mark."
Although the Hall always made the final decision on which logo was shown, until 2001 the Hall deferred to the wishes of players or managers whose careers were linked with multiple teams. Some examples of inductees associated with multiple teams are the following:
In all of the above cases, the "primary team" is the team for which the inductee spent the largest portion of his career except for Ryan, whose primary team is listed as the Angels despite playing one fewer season for that team than for the Astros.
In 2001, the Hall of Fame decided to change the policy on cap logo selection, as a result of rumors that some teams were offering compensation, such as number retirement, money, or organizational jobs, in exchange for the cap designation. (For example, though Wade Boggs denied the claims, some media reports had said that his contract with the Tampa Bay Devil Rays required him to request depiction in the Hall of Fame as a Devil Ray.) The Hall decided that it would no longer defer to the inductee, though the player's wishes would be considered, when deciding on the logo to appear on the plaque. Newly elected members affected by the change include the following:
According to the Hall of Fame, approximately 260,000 visitors enter the museum each year, and the running total has surpassed 17 million. These visitors see only a fraction of its 40,000 artifacts, 3 million library items (such as newspaper clippings and photos) and 140,000 baseball cards.
The Hall has seen a noticeable decrease in attendance in recent years. A 2013 story on "ESPN.com" about the village of Cooperstown and its relation to the game partially linked the reduced attendance with Cooperstown Dreams Park, a youth baseball complex in the nearby town of Hartwick. The 22 fields at Dreams Park currently draw 17,000 players each summer for a week of intensive play; while the complex includes housing for the players, their parents and grandparents must stay elsewhere. According to the story: "Prior to Dreams Park, a room might be filled for a week by several sets of tourists. Now, that room will be taken by just one family for the week, and that family may only go into Cooperstown and the Hall of Fame once. While there are other contributing factors (the recession and high gas prices among them), the Hall's attendance has tumbled since Dreams Park opened. The Hall drew 383,000 visitors in 1999. It drew 262,000 last year."
A controversy erupted in 1982, when it emerged that some historic items given to the Hall had been sold on the collectibles market. The items had been lent to the Baseball Commissioner's office, gotten mixed up with other property owned by the Commissioner's office and employees of the office, and moved to the garage of Joe Reichler, an assistant to Commissioner Bowie Kuhn, who sold the items to resolve his personal financial difficulties. Under pressure from the New York Attorney General, the Commissioner's Office made reparations, but the negative publicity damaged the Hall of Fame's reputation, and made it more difficult for it to solicit donations.
In 2012, Congress passed and President Barack Obama signed a law ordering the United States Mint to produce and sell commemorative, non-circulating coins to benefit the private, non-profit Hall. The bill was introduced in the United States House of Representatives by Rep. Richard Hanna, a Republican from New York, and passed the House on October 26, 2011. The coins, which depict baseball gloves and balls, are the first concave designs produced by the Mint. The mintage included 50,000 gold coins, 400,000 silver coins, and 750,000 clad (nickel-copper) coins. The Mint released them on March 27, 2014, and the gold and silver editions quickly sold out. The Hall receives money from surcharges included in the sale price: a total of $9.5 million if all the coins are sold. | https://en.wikipedia.org/wiki?curid=4078 |
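The stated $9.5 million maximum is consistent with the per-coin surcharges customary for US commemorative coin programs — $35 per gold, $10 per silver, and $5 per clad coin. Those per-coin figures are our assumption (they do not appear in the text above); only the mintage limits are stated:

```python
# Per-coin surcharges are assumed values typical of US commemorative coin
# programs; the mintage limits are the ones stated in the text.
surcharge = {"gold": 35, "silver": 10, "clad": 5}        # dollars per coin (assumed)
mintage = {"gold": 50_000, "silver": 400_000, "clad": 750_000}

total = sum(surcharge[k] * mintage[k] for k in mintage)
print(total)  # 9500000 dollars if every coin sells
```

Under those assumed surcharges, the three denominations contribute $1.75M, $4M, and $3.75M respectively, which sums to exactly the $9.5 million figure quoted.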
BPP (complexity)
In computational complexity theory, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability of at most 1/3 for all instances.
BPP is one of the largest "practical" classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.
Informally, a problem is in BPP if there is an algorithm for it that has the following properties:
A language "L" is in BPP if and only if there exists a probabilistic Turing machine "M", such that
Unlike the complexity class ZPP, the machine "M" is required to run for polynomial time on all inputs, regardless of the outcome of the random coin flips.
Alternatively, BPP can be defined using only deterministic Turing machines. A language "L" is in BPP if and only if there exists a polynomial "p" and deterministic Turing machine "M", such that
In this definition, the string "y" corresponds to the output of the random coin flips that the probabilistic Turing machine would have made. For some applications this definition is preferable since it does not mention probabilistic Turing machines.
In practice, an error probability of 1/3 might not be acceptable; however, the choice of 1/3 in the definition is arbitrary. It can be any constant between 0 and 1/2 (exclusive) and the set BPP will be unchanged. It does not even have to be constant: the same class of problems is defined by allowing error as high as 1/2 − "n"−"c" on the one hand, or requiring error as small as 2−"nc" on the other hand, where "c" is any positive constant, and "n" is the length of input. The idea is that there is a probability of error, but if the algorithm is run many times, the chance that the majority of the runs are wrong drops off exponentially as a consequence of the Chernoff bound. This makes it possible to create a highly accurate algorithm by merely running the algorithm several times and taking a "majority vote" of the answers. For example, if one defined the class with the restriction that the algorithm can be wrong with probability at most 2−100, this would result in the same class of problems.
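The amplification argument above can be made concrete by computing the exact majority-vote error from the binomial distribution (the Chernoff bound guarantees this quantity decays exponentially in the number of runs). The function below is an illustrative sketch, not a standard library routine:

```python
from math import comb

# Sketch: amplify an algorithm that errs with probability 1/3 by running it
# an odd number of times and taking a majority vote.  We compute the exact
# probability that a majority of independent runs are wrong; the Chernoff
# bound says this decays exponentially in the number of runs.
def majority_error(p_err, runs):
    """Probability that a strict majority of `runs` independent trials err."""
    need = runs // 2 + 1
    return sum(comb(runs, i) * p_err**i * (1 - p_err)**(runs - i)
               for i in range(need, runs + 1))

for runs in (1, 11, 51):
    print(runs, majority_error(1/3, runs))
```

With error 1/3 per run, a single run errs a third of the time, but the majority of 51 runs is wrong well under 1% of the time.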
All problems in P are obviously also in BPP. However, many problems have been known to be in BPP but not known to be in P. The number of such problems is decreasing, and it is conjectured that P = BPP.
For a long time, one of the most famous problems known to be in BPP but not known to be in P was the problem of determining whether a given number is prime. However, in the 2002 paper "PRIMES is in P", Manindra Agrawal and his students Neeraj Kayal and Nitin Saxena found a deterministic polynomial-time algorithm for this problem, thus showing that it is in P.
An important example of a problem in BPP (in fact in co-RP) still not known to be in P is polynomial identity testing, the problem of determining whether a polynomial is identically equal to the zero polynomial, when you have access to the value of the polynomial for any given input, but not to the coefficients. In other words, is there an assignment of values to the variables such that when a nonzero polynomial is evaluated on these values, the result is nonzero? It suffices to choose each variable's value uniformly at random from a finite subset of at least "d" values to achieve bounded error probability, where "d" is the total degree of the polynomial.
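The random-evaluation strategy just described (the Schwartz–Zippel approach) can be sketched in a few lines. The function name and the treatment of the polynomial as a Python callable are our own illustrative choices:

```python
import random

# Sketch of randomized polynomial identity testing (Schwartz-Zippel style):
# evaluate the black-box polynomial at random points; a nonzero polynomial
# of total degree d vanishes at a random point of a set S with probability
# at most d/|S|, so a nonzero evaluation is a definitive witness.
def probably_zero(poly, num_vars, degree, trials=20, modulus=10**9 + 7):
    for _ in range(trials):
        point = [random.randrange(modulus) for _ in range(num_vars)]
        if poly(*point) % modulus != 0:
            return False   # witness found: definitely not identically zero
    return True            # no witness: identically zero with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; x*y - 1 is not.
print(probably_zero(lambda x, y: (x + y)**2 - (x**2 + 2*x*y + y**2), 2, 2))
print(probably_zero(lambda x, y: x*y - 1, 2, 2))
```

Note the one-sided error typical of co-RP: a "not zero" answer is always correct, while a "zero" answer is wrong only with probability at most (d/|S|) per trial.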
If the access to randomness is removed from the definition of BPP, we get the complexity class P. In the definition of the class, if we replace the ordinary Turing machine with a quantum computer, we get the class BQP.
Adding postselection to BPP, or allowing computation paths to have different lengths, gives the class BPPpath. BPPpath is known to contain NP, and it is contained in its quantum counterpart PostBQP.
A Monte Carlo algorithm is a randomized algorithm which is likely to be correct. Problems in the class BPP have Monte Carlo algorithms with polynomially bounded running time. Contrast this with a Las Vegas algorithm, a randomized algorithm which either outputs the correct answer, or outputs "fail" with low probability. Las Vegas algorithms with polynomially bounded running times are used to define the class ZPP. Equivalently, ZPP contains probabilistic algorithms that are always correct and have expected polynomial running time. This is weaker than saying it is a polynomial time algorithm, since it may run for super-polynomial time, but with very low probability.
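The Las Vegas pattern — always correct, randomized running time — can be illustrated with a classic toy problem: finding an index holding a 1 in an array where half the entries are 1. The function below is an illustrative sketch of the pattern, not a standard algorithm from any library:

```python
import random

# Sketch: a Las Vegas algorithm repeats a randomized guess until it can
# verify the answer, so it is always correct but its running time is a
# random variable (here, a geometric one with expectation 2 iterations).
def find_one_las_vegas(arr):
    while True:
        i = random.randrange(len(arr))
        if arr[i] == 1:     # the verification step is what makes it Las Vegas
            return i

arr = [0, 1] * 8            # half the entries are 1
idx = find_one_las_vegas(arr)
print(arr[idx])             # always 1
```

Truncating the loop after a fixed number of iterations and returning "fail" turns this into the other formulation of ZPP mentioned above: polynomial worst-case time, with a small probability of an explicit failure instead of a wrong answer.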
It is known that BPP is closed under complement; that is, BPP = co-BPP. BPP is low for itself, meaning that a BPP machine with the power to solve BPP problems instantly (a BPP oracle machine) is not any more powerful than the machine without this extra power. In symbols, BPPBPP = BPP.
The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP and PH ⊆ BPP.
It is known that RP is a subset of BPP, and BPP is a subset of PP. It is not known whether those two are strict subsets, since we don't even know if P is a strict subset of PSPACE. BPP is contained in the second level of the polynomial hierarchy and therefore it is contained in PH. More precisely, the Sipser–Lautemann theorem states that BPP ⊆ Σ2 ∩ Π2. As a result, P = NP leads to P = BPP since PH collapses to P in this case. Thus either P = BPP or P ≠ NP or both.
Adleman's theorem states that membership in any language in BPP can be determined by a family of polynomial-size Boolean circuits, which means BPP is contained in P/poly. Indeed, as a consequence of the proof of this fact, every BPP algorithm operating on inputs of bounded length can be derandomized into a deterministic algorithm using a fixed string of random bits. Finding this string may be expensive, however.
Some weak separation results for Monte Carlo time classes have also been proven.
The class BPP is closed under complementation, union and intersection.
Relative to oracles, we know that there exist oracles A and B, such that PA = BPPA and PB ≠ BPPB. Moreover, relative to a random oracle with probability 1, P = BPP and BPP is strictly contained in NP and co-NP.
There is even an oracle in which BPP=EXPNP (and hence P ≠ BPP), which can be iteratively constructed as follows. For a fixed ENP (relativized) complete problem, the oracle will give correct answers with high probability if queried with the problem instance followed by a random string of length "kn" ("n" is instance length; "k" is an appropriate small constant). Start with "n"=1. For every instance of the problem of length "n", fix oracle answers (see lemma below) to fix the instance output. Next, provide the instance outputs for queries consisting of the instance followed by a "kn"-length string, and then treat output for queries of length ≤("k"+1)"n" as fixed, and proceed with instances of length "n"+1.
Lemma: Given a problem (specifically, an oracle machine code and time constraint) in relativized ENP, for every partially constructed oracle and input of length "n", the output can be fixed by specifying 2"O"("n") oracle answers.
Proof: The machine is simulated, and the oracle answers (that are not already fixed) are fixed step-by-step. There is at most one oracle query per deterministic computation step. For the relativized NP oracle, if possible fix the output to be yes by choosing a computation path and fixing the answers of the base oracle; otherwise no fixing is necessary, and either way there is at most 1 answer of the base oracle per step. Since there are 2"O"("n") steps, the lemma follows.
The lemma ensures that (for a large enough "k"), it is possible to do the construction while leaving enough strings for the relativized ENP answers. Also, we can ensure that for the relativized ENP, linear time suffices, even for function problems (if given a function oracle and linear output size) and with exponentially small (with linear exponent) error probability. Also, this construction is effective in that given an arbitrary oracle A we can arrange the oracle B to have PA≤PB and EXPNPA=EXPNPB=BPPB. An analogous construction yields an oracle relative to which ZPP=EXP (and hence ZPP=BPP=EXP).
The class i.o.-SUBEXP, which stands for infinitely often SUBEXP, contains problems which have sub-exponential time algorithms for infinitely many input sizes. It has also been shown that P = BPP if the exponential-time hierarchy, which is defined in terms of the polynomial hierarchy and E as EPH, collapses to E; however, note that the exponential-time hierarchy is usually conjectured "not" to collapse.
Russell Impagliazzo and Avi Wigderson showed that if any problem in E, where E = DTIME(2O("n")), has circuit complexity 2Ω("n"), then P = BPP. | https://en.wikipedia.org/wiki?curid=4079 |
BQP
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.
A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3.
BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language "L" is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {"Q""n" : "n" ∈ N}, such that
Alternatively, one can define BQP in terms of quantum Turing machines. A language "L" is in BQP if and only if there exists a polynomial quantum Turing machine that accepts "L" with an error probability of at most 1/3 for all instances.
Similarly to other "bounded error" probabilistic classes the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − "n"−"c" on the one hand, or requiring error as small as 2−"nc" on the other hand, where "c" is any positive constant, and "n" is the length of input.
The number of qubits in the computer is allowed to be a polynomial function of the instance size. For example, algorithms are known for factoring an "n"-bit integer using just over 2"n" qubits (Shor's algorithm).
Usually, computation on a quantum computer ends with a measurement. This leads to a collapse of the quantum state to one of the basis states. It can be said that the quantum state is measured to be in the correct state with high probability.
Quantum computers have gained widespread interest because some problems of practical interest are known to be in BQP, but suspected to be outside P. Some prominent examples are:
BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means BQPBQP = BQP. Informally, this is true because polynomial time algorithms are closed under composition. If a polynomial time algorithm calls as a subroutine polynomially many polynomial time algorithms, the resulting algorithm is still polynomial time.
BQP contains P and BPP and is contained in AWPP, PP and PSPACE.
In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the possible difference in power between these similar classes. The known relationships with classic complexity classes are:
As the problem of P ≟ PSPACE has not yet been solved, the proof of inequality between BQP and classes mentioned above is supposed to be difficult. The relation between BQP and NP is not known. In May 2018, computer scientists Ran Raz of Princeton University and Avishay Tal of Stanford University published a paper which showed that, relative to an oracle, BQP was not contained in PH.
Adding postselection to BQP results in the complexity class PostBQP which is equal to PP. | https://en.wikipedia.org/wiki?curid=4080 |
Blade Runner 3: Replicant Night
Blade Runner 3: Replicant Night is a science fiction novel by American writer K. W. Jeter published in 1996. It is a continuation of Jeter's novel "Blade Runner 2: The Edge of Human", which was itself a sequel to both the film "Blade Runner" and the novel upon which the film was based, Philip K. Dick's "Do Androids Dream of Electric Sheep?"
Living on Mars, Deckard is acting as a consultant to a movie crew filming the story of his days as a blade runner. He finds himself drawn into a mission on behalf of the replicants he was once assigned to kill. Meanwhile, the mystery surrounding the beginnings of the Tyrell Corporation is being exposed.
The plot element of a replicant giving birth served as the basis for the 2017 film "Blade Runner 2049". | https://en.wikipedia.org/wiki?curid=4081 |
Blade Runner 2: The Edge of Human
Blade Runner 2: The Edge of Human (1995) is a science fiction novel by American writer K. W. Jeter. It is a continuation of both the film "Blade Runner" and the novel upon which the film was based, Philip K. Dick's "Do Androids Dream of Electric Sheep?"
Several months after the events depicted in "Blade Runner", Deckard has retired to an isolated shack outside the city, taking the replicant Rachael with him in a Tyrell transport container, which slows down the replicant aging process. He is approached by a woman who explains she is Sarah Tyrell, niece of Eldon Tyrell, heiress to the Tyrell Corporation and the human template ("templant") for the Rachael replicant. She asks Deckard to hunt down the "missing" sixth replicant. At the same time, the templant for Roy Batty hires Dave Holden, the blade runner attacked by Leon, to help him hunt down the man he believes is the sixth replicant—Deckard.
Deckard and Holden's investigations lead them to re-visit Sebastian, Bryant, and John Isidore (from the book "Do Androids Dream Of Electric Sheep?"), learning more about the nature of the blade runners and the replicants.
When Deckard, Batty, and Holden finally clash, Batty's super-human fighting prowess leads Holden to believe he has been duped all along and that Batty is the sixth replicant. He shoots him. Deckard returns to Sarah with his suspicion: there is "no" sixth replicant. Sarah, speaking via a remote camera, confesses that she invented and maintained the rumor herself in order to deliberately discredit and eventually destroy the Tyrell Corporation because her uncle Eldon had based Rachel on her and then abandoned the real Sarah. Sarah brings Rachael back to the Corporation to meet with Deckard, and they escape.
However, Holden, recovering from his injuries during the fight, later uncovers the truth: Rachael has been killed by Tyrell agents, and the "Rachael" who escaped with Deckard was actually Sarah. She has completed her revenge by both destroying Tyrell and taking back Rachael's place.
The book's plot draws from other material related to "Blade Runner" in a number of ways:
However, it also contradicts material in some ways:
Michael Giltz of "Entertainment Weekly" gave the book a "C-", feeling that "only hardcore fans will be satisfied by this tale" and saying Jeter's "habit of echoing dialogue and scenes from the film is annoying and begs comparisons he would do well to avoid." Tal Cohen of "Tal Cohen's Bookshelf" called "The Edge of Human" "a good book", praising Jeter's "further, and deeper, investigation of the questions Philip K. Dick originally asked", but criticized the book for its "needless grandioseness" and for "rel[ying] on "Blade Runner" too heavily, [as] the number of new characters introduced is extremely small..."
Ian Kaplan of BearCave.com gave the book three stars out of five, saying that while he was "not entirely satisfied" and felt that the "story tends to be shallow", "Jeter does deal with the moral dilemma of the Blade Runners who hunt down beings that are virtually human in every way." J. Patton of "The Bent Cover" praised Jeter for "[not] try[ing] to emulate Philip K. Dick", adding, "This book also has all the grittiness and dark edges that the movie showed off so well, along with a very fast pace that will keep you reading into the wee hours of the night."
In the late 1990s, "Edge of Human" had been adapted into a screenplay by Stuart Hazeldine, "Blade Runner Down", that was to be filmed as the sequel to the 1982 film "Blade Runner".
Ultimately neither this script nor the Jeter novel was used for the eventual sequel, "Blade Runner 2049", which uses a different story. | https://en.wikipedia.org/wiki?curid=4082 |
Brainfuck
Brainfuck is an esoteric programming language created in 1993 by Urban Müller, and is notable for its extreme minimalism.
The language consists of only eight simple commands and an instruction pointer. While it is fully Turing complete, it is not intended for practical use, but to challenge and amuse programmers. Brainfuck simply requires one to break commands into microscopic steps.
The language's name is a reference to the slang term "brainfuck", which refers to things so complicated or unusual that they exceed the limits of one's understanding.
In 1992, Urban Müller, a Swiss physics student, took over a small online archive for Amiga software. The archive grew more popular, and was soon mirrored around the world. Today, it is the world's largest Amiga archive, known as Aminet.
Müller designed Brainfuck with the goal of implementing it with the smallest possible compiler, inspired by the 1024-byte compiler for the FALSE programming language. Müller's original compiler was implemented in machine language and compiled to a binary with a size of 296 bytes. He uploaded the first Brainfuck compiler to Aminet in 1993. The program came with a "Readme" file, which briefly described the language, and challenged the reader "Who can program anything useful with it? :)". Müller also included an interpreter and some quite elaborate examples. A second version of the compiler used only 240 bytes.
As Aminet grew, the compiler became popular among the Amiga community, and in time it was implemented for other platforms. Several brainfuck compilers have been made smaller than 200 bytes, and one is only 100 bytes.
Except for its two I/O commands, Brainfuck is a minor variation of the formal programming language P′′ created by Corrado Böhm in 1964, which in turn is explicitly based on the Turing machine. In fact, using six symbols equivalent to the respective Brainfuck commands +, -, <, >, [, ], Böhm provided an explicit program for each of the basic functions that together serve to compute any computable function. So the first "Brainfuck" programs appear in Böhm's 1964 paper – and they were programs sufficient to prove Turing completeness.
A version with explicit memory addressing rather than a stack, and with a conditional jump, was introduced by Joachim Lambek in 1961 under the name of the Infinite Abacus, consisting of an infinite number of cells and two instructions:
He proved that the Infinite Abacus can compute any computable (recursive) function by programming the Kleene set of basic μ-recursive functions.
His machine was simulated by Melzak's machine, which models computation via arithmetic rather than logic, mimicking a human operator moving pebbles on an abacus; hence the requirement that all numbers be positive. Melzak, whose one-instruction-set computer is equivalent to the Infinite Abacus, gives programs for multiplication, GCD, the nth prime number, representation in base b, and sorting by magnitude, and shows how to simulate an arbitrary Turing machine.
The language consists of eight commands, listed below. A brainfuck program is a sequence of these commands, possibly interspersed with other characters (which are ignored). The commands are executed sequentially, with some exceptions: an instruction pointer begins at the first command, and each command it points to is executed, after which it normally moves forward to the next command. The program terminates when the instruction pointer moves past the last command.
The brainfuck language uses a simple machine model consisting of the program and instruction pointer, as well as an array of at least 30,000 byte cells initialized to zero; a movable data pointer (initialized to point to the leftmost byte of the array); and two streams of bytes for input and output (most often connected to a keyboard and a monitor respectively, and using the ASCII character encoding).
The eight language commands each consist of a single character:

>   Increment the data pointer (to point to the next cell to the right).
<   Decrement the data pointer (to point to the next cell to the left).
+   Increment (increase by one) the byte at the data pointer.
-   Decrement (decrease by one) the byte at the data pointer.
.   Output the byte at the data pointer.
,   Accept one byte of input, storing its value in the byte at the data pointer.
[   If the byte at the data pointer is zero, jump forward to the command after the matching ].
]   If the byte at the data pointer is nonzero, jump back to the command after the matching [.
[ and ] match as parentheses usually do: each [ matches exactly one ] and vice versa, the [ comes first, and there can be no unmatched [ or ] between the two.
Brainfuck programs can be translated into C using the following substitutions, assuming ptr is of type char* and has been initialized to point to an array of zeroed bytes:
As the name suggests, Brainfuck programs tend to be difficult to comprehend. This is partly because any mildly complex task requires a long sequence of commands, and partly because the program's text gives no direct indication of the program's state. These factors, along with Brainfuck's inefficiency and its limited input/output capabilities, are some of the reasons it is not used for serious programming. Nonetheless, like any Turing complete language, Brainfuck is theoretically capable of computing any computable function or simulating any other computational model, if given access to an unlimited amount of memory. A variety of Brainfuck programs have been written. Although Brainfuck programs, especially complicated ones, are difficult to write, it is quite trivial to write an interpreter for Brainfuck in a more typical language such as C, due to Brainfuck's simplicity. There even exist Brainfuck interpreters written in the Brainfuck language itself.
Brainfuck is an example of a so-called Turing tarpit: It can be used to write "any" program, but it is not practical to do so, because Brainfuck provides so little abstraction that the programs get very long and/or complicated.
As a first, simple example, the following code snippet will add the current cell's value to the next cell: [->+<]. Each time the loop is executed, the current cell is decremented, the data pointer moves to the right, that next cell is incremented, and the data pointer moves left again. This sequence is repeated until the starting cell is 0.
++       Cell c0 = 2
> +++++  Cell c1 = 5

[        Start your loops with your cell pointer on the loop counter (c1 in our case)
< +      Add 1 to c0
> -      Subtract 1 from c1
]        End your loops with the cell pointer on the loop counter

At this point our program has added 5 to 2 leaving 7 in c0 and 0 in c1
but we cannot output this value to the terminal since it is not ASCII encoded!

To display the ASCII character "7" we must add 48 to the value 7
48 = 6 * 8 so let's use another loop to help us!

++++ ++++  c1 = 8 and this will be our loop counter again
[
< +++ +++  Add 6 to c0
> -        Subtract 1 from c1
]
< .        Print out c0 which has the value 55 which translates to "7"!
The following program prints "Hello World!" and a newline to the screen:
[ This program prints "Hello World!" and a newline to the screen; its
  length is 106 active command characters. [It is not the shortest.] ]
++++++++               Set Cell #0 to 8
[
    >++++               Add 4 to Cell #1; this will always set Cell #1 to 4
    [                   as the cell will be cleared by the loop
        >++             Add 2 to Cell #2
        >+++            Add 3 to Cell #3
        >+++            Add 3 to Cell #4
        >+              Add 1 to Cell #5
        <<<<-           Decrement the loop counter in Cell #1
    ]                   Loop till Cell #1 is zero; number of iterations is 4
    >+                  Add 1 to Cell #2
    >+                  Add 1 to Cell #3
    >-                  Subtract 1 from Cell #4
    >>+                 Add 1 to Cell #6
    [<]                 Move back to the first zero cell you find; this will
                        be Cell #1 which was cleared by the previous loop
    <-                  Decrement the loop counter in Cell #0
]                       Loop till Cell #0 is zero; number of iterations is 8
The result of this is:
Cell No : 0 1 2 3 4 5 6
Contents: 0 0 72 104 88 32 8
Pointer : ^
>>.                     Cell #2 has value 72 which is 'H'
>---.                   Subtract 3 from Cell #3 to get 101 which is 'e'
+++++++..+++.           Likewise for 'llo' from Cell #3
>>.                     Cell #5 is 32 for the space
<-.                     Subtract 1 from Cell #4 for 87 to give a 'W'
<.                      Cell #3 was set to 'o' from the end of 'Hello'
+++.------.--------.    Cell #3 for 'rl' and 'd'
>>+.                    Add 1 to Cell #5 gives us an exclamation point
>++.                    And finally a newline from Cell #6
For "readability", this code has been spread across many lines, and blanks and comments have been added. Brainfuck ignores all characters except the eight commands +-<>[],. so no special syntax for comments is needed (as long as the comments do not contain the command characters). The code could just as well have been written as:
This program enciphers its input with the ROT13 cipher. To do this, it must map characters A-M (ASCII 65-77) to N-Z (78-90), and vice versa. Also it must map a-m (97-109) to n-z (110-122) and vice versa. It must map all other characters to themselves; it reads characters one at a time and outputs their enciphered equivalents until it reads an EOF (here assumed to be represented as either -1 or "no change"), at which point the program terminates.
The basic approach used is as follows. Calling the input character "x", divide "x"-1 by 32, keeping quotient and remainder. Unless the quotient is 2 or 3, just output "x", having kept a copy of it during the division. If the quotient is 2 or 3, divide the remainder (("x"-1) modulo 32) by 13; if the quotient here is 0, output "x"+13; if 1, output "x"-13; if 2, output "x".
Regarding the division algorithm, when dividing "y" by "z" to get a quotient "q" and remainder "r", there is an outer loop which sets "q" and "r" first to the quotient and remainder of 1/"z", then to those of 2/"z", and so on; after it has executed "y" times, this outer loop terminates, leaving "q" and "r" set to the quotient and remainder of "y"/"z". (The dividend "y" is used as a diminishing counter that controls how many times this loop is executed.) Within the loop, there is code to increment "r" and decrement "y", which is usually sufficient; however, every "z"th time through the outer loop, it is necessary to zero "r" and increment "q". This is done with a diminishing counter set to the divisor "z"; each time through the outer loop, this counter is decremented, and when it reaches zero, it is refilled by moving the value from "r" back into it.
-,+[ Read first character and start outer character reading loop
] End character reading loop
Partly because Urban Müller did not write a thorough language specification, the many subsequent brainfuck interpreters and compilers have come to use slightly different dialects of brainfuck.
In the classic distribution, the cells are of 8-bit size (cells are bytes), and this is still the most common size. However, to read non-textual data, a brainfuck program may need to distinguish an end-of-file condition from any possible byte value; thus 16-bit cells have also been used. Some implementations have used 32-bit cells, 64-bit cells, or bignum cells with practically unlimited range, but programs that use this extra range are likely to be slow, since storing the value n into a cell requires O(n) time, as a cell's value may only be changed by incrementing and decrementing.
In all these variants, the , and . commands still read and write data in bytes. In most of them, the cells wrap around, i.e. incrementing a cell which holds its maximal value (with the + command) will bring it to its minimal value and vice versa. The exceptions are implementations which are distant from the underlying hardware, implementations that use bignums, and implementations that try to enforce portability.
It is usually easy to write brainfuck programs that do not ever cause integer wraparound or overflow, and therefore don't depend on cell size. Generally this means avoiding increment of +255 (unsigned 8-bit wraparound), or avoiding overstepping the boundaries of [-128, +127] (signed 8-bit wraparound) (since there are no comparison operators, a program cannot distinguish between a signed and unsigned two's complement fixed-bit-size cell and negativeness of numbers is a matter of interpretation). For more details on integer wraparound, see the Integer overflow article.
In the classic distribution, the array has 30,000 cells, and the pointer begins at the leftmost cell. Even more cells are needed to store things like the millionth Fibonacci number, and the easiest way to make the language Turing complete is to make the array unlimited on the right.
A few implementations extend the array to the left as well; this is an uncommon feature, and therefore portable brainfuck programs do not depend on it.
When the pointer moves outside the bounds of the array, some implementations will give an error message, some will try to extend the array dynamically, some will not notice and will produce undefined behavior, and a few will move the pointer to the opposite end of the array. Some tradeoffs are involved: expanding the array dynamically to the right is the most user-friendly approach and is good for memory-hungry programs, but it carries a speed penalty. If a fixed-size array is used it is helpful to make it very large, or better yet let the user set the size. Giving an error message for bounds violations is very useful for debugging but even that carries a speed penalty unless it can be handled by the operating system's memory protections.
Different operating systems (and sometimes different programming environments) use subtly different versions of ASCII. The most important difference is in the code used for the end of a line of text. MS-DOS and Microsoft Windows use a CRLF, i.e. a 13 followed by a 10, in most contexts. UNIX and its descendants (including GNU/Linux and Mac OS X) and Amigas use just 10, and older Macs use just 13. It would be difficult if brainfuck programs had to be rewritten for different operating systems. However, a unified standard was easy to create. Urban Müller's compiler and his example programs use 10, on both input and output; so do a large majority of existing brainfuck programs; and 10 is also more convenient to use than CRLF. Thus, brainfuck implementations should make sure that brainfuck programs that assume newline = 10 will run properly; many do so, but some do not.
This assumption is also consistent with most of the world's sample code for C and other languages, in that they use '\n', or 10, for their newlines. On systems that use CRLF line endings, the C standard library transparently remaps "\n" to "\r\n" on output and "\r\n" to "\n" on input for streams not opened in binary mode.
The behavior of the , command when an end-of-file condition has been encountered varies. Some implementations set the cell at the pointer to 0, some set it to the C constant EOF (in practice this is usually -1), some leave the cell's value unchanged. There is no real consensus; arguments for the three behaviors are as follows.
Setting the cell to 0 avoids the use of negative numbers, and makes it marginally more concise to write a loop that reads characters until EOF occurs. This is a language extension devised by Panu Kalliokoski.
Setting the cell to -1 allows EOF to be distinguished from any byte value (if the cells are larger than bytes), which is necessary for reading non-textual data; also, it is the behavior of the C translation of , given in Müller's readme file. However, it is not obvious that those C translations are to be taken as normative.
Leaving the cell's value unchanged is the behavior of Urban Müller's brainfuck compiler. This behavior can easily coexist with either of the others; for instance, a program that assumes EOF = 0 can set the cell to 0 before each , command, and will then work correctly on implementations that do either EOF = 0 or EOF = "no change". It is so easy to accommodate the "no change" behavior that any brainfuck programmer interested in portability should do so.
Tyler Holewinski developed a C# .NET Framework, BrainF.NET, which by default runs brainfuck, but can also be used to derive various forms of the language, as well as add new commands, or modify the behavior of existing ones. BrainF.NET thereby allows for development of programs such as an IDE.
Many people have created brainfuck equivalents (languages with commands that directly map to brainfuck) or brainfuck derivatives (languages that extend its behavior or map it into new semantic territory).
Some examples:
Bartolomeo Ammannati
Bartolomeo Ammannati (18 June 151113 April 1592) was an Italian architect and sculptor, born at Settignano, near Florence. He studied under Baccio Bandinelli and Jacopo Sansovino (assisting on the design of the Library of St. Mark's, the "Biblioteca Marciana", Venice) and closely imitated the style of Michelangelo.
He was more distinguished in architecture than in sculpture. He worked in Rome in collaboration with Vignola and Vasari, including designs for the Villa Giulia, and also on works at Lucca. From 1558 to 1570 he labored on the refurbishment and enlargement of the Pitti Palace, creating the courtyard of three wings with rusticated facades and a lower portico leading to the amphitheatre in the Boboli Gardens. His design mirrored the appearance of the main external façade of Pitti. He was also named "Consul" of the Accademia delle Arti del Disegno of Florence, which had been founded by Duke Cosimo I in 1563.
In 1569, Ammannati was commissioned to build the Ponte Santa Trinita, a bridge over the Arno River. Its three elliptical arches, though very light and elegant, survived floods that damaged other Arno bridges at various times. Santa Trinita was destroyed in 1944, during World War II, and rebuilt in 1957.
Ammannati designed what is considered a prototypic Mannerist sculptural ensemble in the "Fountain of Neptune" ("Fontana del Nettuno"), prominently located in the Piazza della Signoria in the center of Florence. The assignment was originally given to the aged Baccio Bandinelli; when Bandinelli died, Ammannati's design bested the submissions of Benvenuto Cellini and Vincenzo Danti to gain the commission. Between 1563 and 1565, Ammannati and his assistants, among them Giambologna, sculpted the block of marble that had been chosen by Bandinelli. He took Grand Duke Cosimo I as the model for Neptune's face. The statue was meant to highlight Cosimo's goal of establishing a Florentine naval force. The ungainly sea god was placed at the corner of the Palazzo Vecchio within sight of Michelangelo's David, and the then 87-year-old sculptor is said to have scoffed that Ammannati had ruined a beautiful piece of marble, with the ditty: "Ammannati, Ammanato, che bel marmo hai rovinato!" Ammannati continued work on this fountain for a decade, adding around the perimeter a cornucopia of demigod figures: bronze reclining river gods, laughing satyrs and marble sea horses emerging from the water.
In 1550 Ammannati married Laura Battiferri, an elegant poet and an accomplished woman. Later in his life he underwent a religious crisis, influenced by Counter-Reformation piety, which led him to condemn his own works depicting nudity, and he left all his possessions to the Jesuits.
He died in Florence in 1592.
Bishop
A bishop is an ordained, consecrated, or appointed member of the Christian clergy who is generally entrusted with a position of authority and oversight.
Within the Catholic, Eastern Orthodox, Oriental Orthodox, Moravian, Anglican, Old Catholic and Independent Catholic churches, as well as the Assyrian Church of the East, bishops claim apostolic succession, a direct historical lineage dating back to the original Twelve Apostles. Within these churches, bishops are seen as those who possess the full priesthood and can ordain clergy, including other bishops. Some Protestant churches, including the Lutheran, Anglican and Methodist churches, have bishops serving similar functions as well, though not always understood to be within apostolic succession in the same way. A person ordained as a deacon, priest, and then bishop is understood to hold the fullness of the (ministerial) priesthood, given responsibility by Christ to govern, teach, and sanctify the Body of Christ. Priests, deacons and lay ministers co-operate and assist their bishops in pastoral ministry.
The English term "bishop" derives from the Greek word "epískopos", meaning "overseer", Greek being an early language of the Christian Church. In the early Christian era the term was not always clearly distinguished from "presbýteros" (literally "elder" or "senior", origin of the modern English word "priest"), but it is used in the sense of the order or office of bishop, distinct from that of presbyter, in the writings attributed to Ignatius of Antioch (died c. 110).
The earliest organization of the Church in Jerusalem was, according to most scholars, similar to that of Jewish synagogues, but it had a council or college of ordained presbyters ("elders"). In Acts 11:30 and Acts 15:22, we see a collegiate system of government in Jerusalem chaired by James the Just, according to tradition the first bishop of the city. In Acts 14:23, the Apostle Paul ordains presbyters in churches in Anatolia.
Often, the word "presbyter" was not yet distinguished from "overseer" ("episkopos", later used exclusively to mean "bishop"), as in Acts 20:17, Titus 1:5–7 and 1 Peter 5:1. The earliest writings of the Apostolic Fathers, the Didache and the First Epistle of Clement, for example, show the church used two terms for local church offices: presbyters (seen by many as an interchangeable term with "episkopos" or overseer) and deacons.
In Timothy and Titus in the New Testament a more clearly defined episcopate can be seen. We are told that Paul had left Timothy in Ephesus and Titus in Crete to oversee the local church. Paul commands Titus to ordain presbyters/bishops and to exercise general oversight.
Early sources are unclear but various groups of Christian communities may have had the bishop surrounded by a group or college functioning as leaders of the local churches. Eventually the head or "monarchic" bishop came to rule more clearly, and all local churches would eventually follow the example of the other churches and structure themselves after the model of the others with the one bishop in clearer charge, though the role of the body of presbyters remained important.
Eventually, as Christendom grew, bishops no longer directly served individual congregations. Instead, the Metropolitan bishop (the bishop in a large city) appointed priests to minister to each congregation, acting as the bishop's delegates.
Around the end of the 1st century, the church's organization became clearer in historical documents. In the works of the Apostolic Fathers, and Ignatius of Antioch in particular, the role of the episkopos, or bishop, became more important or, rather, already was very important and being clearly defined. While Ignatius of Antioch offers the earliest clear description of monarchial bishops (a single bishop over all house churches in a city) he is an advocate of monepiscopal structure rather than describing an accepted reality. To the bishops and house churches to which he writes, he offers strategies on how to pressure house churches who don't recognize the bishop into compliance. Other contemporary Christian writers do not describe monarchial bishops, either continuing to equate them with the presbyters or speaking of episkopoi (bishops, plural) in a city.
"Blessed be God, who has granted unto you, who are yourselves so excellent, to obtain such an excellent bishop." — Epistle of Ignatius to the Ephesians 1:1
"and that, being subject to the bishop and the presbytery, ye may in all respects be sanctified." — Epistle of Ignatius to the Ephesians 2:1
"For your justly renowned presbytery, worthy of God, is fitted as exactly to the bishop as the strings are to the harp." — Epistle of Ignatius to the Ephesians 4:1
"Do ye, beloved, be careful to be subject to the bishop, and the presbyters and the deacons." — Epistle of Ignatius to the Ephesians 5:1
"Plainly therefore we ought to regard the bishop as the Lord Himself" — Epistle of Ignatius to the Ephesians 6:1.
"your godly bishop" — Epistle of Ignatius to the Magnesians 2:1.
"the bishop presiding after the likeness of God and the presbyters after the likeness of the council of the Apostles, with the deacons also who are most dear to me, having been entrusted with the diaconate of Jesus Christ" — Epistle of Ignatius to the Magnesians 6:1.
"Therefore as the Lord did nothing without the Father, [being united with Him], either by Himself or by the Apostles, so neither do ye anything without the bishop and the presbyters." — Epistle of Ignatius to the Magnesians 7:1.
"Be obedient to the bishop and to one another, as Jesus Christ was to the Father [according to the flesh], and as the Apostles were to Christ and to the Father, that there may be union both of flesh and of spirit." — Epistle of Ignatius to the Magnesians 13:2.
"In like manner let all men respect the deacons as Jesus Christ, even as they should respect the bishop as being a type of the Father and the presbyters as the council of God and as the college of Apostles. Apart from these there is not even the name of a church." — Epistle of Ignatius to the Trallesians 3:1.
"follow your bishop, as Jesus Christ followed the Father, and the presbytery as the Apostles; and to the deacons pay respect, as to God's commandment" — Epistle of Ignatius to the Smyrnans 8:1.
"He that honoureth the bishop is honoured of God; he that doeth aught without the knowledge of the bishop rendereth service to the devil" — Epistle of Ignatius to the Smyrnans 9:1.
— Lightfoot translation.
As the Church continued to expand, new churches in important cities gained their own bishop. Churches in the regions outside an important city were served by chorbishops, an official rank of bishops. Soon, however, presbyters and deacons were sent from the bishop of a city church, and gradually priests replaced the chorbishops. Thus, in time, the bishop changed from being the leader of a single church confined to an urban area to being the leader of the churches of a given geographical area.
Clement of Alexandria (end of the 2nd century) writes about the ordination of a certain Zachæus as bishop by the imposition of Simon Peter Bar-Jonah's hands. The words bishop and ordination are used in their technical meaning by the same Clement of Alexandria. The bishops in the 2nd century are defined also as the only clergy to whom the ordination to priesthood (presbyterate) and diaconate is entrusted: "a priest (presbyter) lays on hands, but does not ordain." ("cheirothetei ou cheirotonei")
At the beginning of the 3rd century, Hippolytus of Rome describes another feature of the ministry of a bishop, that of the "Spiritum primatus sacerdotii habere potestatem dimittere peccata": the primacy of sacrificial priesthood and the power to forgive sins.
The efficient organization of the Roman Empire became the template for the organisation of the church in the 4th century, particularly after Constantine's Edict of Milan. As the church moved from the shadows of privacy into the public forum it acquired land for churches, burials and clergy. In 391, Theodosius I decreed that any land that had been confiscated from the church by Roman authorities be returned.
The most usual term for the geographic area of a bishop's authority and ministry, the diocese, began as part of the structure of the Roman Empire under Diocletian. As Roman authority began to fail in the western portion of the empire, the church took over much of the civil administration. This can be clearly seen in the ministry of two popes: Pope Leo I in the 5th century, and Pope Gregory I in the 6th century. Both of these men were statesmen and public administrators in addition to their role as Christian pastors, teachers and leaders. In the Eastern churches, latifundia entailed to a bishop's see were much less common, the state power did not collapse the way it did in the West, and thus the tendency of bishops acquiring civil power was much weaker than in the West. However, the role of Western bishops as civil authorities, often called prince bishops, continued throughout much of the Middle Ages.
As well as being archchancellors of the Holy Roman Empire after the 9th century, bishops generally served as chancellors to medieval monarchs, acting as head of the "justiciary" and chief chaplain. The Lord Chancellor of England was almost always a bishop up until the dismissal of Cardinal Thomas Wolsey by Henry VIII. Similarly, the position of Kanclerz in the Polish kingdom was always held by a bishop until the 16th century. And today, the principality of Andorra is headed by two co-princes, one of whom is a Catholic bishop (and the other, the President of France).
In France before the French Revolution, representatives of the clergy — in practice, bishops and abbots of the largest monasteries — comprised the First Estate of the Estates-General, until their role was abolished during the French Revolution.
In the 21st century, the more senior bishops of the Church of England continue to sit in the House of Lords of the Parliament of the United Kingdom, as representatives of the established church, and are known as Lords Spiritual. The Bishop of Sodor and Man, whose diocese lies outside the United Kingdom, is an "ex officio" member of the Legislative Council of the Isle of Man. In the past, the Bishop of Durham, known as a prince bishop, had extensive viceregal powers within his northern diocese — the power to mint money, collect taxes and raise an army to defend against the Scots.
Eastern Orthodox bishops, along with all other members of the clergy, are canonically forbidden to hold political office. Occasional exceptions to this rule are tolerated when the alternative is political chaos. In the Ottoman Empire, the Patriarch of Constantinople, for example, had de facto administrative, fiscal, cultural and legal jurisdiction, as well as spiritual, over all the Christians of the empire. More recently, Archbishop Makarios III of Cyprus, served as President of the Republic of Cyprus from 1960 to 1977.
In 2001, Peter Hollingworth, AC, OBE – then the Anglican Archbishop of Brisbane – was controversially appointed Governor-General of Australia. Although Hollingworth gave up his episcopal position to accept the appointment, it still attracted considerable opposition in a country which maintains a formal separation between Church and State.
During the period of the English Civil War, the role of bishops as wielders of political power and as upholders of the established church became a matter of heated political controversy. Indeed, Presbyterianism was the polity of most Reformed Churches in Europe, and had been favored by many in England since the English Reformation. Since in the primitive church the offices of "presbyter" and "episkopos" were identical, many Puritans held that this was the only form of government the church should have. The Anglican divine, Richard Hooker, objected to this claim in his famous work "Of the Laws of Ecclesiastic Polity" while, at the same time, defending Presbyterian ordination as valid (in particular Calvin's ordination of Beza). This was the official stance of the English Church until the Commonwealth, during which time, the views of Presbyterians and Independents (Congregationalists) were more freely expressed and practiced.
Bishops form the leadership in the Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, the Anglican Communion, the Lutheran Church, the Independent Catholic Churches, the Independent Anglican Churches, and certain other, smaller, denominations.
The traditional role of a bishop is as pastor of a diocese (also called a bishopric, synod, eparchy or see), and so to serve as a "diocesan bishop," or "eparch" as it is called in many Eastern Christian churches. Dioceses vary considerably in size, geographically and population-wise. Some dioceses around the Mediterranean Sea which were Christianised early are rather compact, whereas dioceses in areas of rapid modern growth in Christian commitment—as in some parts of Sub-Saharan Africa, South America and the Far East—are much larger and more populous.
As well as traditional diocesan bishops, many churches have a well-developed structure of church leadership that involves a number of layers of authority and responsibility.
In Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and Anglicanism, only a bishop can ordain other bishops, priests, and deacons.
In the Eastern liturgical tradition, a priest can celebrate the Divine Liturgy only with the blessing of a bishop. In Byzantine usage, an antimension signed by the bishop is kept on the altar partly as a reminder of whose altar it is and under whose omophorion the priest at a local parish is serving. In Syriac Church usage, a consecrated wooden block called a thabilitho is kept for the same reasons.
The pope, in addition to being the Bishop of Rome and spiritual head of the Catholic Church, is also the Patriarch of the Latin Rite. Each bishop within the Latin Rite is answerable directly to the Pope and not any other bishop except to metropolitans in certain oversight instances. The pope previously used the title "Patriarch of the West", but this title was dropped from use in 2006, a move which caused some concern within the Eastern Orthodox Communion as, to them, it implied wider papal jurisdiction.
In Catholic, Eastern Orthodox, Oriental Orthodox and Anglican cathedrals there is a special chair set aside for the exclusive use of the bishop. This is the bishop's "cathedra" and is often called the throne. In some Christian denominations, for example, the Anglican Communion, parish churches may maintain a chair for the use of the bishop when he visits; this is to signify the parish's union with the bishop.
The bishop is the ordinary minister of the sacrament of confirmation in the Latin Rite Catholic Church, and in the Anglican and Old Catholic communion only a bishop may administer this sacrament. However, in the Byzantine and other Eastern rites, whether Eastern or Oriental Orthodox or Eastern Catholic, chrismation is done immediately after baptism, and thus the priest is the one who confirms, using chrism blessed by a bishop.
Bishops in all of these communions are ordained by other bishops through the laying on of hands. While traditional teaching maintains that any bishop with apostolic succession can validly perform the ordination of another bishop, some churches require that two or three bishops participate, either to ensure sacramental validity or to conform with church law. Catholic doctrine holds that one bishop can validly ordain another priest as a bishop. Though a minimum of three bishops participating is desirable (there are usually several more) in order to demonstrate collegiality, canonically only one bishop is necessary. The practice of only one bishop ordaining was normal in countries where the Church was persecuted under Communist rule.
The title of archbishop or metropolitan may be granted to a senior bishop, usually one who is in charge of a large ecclesiastical jurisdiction. He may, or may not, have provincial oversight of suffragan bishops and may possibly have auxiliary bishops assisting him.
Ordination of a bishop, and thus continuation of apostolic succession, takes place through a ritual centred on the imposition of hands and prayer.
Apart from the ordination, which is always done by other bishops, there are different methods as to the actual selection of a candidate for ordination as bishop. In the Catholic Church the Congregation for Bishops generally oversees the selection of new bishops with the approval of the pope. The papal nuncio usually solicits names from the bishops of a country, consults with priests and leading members of the laity, and then selects three to be forwarded to the Holy See. In Europe, some cathedral chapters retain the duty of electing bishops. The Eastern Catholic churches generally elect their own bishops. Most Eastern Orthodox churches allow varying amounts of formalised laity or lower clergy influence on the choice of bishops. This also applies in those Eastern churches which are in union with the pope, though it is required that he give assent.
Catholic, Eastern Orthodox, Oriental Orthodox, Anglican, Old Catholic and some Lutheran bishops claim to be part of the continuous sequence of ordained bishops since the days of the apostles, referred to as apostolic succession. Since Pope Leo XIII issued the bull "Apostolicae curae" in 1896, the Catholic Church has insisted that Anglican orders are invalid because of changes in the Anglican ordination rites of the 16th century and divergence in understanding of the theology of priesthood, episcopacy and Eucharist. However, since the 1930s, Utrecht Old Catholic bishops (recognised by the Holy See as validly ordained) have sometimes taken part in the ordination of Anglican bishops. According to the writer Timothy Dufort, by 1969, all Church of England bishops had acquired Old Catholic lines of apostolic succession recognised by the Holy See. This development has muddied the waters somewhat as it could be argued that the strain of apostolic succession has been re-introduced into Anglicanism, at least within the Church of England.
The Catholic Church does recognise as valid (though illicit) ordinations done by breakaway Catholic, Old Catholic or Oriental bishops, and groups descended from them; it also regards as both valid and licit those ordinations done by bishops of the Eastern churches, so long as those receiving the ordination conform to other canonical requirements (for example, being an adult male) and an eastern orthodox rite of episcopal ordination, expressing the proper functions and sacramental status of a bishop, is used; this has given rise to the phenomenon of "episcopi vagantes" (for example, clergy of the Independent Catholic groups which claim apostolic succession, though this claim is rejected by both Catholicism and Eastern Orthodoxy).
The Eastern Orthodox Churches would not accept the validity of any ordinations performed by the Independent Catholic groups, as Eastern Orthodoxy considers to be spurious any consecration outside the Church as a whole. Eastern Orthodoxy considers apostolic succession to exist only within the Universal Church, and not through any authority held by individual bishops; thus, if a bishop ordains someone to serve outside the (Eastern Orthodox) Church, the ceremony is ineffectual, and no ordination has taken place regardless of the ritual used or the ordaining prelate's position within the Eastern Orthodox Churches.
The position of the Catholic Church is slightly different, in that it does recognise the validity of the orders of certain groups which separated from communion with the Holy See. The Holy See accepts as valid the ordinations of the Old Catholics in communion with Utrecht, as well as the Polish National Catholic Church (which received its orders directly from Utrecht, and was—until recently—part of that communion); but Catholicism does not recognise the orders of any group whose teaching is at variance with what they consider the core tenets of Christianity; this is the case even though the clergy of the Independent Catholic groups may use the proper ordination ritual. There are also other reasons why the Holy See does not recognise the validity of the orders of the Independent clergy:
Whilst members of the Independent Catholic movement take seriously the issue of valid orders, it is highly significant that the relevant Vatican Congregations tend not to respond to petitions from Independent Catholic bishops and clergy who seek to be received into communion with the Holy See, hoping to continue in some sacramental role. In those instances where the pope does grant reconciliation, those deemed to be clerics within the Independent Old Catholic movement are invariably admitted as laity and not priests or bishops.
There is a mutual recognition of the validity of orders amongst Catholic, Eastern Orthodox, Old Catholic, Oriental Orthodox and Assyrian Church of the East churches.
Some provinces of the Anglican Communion have begun ordaining women as bishops in recent decades – for example, England, Ireland, Scotland, Wales, the United States, Australia, New Zealand, Canada and Cuba. The first woman to be consecrated a bishop within Anglicanism was Barbara Harris, who was ordained in the United States in 1989. In 2006, Katharine Jefferts Schori, the Episcopal Bishop of Nevada, became the first woman to become the presiding bishop of the Episcopal Church.
In the Evangelical Lutheran Church in America (ELCA) and the Evangelical Lutheran Church in Canada (ELCIC), the largest Lutheran Church bodies in the United States and Canada, respectively, and roughly based on the Nordic Lutheran state churches (similar to that of the Church of England), bishops are elected by Synod Assemblies, consisting of both lay members and clergy, for a term of six years, which can be renewed, depending upon the local synod's "constitution" (which is modelled on either the ELCA's or the ELCIC's national constitution). Since the implementation of concordats between the ELCA and the Episcopal Church of the United States and the ELCIC and the Anglican Church of Canada, all bishops, including the presiding bishop (ELCA) or the national bishop (ELCIC), have been consecrated using the historic succession, with at least one Anglican bishop serving as co-consecrator.
Since entering ecumenical communion with their respective Anglican bodies, bishops in the ELCA and the ELCIC not only approve the "rostering" of all ordained pastors, diaconal ministers, and associates in ministry, but also serve as the principal celebrant of all pastoral ordination and installation ceremonies and diaconal consecration ceremonies, and as the "chief pastor" of the local synod, upholding the teachings of Martin Luther as well as the documents of the Ninety-Five Theses and the Augsburg Confession. Unlike their counterparts in the United Methodist Church, ELCA and ELCIC synod bishops do not appoint pastors to local congregations (pastors, like their counterparts in the Episcopal Church, are called by local congregations). The presiding bishop of the ELCA and the national bishop of the ELCIC, the national bishops of their respective bodies, are elected for a single six-year term and may be elected to an additional term.
Although the ELCA agreed with the Episcopal Church to limit ordination to the bishop "ordinarily", ELCA pastor-"ordinators" are given permission to perform the rites in "extraordinary" circumstances. In practice, "extraordinary" circumstances have included disagreeing with Episcopalian views of the episcopate, and as a result, ELCA pastors ordained by other pastors are not permitted to be deployed to Episcopal Churches (they can, however, serve in Presbyterian Church USA, United Methodist Church, Reformed Church in America, and Moravian Church congregations, as the ELCA is in full communion with these denominations). The Lutheran Church–Missouri Synod (LCMS) and the Wisconsin Evangelical Lutheran Synod (WELS), the second and third largest Lutheran bodies in the United States and the two largest Confessional Lutheran bodies in North America, do not follow an episcopal form of governance, settling instead on a form of quasi-congregationalism patterned on what they believe to be the practice of the early church. The second largest of the three predecessor bodies of the ELCA, the American Lutheran Church, was a congregationalist body, with national and synod presidents before they were re-titled as bishops (borrowing from the Lutheran churches in Germany) in the 1980s. With regard to ecclesial discipline and oversight, national and synod presidents typically function similarly to bishops in episcopal bodies.
In the African Methodist Episcopal Church, "Bishops are the Chief Officers of the Connectional Organization. They are elected for life by a majority vote of the General Conference which meets every four years."
In the Christian Methodist Episcopal Church in the United States, bishops are administrative superintendents of the church; they are elected by delegate votes and serve until mandatory retirement at the age of 74. Among their duties are responsibility for appointing clergy to serve local churches as pastor, for performing ordinations, and for safeguarding the doctrine and discipline of the Church. The General Conference, a meeting every four years, has an equal number of clergy and lay delegates. In each Annual Conference, CME bishops serve for four-year terms. CME Church bishops may be male or female.
In the United Methodist Church (the largest branch of Methodism in the world) bishops serve as administrative and pastoral superintendents of the church. They are elected for life from among the ordained elders (presbyters) by vote of the delegates in regional (called jurisdictional) conferences, and are consecrated by the other bishops present at the conference through the laying on of hands. In the United Methodist Church bishops remain members of the "Order of Elders" while being consecrated to the "Office of the Episcopacy". Within the United Methodist Church only bishops are empowered to consecrate bishops and ordain clergy. Among their most critical duties is the ordination and appointment of clergy to serve local churches as pastor, presiding at sessions of the Annual, Jurisdictional, and General Conferences, providing pastoral ministry for the clergy under their charge, and safeguarding the doctrine and discipline of the Church. Furthermore, individual bishops, or the Council of Bishops as a whole, often serve a prophetic role, making statements on important social issues and setting forth a vision for the denomination, though they have no legislative authority of their own. In all of these areas, bishops of the United Methodist Church function very much in the historic meaning of the term. According to the "Book of Discipline of the United Methodist Church", a bishop's responsibilities are
In each Annual Conference, United Methodist bishops serve for four-year terms, and may serve up to three terms before either retirement or appointment to a new Conference. United Methodist bishops may be male or female, with Marjorie Matthews being the first woman to be consecrated a bishop in 1980.
The collegial expression of episcopal leadership in the United Methodist Church is known as the Council of Bishops. The Council of Bishops speaks to the Church and through the Church into the world and gives leadership in the quest for Christian unity and interreligious relationships. The Conference of Methodist Bishops includes the United Methodist "Council of Bishops" plus bishops from affiliated autonomous Methodist or United Churches.
John Wesley consecrated Thomas Coke a "General Superintendent," and directed that Francis Asbury also be consecrated for the United States of America in 1784, where the Methodist Episcopal Church first became a separate denomination apart from the Church of England. Coke soon returned to England, but Asbury was the primary builder of the new church. At first he did not call himself bishop, but eventually submitted to the usage by the denomination.
Notable bishops in United Methodist history include Coke, Asbury, Richard Whatcoat, Philip William Otterbein, Martin Boehm, Jacob Albright, John Seybert, Matthew Simpson, John S. Stamm, William Ragsdale Cannon, Marjorie Matthews, Leontine T. Kelly, William B. Oden, Ntambo Nkulu Ntanda, Joseph Sprague, William Henry Willimon, and Thomas Bickerton.
In The Church of Jesus Christ of Latter-day Saints, the Bishop is the leader of a local congregation, called a ward. As with most LDS priesthood holders, the bishop is a part-time lay minister and earns a living through other employment; in all cases, he is a married man. As such, it is his duty to preside at services, call local leaders, and judge the worthiness of members for service. The bishop does not deliver sermons at every service (generally asking members to do so), but is expected to be a spiritual guide for his congregation. It is therefore believed that he has both the right and ability to receive divine inspiration (through the Holy Spirit) for the ward under his direction. Because it is a part-time position, all able members are expected to assist in the management of the ward by holding delegated lay positions (for example, women's and youth leaders, teachers) referred to as callings. Although members are asked to confess serious sins to him, unlike the Catholic Church, he is not the instrument of divine forgiveness, but merely a guide through the repentance process (and a judge in case transgressions warrant excommunication or other official discipline). The bishop is also responsible for the physical welfare of the ward, and thus collects tithing and fast offerings and distributes financial assistance where needed.
A bishop is the president of the Aaronic priesthood in his ward (and is thus a form of Mormon Kohen; in fact, a literal descendant of Aaron has "legal right" to act as a bishop after being found worthy and ordained by the First Presidency). In the absence of a literal descendant of Aaron, a high priest in the Melchizedek priesthood is called to be a bishop. Each bishop is selected from resident members of the ward by the stake presidency with approval of the First Presidency, and chooses two "counselors" to form a "bishopric". In special circumstances (such as a ward consisting entirely of young university students), a bishop may be chosen from outside the ward. A bishop is typically released after about five years and a new bishop is called to the position. Although the former bishop is released from his duties, he continues to hold the Aaronic priesthood office of bishop. Church members frequently refer to a former bishop as "Bishop" as a sign of respect and affection.
Latter-day Saint bishops do not wear any special clothing or insignia the way clergy in many other churches do, but are expected to dress and groom themselves neatly and conservatively per their local culture, especially when performing official duties. Bishops (as well as other members of the priesthood) can trace their line of authority back to Joseph Smith, who, according to church doctrine, was ordained to lead the Church in modern times by the ancient apostles Peter, James, and John, who were ordained to lead the Church by Jesus Christ.
At the global level, the presiding bishop oversees the temporal affairs (buildings, properties, commercial corporations, and so on) of the worldwide Church, including the Church's massive global humanitarian aid and social welfare programs. The presiding bishop has two counselors; the three together form the presiding bishopric. As opposed to ward bishoprics, where the counselors do not hold the office of bishop, all three men in the presiding bishopric hold the office of bishop, and thus the counselors, as with the presiding bishop, are formally referred to as "Bishop".
The New Apostolic Church (NAC) knows three classes of ministries: Deacons, Priests and Apostles. The Apostles, who are all included in the apostolate with the Chief Apostle as head, are the highest ministries.
Of the several kinds of priestly ministries, the bishop is the highest. Nearly all bishops are appointed directly by the chief apostle. They support and help their superior apostle.
In the Church of God in Christ (COGIC), the ecclesiastical structure is composed of large dioceses that are called "jurisdictions" within COGIC, each under the authority of a bishop, sometimes called "state bishops". They can either be made up of large geographical regions of churches or churches that are grouped and organized together as their own separate jurisdictions because of similar affiliations, regardless of geographical location or dispersion. Each state in the U.S. has at least one jurisdiction while others may have several more, and each jurisdiction is usually composed of between 30 and 100 churches. Each jurisdiction is then broken down into several districts, which are smaller groups of churches (either grouped by geographical situation or by similar affiliations) which are each under the authority of District Superintendents who answer to the authority of their jurisdictional/state bishop. There are currently over 170 jurisdictions in the United States, and over 30 jurisdictions in other countries. The bishops of each jurisdiction, according to the COGIC Manual, are considered to be the modern day equivalent in the church of the early apostles and overseers of the New Testament church, and as the highest ranking clergymen in the COGIC, they are tasked with the responsibilities of being the head overseers of all religious, civil, and economic ministries and protocol for the church denomination. They also have the authority to appoint and ordain local pastors, elders, ministers, and reverends within the denomination. The bishops of the COGIC denomination are all collectively called "The Board of Bishops." 
From the Board of Bishops, and the General Assembly of the COGIC, the body of the church composed of clergy and lay delegates that are responsible for making and enforcing the bylaws of the denomination, every four years, twelve bishops from the COGIC are elected as "The General Board" of the church, who work alongside the delegates of the General Assembly and Board of Bishops to provide administration over the denomination as the church's head executive leaders. One of twelve bishops of the General Board is also elected the "presiding bishop" of the church, and two others are appointed by the presiding bishop himself, as his first and second assistant presiding bishops.
Bishops in the Church of God in Christ usually wear black clergy suits which consist of a black suit blazer, black pants, a purple or scarlet clergy shirt and a white clerical collar, which is usually referred to as "Class B Civic attire." Bishops in COGIC also typically wear the Anglican Choir Dress style vestments of a long purple or scarlet chimere, cuffs, and tippet worn over a long white rochet, and a gold pectoral cross worn around the neck with the tippet. This is usually referred to as "Class A Ceremonial attire". The bishops of COGIC alternate between Class A Ceremonial attire and Class B Civic attire depending on the protocol of the religious services and other events they have to attend.
In the polity of the Church of God (Cleveland, Tennessee), the international leader is the presiding bishop, and the members of the executive committee are executive bishops. Collectively, they supervise and appoint national and state leaders across the world. Leaders of individual states and regions are administrative bishops, who have jurisdiction over local churches in their respective states and are vested with appointment authority for local pastorates. All ministers are credentialed at one of three levels of licensure, the most senior of which is the rank of ordained bishop. To be eligible to serve in state, national, or international positions of authority, a minister must hold the rank of ordained bishop.
In 2002, the general convention of the Pentecostal Church of God came to a consensus to change the title of their overseer from general superintendent to bishop. The change was brought on because internationally, the term "bishop" is more commonly related to religious leaders than the previous title.
The title "bishop" is used for both the general (international leader) and the district (state) leaders. The title is sometimes used in conjunction with the previous, thus becoming general (district) superintendent/bishop.
According to the Seventh-day Adventist understanding of the doctrine of the Church:
"The "elders" (Greek, presbuteros) or "bishops" (episkopos) were the most important officers of the church. The term elder means older one, implying dignity and respect. His position was similar to that of the one who had supervision of the synagogue. The term bishop means "overseer." Paul used these terms interchangeably, equating elders with overseers or bishops (Acts 20:17; Titus 1:5, 7).
"Those who held this position supervised the newly formed churches. Elder referred to the status or rank of the office, while bishop denoted the duty or responsibility of the office—"overseer." Since the apostles also called themselves elders (1 Peter 5:1; 2 John 1; 3 John 1), it is apparent that there were both local elders and itinerant elders, or elders at large. But both kinds of elder functioned as shepherds of the congregations."
The above understanding is part of the basis of Adventist organizational structure. The worldwide Seventh-day Adventist church is organized into local districts, conferences or missions, union conferences or union missions, divisions, and finally at the top is the General Conference. At each level (with the exception of the local districts), there is an elder who is elected president and a group of elders who serve on the executive committee with the elected president. Those who have been elected president would in effect be the "bishop" while never actually carrying the title or being ordained as such, because the term is usually associated with the episcopal style of church governance most often found in Catholic, Anglican, Methodist and some Pentecostal/Charismatic circles.
Some Baptists also have begun taking on the title of "bishop".
In some smaller Protestant denominations and independent churches, the term "bishop" is used in the same way as "pastor", to refer to the leader of the local congregation, and may be male or female. This usage is especially common in African-American churches in the US.
In the Church of Scotland, which has a Presbyterian church structure, the word "bishop" refers to an ordained person, usually a normal parish minister, who has temporary oversight of a trainee minister. In the Presbyterian Church (USA), the term bishop is an expressive name for a Minister of Word and Sacrament who serves a congregation and exercises "the oversight of the flock of Christ." The term is traceable to the 1789 Form of Government of the PC (USA) and the Presbyterian understanding of the pastoral office.
While not considered orthodox Christian, the Ecclesia Gnostica Catholica uses roles and titles derived from Christianity for its clerical hierarchy, including bishops who have much the same authority and responsibilities as in Catholicism.
The Salvation Army does not have bishops but has appointed leaders of geographical areas, known as Divisional Commanders. Larger geographical areas, called Territories, are led by a Territorial Commander, who is the highest-ranking officer in that Territory.
Jehovah's Witnesses do not use the title "bishop" within their organizational structure, but appoint elders to be overseers (to fulfill the role of oversight) within their congregations.
The HKBP, the most prominent Protestant denomination in Indonesia, uses the term Ephorus instead of bishop.
In the Vietnamese syncretist religion of Caodaism, bishops ("giáo sư") comprise the fifth of nine hierarchical levels, and are responsible for spiritual and temporal education as well as record-keeping and ceremonies in their parishes. At any one time there are seventy-two bishops. Their authority is described in Section I of the text "Tân Luật" (revealed through seances in December 1926). Caodai bishops wear robes and headgear of embroidered silk depicting the Divine Eye and the Eight Trigrams. (The color varies according to branch.) This is the full ceremonial dress; the simple version consists of a seven-layered turban.
Traditionally, a number of items are associated with the office of a bishop, most notably the mitre, crosier, and ecclesiastical ring. Other vestments and insignia vary between Eastern and Western Christianity.
In the Latin Rite of the Catholic Church, the choir dress of a bishop includes the purple cassock with amaranth trim, rochet, purple zucchetto (skull cap), purple biretta, and pectoral cross. The cappa magna may be worn, but only within the bishop's own diocese and on especially solemn occasions. The mitre, zucchetto, and stole are generally worn by bishops when presiding over liturgical functions. For liturgical functions other than the Mass the bishop typically wears the cope. Within his own diocese and when celebrating solemnly elsewhere with the consent of the local ordinary, he also uses the crosier. When celebrating Mass, a bishop, like a priest, wears the chasuble. The Caeremoniale Episcoporum recommends, but does not impose, that in solemn celebrations a bishop should also wear a dalmatic, which can always be white, beneath the chasuble, especially when administering the sacrament of holy orders, blessing an abbot or abbess, and dedicating a church or an altar. The Caeremoniale Episcoporum no longer makes mention of episcopal gloves, episcopal sandals, liturgical stockings (also known as buskins), or the accoutrements that it once prescribed for the bishop's horse. The coat of arms of a Latin Rite Catholic bishop usually displays a galero with a cross and crosier behind the escutcheon; the specifics differ by location and ecclesiastical rank (see Ecclesiastical heraldry).
Anglican bishops generally make use of the mitre, crosier, ecclesiastical ring, purple cassock, purple zucchetto, and pectoral cross. However, the traditional choir dress of Anglican bishops retains its late mediaeval form, and looks quite different from that of their Catholic counterparts; it consists of a long rochet which is worn with a chimere.
In the Eastern Churches (Eastern Orthodox, Eastern Rite Catholic) a bishop will wear the mandyas, panagia (and perhaps an enkolpion), sakkos, omophorion and an Eastern-style mitre. Eastern bishops do not normally wear an episcopal ring; the faithful kiss (or, alternatively, touch their forehead to) the bishop's hand. To seal official documents, he will usually use an inked stamp. An Eastern bishop's coat of arms will normally display an Eastern-style mitre, cross, eastern style crosier and a red and white (or red and gold) mantle. The arms of Oriental Orthodox bishops will display the episcopal insignia (mitre or turban) specific to their own liturgical traditions. Variations occur based upon jurisdiction and national customs.
Bordeaux
Bordeaux is a port city on the Garonne in the Gironde department in Southwestern France.
The municipality (commune) of Bordeaux proper has a population of 257,804 (2019). Bordeaux is the centre of Bordeaux Métropole, which has a population of 796,273 (2019), the 5th largest in France after Paris, Marseille, Lyon and Lille with its immediate suburbs and closest satellite towns. The larger metropolitan area has a population of 1,232,550 (2016). It is the capital of the Nouvelle-Aquitaine region, as well as the prefecture of the Gironde department. Its inhabitants are called "Bordelais" (for men) or "Bordelaises" (women). The term "Bordelais" may also refer to the city and its surrounding region.
As France's most prominent wine region, with a turnover of €3.37 billion, Bordeaux is both the centre of a major wine-growing and wine-producing region hosting the world's most renowned estates, and a powerhouse exercising significant influence on the world's wine and spirits industry, although no wine production is conducted within the city limits. It is home to the world's main wine fair, Vinexpo, and the wine economy in the metro area takes in 14.5 billion euros each year. Bordeaux wine has been produced in the region since the 8th century. The historic part of the city is on the UNESCO World Heritage List as "an outstanding urban and architectural ensemble" of the 18th century. After Paris, Bordeaux has the highest number of preserved historical buildings of any city in France.
Around 300 BC, the region was settled by a Celtic tribe, the Bituriges Vivisci, who named the town Burdigala, a name probably of Aquitanian origin.
In 107 BC, the Battle of Burdigala was fought between the Romans, who were defending the Allobroges, a Gallic tribe allied to Rome, and the Tigurini led by Divico. The Romans were defeated and their commander, the consul Lucius Cassius Longinus, was killed in battle.
The city came under Roman rule around 60 BC, and it became an important commercial centre for tin and lead. It continued to flourish, especially during the Severan dynasty (3rd century), and acquired the status of capital of Roman Aquitaine. During this period the amphitheatre and the monument "Les Piliers de Tutelle" were built.
In 276, it was sacked by the Vandals. The Vandals attacked again in 409, followed by the Visigoths in 414, and the Franks in 498, and afterwards the city fell into a period of relative obscurity.
In the late 6th century the city re-emerged as the seat of a county and an archdiocese within the Merovingian kingdom of the Franks, but royal Frankish power was never strong. The city started to play a regional role as a major urban center on the fringes of the newly founded Frankish Duchy of Vasconia. Around 585 Gallactorius was made count of Bordeaux and fought the Basques.
In 732, the city was plundered by the troops of Abd er Rahman who stormed the fortifications and overwhelmed the Aquitanian garrison. Duke Eudes mustered a force to engage the Umayyads, eventually engaging them in the Battle of the River Garonne somewhere near the river Dordogne. The battle had a high death toll, and although Eudes was defeated he had enough troops to engage in the Battle of Poitiers and so retain his grip on Aquitaine.
In 735, following his father Eudes's death, the Aquitanian duke Hunald led a rebellion, to which Charles Martel responded by launching an expedition that captured Bordeaux. However, it was not retained for long: the following year the Frankish commander clashed in battle with the Aquitanians, but then left to take on hostile Burgundian authorities and magnates. In 745 Aquitaine faced another expedition, in which Charles's sons Pepin and Carloman challenged Hunald's power and defeated him. Hunald's son Waifer replaced him and confirmed Bordeaux as the capital city (along with Bourges in the north).
During the last stage of the war against Aquitaine (760–768), it was one of Waifer's last important strongholds to fall to the troops of King Pepin the Short. Charlemagne built the fortress of Fronsac ("Frontiacus", "Franciacus") near Bordeaux on a hill across the border with the Basques ("Wascones"), where Basque commanders came and pledged their loyalty (769).
In 778, Seguin (or Sihimin) was appointed count of Bordeaux, probably undermining the power of the Duke Lupo, and possibly leading to the Battle of Roncevaux Pass. In 814, Seguin was made Duke of Vasconia, but was deposed in 816 for failing to suppress a Basque rebellion. Under the Carolingians, the Counts of Bordeaux sometimes held the title concomitantly with that of Duke of Vasconia. They were expected to keep the Basques in check and defend the mouth of the Garonne from the Vikings when they appeared around 844. In autumn 845, the Vikings raided Bordeaux and Saintes; Count Seguin II marched on them but was captured and executed.
Although the port of Bordeaux was a bustling trade centre, the stability and success of the city were constantly threatened by Germanic and Norman invasions. It was not until the marriage of Eleanor of Aquitaine and Henry Plantagenet in 1152 that some measure of protection was established, as it provided a connection with the English. After this union, Bordeaux suddenly had access to naval protection, which made attacks from raiding groups few and far between.
From the 12th to the 15th century, Bordeaux regained importance following the marriage of Duchess Eleanor of Aquitaine to the French-speaking Count Henry Plantagenet, born in Le Mans, who within months of their wedding became King Henry II of England. The city flourished, primarily due to the wine trade. After King Henry II granted the city tax-free trade status with England, he was adored by the locals, as the concession made the wine trade, their main source of income, even more profitable. The Cathédrale Saint-André was built from 1227, incorporating the artisan quarter of Saint-Paul, and the belfry (Grosse Cloche) was also erected. The city was also the capital of an independent state under Edward, the Black Prince (1362–1372), but after the Battle of Castillon (1453) it was annexed by France, which thereby extended its territory.
In 1462, Bordeaux created a local parliament. However, it only began to regain its importance during the 17th century, when it became a major trading centre for sugar and slaves from the West Indies, along with its traditional wine exports.
Bordeaux adhered to the Fronde, being effectively annexed to the Kingdom of France only in 1653, when the army of Louis XIV entered the city.
The 18th century saw another golden age of Bordeaux. The Port of the Moon supplied the majority of Europe with coffee, cocoa, sugar, cotton and indigo, becoming France's busiest port and the second busiest port in the world after London. Many downtown buildings (about 5,000), including those on the quays, are from this period. Victor Hugo found the town so beautiful he said: "Take Versailles, add Antwerp, and you have Bordeaux". Georges-Eugène Haussmann, a long-time prefect of Bordeaux, used Bordeaux's 18th-century large-scale rebuilding as a model when he was asked by Emperor Napoleon III to transform a then still quasi-medieval Paris into a "modern" capital that would make France proud.
Towards the end of the Peninsular War, on 12 March 1814, the Duke of Wellington sent William Beresford with two divisions, who seized Bordeaux, encountering little resistance. Bordeaux was largely anti-Bonapartist, and the majority supported the Bourbons, so the British troops were treated as liberators.
In 1870, at the beginning of the Franco-Prussian War, the French government temporarily relocated from Paris to Bordeaux. This recurred during World War I, and again very briefly during World War II, when it became clear that Paris would fall into German hands.
During World War II, Bordeaux fell under German Occupation.
In May and June 1940, Bordeaux was the site of the life-saving actions of the Portuguese consul-general, Aristides de Sousa Mendes, who illegally granted thousands of Portuguese visas, which were needed to pass the Spanish border, to refugees fleeing the German Occupation.
From 1941 to 1943, the Italian Royal Navy ("Regia Marina Italiana") established BETASOM, a submarine base at Bordeaux. Italian submarines participated in the Battle of the Atlantic from this base, which was also a major base for German U-boats as headquarters of 12th U-boat Flotilla. The massive, reinforced concrete U-boat pens have proved impractical to demolish and are now partly used as a cultural center for exhibitions.
Bordeaux is located close to the European Atlantic coast, in the southwest of France and in the north of the Aquitaine region, southwest of Paris. The city is built on a bend of the river Garonne and is divided into two parts: the right bank to the east and the left bank to the west. Historically the left bank is more developed because, flowing on the outside of the bend, the water carves a channel deep enough to allow the passage of merchant ships, which used to offload on this side of the river. Today, however, the right bank is developing, with new urban projects. In Bordeaux, the Garonne is accessible to ocean liners through the Gironde estuary. The right bank of the Garonne is a low-lying, often marshy plain.
Bordeaux's climate is classified as a temperate oceanic climate (Köppen climate classification "Cfb"), or in the Trewartha climate classification system as temperate oceanic or Do climate. Bordeaux lies close to the humid subtropical climate zone, but its summers are not quite warm enough for that classification.
Winters are cool because of the prevalence of westerly winds from the Atlantic. Summers are warm and long due to the influence of the Bay of Biscay. Recent winters have been warmer than the seasonal average. Frosts occur several times during a winter, but snowfall is very rare, occurring only about once every three years. The summer of 2003 set a record for average temperature.
Bordeaux is a major centre for business in France as it has the sixth largest metropolitan population in France. It serves as a major regional center for trade, administration, services and industry.
The GDP of Bordeaux is €32.7 billion.
The vine was introduced to the Bordeaux region by the Romans, probably in the mid-first century, to provide wine for local consumption, and wine production has been continuous in the region since.
The Bordeaux wine-growing area has 57 appellations, 10,000 wine-producing estates (châteaux) and 13,000 grape growers. With an annual production of approximately 960 million bottles, the Bordeaux area produces large quantities of everyday wine as well as some of the most expensive wines in the world. Included among the latter are the area's five "premier cru" (first growth) red wines (four from Médoc and one, Château Haut-Brion, from Graves), established by the Bordeaux Wine Official Classification of 1855:
Both red and white wines are made in the Bordeaux region. Red Bordeaux wine is called claret in the United Kingdom. Red wines are generally made from a blend of grapes, and may be made from Cabernet Sauvignon, Merlot, Cabernet Franc, Petit Verdot, Malbec, and, less commonly in recent years, Carménère.
White Bordeaux is made from Sauvignon blanc, Sémillon, and Muscadelle. Sauternes is a sub-region of Graves known for its intensely sweet, white, dessert wines such as Château d'Yquem.
Because of a wine glut ("wine lake") in generic production, the price squeeze induced by increasingly strong international competition, and vine-pull schemes, the number of growers has recently dropped from 14,000, and the area under vine has also decreased significantly. Meanwhile, global demand for first growths and the most famous labels has markedly increased, and their prices have skyrocketed.
The Cité du Vin, a museum as well as a venue for exhibitions, shows, film screenings and academic seminars on the theme of wine, opened its doors in June 2016.
The Laser Mégajoule will be one of the most powerful lasers in the world, enabling fundamental research and the development of laser and plasma technologies. This project, led by the French Ministry of Defence, involves an investment of 2 billion euros. The "Road of the Lasers", a major regional planning project, promotes regional investment in optical- and laser-related industries, giving the Bordeaux area the largest concentration of optical and laser expertise in Europe.
Some 20,000 people work in the aeronautics industry in Bordeaux. The city hosts some of the industry's biggest companies, including Dassault, EADS Sogerma, Snecma, Thales, SNPE and others. Dassault Falcon private jets are built there, as are the military aircraft Rafale and Mirage 2000, the Airbus A380 cockpit, the boosters of Ariane 5, and the M51 SLBM.
Tourism, especially wine tourism, is a major industry. Globelink.co.uk mentioned Bordeaux as the best tourist destination in Europe in 2015.
Access to the port from the Atlantic is via the Gironde estuary. Almost nine million tonnes of goods arrive and leave each year.
This list includes indigenous Bordeaux-based companies and companies that have major presence in Bordeaux, but are not necessarily headquartered there.
At the January 2011 census, there were 239,399 inhabitants in the city proper (commune) of Bordeaux. The city's population peaked in 1968 at 262,662 inhabitants. The majority of the population is French, but there are sizable groups of Italians, Spaniards (up to 20% of the Bordeaux population claim some degree of Spanish heritage), Portuguese, Turks and Germans.
The built-up area has grown for more than a century beyond the municipal borders of Bordeaux due to urban sprawl, so that by the January 2011 census there were 1,140,668 people living in the overall metropolitan area of Bordeaux, only a fifth of whom lived in the city proper.
Largest communities of foreigners:
At the 2007 presidential election, the Bordelais gave 31.37% of their votes to Ségolène Royal of the Socialist Party against 30.84% to Nicolas Sarkozy, president of the UMP. Then came François Bayrou with 22.01%, followed by Jean-Marie Le Pen who recorded 5.42%. None of the other candidates exceeded the 5% mark. Nationally, Nicolas Sarkozy led with 31.18%, then Ségolène Royal with 25.87%, followed by François Bayrou with 18.57%. After these came Jean-Marie Le Pen with 10.44%, none of the other candidates exceeded the 5% mark. In the second round, the city of Bordeaux gave Ségolène Royal 52.44% against 47.56% for Nicolas Sarkozy, the latter being elected President of the Republic with 53.06% against 46.94% for Ségolène Royal. The abstention rates for Bordeaux were 14.52% in the first round and 15.90% in the second round.
In the parliamentary elections of 2007, the left won eight constituencies against only three for the right. After the 2008 by-elections, the eighth district of Gironde also switched to the left, bringing the count to nine. In Bordeaux, the left held a majority for the first time in its history, taking two of the three constituencies in these elections. In the first constituency of the Gironde, the outgoing UMP MP Chantal Bourragué led comfortably with 44.81% against 25.39% for the Socialist candidate Béatrice Desaigues; in the second round she was re-elected with 54.45% against 45.55% for her Socialist opponent. In the second constituency of the Gironde, the UMP mayor and newly appointed Minister of Ecology, Energy, Sustainable Development and the Sea, Alain Juppé, confronted the PS general councillor Michèle Delaunay. In the first round, Alain Juppé led with 43.73% against 31.36% for Michèle Delaunay, but in the second round it was Michèle Delaunay who won the election with 50.93% of the votes against 49.07% for Alain Juppé, a margin of only 670 votes. The defeat in the so-called "mayor's constituency" showed that Bordeaux was swinging increasingly to the left. Finally, in the third constituency of the Gironde, Noël Mamère led with 39.82% against 28.42% for the UMP candidate Elizabeth Vine, and in the second round was re-elected with 62.82% against 37.18% for his right-wing rival.
The 2008 municipal elections saw a clash between the mayor of Bordeaux, Alain Juppé, and the Socialist president of the Regional Council of Aquitaine, Alain Rousset. The PS had put up a Socialist heavyweight in the Gironde and had great hopes for this election after the victories of Ségolène Royal and Michèle Delaunay in 2007. However, after a rather lively campaign, Alain Juppé was comfortably elected in the first round with 56.62%, far ahead of Alain Rousset, who managed 34.14%. Of the eight cantons of Bordeaux, five are currently held by the PS and three by the UMP, the left eating a little further into the right's numbers each time.
In the European elections of 2009, Bordeaux voters largely voted for the UMP candidate Dominique Baudis, who won 31.54% against 15.00% for PS candidate Kader Arif. The candidate of Europe Ecology José Bové came second with 22.34%. None of the other candidates reached the 10% mark. The 2009 European elections were like the previous ones in eight constituencies. Bordeaux is located in the district "Southwest", here are the results:
UMP candidate Dominique Baudis: 26.89%, his party gaining four seats. PS candidate Kader Arif: 17.79%, gaining two seats in the European Parliament. Europe Ecology candidate José Bové: 15.83%, obtaining two seats. MoDem candidate Robert Rochefort: 8.61%, winning a seat. Left Front candidate Jean-Luc Mélenchon: 8.16%, gaining the last seat. At the regional elections of 2010, the Socialist incumbent president Alain Rousset won the first round with 35.19% in Bordeaux, though this score was lower than his results for the Gironde and Aquitaine as a whole. Xavier Darcos, Minister of Labour, followed with 28.40% of the votes, scoring above his regional and departmental averages. Then came Monique De Marco, the Green candidate, with 13.40%, followed by the Pyrénées-Atlantiques deputy and MoDem candidate Jean Lassalle, who registered a low 6.78% while still qualifying for the second round across Aquitaine, closely followed by Jacques Colombier, candidate of the National Front, with 6.48%. Finally, the Left Front candidate Gérard Boulanger took 5.64%; no other candidate passed the 5% mark. In the second round, Alain Rousset won decisively, his total rising to 55.83%. Although Xavier Darcos lost the election by a wide margin, he nevertheless achieved a score above his regional and departmental averages, obtaining 33.40%. Jean Lassalle, who had qualified for the second round, passed the 10% mark with 10.77%. The ballot was marked by abstention of 55.51% in the first round and 53.59% in the second round.
"Only candidates obtaining more than 5% are listed"
The Mayor of the city is Nicolas Florian.
Virginie Calmels is Deputy Mayor of Bordeaux in charge of the Economy, Employment and Sustainable Growth and Vice-President of the Urban Community of Bordeaux.
Bordeaux is the capital of five cantons and the Prefecture of the Gironde and Aquitaine.
The town is divided into three districts, the first three of the Gironde. The headquarters of the Urban Community of Bordeaux is located in the Mériadeck neighbourhood, and the city is the seat of the Chamber of Commerce and Industry that bears its name.
As the number of inhabitants of Bordeaux is between 200,000 and 249,999, the municipal council has 61 members. They are divided according to the following composition:
Since 1947, there have been 5 mayors of Bordeaux:
The university was created by the archbishop Pey Berland in 1441 and was abolished in 1793, during the French Revolution, before reappearing in 1808 with Napoleon. Bordeaux accommodates approximately 70,000 students on one of the largest campuses of Europe (235 ha).
The University of Bordeaux is divided into four:
Bordeaux has numerous public and private schools offering undergraduate and postgraduate programs.
Engineering schools:
Business and management schools:
Other:
The "École Compleméntaire Japonaise de Bordeaux" (ボルドー日本語補習授業校 "Borudō Nihongo Hoshū Jugyō Kō"), a part-time Japanese supplementary school, is held in the "Salle de L'Athenee Municipal" in Bordeaux.
Bordeaux is classified a "City of Art and History". The city is home to 362 "monuments historiques" (only Paris has more in France), with some buildings dating back to Roman times. Bordeaux, the Port of the Moon, has been inscribed on the UNESCO World Heritage List as "an outstanding urban and architectural ensemble".
Bordeaux is home to one of Europe's biggest 18th-century architectural urban areas, making it a sought-after destination for tourists and cinema production crews. It stands out as one of the first French cities, after Nancy, to have entered an era of urbanism and large-scale metropolitan projects, led by the father-and-son team of architects Gabriel, architects to King Louis XV, under the supervision of two intendants (governors), first Nicolas-François Dupré de Saint-Maur and then the Marquis de Tourny.
Saint-André Cathedral, Saint-Michel Basilica and Saint-Seurin Basilica are part of the World Heritage Sites of the Routes of Santiago de Compostela in France.
Main sights include:
Slavery contributed to the city's growth. Firstly, during the 18th and 19th centuries, Bordeaux was an important slave port, from which some 500 slave expeditions departed, causing the deportation of 150,000 Africans by Bordeaux shipowners. Secondly, even though the "triangular trade" represented only 5% of Bordeaux's wealth, the city's direct trade with the Caribbean, which accounted for the other 95%, involved colonial goods produced by slaves (sugar, coffee, cocoa). Thirdly, in the same period, a major migratory movement of Aquitanians to the Caribbean colonies took place, with Saint-Domingue (now Haiti) being the most popular destination; 40% of the island's white population came from Aquitaine. They prospered on plantation income until the first slave revolts, which culminated in the final abolition of slavery in France in 1848.
Today many traces and memorial sites are visible in the city. Moreover, in May 2009, the Museum of Aquitaine opened spaces dedicated to "Bordeaux in the 18th century, trans-Atlantic trading and slavery". This display, richly illustrated with original documents, helps disseminate current knowledge on the question, presenting above all the facts and their chronology.
The region of Bordeaux was also the home of several prominent abolitionists, such as Montesquieu, Laffon de Ladébat and Élisée Reclus. Others were members of the Society of the Friends of the Blacks, such as the revolutionaries Boyer-Fonfrède, Gensonné, Guadet and Ducos.
Europe's longest-span vertical-lift bridge, the Pont Jacques Chaban-Delmas, was opened in 2013 in Bordeaux, spanning the River Garonne. Its central span can be lifted vertically to let tall ships pass underneath. The €160 million bridge was inaugurated by President François Hollande and Mayor Alain Juppé on 16 March 2013. It was named after the late Jacques Chaban-Delmas, a former prime minister and mayor of Bordeaux.
Bordeaux has many shopping options. In the heart of Bordeaux is "Rue Sainte-Catherine". This pedestrian-only shopping street, lined with shops, restaurants and cafés, is also one of the longest shopping streets in Europe. "Rue Sainte-Catherine" starts at "Place de la Victoire" and ends at "Place de la Comédie" by the "Grand Théâtre". The shops become progressively more upmarket as one moves towards "Place de la Comédie", and the nearby "Cours de l'Intendance" is where one finds the more exclusive shops and boutiques.
Bordeaux is also the first city in France to have created, in the 1980s, an architecture exhibition and research centre, "Arc en rêve". Bordeaux offers a large number of cinemas, theatres, and is the home of the Opéra national de Bordeaux. There are many music venues of varying capacity. The city also offers several festivals throughout the year.
Bordeaux is an important road and motorway junction. The city is connected to Paris by the A10 motorway, with Lyon by the A89, with Toulouse by the A62, and with Spain by the A63. There is a ring road called the "Rocade" which is often very busy. Another ring road is under consideration.
Bordeaux has five road bridges that cross the Garonne: the Pont de pierre, built in the 1820s, and three modern bridges built after 1960: the Pont Saint-Jean, just south of the Pont de pierre (both located downtown), the Pont d'Aquitaine, a suspension bridge downstream from downtown, and the Pont François Mitterrand, located upstream of downtown. The latter two bridges form part of the ring road around Bordeaux. A fifth bridge, the Pont Jacques-Chaban-Delmas, was constructed in 2009–2012 and opened to traffic in March 2013. Located halfway between the Pont de pierre and the Pont d'Aquitaine, and serving downtown rather than highway traffic, it is a vertical-lift bridge with a height comparable to the Pont de pierre in the closed position and to the Pont d'Aquitaine in the open position. All five road bridges, including the two highway bridges, are open to cyclists and pedestrians as well.
Another bridge, the Pont Jean-Jacques Bosc, is to be built in 2018.
Lacking any steep hills, Bordeaux is relatively friendly to cyclists. Cycle paths (separate from the roadways) exist on the highway bridges, along the riverfront, on the university campuses, and incidentally elsewhere in the city. Cycle lanes and bus lanes that explicitly allow cyclists exist on many of the city's boulevards. A paid bicycle-sharing system with automated stations was established in 2010.
The main railway station, Gare de Bordeaux Saint-Jean, near the centre of the city, handles 12 million passengers a year. It is served by the French national railway's (SNCF) high-speed train, the TGV, which reaches Paris in two hours, with connections to major European centres such as Lille, Brussels, Amsterdam, Cologne, Geneva and London. The TGV also serves Toulouse and Irun (Spain) from Bordeaux. Regular train services run to Nantes, Nice, Marseille and Lyon. The Gare Saint-Jean is the major hub for regional trains (TER) operated by the SNCF to Arcachon, Limoges, Agen, Périgueux, Langon, Pau, Le Médoc, Angoulême and Bayonne.
Historically the train line used to terminate at a station on the right bank of the river Garonne near the Pont de Pierre, and passengers crossed the bridge to get into the city. Subsequently, a double-track steel railway bridge was constructed in the 1850s, by Gustave Eiffel, to bring trains across the river direct into Gare de Bordeaux Saint-Jean. The old station was later converted and in 2010 comprised a cinema and restaurants.
The two-track Eiffel bridge, with its restrictive speed limit, became a bottleneck, and a new bridge was built, opening in 2009. The new bridge has four tracks and allows trains to pass at higher speed. During the planning there was much lobbying by the Eiffel family and other supporters to preserve the old bridge as a footbridge across the Garonne, possibly with a museum to document the history of the bridge and Gustave Eiffel's contribution. The decision was taken to save the bridge, but by early 2010 no plans had been announced as to its future use. The bridge remains intact, but unused and without any means of access.
Since July 2017, the LGV Sud Europe Atlantique has been fully operational, putting Bordeaux 2 h 04 min from Paris.
Bordeaux is served by Bordeaux–Mérignac Airport, located in the suburban city of Mérignac.
Bordeaux has an important public transport system called Transports Bordeaux Métropole (TBM). This company is run by the Keolis group. The network consists of:
This network is operated from 5 am to 2 am.
There had been several plans for a subway network to be set up, but they stalled for both geological and financial reasons. Work on the Tramway de Bordeaux system was started in the autumn of 2000, and services started in December 2003 connecting Bordeaux with its suburban areas. The tram system uses ground-level power supply technology (APS), a new cable-free technology developed by French company Alstom and designed to preserve the aesthetic environment by eliminating overhead cables in the historic city. Conventional overhead cables are used outside the city. The system was controversial for its considerable cost of installation, maintenance and also for the numerous initial technical problems that paralysed the network. Many streets and squares along the tramway route became pedestrian areas, with limited access for cars.
The planned extension of the Bordeaux tramway system is to link the airport with the city centre towards the end of 2019.
There are more than 400 taxicabs in Bordeaux.
The average amount of time people spend commuting with public transit in Bordeaux, for example to and from work, on a weekday is 51 minutes; 12% of public transit riders ride for more than 2 hours every day. The average amount of time people wait at a stop or station is 13 minutes, while 15.5% of riders wait over 20 minutes on average every day.
The 41,458-capacity Nouveau Stade de Bordeaux is the largest stadium in Bordeaux. The stadium was opened in 2015 and replaced the Stade Chaban-Delmas, which was a venue for the FIFA World Cup in 1938 and 1998, as well as the 2007 Rugby World Cup. In the 1938 FIFA World Cup, it hosted a violent quarter-final known as the Battle of Bordeaux. The ground was formerly known as the "Stade du Parc Lescure" until 2001, when it was renamed in honour of the city's long-time mayor, Jacques Chaban-Delmas.
There are two major sports teams in Bordeaux: Girondins de Bordeaux, a football team playing in Ligue 1 of the French football championship, and Union Bordeaux Bègles, a rugby team playing in the Top 14 of the Ligue Nationale de Rugby.
Skateboarding, rollerblading, and BMX biking are enjoyed by many young inhabitants of the city. Bordeaux is home to a quay which runs along the Garonne river. On the quay there is a skate-park divided into three sections: one for vert tricks, one for street-style tricks, and one for young beginners, with easier features and softer materials. The skate-park is well maintained by the municipality.
Bordeaux is also home to one of the strongest cricket teams in France, champions of the South West League.
There is a wooden velodrome, Vélodrome du Lac, in Bordeaux which hosts international cycling competition in the form of UCI Track Cycling World Cup events.
The 2015 Trophée Éric Bompard was held in Bordeaux, but the free skate was cancelled in all divisions due to the November 2015 Paris attacks and their aftermath; the short program had taken place hours before the attacks. Among French skaters, Chafik Besseghier (68.36) finished 10th and Romain Ponsart (62.86) 11th in the men's event; Maé-Bérénice Méité (46.82) finished 11th and Laurine Lecavelier (46.53) 12th in the ladies' event; and Vanessa James/Morgan Ciprès (65.75) finished 2nd in pairs.
Between 1951 and 1955, an annual Formula 1 motor race was held on a 2.5-kilometre circuit which looped around the Esplanade des Quinconces and along the waterfront, attracting drivers such as Juan Manuel Fangio, Stirling Moss, Jean Behra and Maurice Trintignant.
Bordeaux is twinned with: | https://en.wikipedia.org/wiki?curid=4097 |
Puzzle Bobble
At the start of each round, the rectangular playing arena contains a prearranged pattern of colored "bubbles". (These are actually referred to in the translation as "balls"; however, they were clearly intended to be bubbles, since they pop, and are taken from "Bubble Bobble".) At the bottom of the screen, the player controls a device called a "pointer", which aims and fires bubbles up the screen. The color of each fired bubble is generated randomly, chosen from the colors of bubbles still left on the screen.
The objective of the game is to clear all the bubbles from the arena without any bubble crossing the bottom line. Bubbles will fire automatically if the player remains idle. After clearing the arena, the next round begins with a new pattern of bubbles to clear. The game consists of 32 levels. Fired bubbles travel in straight lines (possibly bouncing off the side walls of the arena), stopping when they touch other bubbles or reach the top of the arena. If a bubble touches identically colored bubbles, forming a group of three or more, those bubbles, as well as any bubbles hanging from them, are removed from the field of play, and points are awarded. After every few shots, the "ceiling" of the playing arena drops downwards slightly, along with all the bubbles stuck to it. The number of shots between each drop of the ceiling is influenced by the number of bubble colors remaining. The closer the bubbles get to the bottom of the screen, the faster the music plays; if they cross the line at the bottom, the game is over.
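The clearing rule above amounts to two graph searches: a flood fill over the same-color group around the fired bubble, then a reachability check from the ceiling to find bubbles left hanging. The sketch below abstracts the hexagonal grid into a plain neighbor map; all names are illustrative, not taken from Taito's code.

```python
from collections import deque

def resolve_shot(neighbors, colors, ceiling, hit):
    """Resolve one fired bubble per the rules described above.

    neighbors -- dict mapping bubble id -> adjacent bubble ids
    colors    -- dict mapping bubble id -> color
    ceiling   -- ids of bubbles attached to the top of the arena
    hit       -- id of the newly placed bubble

    Returns (popped, dropped): the same-color group of three or more
    containing `hit`, and the bubbles left hanging once it is removed.
    """
    # Flood-fill the group of identically colored bubbles around the shot.
    group, queue = {hit}, deque([hit])
    while queue:
        b = queue.popleft()
        for n in neighbors[b]:
            if n not in group and colors[n] == colors[hit]:
                group.add(n)
                queue.append(n)
    if len(group) < 3:
        return set(), set()  # fewer than three: nothing pops

    # Any remaining bubble unreachable from the ceiling drops off.
    remaining = set(colors) - group
    attached = {b for b in ceiling if b in remaining}
    queue = deque(attached)
    while queue:
        b = queue.popleft()
        for n in neighbors[b]:
            if n in remaining and n not in attached:
                attached.add(n)
                queue.append(n)
    return group, remaining - attached

# A red chain hanging from the ceiling with two blue bubbles below it:
# popping the reds (1-2-3) leaves the blues (4, 5) unsupported.
nb = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
col = {1: "red", 2: "red", 3: "red", 4: "blue", 5: "blue"}
popped, dropped = resolve_shot(nb, col, ceiling={1}, hit=3)
# popped == {1, 2, 3}, dropped == {4, 5}
```

Dropped bubbles are what make the scoring interesting, as the next sections explain.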
Two different versions of the original game were released. "Puzzle Bobble" was originally released in Japan only in June 1994 by Taito Corporation, running on Taito's B System hardware (with the preliminary title "Bubble Buster"). Then, 6 months later in December, the international Neo Geo version of "Puzzle Bobble" was released. It was almost identical aside from being in stereo and having some different sound effects and translated text.
When set to the US region, the Neo Geo version displays the alternative title "Bust a Move" and features anti-drugs and anti-littering messages in the title sequence. The Bust-a-Move title was used for all subsequent games in the series in the United States and Canada, as well as for some (non-Taito published) console releases in Europe.
As with many popular arcade games, experienced players (who can complete the game relatively easily) become much more interested in the secondary challenge of obtaining a high score (which involves a lot more skill and strategy). "Puzzle Bobble" caters to this interest very well, featuring an exponential scoring system that allows extremely high scores to be achieved.
"Popped" bubbles (that is, bubbles of the same color which are destroyed) are worth 10 points each. However, "dropped" bubbles (that is, bubbles that were hanging from popped bubbles), are worth far more: one dropped bubble scores 20 points; two scores 40; three score 80. This figure continues doubling for each bubble dropped, up to 17 or more bubbles which scores 1,310,720 points. It is possible to achieve this maximum on most rounds (sometimes twice or more), resulting in a potential total score of 30 million and beyond.
Bonus points are also awarded for completing a round quickly. The maximum 50,000-point bonus is awarded for clearing a round in 5 seconds or less; this bonus then drops down to zero over the next minute, after which no bonus is awarded.
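The scoring rules above can be expressed as a short calculation. This is a minimal sketch: the doubling rule and the cap are stated in the text, but the exact shape of the time-bonus decay is not specified beyond "drops down to zero over the next minute", so a linear decay is assumed here.

```python
def dropped_bubble_score(n):
    """Points for n bubbles dropped by a single shot.

    One dropped bubble scores 20 points and the value doubles for
    each additional bubble, capping at 17 or more (1,310,720 points).
    """
    if n <= 0:
        return 0
    return 20 * 2 ** (min(n, 17) - 1)

def time_bonus(seconds):
    """Round-clear bonus: 50,000 points at 5 seconds or less,
    assumed (not stated) to fall linearly to zero over the next minute."""
    if seconds <= 5:
        return 50_000
    if seconds >= 65:
        return 0
    return round(50_000 * (65 - seconds) / 60)

# Popped bubbles themselves are worth a flat 10 points each.
print(dropped_bubble_score(3))    # 80
print(dropped_bubble_score(17))   # 1310720
```

The exponential growth of the dropped-bubble bonus is what makes high-score play revolve around engineering large drops rather than simply popping groups.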
There are no rounds in the two-player game. Both players have an arena each (both visible on screen) and an identical arrangement of colored bubbles in each arena. When a player removes a large group (four bubbles or more) some of those removed are transferred to the opponent's arena, usually delaying their efforts to remove all the bubbles from their individual arena. In some versions, the two-player game can also be played by one player against a computer opponent.
Reviewing the Super NES version, Mike Weigand of "Electronic Gaming Monthly" called it "a thoroughly enjoyable and incredibly addicting puzzle game". He considered the two player mode the highlight, but also said that the one player mode provides a solid challenge. "GamePro" gave it a generally negative review, saying it "starts out fun but ultimately lacks intricacy and longevity." They elaborated that in one player mode all the levels feel the same, and that two player matches are over too quickly to build up any excitement. They also criticized the lack of any 3D effects in the graphics. "Next Generation" reviewed the SNES version of the game, and stated that "It's very simple, using only the control pad and one button to fire, and it's addictive as hell."
A reviewer for "Next Generation", while questioning the continued viability of the action puzzle genre, admitted that the game is "very simple and "very" addictive". He remarked that though the 3DO version makes no significant additions, none are called for by a game with such simple enjoyment. "GamePro"s brief review of the 3DO version commented, "The move-and-shoot controls are very responsive and the simple visuals and music are well done. This is one puzzler that isn't a bust."
The simplicity of the concept has led to many clones, both commercial and otherwise. 1996's "Snood" replaced the bubbles with small creatures and has been successful in its own right. "Worms Blast" was Team 17's take on the concept. Mobile clones include "Bubble Witch Saga" and "Bubble Shooter". "Frozen Bubble" is a free software clone.
Bone
A bone is a rigid organ that constitutes part of the vertebrate skeleton in animals. Bones protect the various organs of the body, produce red and white blood cells, store minerals, provide structure and support for the body, and enable mobility. Bones come in a variety of shapes and sizes and have a complex internal and external structure. They are lightweight yet strong and hard, and serve multiple functions.
Bone tissue (osseous tissue) is a hard tissue, a type of dense connective tissue. It has a honeycomb-like matrix internally, which helps to give the bone rigidity. Bone tissue is made up of different types of bone cells. Osteoblasts and osteocytes are involved in the formation and mineralization of bone; osteoclasts are involved in the resorption of bone tissue. Modified (flattened) osteoblasts become the lining cells that form a protective layer on the bone surface. The mineralised matrix of bone tissue has an organic component of mainly collagen called "ossein" and an inorganic component of bone mineral made up of various salts. Bone tissue is a mineralized tissue of two types, cortical bone and cancellous bone. Other types of tissue found in bones include bone marrow, endosteum, periosteum, nerves, blood vessels and cartilage.
In the human body at birth, there are approximately 270 bones present; many of these fuse together during development, leaving a total of 206 separate bones in the adult, not counting numerous small sesamoid bones. The largest bone in the body is the femur or thigh-bone, and the smallest is the stapes in the middle ear.
The Greek word for bone is ὀστέον ("osteon"), hence the many terms that use it as a prefix—such as osteopathy.
Bone is not uniformly solid, but consists of a flexible matrix (about 30%) and bound minerals (about 70%) which are intricately woven and endlessly remodeled by a group of specialized bone cells. This unique composition and design allows bones to be relatively hard and strong while remaining lightweight.
Bone matrix is 90 to 95% composed of elastic collagen fibers, also known as ossein, and the remainder is ground substance. The elasticity of collagen improves fracture resistance. The matrix is hardened by the binding of inorganic mineral salt, calcium phosphate, in a chemical arrangement known as calcium hydroxylapatite. It is the bone mineralization that gives bones rigidity.
Bone is actively constructed and remodeled throughout life by special bone cells known as osteoblasts and osteoclasts. Within any single bone, the tissue is woven into two main patterns, known as cortical and cancellous bone, each with a different appearance and characteristics.
The hard outer layer of bones is composed of cortical bone, which is also called compact bone as it is much denser than cancellous bone. It forms the hard exterior (cortex) of bones. The cortical bone gives bone its smooth, white, and solid appearance, and accounts for 80% of the total bone mass of an adult human skeleton. It facilitates bone's main functions: to support the whole body, to protect organs, to provide levers for movement, and to store and release chemical elements, mainly calcium. It consists of multiple microscopic columns, each called an osteon or Haversian system. Each column is multiple layers of osteoblasts and osteocytes around a central canal called the Haversian canal. Volkmann's canals at right angles connect the osteons together. The columns are metabolically active, and as bone is reabsorbed and created the nature and location of the cells within the osteon will change. Cortical bone is covered by a periosteum on its outer surface, and an endosteum on its inner surface. The endosteum is the boundary between the cortical bone and the cancellous bone. The primary anatomical and functional unit of cortical bone is the osteon.
Cancellous bone, also called trabecular or spongy bone, is the internal tissue of the skeletal bone and is an open cell porous network. Cancellous bone has a higher surface-area-to-volume ratio than cortical bone and it is less dense. This makes it weaker and more flexible. The greater surface area also makes it suitable for metabolic activities such as the exchange of calcium ions. Cancellous bone is typically found at the ends of long bones, near joints and in the interior of vertebrae. Cancellous bone is highly vascular and often contains red bone marrow where hematopoiesis, the production of blood cells, occurs. The primary anatomical and functional unit of cancellous bone is the trabecula. The trabeculae are aligned towards the mechanical load distribution that a bone experiences within long bones such as the femur. As far as short bones are concerned, trabecular alignment has been studied in the vertebral pedicle. Thin formations of osteoblasts covered in endosteum create an irregular network of spaces, known as trabeculae. Within these spaces are bone marrow and hematopoietic stem cells that give rise to platelets, red blood cells and white blood cells. Trabecular marrow is composed of a network of rod- and plate-like elements that make the overall organ lighter and allow room for blood vessels and marrow. Trabecular bone accounts for the remaining 20% of total bone mass but has nearly ten times the surface area of compact bone.
The words "cancellous" and "trabecular" refer to the tiny lattice-shaped units (trabeculae) that form the tissue. It was first illustrated accurately in the engravings of Crisóstomo Martinez.
Bone marrow, also known as myeloid tissue in red bone marrow, can be found in almost any bone that holds cancellous tissue. In newborns, all such bones are filled exclusively with red marrow or hematopoietic marrow, but as the child ages the hematopoietic fraction decreases in quantity and the fatty/yellow fraction called marrow adipose tissue (MAT) increases in quantity. In adults, red marrow is mostly found in the bone marrow of the femur, the ribs, the vertebrae and pelvic bones.
Bone is a metabolically active tissue composed of several types of cells. These cells include osteoblasts, which are involved in the creation and mineralization of bone tissue, osteocytes, and osteoclasts, which are involved in the reabsorption of bone tissue. Osteoblasts and osteocytes are derived from osteoprogenitor cells, but osteoclasts are derived from the same cells that differentiate to form macrophages and monocytes. Within the marrow of the bone there are also hematopoietic stem cells. These cells give rise to other cells, including white blood cells, red blood cells, and platelets.
Osteoblasts are mononucleate bone-forming cells. They are located on the surface of osteon seams and make a protein mixture known as osteoid, which mineralizes to become bone. The osteoid seam is a narrow region of newly formed organic matrix, not yet mineralized, located on the surface of a bone. Osteoid is primarily composed of Type I collagen. Osteoblasts also manufacture hormones, such as prostaglandins, to act on the bone itself. The osteoblast creates and repairs new bone by building around itself. First, the osteoblast puts up collagen fibers, which serve as a framework for its work. The osteoblast then deposits calcium phosphate, which is hardened by hydroxide and bicarbonate ions. The brand new bone created by the osteoblast is called osteoid. When the osteoblast finishes working, it becomes trapped inside the bone as the osteoid hardens; at that point it is known as an osteocyte. Other osteoblasts remain on top of the new bone and are used to protect the underlying bone; these become known as lining cells.
Osteocytes are mostly inactive osteoblasts. Osteocytes originate from osteoblasts that have migrated into and become trapped and surrounded by bone matrix that they themselves produced. The spaces they occupy are known as lacunae. Osteocytes have many processes that reach out to meet osteoblasts and other osteocytes probably for the purposes of communication. Osteocytes remain in contact with other cells in the bone through gap junctions—coupled cell processes—which pass through small channels in the bone matrix called the canaliculi.
Osteoclasts are very large multinucleate cells that are responsible for the breakdown of bones by the process of bone resorption. New bone is then formed by the osteoblasts. Bone is constantly remodelled by the resorption of osteoclasts and created by osteoblasts. Osteoclasts are large cells with multiple nuclei located on bone surfaces in what are called "Howship's lacunae" (or "resorption pits"). These lacunae are the result of surrounding bone tissue that has been reabsorbed. Because the osteoclasts are derived from a monocyte stem-cell lineage, they are equipped with phagocytic-like mechanisms similar to circulating macrophages. Osteoclasts mature and/or migrate to discrete bone surfaces. Upon arrival, active enzymes, such as tartrate resistant acid phosphatase, are secreted against the mineral substrate. The reabsorption of bone by osteoclasts also plays a role in calcium homeostasis.
Bones consist of living cells embedded in a mineralized organic matrix. This matrix consists of organic components, mainly type I collagen – "organic" referring to materials produced by the human body – and inorganic components, primarily hydroxyapatite and other salts of calcium and phosphate. About 30% of the acellular part of bone consists of the organic components, and 70% of salts. The collagen fibers give bone its tensile strength, and the interspersed crystals of hydroxyapatite give bone its compressive strength. These effects are synergistic.
The inorganic composition of bone (bone mineral) is primarily formed from salts of calcium and phosphate, the major salt being hydroxyapatite (Ca10(PO4)6(OH)2). The exact composition of the matrix may be subject to change over time due to nutrition and biomineralization, with the ratio of calcium to phosphate varying between 1.3 and 2.0 (per weight), and trace minerals such as magnesium, sodium, potassium and carbonate also being found.
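The stoichiometric calcium-to-phosphorus weight ratio of pure hydroxyapatite can be checked directly from its formula and standard atomic masses. The result, about 2.16, sits above the 1.3–2.0 range cited for bone, which is consistent with biological apatite being calcium-deficient relative to the ideal crystal (that interpretation is an illustration added here, not stated in the text):

```python
# Stoichiometric Ca/P weight ratio of hydroxyapatite, Ca10(PO4)6(OH)2.
CA, P = 40.078, 30.974  # standard atomic masses (g/mol)

ratio = (10 * CA) / (6 * P)  # 10 Ca atoms per 6 P atoms in the formula unit
print(round(ratio, 2))       # 2.16 -- above the 1.3-2.0 range measured in bone
```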
Type I collagen composes 90–95% of the organic matrix, with the remainder of the matrix being a homogenous liquid called ground substance consisting of proteoglycans such as hyaluronic acid and chondroitin sulfate, as well as non-collagenous proteins such as osteocalcin, osteopontin or bone sialoprotein. Collagen consists of strands of repeating units, which give bone tensile strength, and are arranged in an overlapping fashion that prevents shear stress. The function of ground substance is not fully known. Two types of bone can be identified microscopically according to the arrangement of collagen: woven and lamellar.
Woven bone is produced when osteoblasts produce osteoid rapidly, which occurs initially in all fetal bones, but is later replaced by more resilient lamellar bone. In adults woven bone is created after fractures or in Paget's disease. Woven bone is weaker, with a smaller number of randomly oriented collagen fibers, but forms quickly; it is for this appearance of the fibrous matrix that the bone is termed "woven". It is soon replaced by lamellar bone, which is highly organized in concentric sheets with a much lower proportion of osteocytes to surrounding tissue. Lamellar bone, which makes its first appearance in humans in the fetus during the third trimester, is stronger and filled with many collagen fibers parallel to other fibers in the same layer (these parallel columns are called osteons). In cross-section, the fibers run in opposite directions in alternating layers, much like in plywood, assisting in the bone's ability to resist torsion forces. After a fracture, woven bone forms initially and is gradually replaced by lamellar bone during a process known as "bony substitution." Compared to woven bone, lamellar bone formation takes place more slowly. The orderly deposition of collagen fibers restricts the formation of osteoid to about 1 to 2 µm per day. Lamellar bone also requires a relatively flat surface to lay the collagen fibers in parallel or concentric layers.
The extracellular matrix of bone is laid down by osteoblasts, which secrete both collagen and ground substance. These synthesise collagen within the cell, and then secrete collagen fibrils. The collagen fibers rapidly polymerise to form collagen strands. At this stage they are not yet mineralised, and are called "osteoid". Calcium and phosphate precipitate on the surface of these strands, within days to weeks becoming crystals of hydroxyapatite.
In order to mineralise the bone, the osteoblasts secrete vesicles containing alkaline phosphatase. This cleaves the phosphate groups and acts as the foci for calcium and phosphate deposition. The vesicles then rupture and act as a centre for crystals to grow on. More particularly, bone mineral is formed from globular and plate structures.
There are five types of bones in the human body: long, short, flat, irregular, and sesamoid.
In the study of anatomy, anatomists use a number of anatomical terms to describe the appearance, shape and function of bones. Other anatomical terms are also used to describe the location of bones. Like other anatomical terms, many of these derive from Latin and Greek. Some anatomists still use Latin to refer to bones. The term "osseous", and the prefix "osteo-", referring to things related to bone, are still used commonly today.
Some examples of terms used to describe bones include the term "foramen" to describe a hole through which something passes, and a "canal" or "meatus" to describe a tunnel-like structure. A protrusion from a bone can be called a number of terms, including a "condyle", "crest", "spine", "eminence", "tubercle" or "tuberosity", depending on the protrusion's shape and location. In general, long bones are said to have a "head", "neck", and "body".
When two bones join together, they are said to "articulate". If the two bones have a fibrous connection and are relatively immobile, then the joint is called a "suture".
The formation of bone is called ossification. During the fetal stage of development this occurs by two processes: intramembranous ossification and endochondral ossification. Intramembranous ossification involves the formation of bone from connective tissue whereas endochondral ossification involves the formation of bone from cartilage.
Intramembranous ossification mainly occurs during formation of the flat bones of the skull but also the mandible, maxilla, and clavicles; the bone is formed from connective tissue such as mesenchyme tissue rather than from cartilage. The process includes: the development of the ossification center, calcification, trabeculae formation and the development of the periosteum.
Endochondral ossification occurs in long bones and most other bones in the body; it involves the development of bone from cartilage. This process includes the development of a cartilage model, its growth and development, development of the primary and secondary ossification centers, and the formation of articular cartilage and the epiphyseal plates.
Endochondral ossification begins with points in the cartilage called "primary ossification centers." They mostly appear during fetal development, though a few short bones begin their primary ossification after birth. They are responsible for the formation of the diaphyses of long bones, short bones and certain parts of irregular bones. Secondary ossification occurs after birth, and forms the epiphyses of long bones and the extremities of irregular and flat bones. The diaphysis and both epiphyses of a long bone are separated by a growing zone of cartilage (the epiphyseal plate). At skeletal maturity (18 to 25 years of age), all of the cartilage is replaced by bone, fusing the diaphysis and both epiphyses together (epiphyseal closure). In the upper limbs, only the diaphyses of the long bones and scapula are ossified. The epiphyses, carpal bones, coracoid process, medial border of the scapula, and acromion are still cartilaginous.
The following steps are followed in the conversion of cartilage to bone:
Bones have a variety of functions:
Bones serve a variety of mechanical functions. Together the bones in the body form the skeleton. They provide a frame to keep the body supported, and an attachment point for skeletal muscles, tendons, ligaments and joints, which function together to generate and transfer forces so that individual body parts or the whole body can be manipulated in three-dimensional space (the interaction between bone and muscle is studied in biomechanics).
Bones protect internal organs, such as the skull protecting the brain or the ribs protecting the heart and lungs. Because of the way that bone is formed, bone has a high compressive strength of about 170 MPa (1800 kgf/cm²), poor tensile strength of 104–121 MPa, and a very low shear stress strength (51.6 MPa). This means that bone resists pushing (compressional) stress well, resists pulling (tensional) stress less well, and only poorly resists shear stress (such as due to torsional loads). While bone is essentially brittle, it does have a significant degree of elasticity, contributed chiefly by collagen.
Mechanically, bones also have a special role in hearing. The ossicles are three small bones in the middle ear which are involved in sound transduction.
The cancellous part of bones contains bone marrow. Bone marrow produces blood cells in a process called hematopoiesis. Blood cells that are created in bone marrow include red blood cells, platelets and white blood cells. Progenitor cells such as the hematopoietic stem cell divide in a process called mitosis to produce precursor cells. These include precursors which eventually give rise to white blood cells, and erythroblasts which give rise to red blood cells. Unlike red and white blood cells, created by mitosis, platelets are shed from very large cells called megakaryocytes. This process of progressive differentiation occurs within the bone marrow. After the cells are matured, they enter the circulation. Every day, over 2.5 billion red blood cells and platelets, and 50–100 billion granulocytes are produced in this way.
As well as creating cells, bone marrow is also one of the major sites where defective or aged red blood cells are destroyed.
Depending on the species, age, and type of bone, bone cells make up to 15 percent of the bone. Growth factor storage – mineralized bone matrix stores important growth factors such as insulin-like growth factors, transforming growth factor, bone morphogenetic proteins and others.
Bone is constantly being created and replaced in a process known as remodeling. This ongoing turnover of bone is a process of resorption followed by replacement of bone with little change in shape. This is accomplished through osteoblasts and osteoclasts. Cells are stimulated by a variety of signals, and together referred to as a remodeling unit. Approximately 10% of the skeletal mass of an adult is remodelled each year. The purpose of remodeling is to regulate calcium homeostasis, repair microdamaged bones from everyday stress, and to shape the skeleton during growth. Repeated stress, such as weight-bearing exercise or bone healing, results in the bone thickening at the points of maximum stress (Wolff's law). It has been hypothesized that this is a result of bone's piezoelectric properties, which cause bone to generate small electrical potentials under stress.
The action of osteoblasts and osteoclasts are controlled by a number of chemical enzymes that either promote or inhibit the activity of the bone remodeling cells, controlling the rate at which bone is made, destroyed, or changed in shape. The cells also use paracrine signalling to control the activity of each other. For example, the rate at which osteoclasts resorb bone is inhibited by calcitonin and osteoprotegerin. Calcitonin is produced by parafollicular cells in the thyroid gland, and can bind to receptors on osteoclasts to directly inhibit osteoclast activity. Osteoprotegerin is secreted by osteoblasts and is able to bind RANK-L, inhibiting osteoclast stimulation.
Osteoblasts can also be stimulated to increase bone mass through increased secretion of osteoid and by inhibiting the ability of osteoclasts to break down osseous tissue. Increased secretion of osteoid is stimulated by the secretion of growth hormone by the pituitary, thyroid hormone and the sex hormones (estrogens and androgens). These hormones also promote increased secretion of osteoprotegerin. Osteoblasts can also be induced to secrete a number of cytokines that promote reabsorption of bone by stimulating osteoclast activity and differentiation from progenitor cells. Vitamin D, parathyroid hormone and stimulation from osteocytes induce osteoblasts to increase secretion of RANK-ligand and interleukin 6, which cytokines then stimulate increased reabsorption of bone by osteoclasts. These same compounds also increase secretion of macrophage colony-stimulating factor by osteoblasts, which promotes the differentiation of progenitor cells into osteoclasts, and decrease secretion of osteoprotegerin.
Bone volume is determined by the rates of bone formation and bone resorption. Recent research has suggested that certain growth factors may work to locally alter bone formation by increasing osteoblast activity. Numerous bone-derived growth factors have been isolated and classified via bone cultures. These factors include insulin-like growth factors I and II, transforming growth factor-beta, fibroblast growth factor, platelet-derived growth factor, and bone morphogenetic proteins. Evidence suggests that bone cells produce growth factors for extracellular storage in the bone matrix. The release of these growth factors from the bone matrix could cause the proliferation of osteoblast precursors. Essentially, bone growth factors may act as potential determinants of local bone formation. Research has suggested that cancellous bone volume in postmenopausal osteoporosis may be determined by the relationship between the total bone forming surface and the percent of surface resorption.
A number of diseases can affect bone, including arthritis, fractures, infections, osteoporosis and tumours. Conditions relating to bone can be managed by a variety of doctors, including rheumatologists for joints, and orthopedic surgeons, who may conduct surgery to fix broken bones. Other doctors, such as rehabilitation specialists may be involved in recovery, radiologists in interpreting the findings on imaging, and pathologists in investigating the cause of the disease, and family doctors may play a role in preventing complications of bone disease such as osteoporosis.
When a doctor sees a patient, a history and exam will be taken. Bones are then often imaged, in a process called radiography. This might include ultrasound, X-ray, CT scan, MRI scan and other imaging such as a bone scan, which may be used to investigate cancer. Other tests such as a blood test for autoimmune markers may be taken, or a synovial fluid aspirate may be taken.
In normal bone, fractures occur when there is significant force applied, or repetitive trauma over a long time. Fractures can also occur when a bone is weakened, such as with osteoporosis, or when there is a structural problem, such as when the bone remodels excessively (such as Paget's disease) or is the site of the growth of cancer. Common fractures include wrist fractures and hip fractures, associated with osteoporosis, vertebral fractures associated with high-energy trauma and cancer, and fractures of long bones. Not all fractures are painful. When serious, depending on the fracture's type and location, complications may include flail chest, compartment syndromes or fat embolism.
Compound fractures involve the bone's penetration through the skin. Some complex fractures can be treated by the use of bone grafting procedures that replace missing bone portions.
Fractures and their underlying causes can be investigated by X-rays, CT scans and MRIs. Fractures are described by their location and shape, and several classification systems exist, depending on the location of the fracture. A common long bone fracture in children is a Salter–Harris fracture. When fractures are managed, pain relief is often given, and the fractured area is often immobilised. This is to promote bone healing. In addition, surgical measures such as internal fixation may be used. Because of the immobilisation, people with fractures are often advised to undergo rehabilitation.
There are several types of tumour that can affect bone; examples of benign bone tumours include osteoma, osteoid osteoma, osteochondroma, osteoblastoma, enchondroma, giant cell tumour of bone, and aneurysmal bone cyst.
Cancer can arise in bone tissue, and bones are also a common site for other cancers to spread (metastasise) to. Cancers that arise in bone are called "primary" cancers, although such cancers are rare. Metastases within bone are "secondary" cancers, with the most common being breast cancer, lung cancer, prostate cancer, thyroid cancer, and kidney cancer. Secondary cancers that affect bone can either destroy bone (called a "lytic" cancer) or create bone (a "sclerotic" cancer). Cancers of the bone marrow inside the bone can also affect bone tissue, examples including leukemia and multiple myeloma. Bone may also be affected by cancers elsewhere in the body, which may release parathyroid hormone or parathyroid hormone-related peptide; this increases bone reabsorption and can lead to bone fractures.
Bone tissue that is destroyed or altered as a result of cancers is distorted, weakened, and more prone to fracture. This may lead to compression of the spinal cord, destruction of the marrow resulting in bruising, bleeding and immunosuppression, and is one cause of bone pain. If the cancer is metastatic, then there might be other symptoms depending on the site of the original cancer. Some bone cancers can also be felt.
Cancers of the bone are managed according to their type, their stage, prognosis, and what symptoms they cause. Many primary cancers of bone are treated with radiotherapy. Cancers of bone marrow may be treated with chemotherapy, and other forms of targeted therapy such as immunotherapy may be used. Palliative care, which focuses on maximising a person's quality of life, may play a role in management, particularly if the likelihood of survival within five years is poor.
Osteoporosis is a disease of bone where there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined in women by the World Health Organization as a bone mineral density of 2.5 standard deviations below peak bone mass, relative to the age- and sex-matched average. This density is measured using dual energy X-ray absorptiometry (DEXA), with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors, who may have developed osteoporosis and be at risk of fracture.
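The WHO criterion above is conventionally expressed as a T-score: the number of standard deviations by which a measured bone mineral density lies below the young-adult (peak) mean. A minimal sketch follows; the thresholds are the WHO definitions, but the example densities and reference values are purely illustrative, not taken from the text.

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """Standard deviations of a measured BMD from peak bone mass."""
    return (bmd - young_adult_mean) / young_adult_sd

def who_classification(t):
    # WHO thresholds: osteoporosis at T <= -2.5; T between -1.0 and
    # -2.5 is usually termed osteopenia (low bone mass).
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

# Hypothetical DEXA reading against illustrative reference values.
t = t_score(bmd=0.70, young_adult_mean=1.00, young_adult_sd=0.12)
print(round(t, 2), who_classification(t))  # -2.5 osteoporosis
```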
Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium and trace mineral supplements may also be advised, as may vitamin D. When medication is used, it may include bisphosphonates, strontium ranelate, and hormone replacement therapy.
Osteopathic medicine is a school of medical thought originally developed based on the idea of the link between the musculoskeletal system and overall health, but now very similar to mainstream medicine. Over 77,000 physicians in the United States are trained in osteopathic medical schools.
The study of bones and teeth is referred to as osteology. It is frequently used in anthropology, archeology and forensic science for a variety of tasks. This can include determining the nutritional, health, age or injury status of the individual the bones were taken from. Preparing fleshed bones for these types of studies can involve the process of maceration.
Typically anthropologists and archeologists study bone tools made by "Homo sapiens" and "Homo neanderthalensis". Bones can serve a number of uses such as projectile points or artistic pigments, and can also be made from external bones such as antlers.
Bird skeletons are very lightweight. Their bones are smaller and thinner, to aid flight. Among mammals, bats come closest to birds in terms of bone density, suggesting that small dense bones are a flight adaptation. Many bird bones have little marrow due to their being hollow.
A bird's beak is primarily made of bone as projections of the mandibles which are covered in keratin.
A deer's antlers are composed of bone which is an unusual example of bone being outside the skin of the animal once the velvet is shed.
The extinct predatory fish "Dunkleosteus" had sharp edges of hard exposed bone along its jaws.
Many animals possess an exoskeleton that is not made of bone. These include insects and crustaceans.
The proportion of cortical bone, which is 80% in the human skeleton, may be much lower in other animals, especially in marine mammals and marine turtles, or in various Mesozoic marine reptiles, such as ichthyosaurs, among others.
Many animals, particularly herbivores, practice osteophagy – the eating of bones. This is presumably carried out in order to replenish lacking phosphate.
Many bone diseases that affect humans also affect other vertebrates – an example of one disorder is skeletal fluorosis.
Bones from slaughtered animals have a number of uses. In prehistoric times, they have been used for making bone tools. They have further been used in bone carving, already important in prehistoric art, and also in modern times as crafting materials for buttons, beads, handles, bobbins, calculation aids, head nuts, dice, poker chips, pick-up sticks, ornaments, etc. A special genre is scrimshaw.
Bone glue can be made by prolonged boiling of ground or cracked bones, followed by filtering and evaporation to thicken the resulting fluid. Historically once important, bone glue and other animal glues today have only a few specialized uses, such as in antiques restoration. Essentially the same process, with further refinement, thickening and drying, is used to make gelatin.
Broth is made by simmering several ingredients for a long time, traditionally including bones.
Ground bones are used as an organic phosphorus-nitrogen fertilizer and as an additive in animal feed. Bones, in particular after calcination to bone ash, are used as a source of calcium phosphate for the production of bone china and previously also phosphorus chemicals.
Bone char, a porous, black, granular material primarily used for filtration and also as a black pigment, is produced by charring mammal bones.
Oracle bone script was a writing system used in Ancient China based on inscriptions in bones. Its name originates from oracle bones, which were mainly ox scapulae and turtle shells. The Ancient Chinese (mainly in the Shang Dynasty) would write their questions on the oracle bone and burn it, and the pattern of cracks in the bone would be interpreted as the answer to the question.
Pointing the bone at someone is considered bad luck in some cultures, such as among Australian Aboriginal peoples, for example by the Kurdaitcha.
The wishbones of fowl have been used for divination, and are still customarily used in a tradition to determine which one of two people pulling on either prong of the bone may make a wish.
Various cultures throughout history have adopted the custom of shaping an infant's head by the practice of artificial cranial deformation. A widely practised custom in China was that of foot binding to limit the normal growth of the foot. | https://en.wikipedia.org/wiki?curid=4099 |
Bretwalda
Bretwalda (also brytenwalda and bretenanwealda, sometimes capitalised) is an Old English word. The first record comes from the late 9th-century "Anglo-Saxon Chronicle". It is given to some of the rulers of Anglo-Saxon kingdoms from the 5th century onwards who had achieved overlordship of some or all of the other Anglo-Saxon kingdoms. It is unclear whether the word dates back to the 5th century and was used by the kings themselves or whether it is a later, 9th-century, invention. The term "bretwalda" also appears in a 10th-century charter of Æthelstan. The literal meaning of the word is disputed and may translate to either 'wide-ruler' or 'Britain-ruler'.
The rulers of Mercia were generally the most powerful of the Anglo-Saxon kings from the mid 7th century to the early 9th century but are not accorded the title of "bretwalda" by the "Chronicle", which had an anti-Mercian bias. The "Annals of Wales" continued to recognise the kings of Northumbria as "Kings of the Saxons" until the death of Osred I of Northumbria in 716.
The first syllable of the term "bretwalda" may be related to "Briton" or "Britain". The second element is taken to mean 'ruler' or 'sovereign', though is more literally 'wielder'. Thus, this interpretation would mean 'sovereign of Britain' or 'wielder of Britain'. The word may be a compound containing the Old English adjective "brytten" (from the verb "breotan" meaning 'to break' or 'to disperse'), an element also found in the terms "bryten rice" ('kingdom'), "bryten-grund" ('the wide expanse of the earth') and "bryten cyning" ('king whose authority was widely extended'). Though the origin is ambiguous, the draughtsman of the charter issued by Æthelstan used the term in a way that can only mean 'wide-ruler'.
The latter etymology was first suggested by John Mitchell Kemble who noted that "of six manuscripts in which this passage occurs, one only reads "Bretwalda": of the remaining five, four have "Bryten-walda" or "-wealda", and one "Breten-anweald", which is precisely synonymous with Brytenwealda"; that Æthelstan was called "brytenwealda ealles ðyses ealondes", which Kemble translates as 'ruler of all these islands'; and that "bryten-" is a common prefix to words meaning 'wide or general dispersion' and that the similarity to the word "bretwealh" ('Briton') is "merely accidental".
The first recorded use of the term "Bretwalda" comes from a West Saxon chronicle of the late 9th century that applied the term to Ecgberht, who ruled Wessex from 802 to 839. The chronicler also wrote down the names of seven kings that Bede listed in his "Historia ecclesiastica gentis Anglorum" in 731. All subsequent manuscripts of the "Chronicle" use the term "Brytenwalda", which may have represented the original term or derived from a common error.
There is no evidence that the term was a title that had any practical use, with implications of formal rights, powers and office, or even that it had any existence before the 9th century. Bede wrote in Latin and never used the term, and his list of kings holding "imperium" should be treated with caution, not least in that he overlooks kings such as Penda of Mercia, who clearly held some kind of dominance during his reign. Similarly, in his list of bretwaldas, the West Saxon chronicler ignored such Mercian kings as Offa.
The use of the term "Bretwalda" was the attempt by a West Saxon chronicler to make some claim of West Saxon kings to the whole of Great Britain. The concept of the overlordship of the whole of Britain was at least recognised in the period, whatever was meant by the term. Quite possibly it was a survival of a Roman concept of "Britain": it is significant that, while the hyperbolic inscriptions on coins and titles in charters often included the title "rex Britanniae", when England was unified the title used was "rex Angulsaxonum" ('king of the Anglo-Saxons').
For some time, the existence of the word "bretwalda" in the "Anglo-Saxon Chronicle", which was based in part on the list given by Bede in his "Historia Ecclesiastica", led historians to think that there was perhaps a "title" held by Anglo-Saxon overlords. This was particularly attractive as it would lay the foundations for the establishment of an English monarchy. The 20th-century historian Frank Stenton said of the Anglo-Saxon chronicler that "his inaccuracy is more than compensated by his preservation of the English title applied to these outstanding kings". He argued that the term "bretwalda" "falls into line with the other evidence which points to the Germanic origin of the earliest English institutions".
Over the later 20th century, this assumption was increasingly challenged. Patrick Wormald interpreted it as "less an objectively realized office than a subjectively perceived status" and emphasised the partiality of its usage in favour of Southumbrian rulers. In 1991, Steven Fanning argued that "it is unlikely that the term ever existed as a title or was in common usage in Anglo-Saxon England". The fact that Bede never mentioned a special title for the kings in his list implies that he was unaware of one. In 1995, Simon Keynes observed that "if Bede's concept of the Southumbrian overlord, and the chronicler's concept of the 'Bretwalda', are to be regarded as artificial constructs, which have no validity outside the context of the literary works in which they appear, we are released from the assumptions about political development which they seem to involve... we might ask whether kings in the eighth and ninth centuries were quite so obsessed with the establishment of a pan-Southumbrian state".
Modern interpretations view the concept of "bretwalda" overlordship as complex and an important indicator of how a 9th-century chronicler interpreted history and attempted to insert the increasingly powerful Saxon kings into that history.
A complex array of dominance and subservience existed during the Anglo-Saxon period. A king who used charters to grant land in another kingdom indicated such a relationship. If a king held sway over a large kingdom, such as when the Mercians dominated the East Anglians, the relationship would have been more equal than in the case of the Mercian dominance of the Hwicce, which was a comparatively small kingdom. Mercia was arguably the most powerful Anglo-Saxon kingdom for much of the late 7th through 8th centuries, though Mercian kings are missing from the two main "lists". For Bede, Mercia was a traditional enemy of his native Northumbria and he regarded powerful kings such as the pagan Penda as standing in the way of the Christian conversion of the Anglo-Saxons. Bede omits them from his list, even though it is evident that Penda held a considerable degree of power. Similarly, powerful Mercian kings such as Offa are missed out of the West Saxon "Anglo-Saxon Chronicle", which sought to demonstrate the legitimacy of their kings to rule over other Anglo-Saxon peoples.
Brouwer fixed-point theorem
Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function "f" mapping a nonempty compact convex set to itself, there is a point "x"0 such that "f"("x"0) = "x"0. The simplest forms of Brouwer's theorem are for continuous functions "f" from a closed interval ["a", "b"] in the real numbers to itself or from a closed disk "D" to itself. A more general form than the latter is for continuous functions from a convex compact subset "K" of Euclidean space to itself.
Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics.
In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem and the Borsuk–Ulam theorem.
This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry.
It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu.
The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened into several successive versions of the theorem. The general case was first proved in 1910 by Jacques Hadamard and by Luitzen Egbertus Jan Brouwer.
The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows:
This can be generalized to an arbitrary finite dimension:
A slightly more general version is as follows:
An even more general form is better known under a different name:
The theorem holds only for sets that are "compact" (thus, in particular, bounded and closed) and "convex" (or homeomorphic to convex). The following examples show why the pre-conditions are important.
Consider the function

"f"("x") = "x" + 1,

which is a continuous function from the real line R to itself. As it shifts every point to the right, it cannot have a fixed point. The space R is convex and closed, but not bounded.
Consider the function

"f"("x") = ("x" + 1)/2,

which is a continuous function from the open interval (−1, 1) to itself. In this interval, it shifts every point to the right, so it cannot have a fixed point. The space (−1, 1) is convex and bounded, but not closed. On the closed interval [−1, 1], the function "f" does have a fixed point, namely "f"(1) = 1.
Convexity is not strictly necessary for BFPT. Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms, BFPT is equivalent to forms in which the domain is required to be a closed unit ball "D""n". For the same reason it holds for every set that is homeomorphic to a closed ball (and therefore also closed, bounded, connected, without holes, etc.).
The following example shows that BFPT doesn't work for domains with holes. Consider the function "f"("x") = −"x", which is a continuous function from the unit circle to itself. Since −"x" ≠ "x" holds for any point of the unit circle, "f" has no fixed point. The analogous example works for the "n"-dimensional sphere (or any symmetric domain that does not contain the origin). The unit circle is closed and bounded, but it has a hole (and so it is not convex). The function "f" does have a fixed point for the unit disc, since it takes the origin to itself.
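The open-interval counterexample can be watched numerically. As a minimal sketch (the right-shift map "f"("x") = ("x" + 1)/2 below is an illustrative choice, not prescribed by the text), fixed-point iteration drives the iterates toward 1, a point that lies outside the open interval (−1, 1):

```python
def f(x):
    # Shifts every point of (-1, 1) to the right; the only candidate
    # fixed point, x = 1, lies outside the open interval.
    return (x + 1) / 2

x = 0.0
for _ in range(60):
    x = f(x)

# The iterates approach 1, which is not in (-1, 1): the "limit" of the
# iteration escapes the (non-closed) domain, so no fixed point exists there.
print(x)
```

On the closed interval [−1, 1] the same iteration converges to the genuine fixed point "f"(1) = 1, matching the text's remark that closedness is what the counterexample violates.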
A formal generalization of BFPT for "hole-free" domains can be derived from the Lefschetz fixed-point theorem.
The continuous function in this theorem is not required to be bijective or even surjective.
The theorem has several "real world" illustrations. Here are some examples.
1. Take two sheets of graph paper of equal size with coordinate systems on them, lay one flat on the table and crumple up (without ripping or tearing) the other one and place it, in any fashion, on top of the first so that the crumpled paper does not reach outside the flat one. There will then be at least one point of the crumpled sheet that lies directly above its corresponding point (i.e. the point with the same coordinates) of the flat sheet. This is a consequence of the "n" = 2 case of Brouwer's theorem applied to the continuous map that assigns to the coordinates of every point of the crumpled sheet the coordinates of the point of the flat sheet immediately beneath it.
2. Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country.
3. In three dimensions a consequence of the Brouwer fixed-point theorem is that, no matter how much you stir a cocktail in a glass (or a milkshake), when the liquid has come to rest, some point in the liquid will end up in exactly the same place in the glass as before you took any action, assuming that the final position of each point is a continuous function of its original position, that the liquid after stirring is contained within the space originally taken up by it, and that the glass (and stirred surface shape) maintain a convex volume. Ordering a cocktail shaken, not stirred defeats the convexity condition ("shaking" being defined as a dynamic series of non-convex inertial containment states in the vacant headspace under a lid). In that case, the theorem would not apply, and thus all points of the liquid disposition are potentially displaced from the original state.
The theorem is supposed to have originated from Brouwer's observation of a cup of coffee.
If one stirs to dissolve a lump of sugar, it appears there is always a point without motion.
He drew the conclusion that at any moment, there is a point on the surface that is not moving.
The fixed point is not necessarily the point that seems to be motionless, since the centre of the turbulence moves a little bit.
The result is not intuitive, since the original fixed point may become mobile when another fixed point appears.
Brouwer is said to have added: "I can formulate this splendid result differently, I take a horizontal sheet, and another identical one which I crumple, flatten and place on the other. Then a point of the crumpled sheet is in the same place as on the other sheet."
Brouwer "flattens" his sheet as with a flat iron, without removing the folds and wrinkles. Unlike the coffee cup example, the crumpled paper example also demonstrates that more than one fixed point may exist. This distinguishes Brouwer's result from other fixed-point theorems, such as Stefan Banach's, that guarantee uniqueness.
In one dimension, the result is intuitive and easy to prove. The continuous function "f" is defined on a closed interval ["a", "b"] and takes values in the same interval. Saying that this function has a fixed point amounts to saying that its graph (dark green in the figure on the right) intersects that of the function defined on the same interval ["a", "b"] which maps "x" to "x" (light green).
Intuitively, any continuous line from the left edge of the square to the right edge must necessarily intersect the green diagonal. To prove this, consider the function "g" which maps "x" to "f"("x") - "x". It is ≥ 0 on "a" and ≤ 0 on "b". By the intermediate value theorem, "g" has a zero in ["a", "b"]; this zero is a fixed point.
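This one-dimensional argument is directly algorithmic: once "g"("x") = "f"("x") − "x" changes sign over ["a", "b"], bisection homes in on a zero of "g", that is, a fixed point of "f". A minimal sketch (the choice of "f" = cos on [0, 1] is only an illustration, not from the text):

```python
import math

def fixed_point_bisect(f, a, b, tol=1e-12):
    # g(x) = f(x) - x satisfies g(a) >= 0 and g(b) <= 0 whenever f maps
    # [a, b] into itself, so the intermediate value theorem applies.
    g = lambda x: f(x) - x
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) >= 0:
            lo = mid   # keep the sign pattern (+ ... -) on [lo, hi]
        else:
            hi = mid
    return (lo + hi) / 2

# cos maps [0, 1] into [cos 1, 1], a subset of [0, 1], so a fixed point exists.
x = fixed_point_bisect(math.cos, 0.0, 1.0)
print(x)  # the unique solution of cos(x) = x, near 0.739
```

The loop is nothing more than the intermediate value theorem made effective: each step halves an interval on which "g" still changes sign.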
Brouwer is said to have expressed this as follows: "Instead of examining a surface, we will prove the theorem about a piece of string. Let us begin with the string in an unfolded state, then refold it. Let us flatten the refolded string. Again a point of the string has not changed its position with respect to its original position on the unfolded string."
The Brouwer fixed point theorem was one of the early achievements of algebraic topology, and is the basis of more general fixed point theorems which are important in functional analysis. The case "n" = 3 first was proved by Piers Bohl in 1904 (published in "Journal für die reine und angewandte Mathematik"). It was later proved by L. E. J. Brouwer in 1909. Jacques Hadamard proved the general case in 1910, and Brouwer found a different proof in the same year. Since these early proofs were all non-constructive indirect proofs, they ran contrary to Brouwer's intuitionist ideals. Although the existence of a fixed point is not constructive in the sense of constructivism in mathematics, methods to approximate fixed points guaranteed by Brouwer's theorem are now known.
To understand the prehistory of Brouwer's fixed point theorem one needs to pass through differential equations. At the end of the 19th century, the old problem of the stability of the solar system returned into the focus of the mathematical community.
Its solution required new methods. As noted by Henri Poincaré, who worked on the three-body problem, there is no hope to find an exact solution: "Nothing is more proper to give us an idea of the hardness of the three-body problem, and generally of all problems of Dynamics where there is no uniform integral and the Bohlin series diverge."
He also noted that the search for an approximate solution is no more efficient: "the more we seek to obtain precise approximations, the more the result will diverge towards an increasing imprecision".
He studied a question analogous to that of the surface movement in a cup of coffee. What can we say, in general, about the trajectories on a surface animated by a constant flow? Poincaré discovered that the answer can be found in what we now call the topological properties in the area containing the trajectory. If this area is compact, i.e. both closed and bounded, then the trajectory either becomes stationary, or it approaches a limit cycle. Poincaré went further; if the area is of the same kind as a disk, as is the case for the cup of coffee, there must necessarily be a fixed point. This fixed point is invariant under all functions which associate to each point of the original surface its position after a short time interval "t". If the area is a circular band, or if it is not closed, then this is not necessarily the case.
To understand differential equations better, a new branch of mathematics was born. Poincaré called it "analysis situs". The French Encyclopædia Universalis defines it as the branch which "treats the properties of an object that are invariant if it is deformed in any continuous way, without tearing". In 1886, Poincaré proved a result that is equivalent to Brouwer's fixed-point theorem, although the connection with the subject of this article was not yet apparent. A little later, he developed one of the fundamental tools for better understanding the analysis situs, now known as the fundamental group or sometimes the Poincaré group. This method can be used for a very compact proof of the theorem under discussion.
Poincaré's method was analogous to that of Émile Picard, a contemporary mathematician who generalized the Cauchy–Lipschitz theorem. Picard's approach is based on a result that would later be formalised by another fixed-point theorem, named after Banach. Instead of the topological properties of the domain, this theorem uses the fact that the function in question is a contraction.
At the dawn of the 20th century, the interest in analysis situs did not stay unnoticed. However, the necessity of a theorem equivalent to the one discussed in this article was not yet evident. Piers Bohl, a Latvian mathematician, applied topological methods to the study of differential equations. In 1904 he proved the three-dimensional case of our theorem, but his publication was not noticed.
It was Brouwer, finally, who gave the theorem its first patent of nobility. His goals were different from those of Poincaré. This mathematician was inspired by the foundations of mathematics, especially mathematical logic and topology. His initial interest lay in an attempt to solve Hilbert's fifth problem. In 1909, during a voyage to Paris, he met Henri Poincaré, Jacques Hadamard, and Émile Borel. The ensuing discussions convinced Brouwer of the importance of a better understanding of Euclidean spaces, and were the origin of a fruitful exchange of letters with Hadamard. For the next four years, he concentrated on the proof of certain great theorems on this question. In 1912 he proved the hairy ball theorem for the two-dimensional sphere, as well as the fact that every continuous map from the two-dimensional ball to itself has a fixed point. These two results in themselves were not really new. As Hadamard observed, Poincaré had shown a theorem equivalent to the hairy ball theorem. The revolutionary aspect of Brouwer's approach was his systematic use of recently developed tools such as homotopy, the underlying concept of the Poincaré group. In the following year, Hadamard generalised the theorem under discussion to an arbitrary finite dimension, but he employed different methods. Hans Freudenthal comments on the respective roles as follows: "Compared to Brouwer's revolutionary methods, those of Hadamard were very traditional, but Hadamard's participation in the birth of Brouwer's ideas resembles that of a midwife more than that of a mere spectator."
Brouwer's approach yielded its fruits, and in 1910 he also found a proof that was valid for any finite dimension, as well as other key theorems such as the invariance of dimension. In the context of this work, Brouwer also generalized the Jordan curve theorem to arbitrary dimension and established the properties connected with the degree of a continuous mapping. This branch of mathematics, originally envisioned by Poincaré and developed by Brouwer, changed its name. In the 1930s, analysis situs became algebraic topology.
The theorem proved its worth in more than one way. During the 20th century numerous fixed-point theorems were developed, and even a branch of mathematics called fixed-point theory.
Brouwer's theorem is probably the most important. It is also among the foundational theorems on the topology of topological manifolds and is often used to prove other important results such as the Jordan curve theorem.
Besides the fixed-point theorems for more or less contracting functions, there are many that have emerged directly or indirectly from the result under discussion. A continuous map from a closed ball of Euclidean space to its boundary cannot be the identity on the boundary. Similarly, the Borsuk–Ulam theorem says that a continuous map from the "n"-dimensional sphere to Rn has a pair of antipodal points that are mapped to the same point. In the finite-dimensional case, the Lefschetz fixed-point theorem provided from 1926 a method for counting fixed points. In 1930, Brouwer's fixed-point theorem was generalized to Banach spaces. This generalization is known as Schauder's fixed-point theorem, a result generalized further by S. Kakutani to multivalued functions. One also meets the theorem and its variants outside topology. It can be used to prove the Hartman-Grobman theorem, which describes the qualitative behaviour of certain differential equations near certain equilibria. Similarly, Brouwer's theorem is used for the proof of the Central Limit Theorem. The theorem can also be found in existence proofs for the solutions of certain partial differential equations.
Other areas are also touched. In game theory, John Nash used the theorem to prove that in the game of Hex there is a winning strategy for white. In economics, P. Bich explains that certain generalizations of the theorem show that its use is helpful for certain classical problems in game theory and generally for equilibria (Hotelling's law), financial equilibria and incomplete markets.
Brouwer's celebrity is not exclusively due to his topological work. The proofs of his great topological theorems are not constructive, and Brouwer's dissatisfaction with this is partly what led him to articulate the idea of constructivity. He became the originator and zealous defender of a way of formalising mathematics that is known as intuitionism, which at the time made a stand against set theory. Brouwer disavowed his original proof of the fixed-point theorem. The first algorithm to approximate a fixed point was proposed by Herbert Scarf. A subtle aspect of Scarf's algorithm is that it finds a point that is "almost fixed" by a function "f", but in general cannot find a point that is close to an actual fixed point. In mathematical language, if ε is chosen to be very small, Scarf's algorithm can be used to find a point "x" such that "f"("x") is very close to "x", i.e., "d"("f"("x"), "x") ≤ ε. But Scarf's algorithm cannot be used to find a point "x" such that "x" is very close to a fixed point: we cannot guarantee "d"("x", "y") ≤ ε, where "f"("y") = "y". Often this latter condition is what is meant by the informal phrase "approximating a fixed point".
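The gap between an "almost fixed" point and a point near a fixed point is easy to exhibit on a toy map (the map below is an illustrative example of my own, not Scarf's algorithm): a function can move every point by less than ε while some of those points remain far from the unique fixed point.

```python
DELTA = 1e-6

def f(x):
    # Very slow contraction toward 0 on [0, 1]; the unique fixed point is x* = 0.
    return (1.0 - DELTA) * x

eps = 1e-3
x = 1.0
residual = abs(f(x) - x)   # 1e-6 < eps: x is "almost fixed" by f
distance = abs(x - 0.0)    # yet x is at distance 1 from the true fixed point
print(residual < eps, distance)
```

Here any residual-based stopping rule accepts "x" = 1 as an approximate fixed point, even though it is as far from the actual fixed point as the domain allows.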
Brouwer's original 1911 proof relied on the notion of the degree of a continuous mapping. Modern accounts of the proof can also be found in the literature.
Let "K" denote the closed unit ball in R"n" centered at the origin. Suppose for simplicity that "f" : "K" → "K" is continuously differentiable. A regular value of "f" is a point "p" ∈ R"n" such that the Jacobian of "f" is non-singular at every point of the preimage of "p". In particular, by the inverse function theorem, every point of the preimage of "p" lies in the interior of "K". The degree of "f" at a regular value "p" is defined as the sum of the signs of the Jacobian determinant of "f" over the preimages of "p" under "f":

$\deg_p(f) = \sum_{x \in f^{-1}(p)} \operatorname{sign}\,\det Df(x).$
The degree is, roughly speaking, the number of "sheets" of the preimage "f" lying over a small open set around "p", with sheets counted oppositely if they are oppositely oriented. This is thus a generalization of winding number to higher dimensions.
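This sign-counting definition can be tried out in one dimension, where the Jacobian determinant is just "f"′ (the polynomial below is my own illustrative example, not from the text):

```python
import numpy as np

def degree_1d(coeffs, p):
    # Sum of signs of f'(x) over the real solutions of f(x) = p, mirroring
    # deg_p(f) = sum over f^{-1}(p) of sign det Df for a 1-D polynomial map.
    c = np.array(coeffs, dtype=float)
    c[-1] -= p                            # roots of f(x) - p = 0
    roots = np.roots(c)
    real = roots[np.abs(roots.imag) < 1e-9].real
    deriv = np.polyder(np.array(coeffs, dtype=float))
    return int(round(sum(np.sign(np.polyval(deriv, r)) for r in real)))

# f(x) = x^3 - x: the preimage of the regular value 0 is {-1, 0, 1}, where
# f' = 3x^2 - 1 has signs +, -, +, so the degree is 1 -- the same answer as
# at p = 5, where the preimage is a single point with f' > 0.
print(degree_1d([1, 0, -1, 0], 0.0), degree_1d([1, 0, -1, 0], 5.0))
```

The two "extra" sheets over "p" = 0 are oppositely oriented and cancel, which is exactly the bookkeeping the degree performs in higher dimensions.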
The degree satisfies the property of "homotopy invariance": let "f" and "g" be two continuously differentiable functions, and "H""t"("x") = "tf"("x") + (1 − "t")"g"("x") for 0 ≤ "t" ≤ 1. Suppose that the point "p" is a regular value of "H""t" for all "t". Then deg"p"("f") = deg"p"("g").
If "f" has no fixed point on the boundary of "K", then the function

"g"("x") = "x" − "f"("x")

is well-defined, and

"H"("t", "x") = "x" − "tf"("x")

defines a homotopy from the identity function to it. The homotopy has no zero on the boundary for any "t" ∈ [0, 1]: if "x" = "tf"("x") with |"x"| = 1, then "t"|"f"("x")| = 1 forces "t" = 1 and "x" = "f"("x"), which is excluded. The identity function has degree one at every point. In particular, the identity function has degree one at the origin, so "g" also has degree one at the origin. As a consequence, the preimage "g"−1(0) is not empty. The elements of "g"−1(0) are precisely the fixed points of the original function "f".
This requires some work to make fully general. The definition of degree must be extended to singular values of "f", and then to continuous functions. The more modern advent of homology theory simplifies the construction of the degree, and so has become a standard proof in the literature.
The proof uses the observation that the boundary of the "n"-disk "D""n" is "S""n"−1, the ("n" − 1)-sphere.
Suppose, for contradiction, that a continuous function "f" : "D""n" → "D""n" has "no" fixed point. This means that, for every point x in "D""n", the points "x" and "f"("x") are distinct. Because they're distinct, for every point x in "D""n", we can construct a unique ray from "f"("x") to "x" and follow the ray until it intersects the boundary "S""n"−1 (see illustration). By calling this intersection point "F"("x"), we define a function "F" : "D""n" → "S""n"−1 sending each point in the disk to its corresponding intersection point on the boundary. As a special case, whenever x itself is on the boundary, then the intersection point "F"("x") must be "x".
Consequently, F is a special type of continuous function known as a retraction: every point of the codomain (in this case "S""n"−1) is a fixed point of "F".
Intuitively it seems unlikely that there could be a retraction of "D""n" onto "S""n"−1, and in the case "n" = 1, the impossibility is more basic, because "S"0 (i.e., the endpoints of the closed interval "D"1) is not even connected. The case "n" = 2 is less obvious, but can be proven by using basic arguments involving the fundamental groups of the respective spaces: the retraction would induce an injective group homomorphism from the fundamental group of "S"1 to that of "D"2, but the first group is isomorphic to Z while the latter group is trivial, so this is impossible. The case "n" = 2 can also be proven by contradiction based on a theorem about non-vanishing vector fields.
For "n" > 2, however, proving the impossibility of the retraction is more difficult. One way is to make use of homology groups: the homology "H""n" − 1("D""n") is trivial, while "H""n" − 1("S""n"−1) is infinite cyclic. This shows that the retraction is impossible, because again the retraction would induce an injective group homomorphism from the latter to the former group.
To prove that a continuous map "f" : "B" → "B" has fixed points, one can assume that it is smooth, because if a map has no fixed points then its convolution with an appropriate mollifier (a smooth function of sufficiently small support and integral one) will produce a smooth function with no fixed points. As in the proof using homology, the problem is reduced to proving that there is no smooth retraction "F" from the ball "B" onto its boundary ∂"B". If ω is a volume form on the boundary, then by Stokes' theorem (using that "F" restricts to the identity on ∂"B", and that dω = 0 since ω is a top-degree form on ∂"B"),

$0 < \int_{\partial B} \omega = \int_{\partial B} F^*(\omega) = \int_B \mathrm{d}F^*(\omega) = \int_B F^*(\mathrm{d}\omega) = \int_B F^*(0) = 0,$

giving a contradiction.
More generally, this shows that there is no smooth retraction from any non-empty smooth orientable compact manifold onto its boundary. The proof using Stokes' theorem is closely related to the proof using homology, because the form ω generates the ("n" − 1)-st de Rham cohomology group of ∂"B", which is isomorphic to the ("n" − 1)-st homology group by de Rham's theorem.
The BFPT can be proved using Sperner's lemma. We now give an outline of the proof for the special case in which "f" is a function from the standard "n"-simplex Δ"n" to itself, where

$\Delta^n = \left\{ x \in \mathbb{R}^{n+1} : \sum_{i=0}^{n} x_i = 1 \text{ and } x_i \ge 0 \text{ for all } i \right\}.$

For every point "x" ∈ Δ"n", also "f"("x") ∈ Δ"n". Hence the sums of their coordinates are equal:

$\sum_{i=0}^{n} x_i = 1 = \sum_{i=0}^{n} f(x)_i.$
Hence, by the pigeonhole principle, for every formula_58 there must be an index formula_62 such that the formula_63th coordinate of formula_64 is greater than or equal to the formula_63th coordinate of its image under "f":
Moreover, if formula_64 lies on a "k"-dimensional sub-face of formula_56 then by the same argument, the index formula_63 can be selected from among the coordinates which are not zero on this sub-face.
We now use this fact to construct a Sperner coloring. For every triangulation of formula_56, the color of every vertex formula_64 is an index formula_63 such that formula_73
By construction, this is a Sperner coloring. Hence, by Sperner's lemma, there is an "n"-dimensional simplex whose vertices are colored with the entire set of available colors.
Because "f" is continuous, this simplex can be made arbitrarily small by choosing an arbitrarily fine triangulation. Hence, there must be a point formula_64 which satisfies the labeling condition in all coordinates: formula_75 for all formula_76
Because the sum of the coordinates of formula_64 and formula_78 must be equal, all these inequalities must actually be equalities. But this means that:
That is, formula_64 is a fixed point of formula_81
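The argument above is effectively constructive. The following sketch (hypothetical helper names; a contraction toward the barycenter is used as the test map, an assumption not taken from the text) labels the vertices of a fine triangulation of the standard 2-simplex as described and locates a fully labeled small triangle, whose barycenter approximates a fixed point:

```python
from fractions import Fraction

def f(x):
    """Hypothetical test map on the 2-simplex: contract toward the barycenter.
    Its unique fixed point is (1/3, 1/3, 1/3)."""
    c = Fraction(1, 3)
    return tuple(Fraction(7, 10) * xi + Fraction(3, 10) * c for xi in x)

def label(x):
    """Sperner label: an index i with x[i] > 0 and x[i] >= f(x)[i], which
    exists by the pigeonhole argument in the text."""
    y = f(x)
    for i, (xi, yi) in enumerate(zip(x, y)):
        if xi > 0 and xi >= yi:
            return i
    raise AssertionError("unreachable: x and f(x) both have coordinate sum 1")

def fully_labeled_triangle(k):
    """Scan the standard k-fold subdivision of the 2-simplex for a small
    triangle whose vertices carry all three labels (exists by Sperner's lemma)."""
    pt = lambda i, j: (Fraction(i, k), Fraction(j, k), Fraction(k - i - j, k))
    for i in range(k):
        for j in range(k - i):
            up = [pt(i, j), pt(i + 1, j), pt(i, j + 1)]
            if {label(v) for v in up} == {0, 1, 2}:
                return up
            if i + j <= k - 2:
                down = [pt(i + 1, j), pt(i, j + 1), pt(i + 1, j + 1)]
                if {label(v) for v in down} == {0, 1, 2}:
                    return down
    raise AssertionError("Sperner's lemma guarantees a fully labeled triangle")

tri = fully_labeled_triangle(60)
barycenter = [sum(v[i] for v in tri) / 3 for i in range(3)]
print([float(b) for b in barycenter])   # each coordinate is close to 1/3
```

Refining the triangulation (larger "k") shrinks the fully labeled triangle and sharpens the approximation, mirroring the limiting argument in the proof.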
There is also a quick proof, by Morris Hirsch, based on the impossibility of a differentiable retraction. The indirect proof starts by noting that the map "f" can be approximated by a smooth map retaining the property of not fixing a point; this can be done by using the Weierstrass approximation theorem, for example. One then defines a retraction as above which must now be differentiable. Such a retraction must have a non-singular value, by Sard's theorem, which is also non-singular for the restriction to the boundary (which is just the identity). Thus the inverse image would be a 1-manifold with boundary. The boundary would have to contain at least two end points, both of which would have to lie on the boundary of the original ball—which is impossible in a retraction.
R. Bruce Kellogg, Tien-Yien Li, and James A. Yorke turned Hirsch's proof into a computable proof by observing that the retract is in fact defined everywhere except at the fixed points. For almost any point, "q", on the boundary, (assuming it is not a fixed point) the one manifold with boundary mentioned above does exist and the only possibility is that it leads from "q" to a fixed point. It is an easy numerical task to follow such a path from "q" to the fixed point so the method is essentially computable. Later authors gave a conceptually similar path-following version of the homotopy proof which extends to a wide variety of related problems.
A variation of the preceding proof does not employ Sard's theorem, and goes as follows. If formula_82 is a smooth retraction, one considers the smooth deformation formula_83 and the smooth function
Differentiating under the integral sign, it is not difficult to check that "φ"′("t") = 0 for all "t", so "φ" is a constant function, which is a contradiction because "φ"(0) is the "n"-dimensional volume of the ball, while "φ"(1) is zero. The geometric idea is that "φ"("t") is the oriented area of "g""t"("B") (that is, the Lebesgue measure of the image of the ball via "g""t", taking into account multiplicity and orientation), and should remain constant (as is very clear in the one-dimensional case). On the other hand, as the parameter "t" passes from 0 to 1, the map "g""t" transforms continuously from the identity map of the ball to the retraction "r", which is a contradiction since the oriented area of the identity coincides with the volume of the ball, while the oriented area of "r" is necessarily 0, as its image is the boundary of the ball, a set of null measure.
A quite different proof given by David Gale is based on the game of Hex. The basic theorem about Hex is that no game can end in a draw. This is equivalent to the Brouwer fixed-point theorem for dimension 2. By considering "n"-dimensional versions of Hex, one can prove in general that Brouwer's theorem is equivalent to the determinacy theorem for Hex.
The Lefschetz fixed-point theorem says that if a continuous map "f" from a finite simplicial complex "B" to itself has only isolated fixed points, then the number of fixed points counted with multiplicities (which may be negative) is equal to the Lefschetz number
and in particular if the Lefschetz number is nonzero then "f" must have a fixed point. If "B" is a ball (or more generally is contractible) then the Lefschetz number is one because the only non-zero homology group is formula_86 and "f" acts as the identity on this group, so "f" has a fixed point.
In reverse mathematics, Brouwer's theorem can be proved in the system WKL0, and conversely, over the base system RCA0, Brouwer's theorem for a square implies the weak König's lemma, so this gives a precise description of the strength of Brouwer's theorem.
The Brouwer fixed-point theorem forms the starting point of a number of more general fixed-point theorems.
The straightforward generalization to infinite dimensions, i.e. using the unit ball of an arbitrary Hilbert space instead of Euclidean space, is not true. The main problem here is that the unit balls of infinite-dimensional Hilbert spaces are not compact. For example, in the Hilbert space ℓ2 of square-summable real (or complex) sequences, consider the map "f" : ℓ2 → ℓ2 which sends a sequence ("x""n") from the closed unit ball of ℓ2 to the sequence ("y""n") defined by
It is not difficult to check that this map is continuous, has its image in the unit sphere of ℓ2, but does not have a fixed point.
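One standard choice for such a map sends ("x"1, "x"2, ...) to (√(1 − ‖"x"‖2), "x"1, "x"2, ...). The sketch below (an illustration on finite truncations with trailing zeros, not the infinite-dimensional map itself) checks numerically that the image lies on the unit sphere:

```python
import math

def f(x):
    """(x1, x2, ...) -> (sqrt(1 - ||x||^2), x1, x2, ...), truncated to len(x).
    Dropping the last entry is harmless here because it is zero."""
    norm_sq = sum(xi * xi for xi in x)
    return [math.sqrt(max(0.0, 1.0 - norm_sq))] + x[:-1]

x = [0.5, 0.25, 0.125] + [0.0] * 17    # a point inside the closed unit ball
y = f(x)
print(sum(yi * yi for yi in y))        # ~1.0: the image lies on the unit sphere

# A fixed point would force ||x|| = 1, hence x1 = sqrt(1 - ||x||^2) = 0,
# while the shift forces every coordinate to equal x1; only the zero
# sequence satisfies both, and it does not lie on the sphere.
```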
The generalizations of the Brouwer fixed-point theorem to infinite dimensional spaces therefore all include a compactness assumption of some sort, and also often an assumption of convexity. See fixed-point theorems in infinite-dimensional spaces for a discussion of these theorems.
There is also finite-dimensional generalization to a larger class of spaces: If formula_88 is a product of finitely many chainable continua, then every continuous function formula_89 has a fixed point, where a chainable continuum is a (usually but in this case not necessarily metric) compact Hausdorff space of which every open cover has a finite open refinement formula_90, such that formula_91 if and only if formula_92. Examples of chainable continua include compact connected linearly ordered spaces and in particular closed intervals of real numbers.
The Kakutani fixed point theorem generalizes the Brouwer fixed-point theorem in a different direction: it stays in R"n", but considers upper hemi-continuous set-valued functions (functions that assign to each point of the set a subset of the set). It also requires compactness and convexity of the set.
The Lefschetz fixed-point theorem applies to (almost) arbitrary compact topological spaces, and gives a condition in terms of singular homology that guarantees the existence of fixed points; this condition is trivially satisfied for any map in the case of "D""n".
Benzoic acid
Benzoic acid is a white (or colorless) solid with the formula C6H5CO2H. It is the simplest aromatic carboxylic acid. The name is derived from gum benzoin, which was for a long time its only source. Benzoic acid occurs naturally in many plants and serves as an intermediate in the biosynthesis of many secondary metabolites. Salts of benzoic acid are used as food preservatives. Benzoic acid is an important precursor for the industrial synthesis of many other organic substances. The salts and esters of benzoic acid are known as benzoates.
Benzoic acid was discovered in the sixteenth century. The dry distillation of gum benzoin was first described by Nostradamus (1556), and then by Alexius Pedemontanus (1560) and Blaise de Vigenère (1596).
Justus von Liebig and Friedrich Wöhler determined the composition of benzoic acid. They also investigated how hippuric acid is related to benzoic acid.
In 1875 Salkowski discovered the antifungal properties of benzoic acid, which was long used in the preservation of benzoate-containing cloudberry fruits.
It is also one of the chemical compounds found in castoreum. This compound is gathered from the castor sacs of the North American beaver.
Benzoic acid is produced commercially by partial oxidation of toluene with oxygen. The process is catalyzed by cobalt or manganese naphthenates. The process uses abundant materials, and proceeds in high yield.
The first industrial process involved the reaction of benzotrichloride (trichloromethyl benzene) with calcium hydroxide in water, using iron or iron salts as catalyst. The resulting calcium benzoate is converted to benzoic acid with hydrochloric acid. The product contains significant amounts of chlorinated benzoic acid derivatives. For this reason, benzoic acid for human consumption was obtained by dry distillation of gum benzoin. Food-grade benzoic acid is now produced synthetically.
Benzoic acid is cheap and readily available, so the laboratory synthesis of benzoic acid is mainly practiced for its pedagogical value. It is a common undergraduate preparation.
Benzoic acid can be purified by recrystallization from water because of its high solubility in hot water and poor solubility in cold water. The avoidance of organic solvents for the recrystallization makes this experiment particularly safe. This process usually gives a yield of around 65%.
Like other nitriles and amides, benzonitrile and benzamide can be hydrolyzed to benzoic acid or its conjugate base under acidic or basic conditions.
Bromobenzene can be converted to benzoic acid by "carboxylation" of the intermediate phenylmagnesium bromide. This synthesis offers a convenient exercise for students to carry out a Grignard reaction, an important class of carbon–carbon bond forming reaction in organic chemistry.
Boltzmann distribution
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:
where "pi" is the probability of the system being in state "i", "εi" is the energy of that state, and the constant "kT" of the distribution is the product of the Boltzmann constant "k" and the thermodynamic temperature "T". The symbol formula_2 denotes proportionality (see below for the proportionality constant).
The term system here has a very wide meaning; it can range from a single atom to a macroscopic system such as a natural gas storage tank. Because of this, the Boltzmann distribution can be used to solve a very wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.
The "ratio" of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference:
The Boltzmann distribution is named after Ludwig Boltzmann, who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium".
The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.
The generalized Boltzmann distribution is a sufficient and necessary condition for the equivalence between the statistical mechanics definition of entropy (The Gibbs entropy formula formula_4) and the thermodynamic definition of entropy (formula_5, and the fundamental thermodynamic relation).
The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution. The former gives the probability that a system will be in a certain state as a function of that state's energy; in contrast, the latter is used to describe particle speeds in idealized gases.
The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and temperature of the system to which the distribution is applied. It is given as
where "pi" is the probability of state "i", "εi" the energy of state "i", "k" the Boltzmann constant, "T" the temperature of the system and "M" is the number of all states accessible to the system of interest. Implied parentheses around the denominator "kT" are omitted for brevity. The normalization denominator "Q" (denoted by some authors by "Z") is the canonical partition function
It results from the constraint that the probabilities of all accessible states must add up to 1.
The Boltzmann distribution is the distribution that maximizes the entropy
subject to the constraint that formula_9 equals a particular mean energy value (which can be proven using Lagrange multipliers).
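A sketch of that Lagrange-multiplier computation, with multipliers α for normalization and β for the mean energy (entropy written here in units of "k"):

```latex
\frac{\partial}{\partial p_i}\left[-\sum_j p_j \ln p_j
  - \alpha\left(\sum_j p_j - 1\right)
  - \beta\left(\sum_j p_j \varepsilon_j - \langle E\rangle\right)\right]
  = -\ln p_i - 1 - \alpha - \beta\varepsilon_i = 0
  \quad\Longrightarrow\quad
  p_i = e^{-1-\alpha}\, e^{-\beta\varepsilon_i}.
```

Normalization fixes "e"−1−α = 1/"Q", and the thermodynamic identification β = 1/("kT") recovers the Boltzmann form.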
The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database.
The distribution shows that states with lower energy will always have a higher probability of being occupied than the states with higher energy. It can also give us the quantitative relationship between the probabilities of the two states being occupied. The ratio of probabilities for states "i" and "j" is given as
where "pi" is the probability of state "i", "pj" the probability of state "j", and "εi" and "εj" are the energies of states "i" and "j", respectively.
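As a concrete numerical illustration (a hypothetical two-level system; the 0.01 eV gap and 300 K temperature are made-up values), the probabilities and their ratio can be computed directly:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant in J/K (exact since the 2019 SI)

def boltzmann_probabilities(energies, T):
    """p_i = exp(-E_i / (k_B T)) / Q, with Q the canonical partition function."""
    weights = [math.exp(-E / (k_B * T)) for E in energies]
    Q = sum(weights)
    return [w / Q for w in weights]

eV = 1.602176634e-19                  # 1 electronvolt in joules
energies = [0.0, 0.01 * eV]           # made-up two-level system, 0.01 eV gap
p = boltzmann_probabilities(energies, T=300.0)

# The ratio depends only on the energy difference (the Boltzmann factor):
ratio = p[1] / p[0]
print(ratio)                          # exp(-0.01 eV / (k_B * 300 K)), roughly 0.68
```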
The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over energy states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state "i" is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state "i". This probability is equal to the number of particles in state "i" divided by the total number of particles in the system, that is the fraction of particles that occupy state "i".
where "Ni" is the number of particles in state "i" and "N" is the total number of particles in the system. We may use the Boltzmann distribution to find this probability that is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state "i" as a function of the energy of that state is
This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules that we are interested in going from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not to be observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition.
The Boltzmann distribution is related to the softmax function commonly used in machine learning.
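The correspondence is exact: the softmax of a score vector "z" is the Boltzmann distribution with energies "εi" = −"zi" at "kT" = 1. A minimal sketch checking this:

```python
import math

def softmax(z):
    """Numerically stable softmax."""
    m = max(z)
    exps = [math.exp(zi - m) for zi in z]
    total = sum(exps)
    return [e / total for e in exps]

z = [2.0, 1.0, 0.1]                   # arbitrary scores ("logits")
p_softmax = softmax(z)

# Boltzmann distribution with energies E_i = -z_i at kT = 1:
E = [-zi for zi in z]
w = [math.exp(-Ei) for Ei in E]       # Boltzmann factors exp(-E_i / kT)
p_boltzmann = [wi / sum(w) for wi in w]

assert all(abs(a - b) < 1e-12 for a, b in zip(p_softmax, p_boltzmann))
```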
The Boltzmann distribution appears in statistical mechanics when considering isolated (or nearly-isolated) systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble, but some special cases (derivable from the canonical ensemble) also show the Boltzmann distribution in different aspects:
Although these cases have strong similarities, it is helpful to distinguish them as they generalize in different ways when the crucial assumptions are changed:
In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning, it is called a log-linear model. In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, the restricted Boltzmann machine, energy-based models, and the deep Boltzmann machine.
The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries. Simple and versatile, this new method holds potential for many economic and environmental applications.
The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.
Blythe Danner
Blythe Katherine Danner (born February 3, 1943) is an American actress. She is the recipient of several accolades, including two Primetime Emmy Awards for Outstanding Supporting Actress in a Drama Series for her role as Izzy Huffstodt on "Huff" (2004–2006), and a Tony Award for Best Featured Actress in a Play for her performance in "Butterflies Are Free" on Broadway (1969–1972). Danner was twice nominated for the Primetime Emmy for Outstanding Guest Actress in a Comedy Series for portraying Marilyn Truman on "Will & Grace" (2001–06; 2018), and the Primetime Emmy for Outstanding Lead Actress in a Miniseries or Movie for her roles in "We Were the Mulvaneys" (2002) and "Back When We Were Grownups" (2004). For the latter, she also received a Golden Globe Award nomination.
Danner played Dina Byrnes in "Meet the Parents" (2000) and its sequels "Meet the Fockers" (2004) and "Little Fockers" (2010). She has collaborated on several occasions with Woody Allen, appearing in three of his films: "Another Woman" (1988), "Alice" (1990), and "Husbands and Wives" (1992). Her other notable film credits include "1776" (1972), "Hearts of the West" (1975), "The Great Santini" (1979), "Mr. and Mrs. Bridge" (1990), "The Prince of Tides" (1991), "To Wong Foo, Thanks for Everything! Julie Newmar" (1995), "The Myth of Fingerprints" (1997), "The X-Files" (1998), "Forces of Nature" (1999), "The Last Kiss" (2006), "Paul" (2011), "Hello I Must Be Going" (2012), "I'll See You in My Dreams" (2015), and "What They Had" (2018).
Danner is the sister of Harry Danner and the widow of Bruce Paltrow. She is the mother of actress Gwyneth Paltrow and director Jake Paltrow.
Danner was born in Philadelphia, Pennsylvania, the daughter of Katharine (née Kile; 1909–2006) and Harry Earl Danner, a bank executive. She has a brother, opera singer and actor Harry Danner; a sister, performer-turned-director Dorothy "Dottie" Danner; and a maternal half-brother, violin maker William Moennig. Danner has Pennsylvania Dutch (German), and some English and Irish, ancestry; her maternal grandmother was a German immigrant, and one of her paternal great-grandmothers was born in Barbados (to a family of European descent).
Danner graduated from George School, a Quaker high school located near Newtown, Bucks County, Pennsylvania in 1960.
A graduate of Bard College, Danner's first roles included the 1967 musical "Mata Hari" (closed out of town), and the 1968 Off-Broadway production of "Summertree". Her early Broadway appearances included "Cyrano de Bergerac" (1968) and her Theatre World Award-winning performance in "The Miser" (1969). She won the Tony Award for Best Featured Actress in a Play for portraying a free-spirited divorcée in "Butterflies Are Free" (1970).
In 1972, Danner portrayed Martha Jefferson in the film version of "1776". That same year, she played a wife whose husband has been unfaithful, opposite Peter Falk and John Cassavetes, in the "Columbo" episode "Etude in Black".
Her earliest starring film role was opposite Alan Alda in "To Kill a Clown" (1972). Danner appeared in the episode of "M*A*S*H" entitled "The More I See You", playing the love interest of Alda's character Hawkeye Pierce. She played lawyer Amanda Bonner in television's "Adam's Rib", also opposite Ken Howard as Adam Bonner. She played Zelda Fitzgerald in "F. Scott Fitzgerald and 'The Last of the Belles'" (1974). She was the eponymous heroine in the film "Lovin' Molly" (1974) (directed by Sidney Lumet). She appeared in "Futureworld", playing Tracy Ballard with co-star Peter Fonda (1976). In the 1982 TV movie "Inside the Third Reich", she played the wife of Albert Speer. In the film version of Neil Simon's semi-autobiographical play "Brighton Beach Memoirs" (1986), she portrayed a middle-aged Jewish mother. She has appeared in two films based on the novels of Pat Conroy, "The Great Santini" (1979) and "The Prince of Tides" (1991), as well as two television movies adapted from books by Anne Tyler, "Saint Maybe" and "Back When We Were Grownups", both for the Hallmark Hall of Fame.
Danner appeared opposite Robert De Niro in the 2000 comedy hit "Meet the Parents", and its sequels, "Meet the Fockers" (2004) and "Little Fockers" (2010).
From 2001 to 2006, she regularly appeared on NBC's sitcom "Will & Grace" as Will Truman's mother Marilyn. From 2004 to 2006, she starred in the main cast of the comedy-drama series "Huff". In 2005, she was nominated for three Primetime Emmy Awards for her work on "Will & Grace", "Huff", and the television film "Back When We Were Grownups", winning for her role in "Huff". The following year, she won a second consecutive Emmy Award for "Huff". For 25 years, she has been a regular performer at the Williamstown Summer Theater Festival, where she also serves on the Board of Directors.
In 2006, Danner was awarded an inaugural Katharine Hepburn Medal by Bryn Mawr College's Katharine Houghton Hepburn Center. In 2015, Danner was inducted into the American Theater Hall of Fame.
Danner has been involved in environmental issues such as recycling and conservation for over 30 years. She has been active with INFORM, Inc., is on the Board of Environmental Advocates of New York and the Board of Directors of the Environmental Media Association, and won the 2002 EMA Board of Directors Ongoing Commitment Award. In 2011, Danner joined Moms Clean Air Force, to help call on parents to join in the fight against toxic air pollution.
After the death of her husband Bruce Paltrow from oral cancer, she became involved with the nonprofit Oral Cancer Foundation. In 2005, she filmed a public service announcement to raise public awareness of the disease and the need for early detection. She has since appeared on morning talk shows and given interviews in such magazines as "People". The Bruce Paltrow Oral Cancer Fund, administered by the Oral Cancer Foundation, raises funding for oral cancer research and treatment, with a particular focus on those communities in which healthcare disparities exist.
She has also appeared in commercials for Prolia, a brand of denosumab used in the treatment of osteoporosis.
Danner was married to producer and director Bruce Paltrow, who died of oral cancer in 2002. She and Paltrow had two children together, actress Gwyneth Paltrow and director Jake Paltrow.
Danner's niece is the actress Katherine Moennig, the daughter of her maternal half-brother William.
Danner co-starred with her daughter in the 1992 television film "Cruel Doubt" and again in the 2003 film "Sylvia", in which she portrayed Aurelia Plath, mother to Gwyneth's title role of Sylvia Plath.
Danner is a practitioner of transcendental meditation, which she has described as "very helpful and comforting."
Bioleaching
Bioleaching is the extraction of metals from their ores through the use of living organisms. This is much cleaner than the traditional heap leaching using cyanide. Bioleaching is one of several applications within biohydrometallurgy and several methods are used to recover copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt.
Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including "Acidithiobacillus ferrooxidans" (formerly known as "Thiobacillus ferrooxidans") and "Acidithiobacillus thiooxidans" (formerly known as "Thiobacillus thiooxidans"). As a general principle, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is to oxidize the ore further and, crucially, to regenerate the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, Fe2+) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal.
Pyrite leaching (FeS2):
In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+):
The ferrous ion is then oxidized by bacteria using oxygen:
Thiosulfate is also oxidized by bacteria to give sulfate:
The ferric ion produced in reaction (2) oxidizes more sulfide as in reaction (1), closing the cycle and giving the net reaction:
The net products of the reaction are soluble ferrous sulfate and sulfuric acid.
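The stoichiometry of that net reaction can be sanity-checked by element bookkeeping. A short sketch (assuming the conventional balanced form, 2 FeS2 + 7 O2 + 2 H2O → 2 FeSO4 + 2 H2SO4):

```python
from collections import Counter

def count_atoms(species):
    """Sum element counts over (coefficient, composition) pairs."""
    total = Counter()
    for coeff, comp in species:
        for element, n in comp.items():
            total[element] += coeff * n
    return total

# Net pyrite bioleaching reaction in its balanced form:
#   2 FeS2 + 7 O2 + 2 H2O -> 2 FeSO4 + 2 H2SO4
reactants = [(2, {"Fe": 1, "S": 2}),           # pyrite, FeS2
             (7, {"O": 2}),                     # oxygen
             (2, {"H": 2, "O": 1})]             # water
products  = [(2, {"Fe": 1, "S": 1, "O": 4}),   # ferrous sulfate, FeSO4
             (2, {"H": 2, "S": 1, "O": 4})]    # sulfuric acid, H2SO4
print(count_atoms(reactants) == count_atoms(products))  # True: mass balance holds
```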
The microbial oxidation process occurs at the cell membrane of the bacteria. The electrons pass into the cells and are used in biochemical processes to produce energy for the bacteria while reducing oxygen to water. The critical reaction is the oxidation of sulfide by ferric iron. The main role of the bacterial step is the regeneration of this reactant.
The process for copper is very similar, but the efficiency and kinetics depend on the copper mineralogy. The most efficient minerals are supergene minerals such as chalcocite, Cu2S and covellite, CuS. The main copper mineral chalcopyrite (CuFeS2) is not leached very efficiently, which is why the dominant copper-producing technology remains flotation, followed by smelting and refining. The leaching of CuFeS2 follows the two stages of being dissolved and then further oxidised, with Cu2+ ions being left in solution.
Chalcopyrite leaching:
net reaction:
In general, sulfides are first oxidized to elemental sulfur, whereas disulfides are oxidized to give thiosulfate, and the processes above can be applied to other sulfidic ores. Bioleaching of non-sulfidic ores such as pitchblende also uses ferric iron as an oxidant (e.g., UO2 + 2 Fe3+ → UO22+ + 2 Fe2+). In this case, the sole purpose of the bacterial step is the regeneration of Fe3+. Sulfidic iron ores can be added to speed up the process and provide a source of iron. Bioleaching of non-sulfidic ores by layering of waste sulfides and elemental sulfur, colonized by "Acidithiobacillus" spp., has been accomplished, which provides a strategy for accelerated leaching of materials that do not contain sulfide minerals.
The dissolved copper (Cu2+) ions are removed from the solution by ligand exchange solvent extraction, which leaves other ions in the solution. The copper is removed by bonding to a ligand, which is a large molecule consisting of a number of smaller groups, each possessing a lone electron pair. The ligand-copper complex is extracted from the solution using an organic solvent such as kerosene:
The ligand donates electrons to the copper, producing a complex: a central metal atom (copper) bonded to the ligand. Because this complex has no charge, it is no longer attracted to polar water molecules and dissolves in the kerosene, which is then easily separated from the solution. Because the initial reaction is reversible, the equilibrium is determined by pH: adding concentrated acid reverses the equation, and the copper ions go back into aqueous solution.
Then the copper is passed through an electro-winning process to increase its purity: An electric current is passed through the resulting solution of copper ions. Because copper ions have a 2+ charge, they are attracted to the negative cathodes and collect there.
The copper can also be concentrated and separated by displacing the copper with Fe from scrap iron:
The electrons lost by the iron are taken up by the copper. Copper is the oxidising agent (it accepts electrons), and iron is the reducing agent (it loses electrons).
Traces of precious metals such as gold may be left in the original solution. Treating the mixture with sodium cyanide in the presence of free oxygen dissolves the gold. The gold is removed from the solution by adsorbing (taking it up on the surface) to charcoal.
Several species of fungi can be used for bioleaching. Fungi can be grown on many different substrates, such as electronic scrap, catalytic converters, and fly ash from municipal waste incineration. Experiments have shown that two fungal strains ("Aspergillus niger, Penicillium simplicissimum") were able to mobilize Cu and Sn by 65%, and Al, Ni, Pb, and Zn by more than 95%. "Aspergillus niger" can produce some organic acids such as citric acid. This form of leaching does not rely on microbial oxidation of metal but rather uses microbial metabolism as source of acids that directly dissolve the metal.
Extraction involves many expensive steps such as roasting, pressure oxidation, and smelting, which require sufficient concentrations of elements in ores and are environmentally unfriendly. Low concentrations are not a problem for bacteria because they simply ignore the waste that surrounds the metals, attaining extraction yields of over 90% in some cases. These microorganisms actually gain energy by breaking down minerals into their constituent elements. The mining company simply collects the ions out of the solution after the bacteria have finished. In addition, high-grade ores are a limited resource.
At the current time, it is more economical to smelt copper ore rather than to use bioleaching, since the concentration of copper in its ore is in general quite high. The profit obtained from the speed and yield of smelting justifies its cost. Nonetheless, at Escondida in Chile, the largest copper mine in the world, the process seems to be favorable.
However, the concentration of gold in its ore is in general very low. In this case, the lower cost of bacterial leaching outweighs the time it takes to extract the metal.
Economically, bioleaching can also be very expensive, and many operations, once started, cannot keep up with demand and end up in debt. Projects like the Finnish Talvivaara mine proved to be environmentally and economically disastrous.
Bouldering
Bouldering is a form of rock climbing that is performed on small rock formations or artificial rock walls without the use of ropes or harnesses. While bouldering can be done without any equipment, most climbers use climbing shoes to help secure footholds, chalk to keep their hands dry and to provide a firmer grip, and bouldering mats to prevent injuries from falls. Unlike free solo climbing, which is also performed without ropes, bouldering problems (the sequence of moves that a climber performs to complete the climb) are usually less than 6 meters (20 ft.) tall. Traverses, which are a form of boulder problem, require the climber to climb horizontally from one end to another. Artificial climbing walls allow boulderers to climb indoors in areas without natural boulders. In addition, bouldering competitions take place in both indoor and outdoor settings.
The sport was originally a method of training for roped climbs and mountaineering, so climbers could practice specific moves at a safe distance from the ground. Additionally, the sport served to build stamina and increase finger strength. Throughout the 20th century, bouldering evolved into a separate discipline. Individual problems are assigned ratings based on difficulty. Although there have been various rating systems used throughout the history of bouldering, modern problems usually use either the V-scale or the Fontainebleau scale.
The growing popularity of bouldering has caused several environmental concerns, including soil erosion and trampled vegetation, as climbers hike off-trail to reach bouldering sites. This has caused some landowners to restrict access or prohibit bouldering altogether.
The characteristics of boulder problems depend largely on the type of rock being climbed. For example, granite often features long cracks and slabs while sandstone rocks are known for their steep overhangs and frequent horizontal breaks. Limestone and volcanic rock are also used for bouldering.
There are many prominent bouldering areas throughout the United States, including Hueco Tanks in Texas, Mount Evans in Colorado, and The Buttermilks in Bishop, California. Squamish, British Columbia is one of the most popular bouldering areas in Canada. Europe also hosts a number of bouldering sites, such as Fontainebleau in France, Albarracín in Spain, and various mountains throughout Switzerland. Africa's most prominent bouldering areas include the more established Rocklands in South Africa, the newer Oukaimeden in Morocco or more recently opened areas like Chimanimani in Zimbabwe.
Artificial climbing walls are used to simulate boulder problems in an indoor environment, usually at climbing gyms. These walls are constructed with wooden panels, polymer cement panels, concrete shells, or precast molds of actual rock walls. Holds, usually made of plastic, are then bolted onto the wall to create problems. The walls often feature steep overhanging surfaces which force the climber to employ highly technical movements while supporting much of their weight with their upper body strength. More recently, however, many problems set on flat walls require the climber to execute a series of coordinated movements to complete the route. This style likely originated at the Stuntwerk gym in Germany and closely resembles the sport of parkour; the IFSC Climbing World Championships have noticeably included more such problems in recent competitions.
Climbing gyms often feature multiple problems within the same section of wall. In the US the most common method routesetters use to designate the intended problem is by placing colored tape next to each hold. For example, red tape would indicate one bouldering problem while green tape would be used to set a different problem in the same area. Across much of the rest of the world problems and grades are usually designated using a set color of plastic hold to indicate problems and their difficulty levels. Using colored holds to set has certain advantages, the most notable of which are that it makes it more obvious where the holds for a problem are, and that there is no chance of tape being accidentally kicked off footholds. Smaller, resource-poor climbing gyms may prefer taped problems because large, expensive holds can be used in multiple routes by marking them with more than one color of tape.
Bouldering problems are assigned numerical difficulty ratings by routesetters and climbers. The two most widely used rating systems are the V-scale and the Fontainebleau system.
The V-scale, which originated in the United States, is an open-ended rating system with higher numbers indicating a higher degree of difficulty. The V1 rating indicates that a problem can be completed by a novice climber in good physical condition after several attempts. The scale begins at V0; the highest V rating yet assigned to a bouldering problem is V17, first proposed in 2016. Some climbing gyms also use a VB grade to indicate beginner problems.
The Fontainebleau scale follows a similar system, with each numerical grade divided into three ratings with the letters "a", "b", and "c". For example, Fontainebleau 7A roughly corresponds with V6, while Fontainebleau 7C+ is equivalent to V10. In both systems, grades are further differentiated by appending "+" to indicate a small increase in difficulty. Despite this level of specificity, ratings of individual problems are often controversial, as ability level is not the only factor that affects how difficult a problem may be for a particular climber. Height, arm length, flexibility, and other body characteristics can also be relevant to perceived difficulty.
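The rough correspondence between the two grading systems can be expressed as a lookup table. The following sketch encodes one commonly cited mapping; exact equivalences vary between sources, so the table should be read as approximate:

```python
# Approximate V-scale to Fontainebleau grade correspondence.
# Anchored on the equivalences mentioned above (7A ~ V6, 7C+ ~ V10);
# the rest of the table is a commonly cited but inexact mapping.
V_TO_FONT = {
    "V0": "4", "V1": "5", "V2": "5+", "V3": "6A", "V4": "6B",
    "V5": "6C", "V6": "7A", "V7": "7A+", "V8": "7B", "V9": "7C",
    "V10": "7C+", "V11": "8A", "V12": "8A+", "V13": "8B",
    "V14": "8B+", "V15": "8C", "V16": "8C+", "V17": "9A",
}

def to_font(v_grade):
    """Look up the approximate Fontainebleau equivalent of a V grade."""
    return V_TO_FONT[v_grade]
```

A climber comparing a V6 gym problem with a Fontainebleau guidebook grade of 7A would, under this mapping, be attempting roughly the same difficulty.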
Highball bouldering is the climbing of high, difficult, and tall boulders. Using the same protection as standard bouldering, climbers venture up house-sized rocks that test not only their physical skill and strength but also their mental focus. Highballing, like most of climbing, is open to interpretation. Most climbers say anything above 15 feet is a highball, and highballs can range up to 35–40 feet in height, beyond which highball bouldering turns into free soloing.
Highball bouldering may have begun in 1961 when John Gill, without top-rope rehearsal, bouldered a steep face on a 37-foot (11-meter) granite spire called "The Thimble". Decades after its solution, the route has been described as one of the world's most famous bouldering problems, together with "Dreamtime" in Cresciano, Switzerland, for which Fred Nicole proposed a grade of 8C that most repeaters later downgraded to 8B+.
Unlike other climbing sports, bouldering can be performed safely and effectively with very little equipment, an aspect which makes the discipline highly appealing, though opinions differ on what the minimum is. While bouldering pioneer John Sherman asserted that "The only gear really needed to go bouldering is boulders," others suggest climbing shoes and a chalkbag – a small pouch where ground-up chalk is kept – as the bare minimum. More experienced boulderers typically bring multiple pairs of climbing shoes, chalk, brushes, crash pads, and a skincare kit.
Climbing shoes have the most direct impact on performance. Besides protecting the climber's feet from rough surfaces, climbing shoes are designed to help the climber secure footholds. Climbing shoes typically fit much tighter than other athletic footwear and often curl the toes downwards to enable precise footwork. They are manufactured in a variety of styles to perform in different situations. For example, high-top shoes provide better protection for the ankle, while low-top shoes provide greater flexibility and freedom of movement. Stiffer shoes excel at securing small edges, whereas softer shoes provide greater sensitivity. The front of the shoe, called the "toe box," can be asymmetric, which performs well on overhanging rocks, or symmetric, which is better suited for vertical problems and slabs.
To absorb sweat, most boulderers use gymnastics chalk on their hands, stored in a chalkbag, which can be tied around the waist (also called sport climbing chalkbags), allowing the climber to reapply chalk during the climb. There are also versions of floor chalkbags (also called bouldering chalkbags), which are usually bigger than sport climbing chalkbags and are meant to be kept on the floor while climbing; this is because boulders do not usually have so many movements as to require chalking up more than once. Different sizes of brushes are used to remove excess chalk and debris from boulders in between climbs; they are often attached to the end of a long straight object in order to reach higher holds. Crash pads, also referred to as bouldering mats, are foam cushions placed on the ground to protect climbers from falls.
Boulder problems are generally shorter than 6 meters (20 ft) from ground to top. This makes the sport significantly safer than free solo climbing, which is also performed without ropes, but with no upper limit on the height of the climb. However, minor injuries are common in bouldering, particularly sprained ankles and wrists. Two factors contribute to the frequency of injuries in bouldering: first, boulder problems typically feature more difficult moves than other climbing disciplines, making falls more common. Second, without ropes to arrest the climber's descent, every fall causes the climber to hit the ground.
To prevent injuries, boulderers position crash pads near the boulder to provide a softer landing, and use one or more spotters (people who watch the climber and help redirect a fall towards the pads). Upon landing, boulderers employ falling techniques similar to those used in gymnastics: spreading the impact across the entire body to avoid bone fractures, and positioning limbs to allow joints to move freely throughout the impact.
Although every type of rock climbing requires a high level of technique and strength, bouldering – the most dynamic form of the sport – requires the highest level of power, and thus places considerable strain on the body. Training routines that strengthen fingers and forearms are useful in preventing injuries such as tendonitis and ruptured ligaments.
However, as with other forms of climbing, bouldering technique begins with proper footwork. Leg muscles are significantly stronger than arm muscles; thus, proficient boulderers use their arms to maintain balance and body positioning as much as possible, relying on their legs to push them up the rock. Boulderers also keep their arms straight with their shoulders engaged whenever feasible, allowing their bones to support their body weight rather than their muscles.
Bouldering movements are described as either "static" or "dynamic." Static movements are those that are performed slowly, with the climber's position controlled by maintaining contact on the boulder with the other three limbs. Dynamic movements use the climber's momentum to reach holds that would be difficult or impossible to secure statically, with an increased risk of falling if the movement is not performed accurately.
Bouldering can damage vegetation that grows on rocks, such as mosses and lichens. This can occur as a result of the climber intentionally cleaning the boulder, or unintentionally from repeated use of handholds and footholds. Vegetation on the ground surrounding the boulder can also be damaged from overuse, particularly by climbers laying down crash pads. Soil erosion can occur when boulderers trample vegetation while hiking off of established trails, or when they unearth small rocks near the boulder in an effort to make the landing zone safer in case of a fall. The repeated use of white climbing chalk can damage the rock surface of boulders and cliffs, particularly sandstone and other porous rock types, and the scrubbing of rocks to remove chalk can also degrade the rock surface. To prevent chalk from damaging the rock, it is important to remove it gently with a brush after a climbing session. Other environmental concerns include littering, improperly disposed feces, and graffiti. These issues have caused some land managers to prohibit bouldering, as was the case in Tea Garden, a popular bouldering area in Rocklands, South Africa. | https://en.wikipedia.org/wiki?curid=4113 |
Boiling point
The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor.
The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum has a lower boiling point than when that liquid is at atmospheric pressure. A liquid at high pressure has a higher boiling point than when that liquid is at atmospheric pressure. For example, water boils at 100 °C (212 °F) at sea level, but at 93.4 °C (200.1 °F) at 1,905 metres (6,250 ft) altitude. For a given pressure, different liquids will boil at different temperatures.
The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar.
The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure).
Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid.
A "saturated liquid" contains as much thermal energy as it can without boiling (or conversely a "saturated vapor" contains as little thermal energy as it can without condensing).
Saturation temperature means "boiling point". The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition.
If the pressure in a system remains constant (isobaric), a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy (heat) is removed. Similarly, a liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied.
The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure. Boiling points may be published with respect to the NIST, USA standard pressure of 101.325 kPa (or 1 atm), or the IUPAC standard pressure of 100.000 kPa. At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical. The boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached. The boiling point cannot be reduced below the triple point.
If the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known, the boiling point can be calculated by using the Clausius–Clapeyron equation, thus:

T_B = (1/T_0 − R·ln(P/P_0)/ΔH_vap)^(−1)

where:
T_B is the boiling point at the pressure of interest P,
T_0 is the known boiling point at a reference pressure P_0,
ΔH_vap is the molar heat of vaporization of the liquid,
R is the ideal gas constant.
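The Clausius–Clapeyron relation between vapor pressure and boiling point can be sketched numerically. This is a minimal illustration, not a precision model: the heat of vaporization is treated as constant, and the water figures (ΔH_vap ≈ 40.66 kJ/mol, boiling at 373.15 K at 101.325 kPa) are assumed typical values.

```python
import math

R = 8.314  # ideal gas constant, J/(mol·K)

def boiling_point(p, p0=101325.0, t0=373.15, dh_vap=40660.0):
    """Boiling point (K) at pressure p (Pa), from the Clausius-Clapeyron
    equation, given a reference boiling point t0 at pressure p0 and a
    (pressure-independent) molar heat of vaporization dh_vap in J/mol."""
    return 1.0 / (1.0 / t0 - R * math.log(p / p0) / dh_vap)

# Water at roughly Everest-summit pressure (~34 kPa):
t = boiling_point(34000.0)  # ~344 K, i.e. roughly 71 degrees C
```

The same function reproduces the reference point exactly (boiling_point(101325.0) returns 373.15 K), and lowering the pressure lowers the predicted boiling point, as the text describes.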
Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased, so is saturation temperature.
If the temperature in a system remains constant (an "isothermal" system), vapor at saturation pressure and temperature will begin to condense into its liquid phase as the system pressure is increased. Similarly, a liquid at saturation pressure and temperature will tend to flash into its vapor phase as system pressure is decreased.
There are two conventions regarding the "standard boiling point of water": The "normal boiling point" is 99.97 °C at a pressure of 1 atm (i.e., 101.325 kPa). The IUPAC-recommended "standard boiling point of water" at a standard pressure of 100 kPa (1 bar) is 99.61 °C. For comparison, on top of Mount Everest, at 8,848 m (29,029 ft) elevation, the pressure is about 34 kPa (255 Torr) and the boiling point of water is 71 °C (160 °F).
The Celsius temperature scale was defined until 1954 by two points: 0 °C being defined by the water freezing point and 100 °C being defined by the water boiling point at standard atmospheric pressure.
The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point (i.e., the boiling point at atmospheric pressure) of the liquid.
The vapor pressure chart to the right has graphs of the vapor pressures versus temperatures for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points.
For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure.
The critical point of a liquid is the highest temperature (and pressure) it will actually boil at.
See also Vapour pressure of water.
The element with the lowest boiling point is helium. Both the boiling points of rhenium and tungsten exceed 5000 K at standard pressure; because it is difficult to measure extreme temperatures precisely without bias, both have been cited in the literature as having the higher boiling point.
As can be seen from the above plot of the logarithm of the vapor pressure vs. the temperature for any given pure chemical compound, its normal boiling point can serve as an indication of that compound's overall volatility. A given pure compound has only one normal boiling point, if any, and a compound's normal boiling point and melting point can serve as characteristic physical properties for that compound, listed in reference books. The higher a compound's normal boiling point, the less volatile that compound is overall, and conversely, the lower a compound's normal boiling point, the more volatile that compound is overall. Some compounds decompose at higher temperatures before reaching their normal boiling point, or sometimes even their melting point. For a stable compound, the boiling point ranges from its triple point to its critical point, depending on the external pressure. Beyond its triple point, a compound's normal boiling point, if any, is higher than its melting point. Beyond the critical point, a compound's liquid and vapor phases merge into one phase, which may be called a superheated gas. At any given temperature, if a compound's normal boiling point is lower, then that compound will generally exist as a gas at atmospheric external pressure. If the compound's normal boiling point is higher, then that compound can exist as a liquid or solid at that given temperature at atmospheric external pressure, and will so exist in equilibrium with its vapor (if volatile) if its vapors are contained. If a compound's vapors are not contained, then some volatile compounds can eventually evaporate away in spite of their higher boiling points.
In general, compounds with ionic bonds have high normal boiling points, if they do not decompose before reaching such high temperatures. Many metals have high boiling points, but not all. Very generally—with other factors being equal—in compounds with covalently bonded molecules, as the size of the molecule (or molecular mass) increases, the normal boiling point increases. When the molecular size becomes that of a macromolecule, polymer, or otherwise very large, the compound often decomposes at high temperature before the boiling point is reached. Another factor that affects the normal boiling point of a compound is the polarity of its molecules. As the polarity of a compound's molecules increases, its normal boiling point increases, other factors being equal. Closely related is the ability of a molecule to form hydrogen bonds (in the liquid state), which makes it harder for molecules to leave the liquid state and thus increases the normal boiling point of the compound. Simple carboxylic acids dimerize by forming hydrogen bonds between molecules. A minor factor affecting boiling points is the shape of a molecule. Making the shape of a molecule more compact tends to lower the normal boiling point slightly compared to an equivalent molecule with more surface area.
Most volatile compounds (anywhere near ambient temperatures) go through an intermediate liquid phase while warming up from a solid phase before eventually transforming to a vapor phase. By comparison with boiling, sublimation is a physical transformation in which a solid turns directly into vapor, which happens in a few select cases, such as with carbon dioxide at atmospheric pressure. For such compounds, the sublimation point is the temperature at which the solid's vapor pressure equals the external pressure.
In the preceding section, boiling points of pure compounds were covered. Vapor pressures and boiling points of substances can be affected by the presence of dissolved impurities (solutes) or other miscible compounds, the degree of effect depending on the concentration of the impurities or other compounds. The presence of non-volatile impurities such as salts or compounds of a volatility far lower than the main component compound decreases its mole fraction and the solution's volatility, and thus raises the normal boiling point in proportion to the concentration of the solutes. This effect is called boiling point elevation. As a common example, salt water boils at a higher temperature than pure water.
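The proportionality between solute concentration and boiling-point rise is captured by the ebullioscopic relation ΔT = i·Kb·m. The sketch below assumes the standard ebullioscopic constant for water (Kb ≈ 0.512 K·kg/mol) and treats NaCl as fully dissociating; the seawater-like molality is an illustrative round figure.

```python
def boiling_point_elevation(kb, molality, i=1):
    """Boiling-point rise dT = i * kb * m, where m is the molality of
    dissolved solute (mol solute per kg solvent) and i is the van 't Hoff
    factor (particles produced per formula unit of solute)."""
    return i * kb * molality

KB_WATER = 0.512  # ebullioscopic constant of water, K.kg/mol

# A seawater-like solution of ~0.6 mol NaCl per kg of water.
# NaCl dissociates into Na+ and Cl-, so i = 2.
dt = boiling_point_elevation(KB_WATER, 0.6, i=2)  # ~0.61 K above 100 degrees C
```

This matches the common example in the text: salt water boils at a slightly higher temperature than pure water, with the elevation growing linearly in solute concentration.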
In other mixtures of miscible compounds (components), there may be two or more components of varying volatility, each having its own pure component boiling point at any given pressure. The presence of other volatile components in a mixture affects the vapor pressures and thus boiling points and dew points of all the components in the mixture. The dew point is a temperature at which a vapor condenses into a liquid. Furthermore, at any given temperature, the composition of the vapor is different from the composition of the liquid in most such cases. In order to illustrate these effects between the volatile components in a mixture, a boiling point diagram is commonly used. Distillation is a process of boiling and [usually] condensation which takes advantage of these differences in composition between liquid and vapor phases. | https://en.wikipedia.org/wiki?curid=4115 |
Big Bang
The Big Bang theory is a cosmological model of the observable universe from the earliest known periods through its subsequent large-scale evolution. The model describes how the universe expanded from an initial state of very high density and high temperature, and offers a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, large-scale structure, and Hubble's law – the farther away galaxies are, the faster they are moving away from Earth. If the observed conditions are extrapolated backwards in time using the known laws of physics, the prediction is that just before a period of very high density there was a singularity. Current knowledge is insufficient to determine if anything existed prior to the singularity.
Georges Lemaître first noted in 1927 that an expanding universe could be traced back in time to an originating single point, calling his theory that of the "primeval atom". For much of the rest of the 20th century, the scientific community was divided between supporters of the Big Bang and of the rival steady-state model, but a wide range of empirical evidence has strongly favored the Big Bang, which is now universally accepted. Edwin Hubble concluded from analysis of galactic redshifts in 1929 that galaxies are drifting apart; this is important observational evidence for an expanding universe. In 1964, the CMB was discovered, which was crucial evidence in favor of the hot Big Bang model, since that theory predicted the existence of background radiation throughout the universe.
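Hubble's law states that recession velocity is proportional to distance, v = H0·d. A minimal sketch follows; the Hubble constant value of 70 km/s per megaparsec is an assumed round number for illustration, not a quoted measurement:

```python
# Hubble's law: v = H0 * d. A galaxy's recession velocity grows
# linearly with its distance from the observer.
H0 = 70.0  # assumed round value of the Hubble constant, km/s per Mpc

def recession_velocity(distance_mpc):
    """Recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

v = recession_velocity(100.0)  # a galaxy 100 Mpc away recedes at 7000 km/s
```

The linearity is the key point: a galaxy twice as far away recedes twice as fast, which is exactly the redshift-distance trend Hubble observed.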
The known laws of physics can be used to calculate the characteristics of the universe in detail back in time to an initial state of extreme density and temperature. Detailed measurements of the expansion rate of the universe place the Big Bang at around 13.8 billion years ago, which is thus considered the age of the universe. After its initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later atoms. Giant clouds of these primordial elements – mostly hydrogen, with some helium and lithium – later coalesced through gravity, forming early stars and galaxies, the descendants of which are visible today. Besides these primordial building materials, astronomers observe the gravitational effects of an unknown dark matter surrounding galaxies. Most of the gravitational potential in the universe seems to be in this form, and the Big Bang theory and various observations indicate that it is not conventional baryonic matter that forms atoms. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to dark energy's existence.
The Big Bang theory offers a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the CMB, large-scale structure, and Hubble's law. The theory depends on two major assumptions: the universality of physical laws and the cosmological principle. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic.
These ideas were initially taken as postulates, but later efforts were made to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10^−5. Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars.
The large-scale universe appears isotropic as viewed from Earth. If it is indeed isotropic, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10^−5 via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995.
The expansion of the Universe was inferred from early twentieth century astronomical observations and is an essential ingredient of the Big Bang theory. Mathematically, general relativity describes spacetime by a metric, which determines the distances that separate nearby points. The points, which can be galaxies, stars, or other objects, are specified using a coordinate chart or "grid" that is laid down over all spacetime. The cosmological principle implies that the metric should be homogeneous and isotropic on large scales, which uniquely singles out the Friedmann–Lemaître–Robertson–Walker (FLRW) metric. This metric contains a scale factor, which describes how the size of the universe changes with time. This enables a convenient choice of a coordinate system to be made, called comoving coordinates. In this coordinate system, the grid expands along with the universe, and objects that are moving only because of the expansion of the universe, remain at fixed points on the grid. While their "coordinate" distance (comoving distance) remains constant, the "physical" distance between two such co-moving points expands proportionally with the scale factor of the universe.
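The relationship between comoving and physical distance described above can be sketched in a few lines. This is a hypothetical illustration of the bookkeeping, not a cosmological computation: the scale factor values and the 100 Mpc separation are arbitrary example numbers.

```python
# Comoving vs. physical distance under the FLRW scale factor a(t).
# Comoving coordinates stay fixed as the universe expands; the physical
# separation between two comoving points scales with a(t).

def physical_distance(comoving_distance, scale_factor):
    """Physical distance between two comoving points at a given a(t)."""
    return scale_factor * comoving_distance

d_comoving = 100.0  # Mpc; fixed for all time by definition

# If a(t) doubles between two epochs, the physical separation doubles,
# while the comoving coordinates of both points are unchanged.
d_then = physical_distance(d_comoving, 1.0)  # 100.0 Mpc
d_now = physical_distance(d_comoving, 2.0)   # 200.0 Mpc
```

This is the sense in which objects "remain at fixed points on the grid": only the conversion factor between grid distance and physical distance changes with time.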
The Big Bang is not an explosion of matter moving outward to fill an empty universe. Instead, space itself expands with time everywhere and increases the physical distances between comoving points. In other words, the Big Bang is not an explosion "in space", but rather an expansion "of space". Because the FLRW metric assumes a uniform distribution of mass and energy, it applies to our universe only on large scales—local concentrations of matter such as our galaxy do not necessarily expand with the same speed as the whole Universe.
An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach us. This places a limit or a "past horizon" on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a "future horizon", which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our universe.
Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well.
According to the Big Bang theory, the universe at the beginning was very hot and very small, and it has been expanding and cooling ever since.
Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This irregular behavior, known as the gravitational singularity, indicates that general relativity is not an adequate description of the laws of physics in this regime. Models based on general relativity alone cannot extrapolate toward the singularity beyond the end of the so-called Planck epoch.
This primordial singularity is itself sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase of the universe. In either case, "the Big Bang" as an event is also colloquially referred to as the "birth" of our universe since it represents the point in history where the universe can be verified to have entered into a regime where the laws of physics as we understand them (specifically general relativity and the Standard Model of particle physics) work. Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background, the time that has passed since that event — known as the "age of the universe" — is 13.799 ± 0.021 billion years. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model that describes in detail the characteristics of the universe.
Despite being extremely dense at this time—far denser than is usually required to form a black hole—the universe did not re-collapse into a singularity. This may be explained by considering that commonly-used calculations and limits for gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang. Likewise, since the early universe did not immediately collapse into a multitude of black holes, matter at that time must have been very evenly distributed with a negligible density gradient.
The earliest phases of the Big Bang are subject to much speculation, since astronomical data about them are not available. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period from 0 to 10^−43 seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces – the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force – were unified as one. In this stage, the universe was only about 10^−35 meters wide and consequently had a temperature of approximately 10^32 degrees Celsius. The Planck epoch was succeeded by the grand unification epoch beginning at 10^−43 seconds, where gravitation separated from the other forces as the universe's temperature fell. The universe was pure energy at this stage, too hot for any particles to be created.
At approximately 10^−37 seconds into the expansion, a phase transition caused a period of cosmic inflation, during which the universe grew exponentially, faster than the speed of light, and temperatures dropped by a factor of 100,000. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were amplified into the seeds that would later form the large-scale structure of the universe. At a time around 10^−36 seconds, the electroweak epoch began when the strong nuclear force separated from the other forces, leaving only the electromagnetic force and weak nuclear force unified.
Inflation stopped at around the 10^−33 to 10^−32 seconds mark, with the universe's volume having increased by a factor of at least 10^78. Reheating occurred until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe.
The universe continued to decrease in density and fall in temperature, so the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10^−12 seconds. After about 10^−11 seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10^−6 seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was now no longer high enough to create new proton–antiproton pairs (and similarly for neutron–antineutron pairs), so a mass annihilation immediately followed, leaving just one in 10^10 of the original protons and neutrons, and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically, and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos).
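The survival fraction quoted above follows from simple counting: if matter exceeded antimatter by roughly one particle per 10^10 matched pairs, then after pairwise annihilation only that single unmatched particle remains. The following sketch illustrates this arithmetic with assumed round numbers chosen to reproduce the "one in 10^10" figure; it is an illustration, not a physical calculation.

```python
# Illustrative matter-antimatter annihilation arithmetic.
# Assumption: an excess of one proton per 10**10 proton-antiproton pairs,
# the value implied by the survival fraction quoted in the text.
pairs = 10**10
protons = pairs + 1                  # slight matter excess
antiprotons = pairs
survivors = protons - antiprotons    # annihilation removes matched pairs
fraction = survivors / protons       # fraction of original protons surviving

print(survivors)                     # 1
print(f"{fraction:.1e}")             # 1.0e-10, i.e. one in 10^10
```

The same counting applies to neutrons and, at about 1 second, to electrons and positrons.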
A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei.
As the universe cooled, the rest energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years, the electrons and nuclei combined into atoms (mostly hydrogen), which were able to emit radiation. This relic radiation, which continued through space largely unimpeded, is known as the cosmic microwave background. The chemistry of life may have begun during a habitable epoch when the universe was only 10–17 million years old.
Over a long period of time, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter, warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold (warm dark matter is ruled out by early reionization), and is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. In an "extended model" which includes hot dark matter in the form of neutrinos, if the "physical baryon density" Ω_b h^2 is estimated at about 0.023 (this is different from the baryon density Ω_b expressed as a fraction of the total matter/energy density, which is about 0.046) and the corresponding cold dark matter density Ω_c h^2 is about 0.11, then the corresponding neutrino density Ω_ν h^2 is estimated to be less than 0.0062.
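The "physical" densities Ω h^2 and the percentage fractions quoted above are linked through the dimensionless Hubble parameter h. The sketch below checks this consistency, assuming a WMAP-era value h ≈ 0.71 (the value of h is an assumption here, not stated in the text).

```python
# Convert WMAP-era "physical" densities (Omega * h**2) into fractions of
# the total matter/energy density, assuming a Hubble parameter h ~ 0.71.
h = 0.71

omega_b_h2 = 0.023            # physical baryon density (from the text)
omega_c_h2 = 0.11             # physical cold dark matter density (from the text)

omega_b = omega_b_h2 / h**2   # baryon fraction
omega_c = omega_c_h2 / h**2   # cold dark matter fraction

print(f"baryons:          {omega_b:.3f}")   # ~0.046, i.e. about 4.6%
print(f"cold dark matter: {omega_c:.2f}")   # ~0.22, close to the ~23% quoted
```

The two conversions reproduce the roughly 4.6% baryonic and roughly 23% cold dark matter fractions given in the paragraph above.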
Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which apparently permeates all of space. The observations suggest 73% of the total energy density of today's universe is in this form. When the universe was very young, it was likely infused with dark energy, but with less space and everything closer together, gravity predominated, and it was slowly braking the expansion. But eventually, after numerous billion years of expansion, the growing abundance of dark energy caused the expansion of the universe to slowly begin to accelerate.
Dark energy in its simplest formulation takes the form of the cosmological constant term in the Einstein field equations of general relativity, but its composition and mechanism are unknown and, more generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both observationally and theoretically.
All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the ΛCDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately 10^−15 seconds. Apparently a new unified theory of quantum gravitation is needed to break this barrier. Understanding this earliest of eras in the history of the universe is currently one of the greatest unsolved problems in physics.
English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past."
It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models.
The Big Bang theory developed from observations of the structure of the universe and from theoretical considerations. In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" ("spiral nebula" being the obsolete term for a spiral galaxy), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding in contrast to the static universe model advocated by Albert Einstein at that time. | https://en.wikipedia.org/wiki?curid=4116 |
Bantu languages
The Bantu languages (English: , Proto-Bantu: *bantʊ̀) are a large family of languages spoken by the Bantu peoples throughout Sub-Saharan Africa.
As part of the Bantoid group, they are part of the Benue–Congo language family, which in turn is part of the large Niger–Congo phylum.
The total number of Bantu languages ranges in the hundreds, depending on the definition of "language" versus "dialect", and is estimated at between 440 and 680 distinct languages. | https://en.wikipedia.org/wiki?curid=4124 |
CIM-10 Bomarc
The Boeing CIM-10 Bomarc (IM-99 Weapon System prior to September 1962) was a supersonic, ramjet-powered, long-range surface-to-air missile (SAM) used during the Cold War for the air defense of North America. In addition to being the first operational long-range SAM and the first operational pulse Doppler aviation radar, it was the only SAM deployed by the United States Air Force.
Stored horizontally in a launcher shelter with a movable roof, the missile was erected, fired vertically using rocket boosters to high altitude, and then tipped over into a horizontal Mach 2.5 cruise powered by ramjet engines. This lofted trajectory allowed the missile to operate at a maximum range as great as 430 mi (700 km). The missile was controlled from the ground for most of its flight; when it reached the target area, it was commanded to begin a dive, activating an onboard active radar homing seeker for terminal guidance. A radar proximity fuse detonated the warhead, either a large conventional explosive or the W40 nuclear warhead.
The Air Force originally planned for a total of 52 sites covering most of the major cities and industrial regions in the US. The US Army was deploying their own systems at the same time, and the two services fought constantly both in political circles and in the press. Development dragged on, and by the time it was ready for deployment in the late 1950s, the nuclear threat had moved from manned bombers to the intercontinental ballistic missile (ICBM). By this time the Army had successfully deployed the much shorter range Nike Hercules that they claimed filled any possible need through the 1960s, in spite of Air Force claims to the contrary.
As testing continued, the Air Force reduced its plans to sixteen sites, and then again to eight with an additional two sites in Canada. The first US site was declared operational in 1959, but with only a single working missile. Bringing the rest of the missiles into service took years, by which time the system was obsolete. Deactivations began in 1969 and by 1972 all Bomarc sites had been shut down. A small number were used as target drones, and only a few remain on display today.
In 1946, Boeing started to study surface-to-air guided missiles under the United States Army Air Forces project MX-606. By 1950, Boeing had launched more than 100 test rockets in various configurations, all under the designator XSAM-A-1 GAPA (Ground-to-Air Pilotless Aircraft). Because these tests were very promising, Boeing received a USAF contract in 1949 to develop a pilotless interceptor (a term then used by the USAF for air-defense guided missiles) under project MX-1599.
The MX-1599 missile was to be a ramjet-powered, nuclear-armed long-range surface-to-air missile to defend the Continental United States from high-flying bombers. The Michigan Aerospace Research Center (MARC) was added to the project soon afterward, and this gave the new missile its name Bomarc (for Boeing and MARC). In 1951, the USAF decided to emphasize its point of view that missiles were nothing else than pilotless aircraft by assigning aircraft designators to its missile projects, and anti-aircraft missiles received F-for-Fighter designations. The Bomarc became the F-99.
Test flights of XF-99 test vehicles began in September 1952 and continued through early 1955. The XF-99 tested only the liquid-fueled booster rocket, which would accelerate the missile to ramjet ignition speed. In February 1955, tests of the XF-99A propulsion test vehicles began. These included live ramjets, but still had no guidance system or warhead. The designation YF-99A had been reserved for the operational test vehicles. In August 1955, the USAF discontinued the use of aircraft-like type designators for missiles, and the XF-99A and YF-99A became XIM-99A and YIM-99A, respectively. Originally the USAF had allocated the designation IM-69, but this was changed (possibly at Boeing's request to keep number 99) to IM-99 in October 1955.
In October 1957, the first YIM-99A production-representative prototype flew with full guidance and passed within destructive range of the target. In late 1957, Boeing received the production contract for the IM-99A Bomarc A interceptor missile, and in September 1959, the first IM-99A squadron became operational.
The IM-99A had an operational radius of and was designed to fly at Mach 2.5–2.8 at a cruising altitude of . It was long and weighed . Its armament was either a conventional warhead or a W40 nuclear warhead (7–10 kiloton yield). A liquid-fuel rocket engine boosted the Bomarc to Mach 2, when its Marquardt RJ43-MA-3 ramjet engines, fueled by 80-octane gasoline, would take over for the remainder of the flight. This was the same model of engine used to power the Lockheed X-7, the Lockheed AQM-60 Kingfisher drone used to test air defenses, and the Lockheed D-21 launched from the back of an M-21, although the Bomarc and Kingfisher engines used different materials due to the longer duration of their flights.
The operational IM-99A missiles were based horizontally in semi-hardened shelters, nicknamed "coffins". After the launch order, the shelter's roof would slide open, and the missile raised to the vertical. After the missile was supplied with fuel for the booster rocket, it would be launched by the Aerojet General LR59-AJ-13 booster. After sufficient speed was reached, the Marquardt RJ43-MA-3 ramjets would ignite and propel the missile to its cruise speed of Mach 2.8 at an altitude of .
When the Bomarc was within of the target, its own Westinghouse AN/DPN-34 radar guided the missile to the interception point. The maximum range of the IM-99A was , and it was fitted with either a conventional high-explosive or a 10 kiloton W-40 nuclear fission warhead.
The Bomarc relied on the Semi-Automatic Ground Environment (SAGE), an automated control system used by NORAD for detecting, tracking and intercepting enemy bomber aircraft. SAGE allowed for remote launching of the Bomarc missiles, which were kept on a constant combat-ready basis in individual launch shelters in remote areas. At the height of the program, there were 14 Bomarc sites located in the US and two in Canada.
The liquid-fuel booster of the Bomarc A had several drawbacks. It took two minutes to fuel before launch, which could be a long time in high-speed intercepts, and its hypergolic propellants (hydrazine and nitric acid) were very dangerous to handle, leading to several serious accidents.
As soon as high-thrust solid-fuel rockets became a reality in the mid-1950s, the USAF began to develop a new solid-fueled Bomarc variant, the IM-99B Bomarc B. It used a Thiokol XM51 booster, and also had improved Marquardt RJ43-MA-7 (and finally the RJ43-MA-11) ramjets. The first IM-99B was launched in May 1959, but problems with the new propulsion system delayed the first fully successful flight until July 1960, when a supersonic MQM-15A Regulus II drone was intercepted. Because the new booster took up less space in the missile, more ramjet fuel could be carried, increasing the range to . The terminal homing system was also improved, using the world's first pulse Doppler search radar, the Westinghouse AN/DPN-53. All Bomarc Bs were equipped with the W-40 nuclear warhead. In June 1961, the first IM-99B squadron became operational, and Bomarc B quickly replaced most Bomarc A missiles. On 23 March 1961, a Bomarc B successfully intercepted a Regulus II cruise missile flying at , thus achieving the highest interception in the world up to that date.
Boeing built 570 Bomarc missiles between 1957 and 1964: 269 CIM-10As and 301 CIM-10Bs.
In September 1958 Air Research & Development Command decided to transfer the Bomarc program from its testing at Cape Canaveral Air Force Station to a new facility on Santa Rosa Island, immediately south of Eglin AFB Hurlburt Field on the Gulf of Mexico. To operate the facility and to provide training and operational evaluation in the missile program, Air Defense Command established the 4751st Air Defense Wing (Missile) (4751st ADW) on 15 January 1958. The first launch from Santa Rosa took place on 15 January 1959.
In 1955, to support a program which called for 40 squadrons of BOMARC (120 missiles to a squadron for a total of 4,800 missiles), ADC reached a decision on the location of these 40 squadrons and suggested operational dates for each. The sequence was as follows: ... 1. McGuire 1/60 2. Suffolk 2/60 3. Otis 3/60 4. Dow 4/60 5. Niagara Falls 1/61 ... 6. Plattsburg 1/61 7. Kinross 2/61 8. K. I. Sawyer 2/61 9. Langley 2/61 10. Truax 3/61 11. Paine 3/61 12. Portland 3/61 ... At the end of 1958, ADC plans called for construction of the following BOMARC bases in the following order: 1. McGuire 2. Suffolk 3. Otis 4. Dow 5. Langley 6. Truax 7. Kinross 8. Duluth 9. Ethan Allen 10. Niagara Falls 11. Paine 12. Adair 13. Travis 14. Vandenberg 15. San Diego 16. Malmstrom 17. Grand Forks 18. Minot 19. Youngstown 20. Seymour-Johnson 21. Bunker Hill 22. Sioux Falls 23. Charleston 24. McConnell 25. Holloman 26. McCoy 27. Amarillo 28. Barksdale 29. Williams.
The first USAF operational Bomarc squadron was the 46th Air Defense Missile Squadron (ADMS), organized on 1 January 1959 and activated on 25 March. The 46th ADMS was assigned to the New York Air Defense Sector at McGuire Air Force Base, New Jersey. The training program, under the 4751st ADW, used technicians acting as instructors and was established for a four-month duration. Training included missile maintenance, SAGE operations, and launch procedures, including the launch of an unarmed missile at Eglin. In September 1959 the squadron assembled at their permanent station, the Bomarc site near McGuire AFB, and trained for operational readiness. The first Bomarc As were used at McGuire on 19 September 1959, with Kincheloe AFB getting the first operational IM-99Bs. While several of the squadrons replicated earlier fighter interceptor unit numbers, they were all new organizations with no previous historical counterpart.
ADC's initial plans called for some 52 Bomarc sites around the United States with 120 missiles each, but as defense budgets decreased during the 1950s the number of sites dropped substantially. Ongoing development and reliability problems did not help, nor did Congressional debate over the missile's usefulness and necessity. In June 1959, the Air Force authorized 16 Bomarc sites with 56 missiles each; the initial five would get the IM-99A, with the remainder getting the IM-99B. However, in March 1960, HQ USAF cut deployment to eight sites in the United States and two in Canada.
Within a year of operations, a Bomarc A with a nuclear warhead caught fire at McGuire AFB on 7 June 1960 after its on-board helium tank exploded. While the missile's explosives did not detonate, the heat melted the warhead and released plutonium, which the fire crews spread. The Air Force and the Atomic Energy Commission cleaned up the site and covered it with concrete. This was the only major incident involving the weapon system. The site remained in operation for several years following the fire. Since its closure in 1972, the area has remained off limits, primarily due to low levels of plutonium contamination. Between 2002 and 2004, 21,998 cubic yards of contaminated debris and soils were shipped to what was then known as Envirocare, located in Utah.
In 1962, the US Air Force started using modified A-models as drones; following the October 1962 tri-service redesignation of aircraft and weapons systems they became CQM-10As. Otherwise the air defense missile squadrons maintained alert while making regular trips to Santa Rosa Island for training and firing practice. After the inactivation of the 4751st ADW(M) on 1 July 1962 and transfer of Hurlburt to Tactical Air Command for air commando operations the 4751st Air Defense Squadron (Missile) remained at Hurlburt and Santa Rosa Island for training purposes.
In 1964, the liquid-fueled Bomarc A sites and squadrons began to be deactivated. The sites at Dow and Suffolk County closed first. The remainder continued to be operational for several more years while the government started dismantling the air defense missile network. Niagara Falls was the first Bomarc B installation to close, in December 1969; the others remained on alert through 1972. In April 1972, the last Bomarc B in U.S. Air Force service was retired at McGuire, the 46th ADMS was inactivated, and the site was deactivated.
In the era of intercontinental ballistic missiles the Bomarc, designed to intercept relatively slow manned bombers, had become a useless asset. The remaining Bomarc missiles were used by all armed services as high-speed target drones for tests of other air-defense missiles. The Bomarc A and Bomarc B targets were designated as CQM-10A and CQM-10B, respectively.
Since the accident, the McGuire complex has never been sold or converted to other uses and remains in Air Force ownership, making it the most intact of the eight US sites. It has been nominated to the National Register of Historic Places. Although a number of IM-99/CIM-10 Bomarcs have been placed on public display, because of concerns about the possible environmental hazards of the thoriated magnesium structure of the airframe, several have been removed from public view.
Russ Sneddon, director of the Air Force Armament Museum, Eglin Air Force Base, Florida, provided information about missing CIM-10 exhibit airframe serial 59-2016, one of the museum's original artifacts from its founding in 1975, donated by the 4751st Air Defense Squadron at Hurlburt Field, Eglin Auxiliary Field 9, Eglin AFB. As of December 2006, the missile was stored in a secure compound behind the Armament Museum. In December 2010, the airframe was still on premises, but partly dismantled.
The Bomarc Missile Program was highly controversial in Canada. The Progressive Conservative government of Prime Minister John Diefenbaker initially agreed to deploy the missiles, and shortly thereafter controversially scrapped the Avro Arrow, a supersonic manned interceptor aircraft, arguing that the missile program made the Arrow unnecessary.
Initially, it was unclear whether the missiles would be equipped with nuclear warheads. By 1960 it became known that the missiles were to have a nuclear payload, and a debate ensued about whether Canada should accept nuclear weapons. Ultimately, the Diefenbaker government decided that the Bomarcs should not be equipped with nuclear warheads. The dispute split the Diefenbaker Cabinet, and led to the collapse of the government in 1963. The Official Opposition and Liberal Party leader Lester B. Pearson originally was against nuclear missiles, but reversed his personal position and argued in favor of accepting nuclear warheads. He won the 1963 election, largely on the basis of this issue, and his new Liberal government proceeded to accept nuclear-armed Bomarcs, with the first being deployed on 31 December 1963. When the nuclear warheads were deployed, Pearson's wife, Maryon, resigned her honorary membership in the anti-nuclear weapons group, Voice of Women.
Canadian operational deployment of the Bomarc involved the formation of two specialized Surface/Air Missile squadrons. The first to begin operations was No. 446 SAM Squadron at RCAF Station North Bay, which was the command and control center for both squadrons. With construction of the compound and related facilities completed in 1961, the squadron received its Bomarcs in 1961, without nuclear warheads. The squadron became fully operational from 31 December 1963, when the nuclear warheads arrived, until disbanding on 31 March 1972. All the warheads were stored separately and under control of Detachment 1 of the USAF 425th Munitions Maintenance Squadron. During operational service, the Bomarcs were maintained on stand-by, on a 24-hour basis, but were never fired, although the squadron test-fired the missiles at Eglin AFB, Florida on annual winter retreats.
No. 447 SAM Squadron operating out of RCAF Station La Macaza, Quebec, was activated on 15 September 1962 although warheads were not delivered until late 1963. The squadron followed the same operational procedures as No. 446, its sister squadron. With the passage of time the operational capability of the 1950s-era Bomarc system no longer met modern requirements; the Department of National Defence deemed that the Bomarc missile defense was no longer a viable system, and ordered both squadrons to be stood down in 1972. The bunkers and ancillary facilities remain at both former sites.
Locations under construction but not activated. Each site was programmed for 28 IM-99B missiles:
Reference for BOMARC units and locations:
Below is a list of museums or sites which have a Bomarc missile on display:
The Bomarc missile captured the imagination of the American and Canadian popular music industry, giving rise to a pop music group, the Bomarcs (composed mainly of servicemen stationed on a Florida radar site that tracked Bomarcs), a record label, Bomarc Records, and a moderately successful Canadian pop group, The Beau Marks. | https://en.wikipedia.org/wiki?curid=4130 |
Branco River
The Branco River (English: "White River") is the principal affluent of the Rio Negro from the north.
The river drains the Guayanan Highlands moist forests ecoregion.
It is enriched by many streams from the Tepui highlands which separate Venezuela and Guyana from Brazil. Its two upper main tributaries are the Uraricoera and the Takutu. The latter almost links its sources with those of the Essequibo; during floods headwaters of the Branco and those of the Essequibo are connected, allowing a level of exchange in the aquatic fauna (such as fish) between the two systems.
The Branco flows nearly south, and finds its way into the Negro through several channels and a chain of lagoons similar to those of the latter river. It is long, up to its Uraricoera confluence. It has numerous islands, and, above its mouth, it is broken by a bad series of rapids.
As suggested by its name, the Branco (literally "white" in Portuguese) has whitish water that may appear almost milky due to the inorganic sediments it carries. It is traditionally considered a whitewater river, although the major seasonal fluctuations in its physico-chemical characteristics make a classification difficult, and some consider it clearwater. The river's upper parts at the headwaters, especially, are clear and flow through rocky country, leading to the suggestion that sediments mainly originate from the lower parts. Furthermore, its chemistry and color may contradict each other compared to the traditional Amazonian river classifications. The Branco River has pH 6–7 and low levels of dissolved organic carbon.
Alfred Russel Wallace mentioned the coloration in "On the Rio Negro", a paper read at the 13 June 1853 meeting of the Royal Geographical Society, in which he said: "[The Rio Branco] is white to a remarkable degree, its waters being actually milky in appearance". Alexander von Humboldt attributed the color to the presence of silicates in the water, principally mica and talc. There is a visible contrast with the waters of the Rio Negro at the confluence of the two rivers. The Rio Negro is a blackwater river with dark tea-colored acidic water (pH 3.5–4.5) that contains high levels of dissolved organic carbon.
Until approximately 20,000 years ago the headwaters of the Branco River flowed not into the Amazon, but via the Takutu Graben in the Rupununi area of Guyana towards the Caribbean. Currently in the rainy season much of the Rupununi area floods, with water draining both to the Amazon (via the Branco River) and the Essequibo River. | https://en.wikipedia.org/wiki?curid=4132 |
Bus
A bus (contracted from omnibus, with variants multibus, motorbus, autobus, etc.) is a road vehicle designed to carry many passengers. Buses can have a capacity as high as 300 passengers. The most common type is the single-deck rigid bus, with larger loads carried by double-decker and articulated buses, and smaller loads carried by midibuses and minibuses, while coaches are used for longer-distance services. Many types of buses, such as city transit buses and inter-city coaches, charge a fare. Other types, such as elementary or secondary school buses or shuttle buses within a post-secondary education campus, do not charge a fare. In many jurisdictions, bus drivers require a special licence above and beyond a regular driver's licence.
Buses may be used for scheduled bus transport, scheduled coach transport, school transport, private hire, or tourism; promotional buses may be used for political campaigns and others are privately operated for a wide range of purposes, including rock and pop band tour vehicles.
Horse-drawn buses were used from the 1820s, followed by steam buses in the 1830s, and electric trolleybuses in 1882. The first internal combustion engine buses, or motor buses, were used in 1895. Recently, interest has been growing in hybrid electric buses, fuel cell buses, and electric buses, as well as buses powered by compressed natural gas or biodiesel. As of the 2010s, bus manufacturing is increasingly globalised, with the same designs appearing around the world.
Bus is a clipped form of the Latin adjectival form "omnibus" ("for all"), the dative plural of "omnis-e" ("all"). The theoretical full name is in French "voiture omnibus" ("vehicle for all"). The name originates from a mass-transport service started in 1823 by a French corn-mill owner named Baudry in Richebourg, a suburb of Nantes. A by-product of his mill was hot water, and thus next to it he established a spa business. In order to encourage customers he started a horse-drawn transport service from the city centre of Nantes to his establishment. The first vehicles stopped in front of the shop of a hatter named Omnés, which displayed a large sign inscribed "Omnes Omnibus", a pun on his Latin-sounding surname, "omnes" being the male and female nominative, vocative and accusative form of the Latin adjective "omnis-e" ("all"), combined with "omnibus", the dative plural form meaning "for all", thus giving his shop the name "Omnés for all", or "everything for everyone". His transport scheme was a huge success, although not as he had intended, as most of his passengers did not visit his spa. He turned the transport service into his principal lucrative business venture and closed the mill and spa. Nantes citizens soon gave the nickname "omnibus" to the vehicle. Having invented the successful concept, Baudry moved to Paris and launched the first omnibus service there in April 1828. A similar service was introduced in London in 1829.
Regular intercity bus services by steam-powered buses were pioneered in England in the 1830s by Walter Hancock and by associates of Sir Goldsworthy Gurney, among others, running reliable services over road conditions which were too hazardous for horse-drawn transportation.
The first mechanically propelled omnibus appeared on the streets of London on 22 April 1833. Steam carriages were much less likely to overturn, they travelled faster than horse-drawn carriages, they were much cheaper to run, and caused much less damage to the road surface due to their wide tyres.
However, the heavy road tolls imposed by the turnpike trusts discouraged steam road vehicles and left the way clear for the horse bus companies, and from 1861 onwards, harsh legislation virtually eliminated mechanically propelled vehicles from the roads of Great Britain for 30 years, the Locomotive Act of that year imposing restrictive speed limits on "road locomotives" of 5 mph in towns and cities, and 10 mph in the country.
In parallel to the development of the bus was the invention of the electric trolleybus, typically fed through trolley poles by overhead wires. The Siemens brothers, William in England and Ernst Werner in Germany, collaborated on the development of the trolleybus concept. Sir William first proposed the idea in an article to the "Journal of the Society of Arts" in 1881 as an "...arrangement by which an ordinary omnibus...would have a suspender thrown at intervals from one side of the street to the other, and two wires hanging from these suspenders; allowing contact rollers to run on these two wires, the current could be conveyed to the tram-car, and back again to the dynamo machine at the station, without the necessity of running upon rails at all."
The first such vehicle, the Electromote, was made by his brother Dr. Ernst Werner von Siemens and presented to the public in 1882 in Halensee, Germany. Although this experimental vehicle fulfilled all the technical criteria of a typical trolleybus, it was dismantled in the same year after the demonstration.
Max Schiemann opened a passenger-carrying trolleybus line in 1901 near Dresden, in Germany. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current collection system. In the early days, a few other methods of current collection were used. Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain on 20 June 1911.
In Siegerland, Germany, two passenger bus lines ran briefly, but unprofitably, in 1895 using a six-passenger motor carriage developed from the 1893 Benz Viktoria. Another commercial bus line using the same model Benz omnibuses ran for a short time in 1898 in the rural area around Llandudno, Wales.
Daimler also produced one of the earliest motor-bus models in 1898, selling a double-decker bus to the Motor Traction Company, which first used it on the streets of London on 23 April 1898. The vehicle accommodated up to 20 passengers, in an enclosed area below and on an open-air platform above. With the success and popularity of this bus, Daimler expanded production, selling more buses to companies in London and, in 1899, to Stockholm and Speyer. Daimler also entered into a partnership with the British company Milnes and developed a new double-decker in 1902 that became the market standard.
The first mass-produced bus model was the B-type double-decker bus, designed by Frank Searle and operated by the London General Omnibus Company – it entered service in 1910, and almost 3,000 had been built by the end of the decade. Hundreds saw military service on the Western Front during the First World War.
The Yellow Coach Manufacturing Company, which rapidly became a major manufacturer of buses in the US, was founded in Chicago in 1923 by John D. Hertz. General Motors purchased a majority stake in 1925 and changed its name to the Yellow Truck and Coach Manufacturing Company. They then purchased the balance of the shares in 1943 to form the GM Truck and Coach Division.
Models expanded in the 20th century, leading to the widespread introduction of the contemporary recognizable form of full-sized buses from the 1950s. The AEC Routemaster, developed in the 1950s, was a pioneering design and remains an icon of London to this day. The innovative design used lightweight aluminium and techniques developed in aircraft production during World War II. As well as a novel weight-saving integral design, it also introduced for the first time on a bus independent front suspension, power steering, a fully automatic gearbox, and power-hydraulic braking.
Formats include the single-decker bus, the double-decker bus (both usually with a rigid chassis), and the articulated bus (or 'bendy-bus'), the prevalence of which varies from country to country. High-capacity bi-articulated buses are also manufactured, as are passenger-carrying trailers—either towed behind a rigid bus (a bus trailer) or hauled as a trailer by a truck (a trailer bus). Smaller midibuses have a lower capacity, and open-top buses are typically used for leisure purposes. In many new fleets, particularly in local transit systems, a shift to low-floor buses is occurring, primarily for easier accessibility. Coaches are designed for longer-distance travel and are typically fitted with individual high-backed reclining seats, seat belts, toilets, and audio-visual entertainment systems, and can operate at higher speeds with more capacity for luggage. Coaches may be single- or double-deckers, articulated, and often include a separate luggage compartment under the passenger floor.
Bus manufacturing may be by a single company (an integral manufacturer), or by one manufacturer building a bus body on a chassis produced by another.
Transit buses used to be mainly high-floor vehicles. However, they are now increasingly of low-floor design; many can also 'kneel' on their air suspension and have electrically or hydraulically extended under-floor ramps to provide level access for wheelchair users and people with baby carriages. Before such technology came into general use, wheelchair users could only use specialist paratransit mobility buses.
Accessible vehicles also have wider entrances and interior gangways and space for wheelchairs. Interior fittings and destination displays may also be designed to be usable by the visually impaired. Coaches generally use wheelchair lifts instead of low-floor designs. In some countries, vehicles are required to have these features by disability discrimination laws.
Buses were initially configured with an engine in the front and an entrance at the rear. With the transition to one-man operation, many manufacturers moved to mid- or rear-engined designs, with a single door at the front or multiple doors. The move to the low-floor design has all but eliminated the mid-engined design, although some coaches still have mid-mounted engines. Front-engined buses still persist for niche markets such as American school buses, some minibuses, and buses in less developed countries, which may be derived from truck chassis rather than purpose-built bus designs. Most buses have two axles; articulated buses have three.
Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes. Guidance can be mechanical, optical, or electromagnetic. Extensions of the guided technology include the Guided Light Transit and Translohr systems, although these are more often termed 'rubber-tyred trams' as they have limited or no mobility away from their guideways.
Transit buses are normally painted to identify the operator or a route or function, or to demarcate low-cost or premium services. Liveries may be painted onto the vehicle or applied using adhesive vinyls or decals. Vehicles often also carry bus advertising on part or all of their visible surfaces, acting as mobile billboards. Campaign buses may be decorated with key campaign messages, whether to promote an event or an initiative.
The most common power source since the 1920s has been the diesel engine. Some early buses, known as trolleybuses, were powered by electricity supplied from overhead lines. Nowadays, electric buses often carry their own battery, which is sometimes recharged at stops or stations to keep the battery small and lightweight. There is currently interest in hybrid electric buses, fuel cell buses, battery electric buses, and buses powered by compressed natural gas or biodiesel. Gyrobuses, powered by momentum stored in a flywheel, were tried in the 1940s.
Early bus manufacturing grew out of carriage coachbuilding, and later out of automobile or truck manufacturers. Early buses were merely a bus body fitted to a truck chassis. This body+chassis approach has continued with modern specialist manufacturers, although there also exist integral designs such as the Leyland National where the two are practically inseparable. Specialist builders also exist and concentrate on building buses for special uses or modifying standard buses into specialised products.
Integral designs have the advantages that they have been well-tested for strength and stability, and are available off-the-shelf. However, two incentives favour the chassis+body model. First, it allows the buyer and manufacturer both to shop for the best deal for their needs, rather than having to settle on one fixed design—the buyer can choose the body and the chassis separately. Second, over the lifetime of a vehicle in constant service and heavy traffic, it will likely get minor damage now and again, and being able to easily replace a body panel or window can vastly increase its service life and save the cost and inconvenience of removing it from service.
As with the rest of the automotive industry, bus manufacturing became increasingly globalized in the 20th century, with manufacturers producing buses far from their intended markets to exploit labour and material cost advantages. As with cars, new models are often exhibited by manufacturers at prestigious industry shows to gain new orders. A typical city bus costs almost US$450,000.
Transit buses, used on public transport bus services, have utilitarian fittings designed for efficient movement of large numbers of people, and often have multiple doors. Coaches are used for longer-distance routes. High-capacity bus rapid transit services may use the bi-articulated bus or tram-style buses such as the Wright StreetCar and the Irisbus Civis.
Buses and coach services often operate to a predetermined published public transport timetable defining the route and the timing, but smaller vehicles may be used on more flexible demand responsive transport services.
Buses play a major part in the tourism industry. Tour buses around the world allow tourists to view local attractions or scenery. These are often open-top buses, but can also be regular buses or coaches.
In local sightseeing, City Sightseeing is the largest operator of local tour buses, operating on a franchised basis all over the world. Specialist tour buses are also often owned and operated by safari parks and other theme parks or resorts. Longer-distance tours are also carried out by bus, either on a turn up and go basis or through a tour operator, and usually allow disembarkation from the bus to allow touring of sites of interest on foot. These may be day trips or longer excursions incorporating hotel stays. Tour buses often carry a tour guide, although the driver or a recorded audio commentary may also perform this function. The tour operator may be a subsidiary of a company that operates buses and coaches for other uses or an independent company that charters buses or coaches. Commuter transport operators may also use their coaches to conduct tours within the target city between the morning and evening commuter transport journey.
Buses and coaches are also a common component of the wider package holiday industry, providing private airport transfers (in addition to general airport buses) and organised tours and day trips for holidaymakers on the package.
Tour buses can also be hired as chartered buses by groups for sightseeing at popular holiday destinations. These private tour buses may offer specific stops, such as all the historical sights, or allow the customers to choose their own itineraries. Tour buses come with professional and informed staff and insurance, and maintain state-governed safety standards. Some provide other facilities like entertainment units, luxurious reclining seats, large scenic windows, and even lavatories.
Public long-distance coach networks are also often used as a low-cost method of travel by students or young people travelling the world. Some companies such as Topdeck Travel were set up specifically to use buses to drive the hippie trail or travel to places such as North Africa.
In many tourist or travel destinations, a bus is part of the tourist attraction, such as the North American tourist trolleys, London's AEC Routemaster heritage routes, or the customised buses of Malta, Asia, and the Americas. Another example of tourist stops is the homes of celebrities, such as tours based near Hollywood. There are several such services between 6000 and 7000 Hollywood Boulevard in Los Angeles.
In some countries, particularly the US and Canada, buses used to transport schoolchildren have evolved into a specific design with specified mandatory features. American states have also adopted laws regarding motorist conduct around school buses, including large fines and possibly prison for passing a stopped school bus in the process of loading or offloading child passengers. These school buses may have school bus yellow livery and crossing guards. Other countries may mandate the use of seat belts. As a minimum, many countries require a bus carrying students to display a sign, and may also adopt yellow liveries. Student transport often uses older buses cascaded from service use, retrofitted with more seats or seatbelts. Student transport may be operated by local authorities or private contractors. Schools may also own and operate their own buses for other transport needs, such as class field trips, or transport to associated sports, music, or other school events.
Due to the costs involved in owning, operating, and driving buses and coaches, many users turn to private hire of vehicles from charter bus companies, either for a day or two or on a longer contract basis, where the charter company provides the vehicles and qualified drivers.
Charter bus operators may be completely independent businesses, or charter hire may be a subsidiary business of a public transport operator that might maintain a separate fleet or use surplus buses, coaches, and dual-purpose coach-seated buses. Many private taxicab companies also operate larger minibus vehicles to cater for group fares. Companies, private groups, and social clubs may hire buses or coaches as a cost-effective method of transporting a group to an event or site, such as a group meeting, racing event, or organised recreational activity such as a summer camp. Schools often hire charter bus services on a regular basis for transportation of children to and from their homes. Chartered buses are also used by education institutes for transport to conventions, exhibitions, and field trips. Entertainment or event companies may also hire temporary shuttle buses for transport at events such as festivals or conferences.
Party buses are used by companies in a similar manner to limousine hire, for luxury private transport to social events or as a touring experience. Sleeper buses are used by bands or other organisations that tour between entertainment venues and require mobile rest and recreation facilities. Some couples hire preserved buses for their wedding transport, instead of the traditional car. Buses are often hired for parades or processions. Victory parades are often held for triumphant sports teams, who often tour their home town or city in an open-top bus. Sports teams may also contract out their transport to a team bus, for travel to away games, to a competition or to a final event. These buses are often specially decorated in a livery matching the team colours. Private companies often contract out private shuttle bus services, for transport of their customers or patrons, such as hotels, amusement parks, university campuses, or private airport transfer services. This shuttle usage can be as transport between locations, or to and from parking lots.
High specification luxury coaches are often chartered by companies for executive or VIP transport. Charter buses may also be used in tourism and for promotion (See Tourism and Promotion sections).
Many organisations, including police forces and not-for-profit, social, or charitable groups with a regular need for group transport, may find it practical or cost-effective to own and operate a bus for their own needs. These are often minibuses for practical, tax and driver licensing reasons, although they can also be full-size buses. Cadet or scout groups or other youth organizations may also own buses. Companies such as railroads, construction contractors, and agricultural firms may own buses to transport employees to and from remote jobsites. Specific charities may exist to fund and operate bus transport, usually using specially modified mobility buses or otherwise accessible buses (See Accessibility section). Some use their contributions to buy vehicles and provide volunteer drivers.
Airport operators make use of special airside airport buses for crew and passenger transport in the secure airside parts of an airport. Some public authorities, police forces, and military forces make use of armoured buses where there is a special need to provide increased passenger protection. The United States Secret Service acquired two in 2010 for transporting dignitaries needing special protection. Police departments make use of police buses for a variety of reasons, such as prisoner transport, officer transport, temporary detention facilities, and as command and control vehicles. Some fire departments also use a converted bus as a command post while those in cold climates might retain a bus as a heated shelter at fire scenes. Many are drawn from retired school or service buses.
Buses are often used for advertising, political campaigning, public relations, or promotional purposes. These may take the form of temporary charter hire of service buses, or the temporary or permanent conversion and operation of buses, usually second-hand buses. Extreme examples include converting the bus with displays and decorations or awnings and fittings. Interiors may be fitted out for exhibition or information purposes with special equipment or audio visual devices.
Bus advertising takes many forms, often as interior and exterior adverts and all-over advertising liveries. The practice often extends into the exclusive private hire and use of a bus to promote a brand or product, appearing at large public events, or touring busy streets. The bus is sometimes staffed by promotions personnel, giving out free gifts. Campaign buses are often specially decorated for a political campaign or other social awareness information campaign, designed to bring a specific message to different areas, or used to transport campaign personnel to local areas/meetings. Exhibition buses are often sent to public events such as fairs and festivals for purposes such as recruitment campaigns, for example by private companies or the armed forces. Complex urban planning proposals may be organised into a mobile exhibition bus for the purposes of public consultation.
In some sparsely populated areas, it is common to use brucks, buses with a cargo area to transport both passengers and cargo at the same time. They are especially common in the Nordic countries.
Historically, the types and features of buses have developed according to local needs. Buses were fitted with technology appropriate to the local climate or passenger needs, such as air conditioning in Asia, or cycle mounts on North American buses. The bus types in use around the world where there was little mass production were often sourced second hand from other countries, such as the Malta bus, and buses in use in Africa. Other countries such as Cuba required novel solutions to import restrictions, with the creation of the "camellos" (camel bus), a specially manufactured trailer bus.
After the Second World War, manufacturers in Europe and the Far East, such as Mercedes-Benz and Mitsubishi Fuso, expanded into other continents, influencing the use of buses previously served by local types. Use of buses around the world has also been influenced by colonial associations or political alliances between countries. Several of the Commonwealth nations followed the British lead and sourced buses from British manufacturers, leading to a prevalence of double-decker buses. Several Eastern Bloc countries adopted trolleybus systems, and their manufacturers such as Trolza exported trolleybuses to other friendly states. In the 1930s, Italy designed the world's only triple-decker bus for the busy route between Rome and Tivoli; it could carry eighty-eight passengers and was unique not only in being a triple-decker but in having a separate smoking compartment on the third level.
The Knight Bus in "Harry Potter and the Prisoner of Azkaban" is a full-sized triple-decker bus that was used only as a prop for the film.
The buses found in countries around the world often reflect the quality of the local road network, with high-floor, resilient truck-based designs prevalent in several less developed countries where buses are subject to tough operating conditions. Population density also has a major impact: dense urbanisation such as in Japan and the Far East has led to the adoption of high-capacity, long multi-axle buses, often double-deckers, while South America and China are implementing large numbers of articulated buses for bus rapid transit schemes.
Euro Bus Expo is a trade show, which is held biennially at the UK's National Exhibition Centre in Birmingham. As the official show of the Confederation of Passenger Transport, the UK's trade association for the bus, coach and light rail industry, the three-day event offers visitors from Europe and beyond the chance to see and experience the very latest vehicles and product and service innovations right across the industry.
Busworld Kortrijk in Kortrijk, Belgium, is the leading bus trade fair in Europe. It is also held biennially.
Most public or private buses and coaches, once they have reached the end of their service with one or more operators, are sent to the wrecking yard for breaking up for scrap and spare parts. Some buses which are not economical to keep running as service buses are often converted for use other than revenue-earning transport. Much like old cars and trucks, buses often pass through a dealership where they can be bought privately or at auction.
Bus operators often find it economical to convert retired buses to use as permanent training buses for driver training, rather than taking a regular service bus out of use. Some large operators have also converted retired buses into tow bus vehicles, to act as tow trucks. With the outsourcing of maintenance staff and facilities, the increase in company health and safety regulations, and the increasing curb weights of buses, many operators now contract their towing needs to a professional vehicle recovery company.
Some retired buses have been converted to static or mobile cafés, often using historic buses as a tourist attraction. There are also catering buses: buses converted into a mobile canteen and break room. These are commonly seen at external filming locations to feed the cast and crew, and at other large events to feed staff. Another use is as an emergency vehicle, such as high-capacity ambulance bus or mobile command center.
Some organisations adapt and operate playbuses or learning buses to provide a playground or learning environments to children who might not have access to proper play areas. An ex-London AEC Routemaster bus has been converted to a mobile theatre and catwalk fashion show.
Some buses meet a destructive end by being entered in banger races or demolition derbies. A larger number of old retired buses have also been converted into mobile holiday homes and campers.
Rather than being scrapped or converted for other uses, sometimes retired buses are saved for preservation. This can be done by individuals, volunteer preservation groups or charitable trusts, museums, or sometimes by the operators themselves as part of a heritage fleet. These buses often need to be restored to their original condition and will have their livery and other details such as internal notices and rollsigns restored to be authentic to a specific time in the bus's history. Some buses that undergo preservation are rescued from a state of great disrepair, but others enter preservation with very little wrong with them. As with other historic vehicles, many preserved buses either in a working or static state form part of the collections of transport museums. Working buses will often be exhibited at rallies and events, and they are also used as charter buses. While many preserved buses are quite old or even vintage, in some cases relatively new examples of a bus type can enter restoration while examples of the same type are still in service with other operators. This often happens when a change in design or operating practice, such as the switch to one person operation or low floor technology, renders some buses redundant while still relatively new.
Bali
Bali is a province of Indonesia and the westernmost of the Lesser Sunda Islands. Located east of Java and west of Lombok, the province includes the island of Bali and a few smaller neighbouring islands, notably Nusa Penida, Nusa Lembongan, and Nusa Ceningan. The provincial capital, Denpasar, is the most populous city in the Lesser Sunda Islands and the second-largest, after Makassar, in Eastern Indonesia. Bali is the only Hindu-majority province in Indonesia, with 82.5% of the population adhering to Balinese Hinduism.
Bali is Indonesia's main tourist destination, with a significant rise in tourism since the 1980s. Tourism-related business makes up 80% of its economy. It is renowned for its highly developed arts, including traditional and modern dance, sculpture, painting, leather, metalworking, and music. The Indonesian International Film Festival is held every year in Bali. Other international events held in Bali include Miss World 2013 and the 2018 Annual Meetings of the International Monetary Fund and the World Bank Group. In March 2017, TripAdvisor named Bali as the world's top destination in its Traveller's Choice award.
Bali is part of the Coral Triangle, the area with the highest biodiversity of marine species, especially fish and turtles. In this area alone, over 500 reef-building coral species can be found. For comparison, this is about seven times as many as in the entire Caribbean. Bali is the home of the Subak irrigation system, a UNESCO World Heritage Site. It is also home to a unified confederation of kingdoms composed of 10 traditional royal Balinese houses, each house ruling a specific geographic area. The confederation is the successor of the Bali Kingdom. The royal houses are not recognised by the government of Indonesia; however, they originated before Dutch colonisation.
Bali was inhabited around 2000 BCE by Austronesian people who migrated originally from the island of Taiwan to Southeast Asia and Oceania through Maritime Southeast Asia. Culturally and linguistically, the Balinese are closely related to the people of the Indonesian archipelago, Malaysia, the Philippines and Oceania. Stone tools dating from this time have been found near the village of Cekik in the island's west.
In ancient Bali, nine Hindu sects existed, namely Pasupata, Bhairawa, Siwa Shidanta, Vaishnava, Bodha, Brahma, Resi, Sora and Ganapatya. Each sect revered a specific deity as its personal Godhead.
Inscriptions from 896 and 911 do not mention a king; the first to do so, from 914, names Sri Kesarivarma. They also reveal an independent Bali, with a distinct dialect, where Buddhism and Sivaism were practised simultaneously. Mpu Sindok's great-granddaughter, Mahendradatta (Gunapriyadharmapatni), married the Bali king Udayana Warmadewa (Dharmodayanavarmadeva) around 989, giving birth to Airlangga around 1001. This marriage also brought more Hinduism and Javanese culture to Bali. Princess Sakalendukirana appeared in 1098. Suradhipa reigned from 1115 to 1119, and Jayasakti from 1146 until 1150. Jayapangus appears on inscriptions between 1178 and 1181, and Adikuntiketana and his son Paramesvara in 1204.
Balinese culture was strongly influenced by Indian, Chinese, and particularly Hindu culture, beginning around the 1st century AD. The name "Bali dwipa" ("Bali island") has been discovered from various inscriptions, including the Blanjong pillar inscription written by Sri Kesari Warmadewa in 914 AD and mentioning Walidwipa. It was during this time that the people developed their complex irrigation system "subak" to grow rice in wet-field cultivation. Some religious and cultural traditions still practised today can be traced to this period.
The Hindu Majapahit Empire (1293–1520 AD) on eastern Java founded a Balinese colony in 1343. The uncle of Hayam Wuruk is mentioned in the charters of 1384–86. Mass Javanese immigration to Bali occurred in the next century, when the Majapahit Empire fell in 1520. Bali's government then became an independent collection of Hindu kingdoms, which led to a Balinese national identity and major enhancements in culture, arts, and economy. These kingdoms remained independent for some 386 years, until 1906, when the Dutch subjugated the natives and took the island over for economic control.
The first known European contact with Bali is thought to have been made in 1512, when a Portuguese expedition led by Antonio Abreu and Francisco Serrão sighted its northern shores. It was the first of a series of biannual fleets to the Moluccas, which throughout the 16th century usually travelled along the coasts of the Sunda Islands. Bali was also mapped in 1512, in the chart of Francisco Rodrigues, aboard the expedition. In 1585, a ship foundered off the Bukit Peninsula and left a few Portuguese in the service of Dewa Agung.
In 1597, the Dutch explorer Cornelis de Houtman arrived at Bali, and the Dutch East India Company was established in 1602. The Dutch government expanded its control across the Indonesian archipelago during the second half of the 19th century (see Dutch East Indies). Dutch political and economic control over Bali began in the 1840s on the island's north coast, when the Dutch pitted various competing Balinese realms against each other. In the late 1890s, struggles between Balinese kingdoms in the island's south were exploited by the Dutch to increase their control.
In June 1860, the famous Welsh naturalist Alfred Russel Wallace travelled to Bali from Singapore, landing at Buleleng on the north coast of the island. Wallace's trip to Bali was instrumental in helping him devise his Wallace Line theory, a faunal boundary that runs through the strait between Bali and Lombok, marking a division between species. In his travel memoir "The Malay Archipelago", Wallace wrote of his experience in Bali, with a strong mention of the unique Balinese irrigation methods:
I was both astonished and delighted; for as my visit to Java was some years later, I had never beheld so beautiful and well-cultivated a district out of Europe. A slightly undulating plain extends from the seacoast inland, where it is bounded by a fine range of wooded and cultivated hills. Houses and villages, marked out by dense clumps of coconut palms, tamarind and other fruit trees, are dotted about in every direction; while between them extend luxurious rice-grounds, watered by an elaborate system of irrigation that would be the pride of the best-cultivated parts of Europe.
The Dutch mounted large naval and ground assaults at the Sanur region in 1906 and were met by thousands of members of the royal family and their followers who, rather than yield to the superior Dutch force, committed ritual suicide ("puputan") to avoid the humiliation of surrender. Despite Dutch demands for surrender, an estimated 200 Balinese killed themselves rather than submit. In the later Dutch intervention in Bali, a similar mass suicide occurred in the face of a Dutch assault in Klungkung. Afterwards, the Dutch governors exercised administrative control over the island, but local control over religion and culture generally remained intact. Dutch rule over Bali came later and was never as well established as in other parts of Indonesia such as Java and Maluku.
In the 1930s, anthropologists Margaret Mead and Gregory Bateson, artists Miguel Covarrubias and Walter Spies, and musicologist Colin McPhee all spent time here. Their accounts of the island and its peoples created a western image of Bali as "an enchanted land of aesthetes at peace with themselves and nature". Western tourists began to visit the island. The sensuous image of Bali was enhanced in the West by a quasi-pornographic 1932 documentary "Virgins of Bali" about a day in the lives of two teenage Balinese girls, whom the film's narrator Deane Dickason notes in the first scene "bathe their shamelessly nude bronze bodies". Under the looser version of the Hays code that existed up to 1934, nudity involving "civilised" (i.e. white) women was banned but permitted with "uncivilised" (i.e. non-white) women, a loophole that was exploited by the producers of "Virgins of Bali". The film, which mostly consisted of scenes of topless Balinese women, was a great success in 1932, and almost single-handedly made Bali into a popular spot for tourists.
Imperial Japan occupied Bali during World War II. It was not originally a target in their Netherlands East Indies Campaign, but as the airfields on Borneo were inoperative due to heavy rains, the Imperial Japanese Army decided to occupy Bali, which did not suffer from comparable weather. The island had no regular Royal Netherlands East Indies Army (KNIL) troops. There was only a Native Auxiliary Corps "Prajoda" (Korps Prajoda) consisting of about 600 native soldiers and several Dutch KNIL officers under the command of KNIL Lieutenant Colonel W.P. Roodenburg. On 19 February 1942, the Japanese forces landed near the town of Sanoer [Sanur]. The island was quickly captured.
During the Japanese occupation, a Balinese military officer, Gusti Ngurah Rai, formed a Balinese 'freedom army'. The harshness of the Japanese occupation forces made them more resented than the Dutch colonial rulers had been.
In 1945, Bali was liberated by the British 5th Infantry Division under the command of Major-General Robert Mansergh, who took the Japanese surrender. Once the Japanese forces had been repatriated, the island was handed over to the Dutch the following year.
In 1946, the Dutch constituted Bali as one of the 13 administrative districts of the newly proclaimed State of East Indonesia, a rival state to the Republic of Indonesia, which was proclaimed and headed by Sukarno and Hatta. Bali was included in the "Republic of the United States of Indonesia" when the Netherlands recognised Indonesian independence on 29 December 1949. The first governor of Bali, Anak Agung Bagus Suteja, was appointed by President Sukarno in 1958, when Bali became a province.
The 1963 eruption of Mount Agung killed thousands, created economic havoc and forced the transmigration of many displaced Balinese to other parts of Indonesia. Mirroring the widening of social divisions across Indonesia in the 1950s and early 1960s, Bali saw conflict between supporters of the traditional caste system and those rejecting this system. Politically, the opposition was represented by supporters of the Indonesian Communist Party (PKI) and the Indonesian Nationalist Party (PNI), with tensions and ill-feeling further increased by the PKI's land reform programs. An attempted coup in Jakarta was put down by forces led by General Suharto.
The army became the dominant power as it instigated a violent anti-communist purge, in which the army blamed the PKI for the coup. Most estimates suggest that at least 500,000 people were killed across Indonesia, with an estimated 80,000 killed in Bali, equivalent to 5% of the island's population. With no Islamic forces involved as in Java and Sumatra, upper-caste PNI landlords led the extermination of PKI members.
As a result of the 1965–66 upheavals, Suharto was able to manoeuvre Sukarno out of the presidency. His "New Order" government reestablished relations with western countries. The pre-war image of Bali as "paradise" was revived in a modern form. The resulting large growth in tourism has led to a dramatic increase in Balinese standards of living and significant foreign exchange earnings for the country. A bombing in 2002 by militant Islamists in the tourist area of Kuta killed 202 people, mostly foreigners. This attack, and another in 2005, severely reduced tourism, bringing much economic hardship to the island.
The island of Bali lies east of Java, and is approximately 8 degrees south of the equator. Bali and Java are separated by the Bali Strait. East to west, the island is approximately wide and spans approximately north to south; administratively it covers , or without Nusa Penida District; its population density is roughly .
Bali's central mountains include several peaks over in elevation and active volcanoes such as Mount Batur. The highest is Mount Agung (), known as the "mother mountain", which is an active volcano rated as one of the world's most likely sites for a massive eruption within the next 100 years. In late 2017 Mount Agung started erupting and large numbers of people were evacuated, temporarily closing the island's airport. The mountains range from the centre to the eastern side of the island, with Mount Agung the easternmost peak. Bali's volcanic nature has contributed to its exceptional fertility, and its tall mountain ranges provide the high rainfall that supports the highly productive agriculture sector. South of the mountains is a broad, steadily descending area where most of Bali's large rice crop is grown. The northern side of the mountains slopes more steeply to the sea and is the main coffee-producing area of the island, along with rice, vegetables and cattle. The longest river, Ayung River, flows approximately (see List of rivers of Bali).
The island is surrounded by coral reefs. Beaches in the south tend to have white sand while those in the north and west have black sand. Bali has no major waterways, although the Ho River is navigable by small "sampan" boats. Black sand beaches between Pasut and Klatingdukuh are being developed for tourism, but apart from the seaside temple of Tanah Lot, they are not yet used for significant tourism.
The largest city is the provincial capital, Denpasar, near the southern coast. Its population is around 491,500 (2002). Bali's second-largest city is the old colonial capital, Singaraja, which is located on the north coast and is home to around 100,000 people. Other important towns include the beach resort of Kuta, which is practically part of Denpasar's urban area, and Ubud, situated to the north of Denpasar, which is the island's cultural centre.
Three small islands lie to the immediate south-east and all are administratively part of the Klungkung regency of Bali: Nusa Penida, Nusa Lembongan and Nusa Ceningan. These islands are separated from Bali by the Badung Strait.
To the east, the Lombok Strait separates Bali from Lombok and marks the biogeographical division between the fauna of the Indomalayan realm and the distinctly different fauna of Australasia. The transition is known as the Wallace Line, named after Alfred Russel Wallace, who first proposed a transition zone between these two major biomes. When sea levels dropped during the Pleistocene ice age, Bali was connected to Java and Sumatra and to the mainland of Asia and shared the Asian fauna, but the deep water of the Lombok Strait continued to keep Lombok Island and the Lesser Sunda archipelago isolated.
Being just 8 degrees south of the equator, Bali has a fairly even climate all year round. Average year-round temperature stands at around with a humidity level of about 85%.
Daytime temperatures at low elevations vary between , but the temperatures decrease significantly with increasing elevation.
The west monsoon is in place from approximately October to April, and this can bring significant rain, particularly from December to March. During the rainy season, comparatively few tourists visit Bali. During the Easter and Christmas holidays, the weather is very unpredictable. Outside of the monsoon period, humidity is relatively low and rain is unlikely in lowland areas.
Bali lies just to the west of the Wallace Line, and thus has a fauna that is Asian in character, with very little Australasian influence, and has more in common with Java than with Lombok. An exception is the yellow-crested cockatoo, a member of a primarily Australasian family. There are around 280 species of birds, including the critically endangered Bali myna, which is endemic. Others include barn swallow, black-naped oriole, black racket-tailed treepie, crested serpent-eagle, crested treeswift, dollarbird, Java sparrow, lesser adjutant, long-tailed shrike, milky stork, Pacific swallow, red-rumped swallow, sacred kingfisher, sea eagle, woodswallow, savanna nightjar, stork-billed kingfisher, yellow-vented bulbul and great egret.
Until the early 20th century, Bali was home to several large mammals: the wild banteng, leopard and the endemic Bali tiger. The banteng still occurs in its domestic form, whereas leopards are found only in neighbouring Java, and the Bali tiger is extinct. The last definite record of a tiger on Bali dates from 1937, when one was shot, though the subspecies may have survived until the 1940s or 1950s.
Squirrels are quite commonly encountered; less often seen is the Asian palm civet, which is also kept on coffee farms to produce kopi luwak. Bats are well represented; perhaps the most famous place to encounter them is the Goa Lawah (Temple of the Bats), where they are worshipped by the locals and also constitute a tourist attraction. They also occur in other cave temples, for instance at Gangga Beach. Two species of monkey occur. The crab-eating macaque, known locally as "kera", is quite common around human settlements and temples, where it becomes accustomed to being fed by humans, particularly in any of the three "monkey forest" temples, such as the popular one in the Ubud area. They are also quite often kept as pets by locals. The second monkey, endemic to Java and some surrounding islands such as Bali, is far rarer and more elusive: the Javan langur, locally known as "lutung". Langurs occur in few places apart from the West Bali National Park. They are born an orange colour, though by their first year they have usually changed to a more blackish colouration. In Java, however, this species has more of a tendency to retain its juvenile orange colour into adulthood, and a mixture of black and orange monkeys can be seen together as a family. Other rarer mammals include the leopard cat, Sunda pangolin and black giant squirrel.
Snakes include the king cobra and reticulated python. The water monitor can grow to at least in length and can move quickly.
The rich coral reefs around the coast, particularly around popular diving spots such as Tulamben, Amed, Menjangan or neighbouring Nusa Penida, host a wide range of marine life, for instance hawksbill turtle, giant sunfish, giant manta ray, giant moray eel, bumphead parrotfish, hammerhead shark, reef shark, barracuda, and sea snakes. Dolphins are commonly encountered on the north coast near Singaraja and Lovina.
A team of scientists conducted a survey from 29 April 2011 to 11 May 2011 at 33 sea sites around Bali. They discovered 952 species of reef fish of which 8 were new discoveries at Pemuteran, Gilimanuk, Nusa Dua, Tulamben and Candidasa, and 393 coral species, including two new ones at Padangbai and between Padangbai and Amed. The average coverage level of healthy coral was 36% (better than in Raja Ampat and Halmahera by 29% or in Fakfak and Kaimana by 25%) with the highest coverage found in Gili Selang and Gili Mimpang in Candidasa, Karangasem regency.
Among the larger trees the most common are banyan trees, jackfruit, bamboo species and acacia trees, along with endless rows of coconut palms and banana species. Numerous flowers can be seen: hibiscus, frangipani, bougainvillea, poinsettia, oleander, jasmine, water lily, lotus, roses, begonias, orchids and hydrangeas. On higher ground that receives more moisture, for instance around Kintamani, certain species of fern trees, mushrooms and even pine trees thrive. Rice comes in many varieties. Other plants with agricultural value include salak, mangosteen, corn, Kintamani orange, coffee and water spinach.
Over-exploitation by the tourist industry has caused 200 of the island's 400 rivers to dry up, and research suggests that the southern part of Bali will face a water shortage. To ease the shortage, the central government plans to build a water catchment and processing facility at Petanu River in Gianyar; its capacity of 300 litres of water per second is to be channelled to Denpasar, Badung and Gianyar in 2013.
A 2010 Environment Ministry report on its environmental quality index gave Bali a score of 99.65, which was the highest score of Indonesia's 33 provinces. The score considers the level of total suspended solids, dissolved oxygen and chemical oxygen demand in water.
Erosion at Lebih Beach has seen of land lost every year. Decades ago, this beach was used for holy pilgrimages with more than 10,000 people, but they have now moved to Masceti Beach.
In 2017, a year when Bali received nearly 5.7 million tourists, government officials declared a "garbage emergency" after a 3.6-mile stretch of coastline was covered in plastic waste brought in by the tide, amid concerns that the pollution could dissuade visitors from returning. Indonesia is one of the world's worst plastic polluters, with some estimates suggesting the country is the source of around 10 per cent of the world's plastic waste. Indonesia's capital city, Jakarta, features several large rubbish dumps, and it is common to see swaths of plastic bobbing on the city's few waterways.
The province is divided into eight regencies ("kabupaten") and one city ("kota"). These are, with their areas and populations:
In the 1970s, the Balinese economy was largely agriculture-based in terms of both output and employment. Tourism is now the largest single industry in terms of income, and as a result, Bali is one of Indonesia's wealthiest regions. In 2003, around 80% of Bali's economy was tourism-related. By the end of June 2011, the rate of non-performing loans of all banks in Bali was 2.23%, lower than the average non-performing loan rate of the Indonesian banking industry (about 5%). The economy, however, suffered significantly as a result of the Islamist terrorist bombings in 2002 and 2005. The tourism industry has since recovered from these events.
Although tourism produces the GDP's largest output, agriculture is still the island's biggest employer. Fishing also provides a significant number of jobs. Bali is also famous for its artisans who produce a vast array of handicrafts, including batik and ikat cloth and clothing, wooden carvings, stone carvings, painted art and silverware. Notably, individual villages typically adopt a single product, such as wind chimes or wooden furniture.
The Arabica coffee production region is the highland region of Kintamani near Mount Batur. Generally, Balinese coffee is processed using the wet method. This results in a sweet, soft coffee with good consistency. Typical flavours include lemon and other citrus notes. Many coffee farmers in Kintamani are members of a traditional farming system called Subak Abian, which is based on the Hindu philosophy of "Tri Hita Karana". According to this philosophy, the three causes of happiness are good relations with God, other people, and the environment. The Subak Abian system is ideally suited to the production of fair trade and organic coffee production. Arabica coffee from Kintamani is the first product in Indonesia to request a geographical indication.
In 1963 the Bali Beach Hotel in Sanur was built by Sukarno and boosted tourism in Bali. Before the construction of the Bali Beach Hotel, there were only three significant tourist-class hotels on the island. Construction of hotels and restaurants began to spread throughout Bali. Tourism further increased on Bali after the Ngurah Rai International Airport opened in 1970. The Buleleng regency government encouraged the tourism sector as one of the mainstays for economic progress and social welfare.
The tourism industry is primarily focused in the south, though it is also significant in other parts of the island. The main tourist locations are the town of Kuta (with its beach) and its outer suburbs of Legian and Seminyak (which were once independent townships); the east coast town of Sanur (once the only tourist hub); Ubud, towards the centre of the island; Jimbaran, to the south of Ngurah Rai International Airport; and the newer developments of Nusa Dua and Pecatu.
The United States government lifted its travel warnings in 2008. The Australian government issued an advisory on Friday, 4 May 2012, with the overall level of this advisory lowered to 'Exercise a high degree of caution'. The Swedish government issued a new warning on Sunday, 10 June 2012 because of one tourist who died from methanol poisoning. Australia last issued an advisory on Monday, 5 January 2015 due to new terrorist threats.
An offshoot of tourism is the growing real estate industry. Bali's real estate has been developing rapidly in the main tourist areas of Kuta, Legian, Seminyak and Oberoi. Most recently, high-end 5-star projects are under development on the Bukit Peninsula, on the south side of the island, and expensive villas are being built along the cliff sides of south Bali with commanding panoramic ocean views. Many foreign and domestic investors, including Jakarta-based individuals and companies, are fairly active, and investment in other areas of the island also continues to grow. Land prices, despite the worldwide economic crisis, have remained stable.
In the last half of 2008, Indonesia's currency dropped approximately 30% against the US dollar, giving many overseas visitors improved value for their currencies.
Bali's tourism economy survived the Islamist terrorist bombings of 2002 and 2005, and the tourism industry has slowly recovered and surpassed its pre-bombing levels; the long-term trend has been a steady increase in visitor arrivals. In 2010, Bali received 2.57 million foreign tourists, which surpassed the target of 2.0–2.3 million. The average occupancy of starred hotels reached 65%, so the island should still be able to accommodate tourists for some years without the addition of new rooms or hotels, although at peak season some are fully booked.
Bali received the Best Island award from "Travel and Leisure" in 2010. Bali won because of its attractive surroundings (both mountain and coastal areas), diverse tourist attractions, excellent international and local restaurants, and the friendliness of the local people. Balinese culture and religion are also considered a main factor in the award. One of the most prestigious events symbolising the strong relationship between a god and its followers is the Kecak dance. According to a BBC Travel ranking released in 2011, Bali is one of the world's best islands, ranking second after Santorini, Greece.
In 2006, Elizabeth Gilbert's memoir "Eat, Pray, Love" was published, and in August 2010 it was adapted into the film "Eat Pray Love"; the story is set partly at Ubud and Padang Padang Beach in Bali. Both the book and the film fuelled a boom in tourism in Ubud, the hill town and cultural and tourist centre that was the focus of Gilbert's quest for balance and love through traditional spirituality and healing.
In January 2016, after musician David Bowie died, it was revealed that in his will, Bowie asked for his ashes to be scattered in Bali, conforming to Buddhist rituals. He had visited and performed in several Southeast Asian cities early in his career, including Bangkok and Singapore.
Since 2011, China has displaced Japan as the second-largest supplier of tourists to Bali, while Australia still tops the list; India has also emerged as a growing source of tourists.
Chinese tourist numbers increased by 17% from the previous year due to the impact of ACFTA and new direct flights to Bali.
In January 2012, Chinese tourists increased by 222.18% compared to January 2011, while Japanese tourists declined by 23.54% year on year.
Bali authorities reported the island had 2.88 million foreign tourists and 5 million domestic tourists in 2012, marginally surpassing the expectations of 2.8 million foreign tourists.
Based on a Bank Indonesia survey in May 2013, 34.39 per cent of tourists were upper-middle class, spending between $1,286 and $5,592, dominated by visitors from Australia, India, France, China, Germany and the UK. Some Chinese tourists have increased their levels of spending from previous years. 30.26 per cent of tourists were middle class, spending between $662 and $1,285. In 2017 it was expected that Chinese tourists would outnumber Australian tourists.
In January 2020, 10,000 Chinese tourists canceled trips to Bali due to the COVID-19 pandemic.
The Ngurah Rai International Airport is located near Jimbaran, on the isthmus at the southernmost part of the island. Lt. Col. Wisnu Airfield is in north-west Bali.
A coastal road circles the island, and three major two-lane arteries cross the central mountains at passes reaching 1,750 m in height (at Penelokan). The Ngurah Rai Bypass is a four-lane expressway that partly encircles Denpasar. Bali has no railway lines.
In December 2010 the Government of Indonesia invited investors to build a new Tanah Ampo Cruise Terminal at Karangasem, Bali, with a projected worth of $30 million. On 17 July 2011 the first cruise ship (Sun Princess) anchored about away from the wharf of Tanah Ampo harbour. The current pier is only but will eventually be extended to to accommodate international cruise ships. The harbour is safer than the existing facility at Benoa and has a scenic backdrop of east Bali mountains and green rice fields. The tender for improvement was subject to delays, and as of July 2013 the situation was unclear, with cruise line operators complaining and even refusing to use the existing facility at Tanah Ampo.
A Memorandum of Understanding has been signed by two ministers, Bali's governor and the Indonesian Train Company to build of railway along the coast around the island. As of July 2015, no details of this proposed railway had been released. In 2019 it was reported in "Gapura Bali" that Wayan Koster, governor of Bali, "is keen to improve Bali's transportation infrastructure and is considering plans to build an electric rail network across the island".
On 16 March 2011, (Tanjung) Benoa port received the "Best Port Welcome 2010" award from London's "Dream World Cruise Destination" magazine. The government plans to expand the role of Benoa port as an export-import port to boost Bali's trade and industry sector. In 2013, the Tourism and Creative Economy Ministry advised that 306 cruise liners were scheduled to visit Indonesia, an increase of 43 per cent over the previous year.
In May 2011, an integrated Aerial Traffic Control System (ATCS) was implemented to reduce traffic jams at four crossing points: Ngurah Rai statue, Dewa Ruci Kuta crossing, Jimbaran crossing and Sanur crossing. ATCS is an integrated system connecting all traffic lights, CCTVs and other traffic signals with a monitoring office at the police headquarters. It has successfully been implemented in other ASEAN countries and will be implemented at other crossings in Bali.
On 21 December 2011, construction started on the Nusa Dua–Benoa–Ngurah Rai International Airport toll road, which also provides a special lane for motorcycles. The project is being carried out by seven state-owned enterprises led by PT Jasa Marga, which holds 60% of the shares; PT Jasa Marga Bali Tol will construct the toll road (totally with access road). The construction is estimated to cost Rp.2.49 trillion ($273.9 million). The project goes through of mangrove forest and through of beach, both within area. The elevated toll road is built over the mangrove forest on 18,000 concrete pillars occupying 2 hectares of mangrove forest; this was compensated by the planting of 300,000 mangrove trees along the road. On the same day, work also started on the Dewa Ruci underpass at the busy Dewa Ruci junction near Bali Kuta Galeria, with an estimated cost of Rp136 billion ($14.9 million) from the state budget. On 23 September 2013, the Bali Mandara Toll Road was opened, the Dewa Ruci Junction (Simpang Siur) underpass having been opened previously.
To solve chronic traffic problems, the province will also build a toll road connecting Serangan with Tohpati, a toll road connecting Kuta, Denpasar and Tohpati and a flyover connecting Kuta and Ngurah Rai Airport.
The population of Bali was 3,890,757 as of the 2010 Census, and 4,148,588 at the 2015 Intermediate Census; the latest estimate (for mid 2019) is 4,362,000. There are an estimated 30,000 expatriates living in Bali.
A DNA study in 2005 by Karafet et al. found that 12% of Balinese Y-chromosomes are of likely Indian origin, while 84% are of likely Austronesian origin, and 2% of likely Melanesian origin.
Pre-modern Bali had four castes, as Jeff Lewis and Belinda Lewis state, but with a "very strong tradition of communal decision-making and interdependence". The four castes have been classified as Soedra (Shudra), Wesia (Vaishyas), Satrias (Kshatriyas) and Brahmana (Brahmin).
Nineteenth-century scholars such as Crawfurd and Friederich suggested that the Balinese caste system had Indian origins, but Helen Creese states that scholars such as Brumund, who had visited and stayed on the island of Bali, suggested that their field observations conflicted with the "received understandings concerning its Indian origins". In Bali, the Shudra (locally spelt "Soedra") have typically been the temple priests, though depending on the demographics, a temple priest may also be from the other three castes. In most regions, it has been the Shudra who typically make offerings to the gods on behalf of the Hindu devotees, chant prayers, recite "meweda" (Vedas), and set the course of Balinese temple festivals.
Unlike most of Muslim-majority Indonesia, about 83.5% of Bali's population adheres to Balinese Hinduism, formed as a combination of existing local beliefs and Hindu influences from mainland Southeast Asia and South Asia. Minority religions include Islam (13.37%), Christianity (2.47%), and Buddhism (0.5%).
The general beliefs and practices of "Agama Hindu Dharma" mix ancient traditions and contemporary pressures placed by Indonesian laws that permit only monotheist belief under the national ideology of "panca sila". Traditionally, Hinduism in Indonesia had a pantheon of deities, and that tradition of belief continues in practice; further, Hinduism in Indonesia granted freedom and flexibility to Hindus as to when, how and where to pray. However, officially, the Indonesian government considers and advertises Indonesian Hinduism as a monotheistic religion with certain officially recognised beliefs that comply with its national ideology. Indonesian school textbooks describe Hinduism as having one supreme being, Hindus offering three daily mandatory prayers, and Hinduism as having certain common beliefs that in part parallel those of Islam. Scholars contest whether these Indonesian government recognised and assigned beliefs reflect the traditional beliefs and practices of Hindus in Indonesia before Indonesia gained independence from Dutch colonial rule.
Balinese Hinduism has roots in Indian Hinduism and Buddhism, which arrived through Java. Hindu influences reached the Indonesian Archipelago as early as the first century. Historical evidence is unclear about the diffusion process of cultural and spiritual ideas from India. Java legends refer to the Saka era, traced to 78 AD. Stories from the Mahabharata epic have been traced in Indonesian islands to the 1st century; however, the versions mirror those found in the southeast Indian peninsular region (now Tamil Nadu, southern Karnataka and Andhra Pradesh).
The Bali tradition adopted the pre-existing animistic traditions of the indigenous people. This influence strengthened the belief that the gods and goddesses are present in all things. Every element of nature, therefore, possesses its power, which reflects the power of the gods. A rock, tree, dagger, or woven cloth is a potential home for spirits whose energy can be directed for good or evil. Balinese Hinduism is deeply interwoven with art and ritual. Ritualising states of self-control are a notable feature of religious expression among the people, who for this reason have become famous for their graceful and decorous behaviour.
Apart from the majority of Balinese Hindus, there also exist Chinese immigrants whose traditions have melded with those of the locals. As a result, these Sino-Balinese not only embrace their original religion, which is a mixture of Buddhism, Christianity, Taoism and Confucianism, but also find a way to harmonise it with the local traditions. Hence, it is not uncommon to find local Sino-Balinese during the local temple's "odalan". Moreover, Balinese Hindu priests are invited to perform rites alongside a Chinese priest in the event of the death of a Sino-Balinese. Nevertheless, the Sino-Balinese claim to embrace Buddhism for administrative purposes, such as on their identity cards.
Balinese and Indonesian are the most widely spoken languages in Bali, and the vast majority of Balinese people are bilingual or trilingual. The most common spoken language around the tourist areas is Indonesian, as many people in the tourist sector are not solely Balinese, but migrants from Java, Lombok, Sumatra, and other parts of Indonesia. There are several indigenous Balinese languages, but most Balinese can also use the most widely spoken option: modern common Balinese. The usage of different Balinese languages was traditionally determined by the Balinese caste system and by clan membership, but this tradition is diminishing. Kawi and Sanskrit are also commonly used by some Hindu priests in Bali, as Hindu literature was mostly written in Sanskrit.
English and Chinese are the next most common languages (and the primary foreign languages) of many Balinese, owing to the requirements of the tourism industry, as well as the English-speaking community and huge Chinese-Indonesian population. Other foreign languages, such as Japanese, Korean, French, Russian or German are often used in multilingual signs for foreign tourists.
Bali is renowned for its diverse and sophisticated art forms, such as painting, sculpture, woodcarving, handcrafts, and performing arts. Balinese cuisine is also distinctive. Balinese percussion orchestra music, known as "gamelan", is highly developed and varied. Balinese performing arts often portray stories from Hindu epics such as the Ramayana but with heavy Balinese influence. Famous Balinese dances include "pendet", "legong", "baris", "topeng", "barong", "gong kebyar", and "kecak" (the monkey dance). Bali boasts one of the most diverse and innovative performing arts cultures in the world, with paid performances at thousands of temple festivals, private ceremonies, or public shows.
Throughout the year, there are a number of festivals celebrated locally or island-wide according to the traditional calendars.
The Hindu New Year, "Nyepi", is celebrated in the spring by a day of silence. On this day everyone stays at home and tourists are encouraged (or required) to remain in their hotels. On the day before New Year, large and colourful sculptures of "Ogoh-ogoh" monsters are paraded and burned in the evening to drive away evil spirits. Other festivals throughout the year are specified by the Balinese "pawukon" calendrical system.
Celebrations are held for many occasions such as a tooth-filing (coming-of-age ritual), cremation or "odalan" (temple festival). One of the most important concepts that Balinese ceremonies have in common is that of "désa kala patra", which refers to how ritual performances must be appropriate in both the specific and general social context. Many of the ceremonial art forms such as "wayang kulit" and "topeng" are highly improvisatory, providing flexibility for the performer to adapt the performance to the current situation. Many celebrations call for a loud, boisterous atmosphere with much activity and the resulting aesthetic, "ramé", is distinctively Balinese. Often two or more "gamelan" ensembles will be performing well within earshot, and sometimes compete with each other to be heard. Likewise, the audience members talk amongst themselves, get up and walk around, or even cheer on the performance, which adds to the many layers of activity and the liveliness typical of "ramé".
"Kaja" and "kelod" are the Balinese equivalents of North and South, which refer to one's orientation between the island's largest mountain Gunung Agung ("kaja"), and the sea ("kelod"). In addition to spatial orientation, "kaja" and "kelod" have the connotation of good and evil; gods and ancestors are believed to live on the mountain whereas demons live in the sea. Buildings such as temples and residential homes are spatially oriented by having the most sacred spaces closest to the mountain and the unclean places nearest to the sea.
Most temples have an inner courtyard and an outer courtyard which are arranged with the inner courtyard furthest "kaja". These spaces serve as performance venues since most Balinese rituals are accompanied by any combination of music, dance and drama. The performances that take place in the inner courtyard are classified as "wali", the most sacred rituals which are offerings exclusively for the gods, while the outer courtyard is where "bebali" ceremonies are held, which are intended for gods and people. Lastly, performances meant solely for the entertainment of humans take place outside the walls of the temple and are called "bali-balihan". This three-tiered system of classification was standardised in 1971 by a committee of Balinese officials and artists to better protect the sanctity of the oldest and most sacred Balinese rituals from being performed for a paying audience.
Tourism, Bali's chief industry, has provided the island with a foreign audience that is eager to pay for entertainment, thus creating new performance opportunities and more demand for performers. The impact of tourism is controversial since before it became integrated into the economy, the Balinese performing arts did not exist as a capitalist venture, and were not performed for entertainment outside of their respective ritual context. Since the 1930s sacred rituals such as the "barong" dance have been performed both in their original contexts, as well as exclusively for paying tourists. This has led to new versions of many of these performances which have developed according to the preferences of foreign audiences; some villages have a "barong" mask specifically for non-ritual performances as well as an older mask which is only used for sacred performances.
Balinese society continues to revolve around each family's ancestral village, to which the cycle of life and religion is closely tied. Coercive aspects of traditional society, such as customary-law sanctions imposed by traditional authorities like village councils (including "kasepekang", or shunning), have risen in importance as a consequence of the democratisation and decentralisation of Indonesia since 1998.
Besides Balinese sacred rituals and festivals, the government presents the annual Bali Arts Festival to showcase the island's performing arts and the work of local artists. It is held from the second week of June until the end of July. The Ubud Writers and Readers Festival, Southeast Asia's largest annual festival of words and ideas, is held in Ubud in October and draws celebrated writers, artists, thinkers and performers from around the world.
Bali hosted Miss World 2013 (the 63rd edition of the Miss World pageant), the first time Indonesia hosted an international beauty pageant.
Bali is a major world surfing destination with popular breaks dotted across the southern coastline and around the offshore island of Nusa Lembongan.
As part of the Coral Triangle, Bali, including Nusa Penida, offers a wide range of dive sites with varying types of reefs, and tropical aquatic life.
Bali hosted the 2008 Asian Beach Games, the second time Indonesia had hosted an Asia-level multi-sport event, after Jakarta hosted the 1962 Asian Games.
In football, Bali is home to Bali United football club, which plays in Liga 1.
The team was relocated from Samarinda, East Kalimantan, to Gianyar, Bali. Harbiansyah Hanafiah, the main commissioner of Bali United, explained that he changed the name and moved the home base because no club from Bali was represented in Indonesia's highest football tier, and because local fans in Samarinda preferred to support Pusamania Borneo F.C. rather than Persisam.
In June 2012, Subak, the irrigation system for paddy fields in Jatiluwih, central Bali, was inscribed as a UNESCO World Heritage Site.
Bulgarian language
Bulgarian is a South Slavic language spoken in Southeastern Europe, primarily in Bulgaria. It is the language of Bulgarians.
Along with the closely related Macedonian language (the two together forming the East South Slavic group), it is a member of the Balkan sprachbund. The two languages share several characteristics that set them apart from all other Slavic languages: the elimination of case declension, the development of a suffixed definite article, and the lack of a verb infinitive; at the same time, they retain and have further developed the Proto-Slavic verb system. One major development is the innovation of evidential verb forms that encode the source of information: witnessed, inferred, or reported.
It is the official language of Bulgaria, and since 2007 has been among the official languages of the European Union. It is also spoken by minorities in several other countries.
One can divide the development of the Bulgarian language into several periods.
"Bulgarian" was the first "Slavic" language attested in writing. As Slavic linguistic unity lasted into late antiquity, the oldest manuscripts initially referred to this language as языкъ словяньскъ, "the Slavic language". In the Middle Bulgarian period this name was gradually replaced by the name , the "Bulgarian language". In some cases, this name was used not only with regard to the contemporary Middle Bulgarian language of the copyist but also to the period of Old Bulgarian. The most notable example of this anachronism is the Service of Saint Cyril from Skopje (Скопски миней), a 13th-century Middle Bulgarian manuscript from northern Macedonia, according to which St. Cyril preached with "Bulgarian" books among the Moravian Slavs. The first mention of the language as the "Bulgarian language" instead of the "Slavonic language" comes in the work of the Greek clergy of the Archbishopric of Ohrid in the 11th century, for example in the Greek hagiography of Clement of Ohrid by Theophylact of Ohrid (late 11th century).
During the Middle Bulgarian period, the language underwent dramatic changes, losing the Slavonic case system, but preserving the rich verb system (while the development was exactly the opposite in other Slavic languages) and developing a definite article. It was influenced by its non-Slavic neighbors in the Balkan language area (mostly grammatically) and later also by Turkish, which was the official language of the Ottoman Empire, in the form of the Ottoman Turkish language, mostly lexically. As a national revival occurred toward the end of the period of Ottoman rule (mostly during the 19th century), a modern Bulgarian literary language gradually emerged that drew heavily on Church Slavonic/Old Bulgarian (and to some extent on literary Russian, which had preserved many lexical items from Church Slavonic) and later reduced the number of Turkish and other Balkan loans. Today one difference between Bulgarian dialects in the country and literary spoken Bulgarian is the significant presence of Old Bulgarian words and even word forms in the latter. Russian loans are distinguished from Old Bulgarian ones on the basis of the presence of specifically Russian phonetic changes, as in оборот (turnover, rev), непонятен (incomprehensible), ядро (nucleus) and others. Many other loans from French, English and the classical languages have subsequently entered the language as well.
Modern Bulgarian was based essentially on the Eastern dialects of the language, but its pronunciation is in many respects a compromise between East and West Bulgarian (see especially the phonetic sections below). Following the efforts of some figures of the National awakening of Bulgaria (most notably Neofit Rilski and Ivan Bogorov), there were many attempts to codify a standard Bulgarian language; however, much argument surrounded the choice of norms. Between 1835 and 1878 more than 25 proposals were put forward and "linguistic chaos" ensued. Eventually the eastern dialects prevailed,
and in 1899 the Bulgarian Ministry of Education officially codified a standard Bulgarian language based on the Drinov-Ivanchev orthography.
The language is mainly split into two broad dialect areas, based on the different reflexes of the Common Slavic yat vowel (Ѣ). This split, which occurred at some point during the Middle Ages, led to the development of Bulgaria's Eastern and Western dialect areas.
The literary language norm, which is generally based on the Eastern dialects, also has the Eastern alternating reflex of "yat". However, it has not incorporated the general Eastern umlaut of "all" synchronic or even historic "ya" sounds into "e" before front vowels – e.g. поляна ("polyana") vs. полени ("poleni") "meadow – meadows" or even жаба ("zhaba") vs. жеби ("zhebi") "frog – frogs", even though it co-occurs with the yat alternation in almost all Eastern dialects that have it (except a few dialects along the yat border, e.g. in the Pleven region).
More examples of the "yat" umlaut in the literary language are:
Until 1945, Bulgarian orthography did not reveal this alternation and used the original Old Slavic Cyrillic letter "yat" (Ѣ), which was commonly called двойно е ("dvoyno e") at the time, to express the historical "yat" vowel or at least root vowels displaying the "ya – e" alternation. The letter was used in each occurrence of such a root, regardless of the actual pronunciation of the vowel: thus, both "mlyako" and "mlekar" were spelled with (Ѣ). Among other things, this was seen as a way to "reconcile" the Western and the Eastern dialects and maintain language unity at a time when much of Bulgaria's Western dialect area was controlled by Serbia and Greece, but there were still hopes and occasional attempts to recover it. With the 1945 orthographic reform, this letter was abolished and the present spelling was introduced, reflecting the alternation in pronunciation.
This had implications for some grammatical constructions:
Sometimes, with the changes, words began to be spelled as other words with different meanings, e.g.:
In spite of the literary norm regarding the yat vowel, many people living in Western Bulgaria, including the capital Sofia, will fail to observe its rules. While the norm requires the realizations "vidyal" vs. "videli" (he has seen; they have seen), some natives of Western Bulgaria will preserve their local dialect pronunciation with "e" for all instances of "yat" (e.g. "videl", "videli"). Others, attempting to adhere to the norm, will actually use the "ya" sound even in cases where the standard language has "e" (e.g. "vidyal", "vidyali"). The latter hypercorrection is called свръхякане ("svrah-yakane" ≈"over-"ya"-ing").
Bulgarian is the only Slavic language whose literary standard does not naturally contain the iotated sound (or its palatalized variant), except in non-Slavic loanwords. The sound is common in all modern Slavic languages (e.g. Czech "medvěd" 'bear', Polish "pięć" 'five', Serbo-Croatian "jelen" 'deer', Ukrainian "немає" 'there is not...', Macedonian "пишување" 'writing', etc.), as well as in some Western Bulgarian dialectal forms, e.g. "ора̀н'е" (standard Bulgarian: "оране", 'ploughing'), but it is not represented in standard Bulgarian speech or writing. Even where it occurs in other Slavic words, in Standard Bulgarian it is usually transcribed and pronounced as a pure vowel: Boris Yeltsin is "Eltsin", Yekaterinburg is "Ekaterinburg" and Sarajevo is "Saraevo", although, because the sound begins a stressed syllable, Jelena Janković is "Yelena".
Until the period immediately following the Second World War, all Bulgarian and the majority of foreign linguists referred to the South Slavic dialect continuum spanning the area of modern Bulgaria, North Macedonia and parts of Northern Greece as a group of Bulgarian dialects. In contrast, Serbian sources tended to label them "south Serbian" dialects. Some local naming conventions included "bolgárski", "bugárski" and so forth. The codifiers of the standard Bulgarian language, however, did not wish to make any allowances for a pluricentric "Bulgaro-Macedonian" compromise. In 1870 Marin Drinov, who played a decisive role in the standardization of the Bulgarian language, rejected the proposal of Parteniy Zografski and Kuzman Shapkarev for a mixed eastern and western Bulgarian/Macedonian foundation of the standard Bulgarian language, stating in his article in the newspaper Makedoniya: "Such an artificial assembly of written language is something impossible, unattainable and never heard of."
After 1944 the People's Republic of Bulgaria and the Socialist Federal Republic of Yugoslavia began a policy of making Macedonia into the connecting link for the establishment of a new Balkan Federative Republic and stimulating there the development of a distinct Macedonian consciousness. With the proclamation of the Socialist Republic of Macedonia as part of the Yugoslav federation, the new authorities also started measures that would overcome the pro-Bulgarian feeling among parts of its population, and in 1945 a separate Macedonian language was codified. After 1958, when the pressure from Moscow decreased, Sofia reverted to the view that the Macedonian language did not exist as a separate language. Nowadays, Bulgarian and Greek linguists, as well as some linguists from other countries, still consider the various Macedonian dialects as part of the broader Bulgarian pluricentric dialectal continuum. Outside Bulgaria and Greece, Macedonian is generally considered an autonomous language within the South Slavic dialect continuum. Sociolinguists agree that the question whether Macedonian is a dialect of Bulgarian or a language is a political one and cannot be resolved on a purely linguistic basis, because dialect continua do not allow for either/or judgments.
In 886 AD, the Bulgarian Empire introduced the Glagolitic alphabet, which had been devised by Saints Cyril and Methodius in the 850s. The Glagolitic alphabet was gradually superseded in later centuries by the Cyrillic script, developed around the Preslav Literary School in Bulgaria in the 9th century.
Several Cyrillic alphabets with 28 to 44 letters were used in the beginning and the middle of the 19th century during the efforts on the codification of Modern Bulgarian until an alphabet with 32 letters, proposed by Marin Drinov, gained prominence in the 1870s. The alphabet of Marin Drinov was used until the orthographic reform of 1945, when the letters yat (uppercase Ѣ, lowercase ѣ) and yus (uppercase Ѫ, lowercase ѫ) were removed from its alphabet, reducing the number of letters to 30.
With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek scripts.
Bulgarian possesses a phonology similar to that of the rest of the South Slavic languages, notably lacking Serbo-Croatian's phonemic vowel length and tones and alveo-palatal affricates. The eastern dialects exhibit palatalization of consonants before front vowels ( and ) and reduction of vowel phonemes in unstressed position (causing mergers of and , and , and ); both patterns have partial parallels in Russian and lead to a partly similar sound. The western dialects are like Macedonian and Serbo-Croatian in that they do not have allophonic palatalization and have only little vowel reduction.
Bulgarian has six vowel phonemes, but at least eight distinct phones can be distinguished when reduced allophones are taken into consideration.
The parts of speech in Bulgarian are divided into ten types, which fall into two broad classes: mutable and immutable. The difference is that mutable parts of speech vary grammatically, whereas the immutable ones do not change, regardless of their use. The five classes of mutables are: "nouns", "adjectives", "numerals", "pronouns" and "verbs". Syntactically, the first four of these form the group of the noun, or the nominal group. The immutables are: "adverbs", "prepositions", "conjunctions", "particles" and "interjections". Verbs and adverbs form the group of the verb, or the verbal group.
Nouns and adjectives have the categories grammatical gender, number, case (only vocative) and definiteness in Bulgarian. Adjectives and adjectival pronouns agree with nouns in number and gender. Pronouns have gender and number and retain (as in nearly all Indo-European languages) a more significant part of the case system.
There are three grammatical genders in Bulgarian: "masculine", "feminine" and "neuter". The gender of a noun can largely be inferred from its ending: nouns ending in a consonant ("zero ending") are generally masculine (for example, 'city', 'son', 'man'); those ending in –а/–я (-a/-ya) ('woman', 'daughter', 'street') are normally feminine; and nouns ending in –е, –о are almost always neuter ('child', 'lake'), as are those rare words (usually loanwords) that end in –и, –у, and –ю ('tsunami', 'taboo', 'menu'). Perhaps the most significant exception to the above are the relatively numerous nouns that end in a consonant and yet are feminine: these comprise, firstly, a large group of nouns with zero ending expressing quality, degree or an abstraction, including all nouns ending in –ост/–ест (-ost/-est) ('wisdom', 'vileness', 'loveliness', 'sickness', 'love'), and secondly, a much smaller group of irregular nouns with zero ending which denote tangible objects or concepts ('blood', 'bone', 'evening', 'night'). There are also some commonly used words that end in a vowel and yet are masculine: 'father', 'grandfather', 'uncle', and others.
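As a rough illustration (not part of the original article), the ending heuristics above can be sketched in Python. The example nouns град 'city', жена 'woman', езеро 'lake' and мъдрост 'wisdom' are supplied here for illustration; real usage would also need the listed classes of exceptions.

```python
# Heuristic sketch of Bulgarian gender inference from noun endings,
# following the rules described above. The listed exceptions (feminine
# consonant-stems such as кръв 'blood', masculine vowel-stems such as
# баща 'father') show why this can only be a first approximation.

FEMININE_ABSTRACT_SUFFIXES = ("ост", "ест")  # abstract nouns, always feminine

def guess_gender(noun: str) -> str:
    """Return the most likely grammatical gender of a Bulgarian noun."""
    if noun.endswith(FEMININE_ABSTRACT_SUFFIXES):
        return "feminine"       # quality/abstraction nouns in -ост/-ест
    if noun[-1] in "ая":
        return "feminine"       # -а/-я endings are normally feminine
    if noun[-1] in "еоиую":
        return "neuter"         # -е/-о, plus loanwords in -и/-у/-ю
    return "masculine"          # zero (consonant) ending is usually masculine

# Hypothetical usage with common vocabulary:
print(guess_gender("град"))     # masculine ('city')
print(guess_gender("жена"))     # feminine ('woman')
print(guess_gender("езеро"))    # neuter ('lake')
print(guess_gender("мъдрост"))  # feminine ('wisdom')
```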
The plural forms of the nouns do not express their gender as clearly as the singular ones, but may also provide some clues to it: the ending (-i) is more likely to be used with a masculine or feminine noun ( 'facts', 'sicknesses'), while one in belongs more often to a neuter noun ( 'lakes'). Also, the plural ending occurs only in masculine nouns.
Two numbers are distinguished in Bulgarian–singular and plural. A variety of plural suffixes is used, and the choice between them is partly determined by their ending in singular and partly influenced by gender; in addition, irregular declension and alternative plural forms are common. Words ending in (which are usually feminine) generally have the plural ending , upon dropping of the singular ending. Of nouns ending in a consonant, the feminine ones also use , whereas the masculine ones usually have for polysyllables and for monosyllables (however, exceptions are especially common in this group). Nouns ending in (most of which are neuter) mostly use the suffixes (both of which require the dropping of the singular endings) and .
With cardinal numbers and related words such as ('several'), masculine nouns use a special count form in , which stems from the Proto-Slavonic dual: ('two/three chairs') versus ('these chairs'); cf. feminine ('two/three/these books') and neuter ('two/three/these beds'). However, a recently developed language norm requires that count forms should only be used with masculine nouns that do not denote persons. Thus, ('two/three students') is perceived as more correct than , while the distinction is retained in cases such as ('two/three pencils') versus ('these pencils').
Cases exist only in the personal and some other pronouns (as they do in many other modern Indo-European languages), with nominative, accusative, dative and vocative forms. Vestiges are present in a number of phraseological units and sayings. The major exception is the vocative, which is still in use for masculine nouns (with the endings -е, -о and -ю) and feminine nouns (-[ь/й]о and -е) in the singular.
In modern Bulgarian, definiteness is expressed by a definite article which is postfixed to the noun, much as in the Scandinavian languages or Romanian (indefinite: 'person'; definite: 'the person'), or to the first nominal constituent of a definite noun phrase (indefinite: 'a good person'; definite: 'the good person'). There are four singular definite articles. Again, the choice between them is largely determined by the noun's ending in the singular. Nouns that end in a consonant and are masculine use –ът/–ят when they are grammatical subjects, and –а/–я elsewhere. Nouns that end in a consonant and are feminine, as well as nouns that end in –а/–я (most of which are feminine, too), use –та. Nouns that end in –е/–о use –то.
The plural definite article is –те for all nouns except for those whose plural form ends in –а/–я; these get –та instead. When postfixed to adjectives the definite articles are –ят/–я for masculine gender (again, with the longer form being reserved for grammatical subjects), –та for feminine gender, –то for neuter gender, and –те for plural.
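The article-selection rules above can be sketched in code. This is a simplified illustration, not part of the article: the soft-stem –ят/–я variant is omitted, and the sample nouns (град 'city', жена 'woman', езеро 'lake', книги 'books') are assumptions supplied for the example.

```python
# Simplified sketch of choosing the Bulgarian postfixed definite article,
# per the rules described above. Omits the -ят/-я soft-stem variant.

def definite_article(noun: str, gender: str, plural: bool = False,
                     subject: bool = True) -> str:
    """Append the appropriate definite article to a Bulgarian noun."""
    if plural:
        # plural article is -те, except plurals in -а/-я, which take -та
        return noun + ("та" if noun[-1] in "ая" else "те")
    if noun[-1] in "ео":
        return noun + "то"      # nouns in -е/-о take -то
    if noun[-1] in "ая" or gender == "feminine":
        return noun + "та"      # -а/-я nouns and feminine consonant-stems
    # masculine consonant-stems: full form -ът as subject, short -а elsewhere
    return noun + ("ът" if subject else "а")

print(definite_article("град", "masculine"))                 # градът
print(definite_article("град", "masculine", subject=False))  # града
print(definite_article("жена", "feminine"))                  # жената
print(definite_article("езеро", "neuter"))                   # езерото
print(definite_article("книги", "feminine", plural=True))    # книгите
```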
Both groups agree in gender and number with the noun they are appended to. They may also take the definite article as explained above.
Pronouns may vary in gender, number, and definiteness, and are the only parts of speech that have retained case inflections. Three cases are exhibited by some groups of pronouns – nominative, accusative and dative. The distinguishable types of pronouns include the following: personal, relative, reflexive, interrogative, negative, indefinite, summative and possessive.
The Bulgarian verb can take up to 3,000 distinct forms, as it varies in person, number, voice, aspect, mood, tense and in some cases gender.
Finite verbal forms are "simple" or "compound" and agree with subjects in person (first, second and third) and number (singular, plural). In addition to that, past compound forms using participles vary in gender (masculine, feminine, neuter) and voice (active and passive) as well as aspect (perfective/aorist and imperfective).
Bulgarian verbs express lexical aspect: perfective verbs signify the completion of the action of the verb and form past perfective (aorist) forms; imperfective ones are neutral with regard to it and form past imperfective forms. Most Bulgarian verbs can be grouped in perfective-imperfective pairs (imperfective/perfective: "come", "arrive"). Perfective verbs can usually be formed from imperfective ones by suffixation or prefixation, but the resultant verb often deviates in meaning from the original. In the pair examples above, aspect is stem-specific and therefore there is no difference in meaning.
In Bulgarian, there is also grammatical aspect. Three grammatical aspects are distinguishable: neutral, perfect and pluperfect. The neutral aspect comprises the three simple tenses and the future tense. The pluperfect is manifest in tenses that use double or triple auxiliary "be" participles like the past pluperfect subjunctive. Perfect constructions use a single auxiliary "be".
The traditional interpretation is that in addition to the four moods (наклонения ) shared by most other European languages – indicative (изявително, ), imperative (повелително ), subjunctive ( ) and conditional (условно, ) – Bulgarian has one more mood to describe a general category of unwitnessed events: the inferential (преизказно ) mood. However, most contemporary Bulgarian linguists exclude the subjunctive and the inferential from the list of Bulgarian moods (thus placing the number of Bulgarian moods at a total of three: indicative, imperative and conditional) and do not consider them moods, viewing them instead as verbal morphosyntactic constructs or separate grammemes of the verb class. The possible existence of a few other moods has been discussed in the literature. Most Bulgarian school grammars teach the traditional view of four Bulgarian moods: indicative, imperative, conditional and inferential, excluding the subjunctive.
There are three grammatically distinctive positions in time – present, past and future – which combine with aspect and mood to produce a number of formations. Normally, in grammar books these formations are viewed as separate tenses – i.e. "past imperfect" would mean that the verb is in past tense, in the imperfective aspect, and in the indicative mood (since no other mood is shown). There are more than 40 different tenses across Bulgarian's two aspects and five moods.
In the indicative mood, there are three simple tenses:
In the indicative there are also the following compound tenses:
The four perfect constructions above can vary in aspect depending on the aspect of the main-verb participle; they are in fact pairs of imperfective and perfective aspects. Verbs in forms using past participles also vary in voice and gender.
There is only one simple tense in the imperative mood, the present, and there are simple forms only for the second-person singular, -и/-й (-i, -y/i), and plural, -ете/-йте (-ete, -yte), e.g. уча ('to study'): , sg., , pl.; 'to play': , . There are compound imperative forms for all persons and numbers in the present compound imperative (, ), the present perfect compound imperative (, ) and the rarely used present pluperfect compound imperative (, ).
The conditional mood consists of five compound tenses, most of which are not grammatically distinguishable. The present, future and past conditional use a special past form of the stem би- (bi – "be") and the past participle (, , 'I would study'). The past future conditional and the past future perfect conditional coincide in form with the respective indicative tenses.
The subjunctive mood is rarely documented as a separate verb form in Bulgarian (being, morphologically, a sub-instance of the quasi-infinitive construction with the particle да and a normal finite verb form), but it is nevertheless used regularly. The most common form, often mistaken for the present tense, is the present subjunctive ( , 'I had better go'). The difference between the present indicative and the present subjunctive tense is that the subjunctive can be formed by "both" perfective and imperfective verbs. It has completely replaced the infinitive and the supine in complex expressions (see below). It is also employed to express opinion about "possible" future events. The past perfect subjunctive ( , 'I had better have gone') refers to "possible" events in the past which "did not" take place, and the present pluperfect subjunctive ( ) may be used about both past and future events arousing feelings of discontent, suspicion, etc., and has no perfect English translation.
The inferential mood has five pure tenses. Two of them are simple – "past aorist inferential" and "past imperfect inferential" – and are formed by the past participles of perfective and imperfective verbs, respectively. There are also three compound tenses – "past future inferential", "past future perfect inferential" and "past perfect inferential". All these tenses' forms are gender-specific in the singular. There are also conditional and compound-imperative crossovers. The existence of inferential forms has been attributed to Turkic influences by most Bulgarian linguists. Morphologically, they are derived from the perfect.
Bulgarian has the following participles:
The participles are inflected for gender, number, and definiteness, and agree with the subject when forming compound tenses (see tenses above). When used attributively, the inflectional attributes agree with the noun being modified.
Bulgarian uses reflexive verbal forms (i.e. actions which are performed by the agent onto him- or herself) which behave much as they do in many other Indo-European languages, such as French and Spanish. The reflexive is expressed by the invariable particle "se", originally a clitic form of the accusative reflexive pronoun. Thus –
When the action is performed on others, other particles are used, just like in any normal verb, e.g. –
Sometimes, the reflexive verb form has a similar but not necessarily identical meaning to the non-reflexive verb –
In other cases, the reflexive verb has a completely different meaning from its non-reflexive counterpart –
When the action is performed on an indirect object, the particles change to si and its derivatives –
In some cases, the particle "si" is ambiguous between the indirect object and the possessive meaning –
The difference between transitive and intransitive verbs can lead to significant differences in meaning with minimal change, e.g. –
The particle "si" is often used to indicate a more personal relationship to the action, e.g. –
The most productive way to form adverbs is to derive them from the neuter singular form of the corresponding adjective—e.g. (fast), (hard), (strange)—but adjectives ending in use the masculine singular form (i.e. ending in ), instead—e.g. (heroically), (bravely, like a man), (skillfully). The same pattern is used to form adverbs from the (adjective-like) ordinal numerals, e.g. (firstly), (secondly), (thirdly), and in some cases from (adjective-like) cardinal numerals, e.g. (twice as/double), (three times as), (five times as).
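The adjective-to-adverb pattern above can be sketched as a small rule, with the caveat that this is an illustration, not part of the article: the –ски ending and the sample adjectives бърз 'fast' and геройски 'heroic' are assumptions supplied here, and stem alternations (such as the fleeting ъ in добър 'good' → добре/добро) are ignored.

```python
# Illustrative sketch of the productive Bulgarian adverb-formation rule:
# take the neuter singular of the adjective (citation-form adjective + -о),
# except for adjectives in -ски, which keep the masculine singular form.
# Stem alternations (e.g. fleeting vowels) are not handled.

def adverb_from_adjective(adj_masc: str) -> str:
    """Derive an adverb from a masculine singular adjective (simplified)."""
    if adj_masc.endswith("ски"):
        return adj_masc         # e.g. геройски 'heroically' (assumed example)
    return adj_masc + "о"       # e.g. бърз 'fast' -> бързо 'quickly' (assumed)

print(adverb_from_adjective("бърз"))      # бързо
print(adverb_from_adjective("геройски"))  # геройски
```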
The remaining adverbs are formed in ways that are no longer productive in the language. A small number are original (not derived from other words), for example: (here), (there), (inside), (outside), (very/much) etc. The rest are mostly fossilized case forms, such as:
Adverbs can sometimes be reduplicated to emphasize the qualitative or quantitative properties of actions, moods or relations as performed by the subject of the sentence: "" ("rather slowly"), "" ("with great difficulty"), "" ("quite", "thoroughly").
Bulgarian employs clitic doubling, mostly for emphatic purposes. For example, the following constructions are common in colloquial Bulgarian:
The phenomenon is practically obligatory in the spoken language in the case of inversion signalling information structure (in writing, clitic doubling may be skipped in such instances, with a somewhat bookish effect):
Sometimes, the doubling signals syntactic relations, thus:
This is contrasted with:
In this case, clitic doubling can be a colloquial alternative of the more formal or bookish passive voice, which would be constructed as follows:
Clitic doubling is also fully obligatory, both in the spoken and in the written norm, in clauses including several special expressions that use the short accusative and dative pronouns such as "" (I feel like playing), студено ми е (I am cold), and боли ме ръката (my arm hurts):
Apart from the examples above, clitic doubling is considered inappropriate in a formal context.
Questions in Bulgarian which do not use a question word (such as who? what? etc.) are formed with the particle ли after the verb; a subject is not necessary, as the verbal conjugation suggests who is performing the action:
While the particle generally goes after the verb, it can go after a noun or adjective if a contrast is needed:
A verb is not always necessary, e.g. when presenting a choice:
Rhetorical questions can be formed by adding to a question word, thus forming a "double interrogative" –
The same construction +не ('no') is an emphasized positive –
The verb – 'to be' is also used as an auxiliary for forming the perfect, the passive and the conditional:
Two alternate forms of exist:
The impersonal verb (lit. 'it wants') is used to form the (positive) future tense:
The negative future is formed with the invariable construction (see below):
The past tense of this verb – щях is conjugated to form the past conditional ('would have' – again, with да, since it is "irrealis"):
The verbs ('to have') and ('to not have'):
In Bulgarian, there are several conjunctions all translating into English as "but", which are all used in distinct situations. They are (), (), (), (), and () (and () – "however", identical in use to ).
While there is some overlapping between their uses, in many cases they are specific. For example, is used for a choice – – "not this one, but that one" (compare Spanish ), while is often used to provide extra information or an opinion – – "I said it, but I was wrong". Meanwhile, provides contrast between two situations, and in some sentences can even be translated as "although", "while" or even "and" – – "I'm working, and he's daydreaming".
Very often, different words can be used to alter the emphasis of a sentence – e.g. while and both mean "I smoke, but I shouldn't", the first sounds more like a statement of fact ("...but I mustn't"), while the second feels more like a "judgement" ("...but I oughtn't"). Similarly, and both mean "I don't want to, but he does", however the first emphasizes the fact that "he" wants to, while the second emphasizes the "wanting" rather than the person.
Some common expressions use these words, and some can be used alone as interjections:
Bulgarian has several abstract particles which are used to strengthen a statement. These have no precise translation in English. The particles are strictly informal and can even be considered rude by some people and in some situations. They are mostly used at the end of questions or instructions.
These are "tagged" on to the beginning or end of a sentence to express the mood of the speaker in relation to the situation. They are mostly interrogative or slightly imperative in nature. There is no change in the grammatical mood when these are used (although they may be expressed through different grammatical moods in other languages).
These express intent or desire, perhaps even pleading. They can be seen as a sort of cohortative side to the language. (Since they can be used by themselves, they could even be considered as verbs in their own right.) They are also highly informal.
These particles can be combined with the vocative particles for greater effect, e.g. (let me see), or even exclusively in combinations with them, with no other elements, e.g. (come on!); (I told you not to!).
Bulgarian has several pronouns of quality which have no direct parallels in English – "kakav" (what sort of); "takuv" (this sort of); "onakuv" (that sort of – colloq.); "nyakakav" (some sort of); "nikakav" (no sort of); "vsyakakav" (every sort of); and the relative pronoun "kakavto" (the sort of ... that ... ). The adjective "ednakuv" ("the same") derives from the same radical.
Example phrases include:
An interesting phenomenon is that these can be strung along one after another in quite long constructions, e.g.
An extreme (colloquial) sentence, with almost no "physical" meaning in it whatsoever – yet which "does" have perfect meaning to the Bulgarian ear – would be:
—Note: the subject of the sentence is simply the pronoun "taya" (lit. "this one here"; colloq. "she").
Another interesting phenomenon that is observed in colloquial speech is the use of "takova" (neuter of "takyv") not only as a substitute for an adjective, but also as a substitute for a verb. In that case the base form "takova" is used as the third person singular in the present indicative and all other forms are formed by analogy to other verbs in the language. Sometimes the "verb" may even acquire a derivational prefix that changes its meaning. Examples:
Another use of "takova" in colloquial speech is the word "takovata", which can be used as a substitution for a noun, but also, if the speaker doesn't remember or is not sure how to say something, they might say "takovata" and then pause to think about it:
Similar "meaningless" expressions are extremely common in spoken Bulgarian, especially when the speaker is finding it difficult to describe something.
Most of the vocabulary of modern Bulgarian consists of terms inherited from Proto-Slavic, along with local Bulgarian innovations and formations derived from them through the mediation of Old and Middle Bulgarian. Native terms account for 70% to 80% of the Bulgarian lexicon.
The remaining 25% to 30% are loanwords from a number of languages, as well as derivations of such words. Bulgarian has also adopted a few words of Thracian and Bulgar origin. The languages which have contributed most to Bulgarian are Russian, French and, to a lesser extent, English and Ottoman Turkish. Latin and Greek are also the source of many words, used mostly in international terminology. Many Latin terms entered the language through Romanian, Aromanian, and Megleno-Romanian during the Bulgarian Empires (present-day Bulgaria was once part of the Roman Empire), while loanwords of Greek origin are a product of the influence of the liturgical language of the Orthodox Church. Many of the numerous loanwords from another Turkic language, Ottoman Turkish (and, via Ottoman Turkish, from Arabic and Persian), which were adopted into Bulgarian during the long period of Ottoman rule, have been replaced with native terms. In addition, both specialized (usually coming from the field of science) and commonplace English words (notably abstract, commodity/service-related or technical terms) have penetrated Bulgarian since the second half of the 20th century, especially since 1989. A noteworthy portion of this English-derived terminology has attained some unique features in the process of its introduction to native speakers, resulting in peculiar derivations that set the newly formed loanwords apart from the original words (mainly in pronunciation), although many loanwords are completely identical to the source words. A growing number of international neologisms are also being widely adopted, causing controversy between younger generations, who are generally raised in the era of digital globalization, and the older, more conservative educated purists.
Prior to standardization in the 19th century, after roughly five centuries of Ottoman Turkish serving as a lingua franca, vernacular Bulgarian is estimated to have consisted of 50% Ottoman vocabulary, which in turn contained predominantly (up to 80%) Arabic and Persian words.
Linguistic reports
Dictionaries
Courses | https://en.wikipedia.org/wiki?curid=4149 |
Bipyramid
An "n"-gonal bipyramid or dipyramid is a polyhedron formed by joining an "n"-gonal pyramid and its mirror image base-to-base. An "n"-gonal bipyramid has 2"n" triangle faces, 3"n" edges, and 2 + "n" vertices.
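The face, edge, and vertex counts above can be sanity-checked against Euler's polyhedron formula "V" − "E" + "F" = 2, which every convex polyhedron satisfies. A minimal Python sketch (the function name `bipyramid_counts` is ours, for illustration):

```python
# Counts for an n-gonal bipyramid: two n-gonal pyramids joined
# base-to-base give 2n triangular faces, 3n edges, and n + 2 vertices.

def bipyramid_counts(n):
    """Return (faces, edges, vertices) of an n-gonal bipyramid."""
    faces = 2 * n        # n upper + n lower triangles
    edges = 3 * n        # n equatorial edges + 2n lateral edges
    vertices = n + 2     # n equatorial vertices + 2 apices
    return faces, edges, vertices

# Euler's formula V - E + F = 2 holds for every n:
for n in range(3, 13):
    f, e, v = bipyramid_counts(n)
    assert v - e + f == 2
```

For "n" = 4 this gives (8, 12, 6), the counts of the regular octahedron.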
The referenced "n"-gon in the name of the bipyramids is not an external face but an internal one, existing on the primary symmetry plane which connects the two pyramid halves.
A right bipyramid has its two apices directly above and below the centroid of its base. Nonright bipyramids are called oblique bipyramids. A regular bipyramid has a regular polygon as its internal face and is usually implied to be a "right bipyramid".
A concave bipyramid has a concave interior polygon.
The face-transitive regular bipyramids are the dual polyhedra of the uniform prisms and will generally have isosceles triangle faces.
A bipyramid can be projected on a sphere or globe as "n" equally spaced lines of longitude going from pole to pole, and bisected by a line around the equator.
Bipyramid faces, projected as spherical triangles, represent the fundamental domains in the dihedral symmetry D"n"h. Indeed, an "n"-gonal bipyramid can be seen as the Kleetope of the respective "n"-gonal dihedron.
The volume of a bipyramid is "V" = (2/3)"Bh", where "B" is the area of the base and "h" the height from the base to the apex; the factor of 2/3 arises because each of the two constituent pyramids contributes (1/3)"Bh". This works for any location of the apex, provided that "h" is measured as the perpendicular distance from the plane which contains the base.
The volume of a bipyramid whose base is a regular "n"-sided polygon with side length "s" and whose height is "h" is therefore:
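The result follows from combining "V" = (2/3)"Bh" with the standard area of a regular "n"-gon, "B" = "ns"²/(4 tan(π/"n")). A short Python sketch (the function name is ours), cross-checked against the regular octahedron, whose volume √2/3 "s"³ is well known:

```python
from math import pi, tan, sqrt, isclose

def bipyramid_volume(n, s, h):
    """Volume of a bipyramid over a regular n-gon with side length s,
    where h is the height from the base plane to each apex.

    V = (2/3) * B * h, with base area B = n * s**2 / (4 * tan(pi / n)).
    """
    base_area = n * s * s / (4 * tan(pi / n))
    return (2.0 / 3.0) * base_area * h

# Cross-check: a tetragonal bipyramid with all edges equal has
# h = s / sqrt(2) and is a regular octahedron of volume sqrt(2)/3 * s**3.
s = 2.0
assert isclose(bipyramid_volume(4, s, s / sqrt(2)), sqrt(2) / 3 * s ** 3)
```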
Only three kinds of bipyramids can have all edges of the same length (which implies that all faces are equilateral triangles, and thus the bipyramid is a deltahedron): the triangular, tetragonal, and pentagonal bipyramids. The tetragonal bipyramid with identical edges, or regular octahedron, counts among the Platonic solids, while the triangular and pentagonal bipyramids with identical edges count among the Johnson solids (J12 and J13).
If the base is regular and the line through the apexes intersects the base at its center, the symmetry group of the "n"-gonal bipyramid has dihedral symmetry D"n"h of order 4"n", except in the case of a regular octahedron, which has the larger octahedral symmetry group Oh of order 48, which has three versions of D4h as subgroups. The rotation group is D"n" of order 2"n", except in the case of a regular octahedron, which has the larger symmetry group O of order 24, which has three versions of D4 as subgroups.
The digonal faces of a spherical 2"n"-bipyramid represent the fundamental domains of dihedral symmetry in three dimensions: D"n"h, ["n",2], (*"n"22), order 4"n". The reflection domains can be shown as alternately colored triangles as mirror images.
An asymmetric right bipyramid joins two pyramids of unequal heights. An "inverted" form can also have both pyramids on the same side. A regular "n"-gonal asymmetric right bipyramid has symmetry C"n"v, order 2"n". The dual polyhedron of an asymmetric bipyramid is a frustum.
A scalenohedron is topologically identical to a 2"n"-bipyramid, but contains congruent scalene triangles.
There are two types. In one type the 2"n" vertices around the center alternate in rings above and below the center. In the other type, the 2"n" vertices are on the same plane, but alternate in two radii.
The first has 2-fold rotation axes mid-edge around the sides, reflection planes through the vertices, and n-fold rotation symmetry on its axis, representing symmetry D"n"d, [2+,2"n"], (2*"n"), order 2"n". In crystallography, 8-sided and 12-sided scalenohedra exist. All of these forms are isohedra.
The second has symmetry D"n", [2,"n"], (*"nn"2), order 2"n".
The smallest scalenohedron has 8 faces and is topologically identical to the regular octahedron. The second type is a "rhombic bipyramid". The first type has 6 vertices, which can be represented as (0,0,±1), (±1,0,"z"), (0,±1,−"z"), where "z" is a parameter between 0 and 1, creating a regular octahedron at "z" = 0 and becoming a disphenoid with merged coplanar faces at "z" = 1. For "z" > 1, it becomes concave.
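The vertex parameterization above is easy to verify numerically. The sketch below (function names are ours) confirms that "z" = 0 yields equilateral faces, i.e. the regular octahedron, while intermediate values of "z" yield scalene faces:

```python
from math import isclose

def scalenohedron_vertices(z):
    """Six vertices of the 8-faced scalenohedron family from the text:
    two apices (0,0,+-1) and an equatorial ring (+-1,0,z), (0,+-1,-z)."""
    return [(0, 0, 1), (0, 0, -1),
            (1, 0, z), (0, 1, -z), (-1, 0, z), (0, -1, -z)]

def edge(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def face_edges(z):
    """Edge lengths of one face: the top apex joined to two adjacent
    ring vertices."""
    a, b, c = (0, 0, 1), (1, 0, z), (0, 1, -z)
    return edge(a, b), edge(a, c), edge(b, c)

# z = 0 gives the regular octahedron: every face edge has length sqrt(2).
assert all(isclose(e, 2 ** 0.5) for e in face_edges(0.0))

# 0 < z < 1 gives three distinct edge lengths: a scalene triangle.
assert len({round(e, 9) for e in face_edges(0.5)}) == 3
```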
Self-intersecting bipyramids exist with a star polygon central figure, defined by triangular faces connecting each polygon edge to the two apices. A {} bipyramid has Coxeter diagram .
Isohedral even-sided stars can also be made with zig-zag offplane vertices, in-out isotoxal forms, or both, like this {} form:
The dual of the rectification of each convex regular 4-polytope is a cell-transitive 4-polytope with bipyramidal cells. In the following, the apex vertex of the bipyramid is A and an equator vertex is E. The distance between adjacent vertices on the equator EE = 1, the apex to equator edge is AE and the distance between the apices is AA. The bipyramid 4-polytope will have "V"A vertices where the apices of "N"A bipyramids meet. It will have "V"E vertices where the type E vertices of "N"E bipyramids meet. "N"AE bipyramids meet along each type AE edge. "N"EE bipyramids meet along each type EE edge. "C"AE is the cosine of the dihedral angle along an AE edge. "C"EE is the cosine of the dihedral angle along an EE edge. As cells must fit around an edge,
In general, a "bipyramid" can be seen as an "n"-polytope constructed from an ("n" − 1)-polytope in a hyperplane together with two points in opposite directions at equal perpendicular distance from the hyperplane. If the ("n" − 1)-polytope is a regular polytope, it will have identical pyramidal facets. An example is the 16-cell, which is an octahedral bipyramid; more generally, an "n"-orthoplex is an ("n" − 1)-orthoplex bipyramid.
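The orthoplex statement can be checked directly on coordinates: the standard "n"-orthoplex has its vertices at ±1 along each axis, and adding two apices at (0, ..., 0, ±1) to the ("n" − 1)-orthoplex embedded in a hyperplane reproduces exactly that vertex set. A Python sketch (function names are ours):

```python
def orthoplex_vertices(n):
    """Vertices of the n-orthoplex: one pair of points +-1 on each axis."""
    verts = []
    for i in range(n):
        for s in (1.0, -1.0):
            v = [0.0] * n
            v[i] = s
            verts.append(tuple(v))
    return verts

def bipyramid_over(base_vertices):
    """Bipyramid construction from the text: embed the base polytope in a
    hyperplane (last coordinate 0) and add two apices at (0,...,0,+-1)."""
    dim = len(base_vertices[0]) + 1
    base = [v + (0.0,) for v in base_vertices]
    apex = tuple([0.0] * (dim - 1) + [1.0])
    anti = tuple([0.0] * (dim - 1) + [-1.0])
    return base + [apex, anti]

# The bipyramid over the (n-1)-orthoplex is the n-orthoplex itself;
# in particular the 16-cell (4-orthoplex) is an octahedral bipyramid.
assert sorted(bipyramid_over(orthoplex_vertices(3))) == sorted(orthoplex_vertices(4))
```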
A two-dimensional bipyramid is a square. | https://en.wikipedia.org/wiki?curid=4153 |
Brown University
Brown University is a private Ivy League research university in Providence, Rhode Island. Founded in 1764 as the "College in the English Colony of Rhode Island and Providence Plantations", it is the seventh-oldest institution of higher education in the United States and one of the nine colonial colleges chartered before the American Revolution.
At its foundation, Brown was the first college in the U.S. to accept students regardless of their religious affiliation. Its engineering program was established in 1847. It was one of the early doctoral-granting U.S. institutions in the late 19th century, adding master's and doctoral studies in 1887. In 1969, Brown adopted a New Curriculum, sometimes referred to as the Brown Curriculum, after a period of student lobbying. The New Curriculum eliminated mandatory "general education" distribution requirements, made students "the architects of their own syllabus", and allowed them to take any course for a grade of satisfactory or unrecorded no-credit. In 1971, Brown's coordinate women's institution, Pembroke College, was fully merged into the university; Pembroke Campus now includes dormitories and classrooms used by all of Brown.
Undergraduate admissions is highly selective, with an acceptance rate of 6.9 percent for the class of 2024. The university comprises the College, the Graduate School, Alpert Medical School, the School of Engineering, the School of Public Health and the School of Professional Studies (which includes the IE Brown Executive MBA program). Brown's international programs are organized through the Watson Institute for International and Public Affairs, and the university is academically affiliated with the Marine Biological Laboratory and the Rhode Island School of Design. The Brown/RISD Dual Degree Program, offered in conjunction with the Rhode Island School of Design, is a five-year course that awards degrees from both institutions.
Brown's main campus is located in the College Hill Historic District in the city of Providence, Rhode Island. The university's neighborhood is a federally listed architectural district with a dense concentration of Colonial-era buildings. Benefit Street, on the western edge of the campus, contains "one of the finest cohesive collections of restored seventeenth- and eighteenth-century architecture in the United States".
Eight Nobel Prize winners have been affiliated with Brown University as alumni, faculty, or researchers. In addition, Brown's faculty and alumni include 5 National Humanities Medalists and 10 National Medal of Science laureates. Other notable alumni include 8 billionaire graduates, a U.S. Supreme Court Chief Justice, 4 U.S. Secretaries of State and other Cabinet officials, 54 members of the United States Congress, 56 Rhodes Scholars, 52 Gates Cambridge Scholars, 49 Marshall Scholars, 14 MacArthur Genius Fellows, 24 Pulitzer Prize winners, various royals and nobles, as well as leaders and founders of Fortune 500 companies.
The origin of Brown University can be dated to 1761, when three residents of Newport, Rhode Island, drafted a petition to the General Assembly of the colony:
Your Petitioners propose to open a literary institution or School for instructing young Gentlemen in the Languages, Mathematics, Geography & History, & such other branches of Knowledge as shall be desired. That for this End ... it will be necessary ... to erect a public Building or Buildings for the boarding of the youth & the Residence of the Professors.
The three petitioners were Ezra Stiles, pastor of Newport's Second Congregational Church and future president of Yale; William Ellery, Jr., future signer of the United States Declaration of Independence; and Josias Lyndon, future governor of the colony. Stiles and Ellery were co-authors of the Charter of the College two years later. The editor of Stiles's papers observes, "This draft of a petition connects itself with other evidence of Dr. Stiles's project for a Collegiate Institution in Rhode Island, before the charter of what became Brown University."
There is further documentary evidence that Stiles was making plans for a college in 1762. On January 20, Chauncey Whittelsey, pastor of the First Church of New Haven, answered a letter from Stiles:
The week before last I sent you the Copy of Yale College Charter ... Should you make any Progress in the Affair of a Colledge, I should be glad to hear of it; I heartily wish you Success therein.
The Philadelphia Association of Baptist Churches also had an eye on Rhode Island, home of the mother church of their denomination: the First Baptist Church in America, founded in Providence in 1638 by Roger Williams. The Baptists were as yet unrepresented among colonial colleges; the Congregationalists had Harvard and Yale, the Presbyterians had the College of New Jersey (later Princeton), and the Episcopalians had the College of William and Mary and King's College (later Columbia). Isaac Backus was the historian of the New England Baptists and an inaugural Trustee of Brown, writing in 1784. He described the October 1762 resolution taken at Philadelphia:
The Philadelphia Association obtained such an acquaintance with our affairs, as to bring them to an apprehension that it was practicable and expedient to erect a college in the Colony of Rhode-Island, under the chief direction of the Baptists; ... Mr. James Manning, who took his first degree in New-Jersey college in September, 1762, was esteemed a suitable leader in this important work.
Manning arrived at Newport in July 1763 and was introduced to Stiles, who agreed to write the Charter for the College. Stiles's first draft was read to the General Assembly in August 1763 and rejected by Baptist members who worried that the College Board of Fellows would under-represent the Baptists. A revised Charter written by Stiles and Ellery was adopted by the Assembly on March 3, 1764.
In September 1764, the inaugural meeting of the College Corporation was held at Newport. Governor Stephen Hopkins was chosen chancellor, former and future governor Samuel Ward was vice chancellor, John Tillinghast treasurer, and Thomas Eyres secretary. The Charter stipulated that the Board of Trustees be composed of 22 Baptists, five Quakers, five Episcopalians, and four Congregationalists. Of the 12 Fellows, eight should be Baptists—including the College president—"and the rest indifferently of any or all Denominations."
The Charter was not the grant of King George III, as is sometimes supposed, but rather an Act of the colonial General Assembly. In two particulars, the Charter may be said to be a uniquely progressive document. First, other colleges had curricular strictures against opposing doctrines, while Brown's Charter asserted, "Sectarian differences of opinions, shall not make any Part of the Public and Classical Instruction." Second, according to Brown University historian Walter Bronson, "the instrument governing Brown University recognized more broadly and fundamentally than any other the principle of denominational cooperation." The oft-repeated statement is inaccurate that Brown's Charter alone prohibited a religious test for College membership; other college charters were also liberal in that particular.
James Manning was sworn in as the College's first president in 1765 and served until 1791. In 1770, the College moved from Warren, Rhode Island, to the crest of College Hill overlooking Providence. Solomon Drowne, a freshman in the class of 1773, wrote in his diary on March 26, 1770:
This day the Committee for settling the spot for the College, met at the New-Brick School House, when it was determined it should be set on ye Hill opposite Mr. John Jenkes; up the Presbyterian Lane.
Presbyterian Lane is the present College Street. The eight-acre site had been purchased in two parcels by the Corporation for £219, mainly from Moses Brown and John Brown, the parcels having "formed a part of the original home lots of their ancestor, Chad Brown, and of George Rickard, who bought them from the Indians." University Hall was known as "The College Edifice" until 1823; it was modelled on Nassau Hall at the College of New Jersey. Its construction was managed by the firm of Nicholas Brown and Company, which spent £2,844 in the first year building the College Edifice and the adjacent President's House.
Nicholas Brown, his son Nicholas Brown, Jr. (class of 1786), John Brown, Joseph Brown, and Moses Brown were all instrumental in moving the College to Providence and securing its endowment. Joseph became a professor of natural philosophy at the College; John served as its treasurer from 1775 to 1796; and Nicholas Junior succeeded his uncle as treasurer from 1796 to 1825.
On September 8, 1803, the Corporation voted, "That the donation of $5000 Dollars, if made to this College within one Year from the late Commencement, shall entitle the donor to name the College." That appeal was answered by College treasurer Nicholas Brown, Junior, in a letter dated September 6, 1804, and the Corporation honored its promise. "In gratitude to Mr. Brown, the Corporation at the same meeting voted, 'That this College be called and known in all future time by the Name of Brown University'." Over the years, the benefactions of Nicholas Brown, Jr., totaled nearly $160,000, an enormous sum for that period, and included the buildings Hope College (1821–22) and Manning Hall (1834-35).
It is sometimes erroneously supposed that Brown University was named after John Brown, whose commercial activity included the transportation of African slaves. In fact, Brown University was named for Nicholas Brown, Jr., philanthropist, founder of the Providence Athenaeum, co-founder of Butler Hospital, and an abolitionist. Nicholas Brown, Jr., became a financier of the movement under the guidance of his uncle Moses Brown, one of the leading abolitionists of his day.
The College library was moved out of Providence for safekeeping in the fall of 1776, with British vessels patrolling Narragansett Bay. On December 7, 1776, six thousand British and Hessian troops sailed into Newport harbor under the command of Sir Peter Parker. College President Manning said in a letter written after the war:
The royal Army landed on Rhode Island & took possession of the same: This brought their Camp in plain View from the College with the naked Eye; upon which the Country flew to Arms & marched for Providence, there, unprovided with Barracks they marched into the College & dispossessed the Students, about 40 in Number.
"In the claim for damages presented by the Corporation to the United States government," says the university historian, "it is stated that the American troops used it for barracks and hospital from December 10, 1776, to April 20, 1780, and that the French troops used it for a hospital from June 26, 1780, to May 27, 1782." The French troops were those of the Comte de Rochambeau.
In 1966, the first Group Independent Study Project (GISP) at Brown was formed, involving 80 students and 15 professors. The GISP was inspired by student-initiated experimental schools, especially San Francisco State College, and sought ways to "put students at the center of their education" and "teach students how to think rather than just teaching facts."
Members of the GISP, Ira Magaziner and Elliot Maxwell published a paper of their findings entitled, "Draft of a Working Paper for Education at Brown University." The paper made proposals for the new curriculum, including interdisciplinary freshman-year courses that would introduce "modes of thought," with instruction from faculty from different disciplines as well as for an end to letter grades. The following year Magaziner began organizing the student body to press for the reforms, organizing discussions and protests.
In 1969, University President Ray Heffner appointed a Special Committee on Curricular Philosophy in response to student rallies held in support of curriculum reform. The committee was tasked with developing specific reforms, and the resulting report was called the Maeder Report after the committee's chairman. The report was presented to the faculty, which voted the New Curriculum into existence on May 7, 1969. Its key features included:
The Modes of Thought course was discontinued early on, but the other elements are still in place. In 2006, the reintroduction of plus/minus grading was broached by those concerned about grade inflation. The idea was rejected by the College Curriculum Council after canvassing alumni, faculty, and students, including the original authors of the Magaziner-Maxwell Report. However, President Christina Paxson has noted that grade inflation clearly exists at Brown, with 53.4% of grades given at Brown being As during the 2012–2013 academic year. Another unique feature of the grading system at Brown is that failures are erased from the student's transcript; as a result, some students have asked a professor for a failing grade rather than have a C appear on their transcript. While erasure of failing grades from external transcripts is unique to Brown, grade inflation also exists at other U.S. universities.
In 2003, then-University president Ruth Simmons launched a steering committee to study the school's eighteenth-century ties to slavery, with a report released four years later. The report prompted self-examination at other U.S. institutions of higher learning and led the school to establish a Center for the Study of Slavery and Justice.
Brown University's coat of arms is a white field divided into four sectors by a red cross; within each sector is an open book. Above the shield is a crest consisting of the upper half of a sun in splendor among the clouds atop a red and white torse.
The sun and clouds represent "learning piercing the clouds of ignorance." The cross is believed to be a Saint George's Cross, and the open books represent learning.
Brown is the largest institutional landowner in Providence, with properties on College Hill and in the Jewelry District. The College Hill campus was built contemporaneously with the eighteenth- and nineteenth-century precincts that surround it, so that university buildings blend with the architectural fabric of the city. The only indicator of "campus" is a brick and wrought-iron fence on Prospect, George, and Waterman streets, enclosing the College Green and Front Green. The character of Brown's urban campus is thus European organic rather than American landscaped.
The main campus, comprising 235 buildings, is on College Hill in Providence's East Side. It is reached from downtown principally by three extremely steep streets—College, Waterman, and Angell—which run through the Benefit Street historic district and the campus of the Rhode Island School of Design. College Street, culminating with the Van Wickle Gates at the top of the hill, is especially beautiful and is the setting for the Convocation and Commencement processions.
Van Wickle Gates
The Van Wickle Gates, dedicated on June 18, 1901, have a pair of smaller side gates that are open year-round, and a large central gate that is opened two days a year for Convocation and Commencement. At Convocation the gate opens inward to admit the procession of new students. At Commencement, the gate opens outward for the procession of graduates. A Brown superstition is that students who walk through the central gate a second time prematurely will not graduate, although walking backward is said to cancel the hex. Members of the Brown University Band famously flout the superstition by walking through the gate three times too many, as they annually play their role in the Commencement parade.
The core green spaces of the main campus are the Front (or "Quiet") Green, the College (or "Main") Green, and the Ruth J. Simmons Quadrangle (until 2012 called Lincoln Field). The old buildings on these three greens are the most photographed. The College Green includes sculptures by noted artists Henry Moore and Giuseppe Penone.
Adjacent to this older campus are, to the south, academic buildings and residential quadrangles, including Wriston, Keeney, and Gregorian quadrangles; to the east, Sciences Park occupying two city blocks; to the north, connected to Simmons Quadrangle by The Walk, academic and residential precincts, including the life sciences complex and the Pembroke Campus; and to the west, on the slope of College Hill, academic buildings, including List Art Center and the Hay and Rockefeller libraries. The perimeter of the old campus contains the university’s four significant examples of Brutalist architecture, the John D. Rockefeller Jr. Library, the Sciences Library, the List Art Building, and the Graduate Center. Also on the slope of College Hill, contiguous with Brown, is the campus of the Rhode Island School of Design.
John Hay Library
The John Hay Library is the second oldest library on campus. It was opened in 1910 and named for John Hay (class of 1858, private secretary to Abraham Lincoln and Secretary of State under two Presidents) at the request of his friend Andrew Carnegie, who contributed half of the $300,000 cost of the building. It is now the repository of the university's archives, rare books and manuscripts, and special collections. Noteworthy among the latter are the Anne S. K. Brown Military Collection (described as "the foremost American collection of material devoted to the history and iconography of soldiers and soldiering"), the Harris Collection of American Poetry and Plays (described as "the largest and most comprehensive collection of its kind in any research library"), the Lownes Collection of the History of Science (described as "one of the three most important private collections of books of science in America"), and (for popularity of requests) the papers of H. P. Lovecraft. The Hay Library is home to one of the broadest collections of incunabula (15th-century printed books) in the Americas, as well as such rarities as the manuscript of Orwell's "Nineteen Eighty-Four" and a Shakespeare First Folio. There are also three books bound in human skin.
John Carter Brown Library
The John Carter Brown Library, founded in 1846, is administered separately from the university but has been located on the Main Green of the campus since 1904. It is generally regarded as the world's leading collection of primary historical sources about the Americas before 1825. It houses a very large percentage of the titles published before that date about the discovery, settlement, history, and natural history of the New World. The "JCB", as it is known, published the 29-volume "Bibliotheca Americana", a principal bibliography in the field. Typical of its noteworthy holdings is the best preserved of the eleven surviving copies of the Bay Psalm Book, the earliest extant book printed in British North America and the most expensive printed book in the world. There is also a very fine Shakespeare First Folio, added to the collection by John Carter Brown's widow (a Shakespeare enthusiast) on the grounds that it includes "The Tempest", a play set in the New World. The JCB holdings comprise more than 50,000 early titles and about 16,000 modern books, as well as prints, manuscripts, maps, and other items in the library's specialty.
The exhibition galleries of the Haffenreffer Museum of Anthropology, Brown's teaching museum, are located in Manning Hall on the campus's main green. Its one million artifacts, available for research and educational purposes, are located at its Collections Research Center in Bristol, RI. The museum's goal is to inspire creative and critical thinking about culture by fostering an interdisciplinary understanding of the material world. It provides opportunities for faculty and students to work with collections and the public, teaching through objects and programs in classrooms and exhibitions. The museum sponsors lectures and events in all areas of anthropology, and also runs an extensive program of outreach to local schools.
The Annmary Brown Memorial was constructed from 1903 to 1907 by the politician, Civil War veteran, and book collector General Rush Hawkins, as a mausoleum for his wife, Annmary Brown, a member of the Brown family. In addition to its crypt—the final repository for Brown and Hawkins—the Memorial includes works of art from Hawkins's private collection, including paintings by Angelica Kauffman, Peter Paul Rubens, Gilbert Stuart, Giovanni Battista Tiepolo, Benjamin West, and Eastman Johnson, among others. His collection of over 450 incunabula (materials printed in Europe before 1501) was relocated to the John Hay Library in 1990. Today the Memorial is home to Brown's Medieval Studies and Renaissance Studies programs.
The "Walk" connects Pembroke Campus to the main campus. It is a succession of green spaces extending from Ruth Simmons Quadrangle (Lincoln Field) in the south to the Pembroke College monument on Meeting Street in the north. It is bordered by departmental buildings and the Granoff Center for the Creative Arts. A focal point of The Walk is Maya Lin's water-circulating topographical sculpture of Narragansett Bay, entitled "Under the Laurentide." Installed in 2015, it is next to the Institute for the Study of Environment and Society.
The Women's College in Brown University, known as Pembroke College, was founded in October 1891. When it merged with Brown in 1971, the Pembroke Campus was absorbed into the Brown campus. The Pembroke campus is centered on a quadrangle that fronts on Meeting Street, where a garden and monument—with scale-model of the quadrangle in bronze—compose the formal entry to the campus. The Pembroke campus is among the most pleasing spaces at Brown, with noteworthy examples of Victorian and Georgian architecture. The west side of the quadrangle comprises Pembroke Hall (1897), Smith-Buonanno Hall (1907, formerly Pembroke Gymnasium), and Metcalf Hall (1919); the east side comprises Alumnae Hall (1927) and Miller Hall (1910); the quadrangle culminates on the north with Andrews Hall (1947) and its terrace and garden. Pembroke Hall, originally a classroom building and library, now houses the Cogut Center for the Humanities.
East Campus, centered on Hope and Charlesfield streets, was originally the site of Bryant University. In 1969, as Bryant was preparing to move to Smithfield, Rhode Island, Brown bought its Providence campus for $5 million. The purchase added 26 buildings to the Brown campus, including several historic houses, notably the Isaac Gifford Ladd house, built in 1850 (now Brown's Orwig Music Library), and the Robert Taft House, built in 1895 (now King House). The area was named East Campus in 1971.
Thayer Street runs through Brown's main campus, north to south, and is College Hill's reduced-scale counterpart to Harvard Square or Berkeley's Telegraph Avenue. Restaurants, cafes, bistros, taverns, pubs, bookstores, second-hand shops, and the like abound. Tourists, people-watchers, buskers, and students from Providence's six colleges make the scene. Half a mile south of campus is Thayer Street's hipper cousin, Wickenden Street. More picturesque and with older architecture, it features galleries, pubs, specialty shops, artist-supply stores, and a regionally famous coffee shop that doubles as a film set (for Woody Allen and others).
Brown Stadium, which was built in 1925 and is home to the football team, is located approximately a mile to the northeast of the main campus. Marston Boathouse, the home of the crew teams, lies on the Blackstone/Seekonk River, to the southeast of campus. Brown's Warren Alpert Medical School is situated in the historic Jewelry District of Providence, near the medical campus of Brown's teaching hospitals, Rhode Island Hospital, Women and Infants Hospital, and Hasbro Children's Hospital. Other university research facilities in the Jewelry District include the Laboratories for Molecular Medicine.
Brown's School of Public Health occupies a landmark modernist building overlooking Memorial Park on the Providence Riverwalk. Brown also owns the Mount Hope Grant in Bristol, Rhode Island, an important Native American and King Philip's War site. Brown's Haffenreffer Museum of Anthropology Collection Research Center, particularly strong in Native American items, is located in the Mount Hope Grant.
Brown's current president, Christina Hull Paxson, took office in 2012. She had previously been dean of the Woodrow Wilson School at Princeton University and chair of Princeton's economics department. In 2014 and 2015, Paxson presided over the year-long celebration of the 250th anniversary of Brown's founding. Her immediate predecessor as president was Ruth Simmons, the first African American president of an Ivy League institution. Simmons announced she would remain at Brown as a professor of comparative literature and Africana studies; she is now, however, president of Prairie View A&M University, an HBCU in Texas.
Founded in 1764, the College is the oldest school of Brown University. About 7,200 undergraduate students are currently enrolled in the College, and 81 concentrations (majors) are offered. Completed concentrations of undergraduates by area are social sciences 42 percent, humanities 26 percent, life sciences 17 percent, and physical sciences 14 percent. The concentrations with the greatest number of students are Biology, History, and International Relations. Brown is one of the few schools in the United States with an undergraduate concentration (major) in Egyptology. Undergraduates can also design an independent concentration if the existing programs do not align with their curricular focus.
Thirty-five percent of undergraduates pursue graduate or professional study immediately, 60 percent within five years, and 80 percent within ten years. Of the Class of 1998, 75 percent of all graduates have since enrolled in a graduate or professional degree program. The degrees acquired were doctoral 22 percent, master's 35 percent, medicine 28 percent, and law 14 percent.
The highest fields of employment for graduates of the College are business 36 percent, education 19 percent, health/medical 6 percent, arts 6 percent, government 6 percent, and communications/media 5 percent.
Brown's near neighbor on College Hill is the Rhode Island School of Design (RISD). Brown and RISD students can cross-register at the two institutions, with Brown students permitted to take as many as four courses at RISD that count towards a Brown degree. The two institutions partner to provide various student-life services and the two student bodies compose a synergy in the College Hill cultural scene.
After several years of discussion between the two institutions, and after several students had pursued dual degrees unofficially, Brown and RISD formally established a five-year dual degree program in 2007, with the first class matriculating in the fall of 2008. The Brown/RISD Dual Degree Program, among the most selective in the country, offered admission to 19 of the 707 applicants for the class entering in autumn 2018, an acceptance rate of 2.7 percent. It combines the complementary strengths of the two institutions, integrating studio art and design at RISD with the entire spectrum of Brown's departmental offerings. Students are admitted to the Dual Degree Program for a five-year course of study culminating in both a Brown degree, either the Bachelor of Arts (A.B.) or the Bachelor of Science (Sc.B.), and the Bachelor of Fine Arts (B.F.A.) degree from RISD. Prospective students must apply to the two schools separately and be accepted by separate admissions committees. Their application must then be approved by a third, joint Brown/RISD committee.
Admitted students spend the first year in residence at RISD completing its first-year Experimental and Foundation Studies curriculum, while taking up to three Brown classes. The second year is spent in residence at Brown, during which students take mainly Brown courses while starting on their RISD major requirements. In the third, fourth, and fifth years, students can elect to live at either school or off-campus, and course distribution is determined by the requirements of each student's unique combination of Brown concentration and RISD major. Program participants are noted for their creative and original approach to cross-disciplinary opportunities, combining, for example, industrial design with engineering, or anatomical illustration with human biology, or philosophy with sculpture, or architecture with urban studies. An annual "BRDD Exhibition" is a well-publicized and heavily attended event, drawing interest and attendees from the wider world of industry, design, the media, and the fine arts.
Brown's theatre and playwriting programs are among the best-regarded in the country. Since 2003, eight different Brown graduates have either won (five times) or been nominated for (six times) the Pulitzer Prize—including winners Lynn Nottage '86 (twice—2009, 2017), Ayad Akhtar '93, Nilo Cruz '94, Quiara Alegría Hudes '04, Jackie Sibblies Drury MFA '04; and nominees Sarah Ruhl '97 (twice), Gina Gionfriddo '97 (twice), Stephen Karam '02, and Jordan Harrison '03. In "American Theater" magazine's 2009 ranking of the most-produced American plays, Brown graduates occupied four of the top five places—Peter Nachtrieb '97, Rachel Sheinkin '89, Sarah Ruhl '97, and Stephen Karam '02.
The undergraduate concentration (major) encompasses programs in theatre history, performance theory, playwriting, dramaturgy, acting, directing, dance, speech, and technical production. Applications for doctoral and master's degree programs are made through the University Graduate School. Master's degrees in acting and directing are pursued in conjunction with the Brown/Trinity Rep MFA program, which partners with one of the country's great regional theatres, Trinity Repertory Company, home of the last longstanding resident acting company in the country. Trinity Rep's present artistic director Curt Columbus succeeded Oskar Eustis in 2006, when Eustis was chosen to lead New York's Public Theater.
The many performance spaces available to Brown students include the Chace and Dowling theaters at Trinity Rep; the McCormack Family, Lee Strasberg, Rites and Reason, Ashamu Dance, Stuart, and Leeds theatres in university departments; the Upstairs Space and Downstairs Space belonging to the wholly student-run Production Workshop; and Alumnae Hall, used by Brown University Gilbert & Sullivan and by Brown Opera Productions. Production design courses utilize the John Street Studio of Eugene Lee, the three-time Tony Award winner.
Writing at Brown—fiction, non-fiction, poetry, playwriting, screenwriting, electronic writing, mixed media, and the undergraduate writing proficiency requirement—is catered for by various centers and degree programs, and a faculty that has long included nationally and internationally known authors. The undergraduate concentration (major) in literary arts offers courses in fiction, poetry, screenwriting, literary hypermedia, and translation. Graduate programs include the fiction and poetry MFA writing programs in the literary arts department, and the MFA playwriting program in the theatre arts and performance studies department. The non-fiction writing program is offered in the English department. Screenwriting and cinema narrativity courses are offered in the departments of literary arts and modern culture and media. The undergraduate writing proficiency requirement is supported by the Writing Center.
Alumni authors take their degrees across the spectrum of degree concentrations, but a gauge of the strength of writing at Brown is the number of major national writing prizes won. To note only winners since the year 2000: Pulitzer Prize for Fiction-winners Jeffrey Eugenides '82 (2003), Marilynne Robinson '66 (2005), and Andrew Sean Greer '92 (2018); British Orange Prize-winners Marilynne Robinson '66 (2009) and Madeline Miller '00 (2012); Pulitzer Prize for Drama-winners Nilo Cruz '94 (2003), Lynn Nottage '86 (twice, 2009, 2017), Quiara Alegría Hudes '04 (2012), and Ayad Akhtar '93 (2013); Pulitzer Prize for Biography-winners David Kertzer '69 (2015) and Benjamin Moser '98; Pulitzer Prize for Journalism-winners James Risen '77 (twice, 2002, 2006), Mark Maremont '80 (twice, 2003, 2007), Gareth Cook '91 (2005), Tony Horwitz '80 (2005), Peter Kovacs '77 (2006), Stephanie Grace '86 (2006), Mary Swerczek '98 (2006), Jane B. Spencer '99 (2006), Usha Lee McFarling '89 (2007), James Bandler '89 (2007), Amy Goldstein '75 (2009), David Rohde '90 (twice, 1996, 2009), Kathryn Schulz '96 (2016), and Alissa J. Rubin '80 (2016); Pulitzer Prize for General Nonfiction-winner James Forman Jr. '88 (2018), as well as Pulitzer Prize for Poetry-winner Peter Balakian PhD '80.
Brown began offering computer science courses through the departments of Economics and Applied Mathematics in 1956, when it acquired an IBM machine. Brown added an IBM 650 in January 1958, the only one of its type between Hartford and Boston. In 1960, Brown opened its first dedicated computer building; designed by Philip Johnson and located on George Street, it received an IBM 7070 computer the following year. Brown granted computer science full departmental status in 1979. In 2009, IBM and Brown announced the installation of a supercomputer, then the most powerful in the southeastern New England region.
In the 1960s, Andries van Dam, along with Ted Nelson and Bob Wallace, invented the hypertext editing systems HES and FRESS while at Brown. Nelson coined the word "hypertext". Van Dam's students helped originate XML, XSLT, and related Web standards. Other Brown alumni have distinguished themselves in computer science. They include a principal architect of the Classic Mac OS, a principal architect of the Intel 80386 microprocessor line, the Microsoft Windows 95 project chief, a CEO of Apple, the former head of the MIT Computer Science and Artificial Intelligence Laboratory, the inaugural chair of the Computing Community Consortium, and design chiefs at Pixar and Industrial Light & Magic, protégés of graphics guru Andries van Dam. The character "Andy" in the animated film "Toy Story" is taken to be an homage to van Dam from his students employed at Pixar. Van Dam denies this, but a copy of his book ("Computer Graphics: Principles and Practice") appears on Andy's bookshelf in the film. Brown computer science graduate and "Heroes" actor Masi Oka '97 was an animator at Industrial Light & Magic.
The department today is home to the CAVE, a virtual reality room used for everything from three-dimensional drawing classes to tours of the circulatory system for medical students. In 2000, students from Brown's Technology House converted the south face of the Sciences Library into a Tetris game, the first high-rise-building Tetris ever attempted. Code-named La Bastille, the game used a personal computer running Linux, a radio-frequency video game controller, eleven circuit boards, a 12-story data network, and over 10,000 Christmas lights.
In the early 2000s the department initiated a program entitled the Industry Partners Program that partners with outside companies, typically tech companies, to expose students to career opportunities.
The Joukowsky Institute for Archaeology and the Ancient World pursues fieldwork and excavations, regional surveys, and academic study of the archaeology and art of the ancient Mediterranean, Egypt, and Western Asia from the Levant to the Caucasus. The Institute has a very active fieldwork profile, with faculty-led excavations and regional surveys presently in Petra, Jordan, in West-Central Turkey, at Abydos in Egypt, and in Sudan, Italy, Mexico, Guatemala, Montserrat in the West Indies, and Providence, Rhode Island.
The Institute's faculty includes cross-appointments from the departments of Egyptology, Assyriology, Classics, Anthropology, and History of Art and Architecture. Faculty research and publication areas include Greek and Roman art and architecture, landscape archaeology, urban and religious architecture of the Levant, Roman provincial studies, the Aegean Bronze Age, and the archaeology of the Caucasus. The Institute offers visiting teaching appointments and postdoctoral fellowships which have, in recent years, included Near Eastern Archaeology and Art, Classical Archaeology and Art, Islamic Archaeology and Art, and Archaeology and Media Studies.
Egyptology and Assyriology
Facing the Joukowsky Institute, across the Front Green, is the Department of Egyptology and Assyriology, formed in 2006 by the merger of Brown's renowned departments of Egyptology and History of Mathematics. It is one of only a handful of such departments in the United States. The curricular focus is on three principal areas: Egyptology (the study of the ancient languages, history, and culture of Egypt), Assyriology (the study of the ancient lands of present-day Iraq, Syria, and Turkey), and the history of the ancient exact sciences (astronomy, astrology, and mathematics). Many courses in the department are open to all Brown undergraduates without prerequisite, and include archaeology, languages, history, and Egyptian and Mesopotamian religions, literature, and science. Students concentrating (majoring) in the department choose a track of either Egyptology or Assyriology. Graduate level study comprises three tracks to the doctoral degree: Egyptology, Assyriology, or the History of the Exact Sciences in Antiquity.
The Watson Institute for International and Public Affairs is a center for the study of global issues and public affairs and is one of the leading institutes of its type in the country. It occupies an architecturally distinctive building designed by Uruguayan architect Rafael Viñoly. The Institute was initially endowed by Thomas Watson, Jr., Brown class of 1937, former Ambassador to the Soviet Union, and longtime president of IBM. Institute faculty includes, or formerly included, Italian prime minister and European Commission president Romano Prodi, Brazilian president Fernando Henrique Cardoso, Chilean president Ricardo Lagos Escobar, Mexican novelist and statesman Carlos Fuentes, Brazilian statesman and United Nations commission head Paulo Sérgio Pinheiro, Indian foreign minister and ambassador to the United States Nirupama Rao, American diplomat and Dayton Peace Accords author Richard Holbrooke (Brown '62), and Sergei Khrushchev, editor of the papers of his father Nikita Khrushchev, leader of the Soviet Union.
The Institute's curricular interest is organized into the principal themes of development, security, and governance—with further focuses on globalization, economic uncertainty, security threats, environmental degradation, and poverty. Three Brown undergraduate concentrations (majors) are hosted by the Watson Institute—Development Studies, International Relations, and Public Policy. Graduate programs offered at the Watson Institute include the Graduate Program in Development (Ph.D.) and the Public Policy Program (M.P.A.). The Institute also offers postdoctoral, professional-development, and global-outreach programming. In support of these programs, the Institute houses various centers, including the Brazil Initiative, Brown-India Initiative, China Initiative, Middle East Studies center, the Center for Latin American and Caribbean Studies (CLACS), and the Taubman Center for Public Policy. In recent years, the most internationally cited product of the Watson Institute has been its Costs of War Project, first released in 2011 and continuously updated. The Project comprises a team of economists, anthropologists, political scientists, legal experts, and physicians, and seeks to calculate the economic costs, human casualties, and impact on civil liberties of the wars in Iraq, Afghanistan, and Pakistan since 2001.
Established in 1847, Brown's engineering program is the oldest in the Ivy League and the third oldest civilian engineering program in the country, preceded only by Rensselaer Polytechnic Institute (1824) and Union College (1845). In 1916, the departments of electrical, mechanical, and civil engineering were merged into a Division of Engineering, and in 2010 the division was elevated to a School of Engineering.
Engineering at Brown is especially interdisciplinary. The School is organized without the traditional departments or boundaries found at most schools, and follows a model of connectivity between disciplines—including biology, medicine, physics, chemistry, computer science, the humanities and the social sciences. The School practices an innovative clustering of faculties in which engineers team with non-engineers to bring a convergence of ideas.
Since 2009, Brown has developed an Executive MBA program in conjunction with IE Business School in Madrid, one of the leading business schools in Europe. The relationship has since strengthened, and the two institutions now offer a dual-degree program. In this partnership, Brown provides its traditional coursework while IE provides most of the business-related subjects, making the program a differentiated alternative to other Ivy League EMBAs. The cohort typically consists of 25–30 EMBA candidates from some 20 countries.
Classes are held in Providence, Madrid, Cape Town, and online.
The Pembroke Center for Teaching and Research on Women was established at Brown in 1981 by Joan Wallach Scott as a research center on gender. It was named for Pembroke College, the former women's coordinate college at Brown, and is affiliated with Brown's Sarah Doyle Women's Center. It supports the undergraduate concentration in Gender and Sexuality Studies, post-doctoral research fellowships, the annual Pembroke Seminar, and other academic programs. The Center also manages various collections, archives, and resources, including the Elizabeth Weed Feminist Theory Papers and the Christine Dunlap Farnham Archive.
Established in 1887, the Graduate School has around 2,000 students studying over 50 disciplines. Twenty different master's degrees are offered, as well as Ph.D. degrees in over 40 subjects ranging from applied mathematics to public policy. Overall, admission to the Graduate School is highly competitive, with an acceptance rate of about 10 percent.
The university's medical program started in 1811, but the school was suspended by President Wayland in 1827 after the program's faculty declined to live on campus (a new requirement under Wayland). In 1975, the first M.D. degrees from the new Program in Medicine were awarded to a graduating class of 58 students. In 1991, the school was officially renamed the Brown University School of Medicine, then renamed once more to Brown Medical School in October 2000. In January 2007, Warren Alpert donated $100 million to Brown Medical School, in recognition of which its name was changed to the Warren Alpert Medical School of Brown University.
In 2020, "U.S. News & World Report" ranked Brown's medical school the 9th most selective in the country, with an acceptance rate of 2.8 percent.
"U.S. News" ranks it 38th for research and 35th for primary care.
The medical school is known especially for its eight-year Program in Liberal Medical Education (PLME), inaugurated in 1984. One of the most selective and renowned programs of its type in the country, it offered admission to 90 of the 2,290 applicants for the class entering in autumn 2015, an acceptance rate of 3.9 percent. Since 1976, the Early Identification Program (EIP) has encouraged Rhode Island residents to pursue careers in medicine by recruiting sophomores from Providence College, Rhode Island College, the University of Rhode Island, and Tougaloo College. In 2004, the school once again began to accept applications from premedical students at other colleges and universities via AMCAS like most other medical schools. The medical school also offers combined degree programs leading to the M.D./Ph.D., M.D./M.P.H. and M.D./M.P.P. degrees.
The Marine Biological Laboratory (MBL) is an independent research institution established in 1882 at Woods Hole, Massachusetts. The laboratory is linked to 54 current or past Nobel Laureates who have been research or teaching faculty. Since 2005, the MBL and Brown have collaborated in a Ph.D. program in biological and environmental sciences that combines faculty at both institutions, including the faculties of the Ecosystems Center, the Bay Paul Center, the Program in Cellular Dynamics, and the Marine Resources Center.
The Brown University School of Professional Studies currently offers blended-learning executive master's degrees in Healthcare Leadership, Cyber Security, and Science and Technology Leadership. The master's degrees are designed for students with jobs and lives outside academia who wish to advance in their respective fields. Students meet in Providence, Rhode Island, every six to seven weeks for a week-long seminar each trimester.
The university has also invested in MOOC development. In 2013 it launched two courses, "Archeology's Dirty Little Secrets" and "The Fiction of Relationship", both of which enrolled thousands of students. After a year of courses, however, the university ended its contract with Coursera and revamped its online presence and MOOC development department. By 2017, the university had released new courses on edX, including "The Ethics of Memory" and "Artful Medicine: Art's Power to Enrich Patient Care". In January 2018, Brown published its first "game-ified" course, "Fantastic Places, Unhuman Humans: Exploring Humanity Through Literature", which featured out-of-platform games to help learners understand the material, as well as a story line that immerses users in a fictional world to help characters along their journey.
For the undergraduate class of 2022 (enrolling in fall 2018), Brown received 35,438 applications, the largest applicant pool in the university's history; 2,566 applicants were accepted, for an acceptance rate of 7.2 percent, the lowest in university history. Additionally, for the academic year 2015–16 there were 1,834 transfer applicants, of whom 8.9 percent were accepted, with an SAT range of 2180–2330, an ACT range of 31–34, and an average college GPA of 3.85. In 2017, the Graduate School accepted 11 percent of 9,215 applicants. In 2014, "U.S. News" ranked Brown's Warren Alpert Medical School the 5th most selective in the country, with an acceptance rate of 2.9 percent.
Brown's admission policy is stipulated to be need-blind for all domestic first-year applicants. In 2017, Brown announced that loans would be eliminated from all undergraduate financial aid awards starting in 2018–2019, as part of a new $30 million campaign called the "Brown Promise". In 2016–17, the university awarded need-based scholarships worth $120.5 million. The average need-based award for the class of 2020 was $47,940.
Brown has committed to "minimize its energy use, reduce negative environmental impacts and promote environmental stewardship." The Energy and Environmental Advisory Committee has developed a set of ambitious goals for the university to reduce its carbon emissions and eventually achieve carbon neutrality. The "Brown is Green" website collects information about Brown's progress toward greenhouse gas emissions reductions and related campus initiatives, such as student groups, courses, and research. Brown's grade of A-minus was the top one issued in the 2009 report of the Sustainable Endowments Institute (no A-grade was issued).
Brown has a number of active environmental leadership groups on campus. These groups have begun several campus-wide environmental initiatives, including promoting reduction of the supply of and demand for bottled water and investigating a composting program.
According to the A. W. Kuchler U.S. potential natural vegetation types, Brown University would have a dominant vegetation type of Appalachian Oak ("104") with a dominant vegetation form of Eastern Hardwood Forest ("25").
Brown is a member of the Ivy League athletic conference, which is categorized as a Division I (top level) conference of the National Collegiate Athletic Association (NCAA). The Brown Bears are the third largest university sports program in the United States, sponsoring 38 varsity intercollegiate teams (Harvard sponsors 42 and Princeton 39). Brown's athletic program is one of the "U.S. News & World Report" top 20—the "College Sports Honor Roll"—based on breadth of program and athletes' graduation rates. Brown's newest varsity team is women's rugby, promoted from club-sport status in 2014.
Brown women's rowing has won 7 national titles between 1999 and 2011. Brown men's rowing perennially finishes in the top 5 in the nation, most recently winning silver, bronze, and silver in the national championship races of 2012, 2013, and 2014. The men's and women's crews have also won championship trophies at the Henley Royal Regatta and the Henley Women's Regatta. Brown's men's soccer is consistently ranked in the top 20, and has won 18 Ivy League titles overall; recent soccer graduates play professionally in Major League Soccer and overseas. Brown football, under its most successful coach historically, Phil Estes, won Ivy League championships in 1999, 2005, and 2008. (Brown football's reemergence is credited to its 1976 Ivy League championship team, "The Magnificent Andersons," so named for its coach, John Anderson.) High-profile alumni of the football program include Houston Texans head coach Bill O'Brien, former Penn State football coach Joe Paterno, Heisman Trophy namesake John W. Heisman, and Pollard Award namesake Fritz Pollard. The men's lacrosse team also has a long and storied history. Brown women's gymnastics won the Ivy League tournament in 2013 and 2014. Brown varsity equestrian has won the Ivy League championship several times. Brown also supports competitive intercollegiate club sports, including sailing and ultimate frisbee. The men's ultimate team, Brownian Motion, has won three national championships, in 2000, 2005, and 2019. The Brown women's sailing team has won 5 national championships, most recently in 2019, while the coed sailing team won 2 national championships, in 1942 and 1948. Both teams are consistently ranked in the top 10 in the nation.
The first intercollegiate ice hockey game in America was played between Brown and Harvard on January 19, 1898. The first university rowing regatta larger than a dual-meet was held between Brown, Harvard, and Yale at Lake Quinsigamond in Massachusetts on July 26, 1859.
In 2014, Brown University tied with the University of Connecticut for the highest number of reported rapes in the nation, with 43 reports of rape on its main campus.
The weekend includes an annual spring concert festival that has featured numerous famous artists.
About 12 percent of Brown students are in fraternities and sororities. There are 12 residential Greek houses: six fraternities (Beta Rho Pi, Delta Phi, Delta Tau, Phi Kappa Psi, Sigma Chi, and Theta Delta Chi), four sororities (Alpha Chi Omega, Kappa Alpha Theta, Delta Gamma, and Kappa Delta), one co-ed house (Zeta Delta Xi), and one co-ed literary society (Alpha Delta Phi). Phi Sigma Kappa fraternity was present on campus from 1906 to 1939, but was unable to reactivate after World War II due to wartime losses. All recognized Greek-letter organizations are located on campus in Wriston Quadrangle in university-owned housing. They are overseen by the Greek Council.
An alternative to Greek-letter organizations are the program houses organized by themes. As with Greek houses, the residents of program houses select their new members, usually at the start of the spring semester. Examples of program houses are St. Anthony Hall (located in King House), Buxton International House, the Machado French/Hispanic/Latinx House, Technology House, Harambee (African culture) House, Social Action House and Interfaith House.
Currently, there are three student cooperative houses at Brown. Two of them, Watermyn and Finlandia on Waterman Street, are owned by the Brown Association for Cooperative Housing (BACH), a non-profit corporation owned by its members. The third co-op, West House, is located in a Brown-owned house on Brown Street. The three organizations run a vegetarian co-op for the larger community.
All students not in program housing enter a lottery for general housing. Students form groups and are assigned time slots during which they can pick among the remaining housing options.
The earliest societies at Brown were devoted to oration and debate. The Pronouncing Society is mentioned in the diary of Solomon Drowne, class of 1773, who was voted its president in 1771. It seems to have disappeared during the American Revolutionary War. The next recorded society, the Misokosmian Society, was founded in 1794 and renamed the Philermenian Society in 1798. This was effectively a secret society with membership limited to 45. It met fortnightly to hear speeches and debate, and it thrived until the Civil War; in 1821 its library held 1,594 volumes. In 1799, a chapter of the Philandrian Society, also secret, was established at the College. In 1806, the United Brothers was formed as an egalitarian alternative to the Philermenian Society. "These two great rivals," says the university historian, "divided the student body between them for many years, surviving into the days of President Sears. A tincture of political controversy sharpened their rivalry, the older society inclining to the aristocratic Federals, the younger to the Republicans, the democrats of that day. ... The students continuing to increase in number, they outran the constitutional limits of both societies, and a third, the Franklin Society, was established in 1824; it never had the vitality of the other two, however, and died after ten years." Other nineteenth-century clubs and societies, too numerous to treat here, are described in Bronson's history of the university.
The Cammarian Club—founded in 1893 and taking its name from the Latin for lobster, its members' favorite dinner food—was at first a semi-secret society which "tapped" 15 seniors each year. In 1915, self-perpetuating membership gave way to popular election by the student body, and thenceforward the Club served as the "de facto" undergraduate student government. In 1971, unaccountably, it voted the name Cammarian Club out of existence, thereby amputating its tradition and longevity. The successor and present-day organization is the generically-named Undergraduate Council of Students.
Societas Domi Pacificae, known colloquially as "Pacifica House," is a present-day, self-described secret society, which nonetheless publishes a website and an email address. It claims a continuous line of descent from the Franklin Society of 1824, citing a supposed intermediary "Franklin Society" traceable in the nineteenth century. But the intermediary turns out to be, on closer inspection, the well-known Providence Franklin Society, a civic organization unconnected to Brown whose origins and activity are well-documented. It was founded in 1821 by merchants William Grinnell and Joseph Balch, Jr., and chartered by the General Assembly in January 1823. The "Pacifica House" account of this (conflated) Franklin Society cites published mentions of it in 1859, 1876, and 1883. But the first of these (Rhees 1859, see footnote "infra") is merely a sketch of the 1824 Brown organization; the second (Stockwell 1876) is a reference-book article on the Providence Franklin Society itself; and the third is the Providence Franklin Society's own publication, which the "Pacifica House" reference mis-ascribes to the "Franklin Society," dropping the word "Providence."
There are over 300 registered student organizations on campus with diverse interests. The Student Activities Fair, during the orientation program, provides first-year students the opportunity to become acquainted with the wide range of organizations. A sample of organizations includes:
Brown University has several resource centers on campus. The centers often act as sources of support as well as safe spaces for students to explore certain aspects of their identity. Additionally, the centers often provide physical spaces for students to study and have meetings. Although most centers are identity-focused, some provide academic support as well.
The Brown Center for Students of Color (BCSC) is a space that provides support for students of color. Established in 1972 at the demand of student protests, the BCSC encourages students to engage in critical dialogue, develop leadership skills, and promote social justice. The center houses various programs for students to share their knowledge and engage in discussion. Programs include the Third World Transition Program, the Minority Peer Counselor Program, the Heritage Series, and other student-led initiatives. Additionally, the BCSC hopes to foster community among the students it serves by providing spaces for students to meet and study.
The Sarah Doyle Women's Center aims to provide a space for members of the Brown community to examine and explore issues surrounding gender. The center was named after one of the first women to attend Brown University, Sarah Doyle. The center emphasizes intersectionality in its conversations on gender, encouraging people to see gender as present and relevant in various aspects of life. The center hosts programs and workshops in order to facilitate dialogue and provide resources for students, faculty, and staff.
Other centers include the LGBTQ+ Center, the First-Generation College and Low-Income Student (FLi) Center, and the Curricular Resource Center.
The "Forbes" magazine annual ranking of "America's Top Colleges 2019"—which ranked 650 research universities, liberal arts colleges and service academies—ranked Brown 7th overall and 7th among universities.
"U.S. News & World Report" ranked Brown 14th among national universities in its 2020 edition. The 2020 edition also ranked Brown tied at 3rd for undergraduate teaching, 15th in Most Innovative Schools, and 16th in Best Value Schools.
"Washington Monthly" ranked Brown 28th among 395 national universities in the U.S. based on its contribution to the public good, as measured by social mobility, research, and promoting public service.
In 2017, the National Science Foundation ranked Brown University 103rd in the United States by research. For 2020, "U.S. News & World Report" ranked Brown University 102nd globally.
In 2014, "Forbes" magazine ranked Brown 7th on its list of "America's Most Entrepreneurial Universities". The "Forbes" analysis looked at the ratio of "alumni and students who have identified themselves as founders and business owners on LinkedIn" and the total number of alumni and students.
LinkedIn particularized the "Forbes" rankings, placing Brown third (between MIT and Princeton) among "Best Undergraduate Universities for Software Developers at Startups." LinkedIn's methodology involved a career-path examination of "millions of alumni profiles" in its membership database.
In 2020, "U.S. News" ranked Brown's Warren Alpert Medical School the 9th most selective in the country, with an acceptance rate of 2.8 percent.
Alumni in politics include U.S. Secretary of State John Hay (1852), U.S. Secretary of State and Attorney General Richard Olney (1856), Chief Justice of the United States and U.S. Secretary of State Charles Evans Hughes (1881), Senator Maggie Hassan '80 of New Hampshire, Governor Jack Markell '82 of Delaware, Rhode Island Representative David Cicilline '83, Minnesota Representative Dean Phillips '91, 2020 Presidential candidate and entrepreneur Andrew Yang '96, and DNC Chair Tom Perez '83.
Prominent alumni in business and finance include philanthropist John D. Rockefeller Jr. (1897), Chair of the Federal Reserve Janet Yellen '67, World Bank President Jim Yong Kim '82, Bank of America CEO Brian Moynihan '81, CNN founder and America's Cup yachtsman Ted Turner '60, IBM chairman and CEO Thomas Watson, Jr. '37, Apple Inc. CEO John Sculley '61, Uber CEO Dara Khosrowshahi '91, and magazine editor John F. Kennedy, Jr. '83.
Important figures in the history of education include the father of American public school education Horace Mann (1819), civil libertarian and Amherst College president Alexander Meiklejohn, first president of the University of South Carolina Jonathan Maxcy (1787), Bates College founder Oren B. Cheney (1836), University of Michigan president (1871–1909) James Burrill Angell (1849), University of California president (1899–1919) Benjamin Ide Wheeler (1875), and Morehouse College's first African-American president John Hope (1894).
Alumni in the computer sciences and industry include architect of Intel 386, 486, and Pentium microprocessors John H. Crawford '75, and inventor of the first silicon transistor Gordon Kidd Teal '31.
Alumni in the arts and media include actress Jessica Capshaw '98, actor Daveed Diggs '04, actress Emma Watson '14, NPR program host Ira Glass '82, singer-composer Mary Chapin Carpenter '81, humorist and Marx Brothers screenwriter S.J. Perelman '25, novelists Nathanael West '24, Jeffrey Eugenides '83, Edwidge Danticat (MFA '93), and Marilynne Robinson '66; actress Jo Beth Williams '70, composer and synthesizer pioneer Wendy Carlos '62, journalist James Risen '77, political pundit Mara Liasson, MSNBC host and The Nation editor-at-large Chris Hayes '01, "New York Times" publisher A. G. Sulzberger '04, and actress Julie Bowen '91.
Other notable alumni include "Lafayette of the Greek Revolution" and its historian Samuel Gridley Howe (1821), Governor of Wyoming Territory and Governor of Nebraska John Milton Thayer (1841), Governor of Rhode Island Augustus Bourn (1855), NASA head during the first seven Apollo missions Thomas O. Paine '42, diplomat Richard Holbrooke '62, sportscaster Chris Berman '77, Houston Texans head coach Bill O'Brien '92, 2018 Miss America Cara Mund '16, Penn State football coach Joe Paterno '50, Heisman Trophy namesake John W. Heisman '91,
Olympic and world champion triathlete Joanna Zeiger, royals and nobles such as Prince Rahim Aga Khan, Prince Faisal bin Al Hussein of the Hashemite Kingdom of Jordan, Princess Leila Pahlavi of Iran '92, Prince Nikolaos of Greece and Denmark, Prince Nikita Romanov, Princess Theodora of Greece and Denmark, Prince Jaime of Bourbon-Parma, Duke of San Jaime and Count of Bardi, Prince Ra'ad bin Zeid, Lady Gabriella Windsor, Prince Alexander von Fürstenberg, Countess Cosima von Bülow Pavoncelli, and her half-brother Prince Alexander-Georg von Auersperg, David Shrier, American futurist and author, and Olympic gold ('98), silver ('02), and bronze ('06) medal-winning hockey player Katie King-Crowley '97.
Alumni in the sciences include Nobel Laureates Craig Mello '82 and Jerry White '87, Cooley–Tukey FFT algorithm co-originator John Wilder Tukey '36, biologist Stanley Falkow (PhD '59), and psychologist Aaron Beck '50.
Notable past or current faculty have included Nobel Laureates Michael Kosterlitz, Lars Onsager, George Stigler, Vernon L. Smith, George Snell and Leon Cooper; Fields Medal winning mathematician David Mumford, Pulitzer Prize–winning historian Gordon S. Wood, Sakurai Prize winning physicist Gerald Guralnik, computer scientist Andries van Dam, engineer Daniel C. Drucker, sociologist Lester Frank Ward, former Prime Minister of Italy and former EU chief Romano Prodi, former President of Brazil Fernando Cardoso, former President of Chile Ricardo Lagos, writers Carlos Fuentes, Chinua Achebe, and Robert Coover, philosopher Martha Nussbaum, developmental psychologist William Damon, linguist Hans Kurath, political scientist James Morone, biologist Kenneth R. Miller, and Senior Fellow Sergei Khrushchev.
Brown's reputation as an institution with a free-spirited, iconoclastic student body is portrayed in fiction and popular culture. "Family Guy" character Brian Griffin is a Brown alumnus. "The O.C."s main character Seth Cohen is denied acceptance to Brown while his girlfriend Summer Roberts is accepted. In "The West Wing", Amy Gardner is a Brown alumna. In "Gossip Girl", New York socialite Serena vies with her friends for a spot at Brown. | https://en.wikipedia.org/wiki?curid=4157 |
Bill Atkinson
Bill Atkinson (born March 17, 1951) is an American computer engineer and photographer. Atkinson worked at Apple Computer from 1978 to 1990.
Atkinson was the principal designer and developer of the graphical user interface (GUI) of the Apple Lisa and, later, one of the first thirty members of the original Apple Macintosh development team, and was the creator of the ground-breaking MacPaint application, which fulfilled the vision of using the computer as a creative tool. He also designed and implemented QuickDraw, the fundamental toolbox that the Lisa and Macintosh used for graphics. QuickDraw's performance was essential for the success of the Macintosh GUI. He also was one of the main designers of the Lisa and Macintosh user interfaces. Atkinson also conceived, designed and implemented HyperCard, the first popular hypermedia system. HyperCard put the power of computer programming and database design into the hands of nonprogrammers. In 1994, Atkinson received the EFF Pioneer Award for his contributions.
He received his undergraduate degree from the University of California, San Diego, where Apple Macintosh developer Jef Raskin was one of his professors. Atkinson continued his studies as a graduate student in neurochemistry at the University of Washington. Raskin invited Atkinson to visit him at Apple Computer; Steve Jobs persuaded him to join the company immediately as employee No. 51, and Atkinson never finished his PhD.
Around 1990, General Magic was founded, with Bill Atkinson as one of its three cofounders; "Byte" magazine greeted the new venture with the following assessment:
The obstacles to General Magic's success may appear daunting, but General Magic is not your typical start-up company. Its partners include some of the biggest players in the worlds of computing, communications, and consumer electronics, and it's loaded with top-notch engineers who have been given a clean slate to reinvent traditional approaches to ubiquitous worldwide communications.
In 2007, Atkinson began working as an outside developer with Numenta, a startup working on computer intelligence. On his work there Atkinson said, "what Numenta is doing is more fundamentally important to society than the personal computer and the rise of the Internet."
Atkinson has since combined his passion for computer programming with his love of nature photography to create art images. He takes close-up photographs of stones that have been cut and polished. His works are highly regarded for their resemblance to miniature landscapes hidden within the stones. Atkinson's 2004 book "Within the Stone" features a collection of his close-up photographs. The highly intricate and detailed images he creates are made possible by the accuracy and creative control of the digital printing process that he helped create.
Some of Atkinson's noteworthy contributions to the field of computing include:
Atkinson now works as a nature photographer. Actor Nelson Franklin portrayed him in the 2013 film "Jobs". | https://en.wikipedia.org/wiki?curid=4158 |
Battle of Lostwithiel
The Battle of Lostwithiel took place over a 13-day period, from 21 August to 2 September 1644, near Lostwithiel and along the River Fowey valley in Cornwall during the First English Civil War. In the battle King Charles led the Royalists to a decisive victory over the Parliamentarians commanded by the Earl of Essex.
The battle was the worst defeat suffered by the Parliamentarians in the First English Civil War and secured South-West England for the Royalists until the end of the war.
During April and May 1644, Parliamentarian commanders Sir William Waller and the Earl of Essex combined their armies and carried out a campaign against King Charles and the Royalist garrisons surrounding Oxford. Trusting Waller to deal with the King in Oxfordshire, Essex divided the Parliamentarian army on 6 June and headed southwest to relieve the Royalist siege of Lyme in Dorset. Lyme had been under siege by King Charles' nephew, Prince Maurice, and the Royalists for nearly two months.
South-West England at that time was largely under the control of the Royalists. The town of Lyme, however, was a Parliamentarian stronghold and served as an important seaport for the Parliamentarian fleet of the Earl of Warwick. As Essex approached Lyme in mid-June Prince Maurice ended the siege and took his troops west to Exeter.
Essex then proceeded further southwest toward Cornwall with the intent to relieve the siege of Plymouth. Plymouth was the only other significant Parliamentarian stronghold in the South-West and it was under siege by Richard Grenville and Cornish Royalists. Essex had been told by Lord Robartes, a wealthy politician and merchant from Cornwall, that the Parliamentarians would gain considerable military support if he moved against Grenville and freed Plymouth. Given Lord Robartes’ advice, Essex advanced toward Plymouth. His action caused Grenville to end the siege. Essex then advanced further west believing that he could take full control of the South-West from the Royalists.
Meanwhile in Oxfordshire, King Charles battled with the Parliamentarians and defeated Sir William Waller at the Battle of Cropredy Bridge on 29 June. On 12 July after a Royalist council of war recommended that Essex be dealt with before he could be reinforced, King Charles and his Oxford army departed Evesham. King Charles accepted the council's advice, not solely because it was good strategy, but more so because his Queen was in Exeter where she had recently given birth to the Princess Henrietta and had been denied safe conduct to Bath by Essex.
On 26 July, King Charles arrived in Exeter and joined his Oxford army with the Royalist forces commanded by Prince Maurice. On that same day, Essex and his Parliamentary force entered Cornwall. One week later, as Essex bivouacked with his army at Bodmin, he learned that King Charles had defeated Waller, brought his Oxford army to the South-West, and joined forces with Prince Maurice. Essex had also seen that he was not receiving the military support from the people of Cornwall that Lord Robartes had promised. Essex now understood that he and his army were trapped in Cornwall and that his only salvation would be reinforcements or an escape through the port of Fowey by means of the Parliamentarian fleet.
Essex immediately marched his troops eight kilometers south to the small town of Lostwithiel arriving on 2 August. He immediately deployed his men in a defensive arc with detachments on the high ground to the north at Restormel Castle and the high ground to the east at Beacon Hill. Essex also sent a small contingent of foot south to secure the port of Fowey aiming to eventually evacuate his infantry by sea. At Essex’s disposal was a force of 6,500 foot and 3,000 horse.
Aided by intelligence provided by the people of Cornwall, King Charles followed westward, slowly and deliberately cutting off the potential escape routes that Essex might attempt to use. On 6 August King Charles communicated with Essex, calling on him to surrender. Essex considered the offer, stalling for several days, but ultimately refused.
On 11 August, Grenville and the Cornish Royalists entered Bodmin, forcing out Essex's rear-guard cavalry. Grenville then proceeded south across Respryn Bridge to join forces with King Charles and Prince Maurice. It is estimated that the Royalist forces at that time comprised 12,000 foot and 7,000 horse. Over the next two days the Royalists deployed detachments along the east side of the River Fowey to prevent a Parliamentarian escape cross-country. Finally, the Royalists sent 200 foot with artillery south to garrison the fort at Polruan, effectively blocking the entrance to the harbour of Fowey. At about that time, Essex learned that reinforcements under the command of Sir John Middleton had been turned back by the Royalists at Bridgwater in Somerset.
At 07:00 hours on 21 August, King Charles launched his first attack on Essex and the Parliamentarians at Lostwithiel. From the north, Grenville and the Cornish Royalists attacked Restormel Castle and easily dislodged the Parliamentarians who fell back quickly. From the east, King Charles and the Oxford army captured Beacon Hill with little resistance from the Parliamentarians. Prince Maurice and his force occupied Druid Hill. Casualties were fairly low and by nightfall the fighting ended and the Royalists held the high ground on the north and east sides of Lostwithiel.
For the next couple of days the two opposing forces exchanged fire in only a few small skirmishes. On 24 August, King Charles further tightened the noose around the Parliamentarians when he sent Lord Goring and Sir Thomas Bassett to secure the town of St Blazey and the area to the southwest of Lostwithiel. This reduced the Parliamentarians' foraging area and their access to the coves and inlets in the vicinity of the port of Par.
Essex and the Parliamentarians were now totally surrounded and boxed into a three-kilometer by eight-kilometer area spanning from Lostwithiel in the north to the port of Fowey in the south. Knowing that he wouldn't be able to fight his way out, Essex made his final plans for an escape. Since a sea evacuation of his cavalry wouldn't be possible, Essex ordered his cavalry commander William Balfour to attempt a breakout to Plymouth. For the infantry, Essex planned to retreat south and meet Lord Warwick and the Parliamentarian fleet at Fowey. At 03:00 hours on 31 August, Balfour and 2,000 members of his cavalry executed the first step of Essex's plan when they successfully crossed the River Fowey and escaped intact without engaging the Royalist defenders.
Early on the morning of 31 August, the Parliamentarians ransacked and looted Lostwithiel and began their withdrawal south. At 07:00 hours, the Royalists observed the Parliamentarians' movements and immediately attacked. Grenville attacked from the north; King Charles and Prince Maurice crossed the River Fowey, joined up with Grenville, and entered Lostwithiel. Together the Royalists engaged the Parliamentarian rear-guard and quickly took possession of the town. The Royalists also sent detachments down the east side of the River Fowey to guard against any further breakouts and to capture the town of Polruan.
The Royalists then began to pursue Essex and the Parliamentarian infantry down the river valley. At the outset the Royalists pushed the Parliamentarians about four kilometers south through the hedged fields, hills and valleys. At the narrow pass near St. Veep, Philip Skippon, Essex's commander of the infantry, counter-attacked and pushed the Royalists back several fields, attempting to give Essex time to set up a line of defense further south. At 11:00 hours, the Royalist cavalry mounted a charge and won back the ground that had been lost. There was a lull in the battle at 12:00 hours as King Charles waited for his full army to come up and reform.
The fighting resumed and continued through the afternoon as the Parliamentarians tried to disengage and continue south. At 16:00 hours, the Parliamentarians tried again to counter-attack with their remaining cavalry, only to be driven back by King Charles' Life Guard. About a kilometer north of Castle Dore, the Parliamentarians' right flank began to give way. At 18:00 hours, when the Parliamentarians were pushed back to Castle Dore, they made one last attempt to rally, only to be repulsed and surrounded.
About that time the fighting ended with the Royalists satisfied in their accomplishments of the day. Exhausted and discouraged, the Parliamentarians hunkered down for the night. Later that evening under the darkness of night, Essex and his command staff stole away to the seashore where they used a fishing boat to flee to Plymouth, leaving Skippon in command.
Early on 1 September, Skippon met with his officers to inform them of Essex's escape and to discuss their alternatives. It was decided that they would approach King Charles and seek terms. Concerned that Parliamentarian reinforcements might be on their way, the King quickly agreed to generous terms on 2 September. The battle was over. Six thousand Parliamentarians were taken prisoner. Their weapons were taken away and they were marched to Southampton. They suffered the wrath of the Cornish people en route, and as many as 3,000 died of exposure and disease along the way. Those who survived the journey were, however, eventually set free. Total casualties associated with the battle were extremely high, especially when counting those who died on the march to Southampton. In addition, an estimated 700 Parliamentarians were killed or wounded during the fighting in Cornwall, along with an estimated 500 Royalists.
The Battle of Lostwithiel was a great victory for King Charles and the greatest loss that the Parliamentarians would suffer in the First English Civil War. For King Charles the victory secured the South-West for the remainder of the war and mitigated criticism for a while against the Royalist war effort.
For the Parliamentarians, the defeat resulted in recriminations with Middleton ultimately being blamed for his failure to break-through with reinforcements. The Parliamentarian failure at Lostwithiel along with the failure to defeat King Charles at the Second Battle of Newbury ultimately led Parliament to adopt the Self-denying Ordinance and led to the implementation of the New Model Army. | https://en.wikipedia.org/wiki?curid=4160 |
Bertrand Russell
Bertrand Arthur William Russell, 3rd Earl Russell, (18 May 1872 – 2 February 1970) was a British polymath, philosopher, logician, mathematician, historian, writer, social critic, political activist, and Nobel laureate. Throughout his life, Russell considered himself a liberal, a socialist and a pacifist, although he also sometimes suggested that his sceptical nature had led him to feel that he had "never been any of these things, in any profound sense." Russell was born in Monmouthshire into one of the most prominent aristocratic families in the United Kingdom.
In the early 20th century, Russell led the British "revolt against idealism". He is considered one of the founders of analytic philosophy along with his predecessor Gottlob Frege, colleague G. E. Moore and protégé Ludwig Wittgenstein. He is widely held to be one of the 20th century's premier logicians. With A. N. Whitehead he wrote "Principia Mathematica", an attempt to create a logical basis for mathematics, the quintessential work of classical logic. His philosophical essay "On Denoting" has been considered a "paradigm of philosophy". His work has had a considerable influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science (see type theory and type system) and philosophy, especially the philosophy of language, epistemology and metaphysics.
Russell was a prominent anti-war activist, championed anti-imperialism, and chaired the India League. Occasionally, he advocated preventive nuclear war, before the opportunity provided by the atomic monopoly had passed, and he decided he would "welcome with enthusiasm" world government. He went to prison for his pacifism during World War I. Later, Russell concluded that war against Adolf Hitler's Nazi Germany was a necessary "lesser of two evils" and also criticised Stalinist totalitarianism, attacked the involvement of the United States in the Vietnam War and was an outspoken proponent of nuclear disarmament. In 1950, Russell was awarded the Nobel Prize in Literature "in recognition of his varied and significant writings in which he champions humanitarian ideals and freedom of thought".
Bertrand Arthur William Russell was born on 18 May 1872 at Ravenscroft, Trellech, Monmouthshire, Wales, into an influential and liberal family of the British aristocracy. His parents, Viscount and Viscountess Amberley, were radical for their times. Lord Amberley consented to his wife's affair with their children's tutor, the biologist Douglas Spalding. Both were early advocates of birth control at a time when this was considered scandalous. Lord Amberley was an atheist and his atheism was evident when he asked the philosopher John Stuart Mill to act as Russell's secular godfather. Mill died the year after Russell's birth, but his writings had a great effect on Russell's life.
His paternal grandfather, the Earl Russell, had been asked twice by Queen Victoria to form a government, serving her as Prime Minister in the 1840s and 1860s. The Russells had been prominent in England for several centuries before this, coming to power and the peerage with the rise of the Tudor dynasty (see: Duke of Bedford). They established themselves as one of the leading British Whig families, and participated in every great political event from the Dissolution of the Monasteries in 1536–1540 to the Glorious Revolution in 1688–1689 and the Great Reform Act in 1832.
Lady Amberley was the daughter of Lord and Lady Stanley of Alderley. Russell often feared the ridicule of his maternal grandmother, one of the campaigners for education of women.
Russell had two siblings: brother Frank (nearly seven years older than Bertrand), and sister Rachel (four years older). In June 1874 Russell's mother died of diphtheria, followed shortly by Rachel's death. In January 1876, his father died of bronchitis following a long period of depression. Frank and Bertrand were placed in the care of their staunchly Victorian paternal grandparents, who lived at Pembroke Lodge in Richmond Park. His grandfather, former Prime Minister Earl Russell, died in 1878, and was remembered by Russell as a kindly old man in a wheelchair. His grandmother, the Countess Russell (née Lady Frances Elliot), was the dominant family figure for the rest of Russell's childhood and youth.
The countess was from a Scottish Presbyterian family, and successfully petitioned the Court of Chancery to set aside a provision in Amberley's will requiring the children to be raised as agnostics. Despite her religious conservatism, she held progressive views in other areas (accepting Darwinism and supporting Irish Home Rule), and her influence on Bertrand Russell's outlook on social justice and standing up for principle remained with him throughout his life. (One could challenge the view that Bertrand stood up for his principles, based on his own well-known quotation: "I would never die for my beliefs because I might be wrong.") Her favourite Bible verse, "Thou shalt not follow a multitude to do evil", became his motto. The atmosphere at Pembroke Lodge was one of frequent prayer, emotional repression, and formality; Frank reacted to this with open rebellion, but the young Bertrand learned to hide his feelings.
Russell's adolescence was very lonely, and he often contemplated suicide. He remarked in his autobiography that "nature and books and (later) mathematics saved me from complete despondency"; only his wish to know more mathematics kept him from suicide. He was educated at home by a series of tutors. When Russell was eleven years old, his brother Frank introduced him to the work of Euclid, which he described in his autobiography as "one of the great events of my life, as dazzling as first love."
During these formative years he also discovered the works of Percy Bysshe Shelley. Russell wrote: "I spent all my spare time reading him, and learning him by heart, knowing no one to whom I could speak of what I thought or felt, I used to reflect how wonderful it would have been to know Shelley, and to wonder whether I should meet any live human being with whom I should feel so much sympathy." Russell claimed that beginning at age 15, he spent considerable time thinking about the validity of Christian religious dogma, which he found very unconvincing. At this age, he came to the conclusion that there is no free will and, two years later, that there is no life after death. Finally, at the age of 18, after reading Mill's "Autobiography", he abandoned the "First Cause" argument and became an atheist.
He traveled to the continent in 1890 with an American friend, Edward FitzGerald, and with FitzGerald's family he visited the Paris Exhibition of 1889 and was able to climb the Eiffel Tower soon after it was completed.
Russell won a scholarship to read for the Mathematical Tripos at Trinity College, Cambridge, and commenced his studies there in 1890, taking as coach Robert Rumsey Webb. He became acquainted with the younger George Edward Moore and came under the influence of Alfred North Whitehead, who recommended him to the Cambridge Apostles. He quickly distinguished himself in mathematics and philosophy, graduating as seventh Wrangler in the former in 1893 and becoming a Fellow in the latter in 1895.
Russell was 17 years old in the summer of 1889 when he met the family of Alys Pearsall Smith, an American Quaker five years older, who was a graduate of Bryn Mawr College near Philadelphia. He became a friend of the Pearsall Smith family—they knew him primarily as "Lord John's grandson" and enjoyed showing him off.
He soon fell in love with the puritanical, high-minded Alys, and, contrary to his grandmother's wishes, married her on 13 December 1894. Their marriage began to fall apart in 1901 when it occurred to Russell, while he was cycling, that he no longer loved her. She asked him if he loved her and he replied that he did not. Russell also disliked Alys's mother, finding her controlling and cruel. It was to be a hollow shell of a marriage. A lengthy period of separation began in 1911 with Russell's affair with Lady Ottoline Morrell, and he and Alys finally divorced in 1921 to enable Russell to remarry.
During his years of separation from Alys, Russell had passionate (and often simultaneous) affairs with a number of women, including Morrell and the actress Lady Constance Malleson. Some have suggested that at this point he had an affair with Vivienne Haigh-Wood, the English governess and writer, and first wife of T. S. Eliot.
Russell began his published work in 1896 with "German Social Democracy", a study in politics that was an early indication of a lifelong interest in political and social theory. In 1896 he taught German social democracy at the London School of Economics. He was a member of the Coefficients dining club of social reformers set up in 1902 by the Fabian campaigners Sidney and Beatrice Webb.
He now started an intensive study of the foundations of mathematics at Trinity.
In 1898 he wrote "An Essay on the Foundations of Geometry" which discussed the Cayley–Klein metrics used for non-Euclidean geometry.
He attended the International Congress of Philosophy in Paris in 1900, where he met Giuseppe Peano and Alessandro Padoa. The Italians had responded to Georg Cantor's work by making a science of set theory, and they gave Russell their literature, including the "Formulario mathematico". Russell was impressed by the precision of Peano's arguments at the Congress, read the literature upon returning to England, and came upon the paradox that now bears his name, Russell's paradox. In 1903 he published "The Principles of Mathematics", a work on the foundations of mathematics. It advanced the thesis of logicism, that mathematics and logic are one and the same.
At the age of 29, in February 1901, Russell underwent what he called a "sort of mystic illumination", after witnessing Whitehead's wife's acute suffering in an angina attack. "I found myself filled with semi-mystical feelings about beauty ... and with a desire almost as profound as that of the Buddha to find some philosophy which should make human life endurable", Russell would later recall. "At the end of those five minutes, I had become a completely different person."
In 1905 he wrote the essay "On Denoting", which was published in the philosophical journal "Mind". Russell was elected a Fellow of the Royal Society (FRS) in 1908. The three-volume "Principia Mathematica", written with Whitehead, was published between 1910 and 1913. This, along with the earlier "The Principles of Mathematics", soon made Russell world-famous in his field.
In 1910 he became a University of Cambridge lecturer at Trinity College, where he had studied. He was considered for a Fellowship, which would give him a vote in the college government and protect him from being fired for his opinions, but was passed over because he was "anti-clerical", essentially because he was agnostic. He was approached by the Austrian engineering student Ludwig Wittgenstein, who became his PhD student. Russell viewed Wittgenstein as a genius and a successor who would continue his work on logic. He spent hours dealing with Wittgenstein's various phobias and his frequent bouts of despair. This was often a drain on Russell's energy, but Russell continued to be fascinated by him and encouraged his academic development, including the publication of Wittgenstein's "Tractatus Logico-Philosophicus" in 1922. Russell delivered his lectures on logical atomism, his version of these ideas, in 1918, before the end of World War I. Wittgenstein was, at that time, serving in the Austrian Army and subsequently spent nine months in an Italian prisoner of war camp at the end of the conflict.
During World War I, Russell was one of the few people to engage in active pacifist activities. In 1916, because of his lack of a Fellowship, he was dismissed from Trinity College following his conviction under the Defence of the Realm Act 1914. He later described this as an illegitimate means the state used to violate freedom of expression, in Free Thought and Official Propaganda. Russell played a significant part in the "Leeds Convention" in June 1917, a historic event which saw well over a thousand "anti-war socialists" gather; many being delegates from the Independent Labour Party and the Socialist Party, united in their pacifist beliefs and advocating a peace settlement. The international press reported that Russell appeared with a number of Labour MPs, including Ramsay MacDonald and Philip Snowden, as well as former Liberal MP and anti-conscription campaigner, Professor Arnold Lupton. After the event, Russell told Lady Ottoline Morrell that, "to my surprise, when I got up to speak, I was given the greatest ovation that was possible to give anybody".
The Trinity incident resulted in Russell being fined £100, which he refused to pay in the hope that he would be sent to prison; instead, his books were sold at auction to raise the money. The books were bought by friends; he later treasured his copy of the King James Bible that was stamped "Confiscated by Cambridge Police".
A later conviction for publicly lecturing against inviting the United States to enter the war on the United Kingdom's side resulted in six months' imprisonment in Brixton prison (see "Bertrand Russell's political views") in 1918. He later said of his imprisonment:
He found the Brixton period agreeable in some respects: while reading the chapter on Gordon in Strachey's "Eminent Victorians", he laughed out loud in his cell, prompting the warder to intervene and remind him that "prison was a place of punishment".
Russell was reinstated at Trinity in 1919, resigned in 1920, was Tarner Lecturer in 1926, and became a Fellow again in 1944, remaining one until 1949.
In 1924, Russell again gained press attention when attending a "banquet" in the House of Commons with well-known campaigners, including Arnold Lupton, who had been a Member of Parliament and had also endured imprisonment for "passive resistance to military or naval service".
In 1941, G. H. Hardy wrote a 61-page pamphlet titled "Bertrand Russell and Trinity"—published later as a book by Cambridge University Press with a foreword by C. D. Broad—in which he gave an authoritative account of Russell's 1916 dismissal from Trinity College, explaining that a reconciliation between the college and Russell had later taken place and giving details of Russell's personal life. Hardy writes that Russell's dismissal had created a scandal, since the vast majority of the Fellows of the College opposed the decision. The ensuing pressure from the Fellows induced the Council to reinstate Russell. In January 1920, it was announced that Russell had accepted the reinstatement offer from Trinity and would begin lecturing from October. In July 1920, Russell applied for a one-year leave of absence; this was approved. He spent the year giving lectures in China and Japan. In January 1921, Trinity announced that Russell had resigned and that his resignation had been accepted. This resignation, Hardy explains, was completely voluntary and was not the result of another altercation.
The reason for the resignation, according to Hardy, was that Russell was going through a tumultuous time in his personal life with a divorce and subsequent remarriage. Russell contemplated asking Trinity for another one-year leave of absence but decided against it, since this would have been an "unusual application" and the situation had the potential to snowball into another controversy. Although Russell did the right thing, in Hardy's opinion, the reputation of the College suffered due to Russell's resignation since the 'world of learning' knew about Russell's altercation with Trinity but not that the rift had healed. In 1925, Russell was asked by the Council of Trinity College to give the "Tarner Lectures" on the Philosophy of the Sciences; these would later be the basis for one of Russell's best received books according to Hardy: "The Analysis of Matter", published in 1927. In the preface to the Trinity pamphlet, Hardy wrote:
In August 1920, Russell travelled to Soviet Russia as part of an official delegation sent by the British government to investigate the effects of the Russian Revolution. He wrote a four-part series of articles, titled "Soviet Russia – 1920", for the US magazine "The Nation". He met Vladimir Lenin and had an hour-long conversation with him. In his autobiography, he mentions that he found Lenin disappointing, sensing an "impish cruelty" in him and comparing him to "an opinionated professor". He cruised down the Volga on a steamship. His experiences destroyed his previous tentative support for the revolution. He subsequently wrote a book, "The Practice and Theory of Bolshevism", about his experiences on this trip, taken with a group of 24 others from the UK, all of whom came home thinking well of the Soviet regime, despite Russell's attempts to change their minds. For example, he told them that he had heard shots fired in the middle of the night and was sure that these were clandestine executions, but the others maintained that it was only cars backfiring.
Russell's lover Dora Black, a British author, feminist and socialist campaigner, visited Soviet Russia independently at the same time; in contrast to his reaction, she was enthusiastic about the Bolshevik revolution.
The following autumn, Russell, accompanied by Dora, visited Peking (as it was then known in the West) to lecture on philosophy for a year. He went with optimism and hope, seeing China as then being on a new path. Other scholars present in China at the time included John Dewey and Rabindranath Tagore, the Indian Nobel-laureate poet. Before leaving China, Russell became gravely ill with pneumonia, and incorrect reports of his death were published in the Japanese press. When the couple visited Japan on their return journey, Dora took on the role of spurning the local press by handing out notices reading "Mr. Bertrand Russell, having died according to the Japanese press, is unable to give interviews to Japanese journalists". Apparently the Japanese press found this harsh and reacted resentfully.
Dora was six months pregnant when the couple returned to England on 26 August 1921. Russell arranged a hasty divorce from Alys, marrying Dora six days after the divorce was finalised, on 27 September 1921. Russell's children with Dora were John Conrad Russell, 4th Earl Russell, born on 16 November 1921, and Katharine Jane Russell (now Lady Katharine Tait), born on 29 December 1923. Russell supported his family during this time by writing popular books explaining matters of physics, ethics, and education to the layman.
From 1922 to 1927 the Russells divided their time between London and Cornwall, spending summers in Porthcurno. In the 1922 and 1923 general elections Russell stood as a Labour Party candidate in the Chelsea constituency, though only on the understanding that he was extremely unlikely to be elected in such a safe Conservative seat; he was unsuccessful on both occasions.
Owing to the birth of his two children, he became interested in education, especially early childhood education. He was not satisfied with traditional education, and thought that progressive education also had some flaws; as a result, together with Dora, Russell founded the experimental Beacon Hill School in 1927. The school was run from a succession of different locations, including its original premises at the Russells' residence, Telegraph House, near Harting, West Sussex. During this time, he published "On Education, Especially in Early Childhood". On 8 July 1930 Dora gave birth to her third child, Harriet Ruth. After he left the school in 1932, Dora continued it until 1943.
On a tour through the US in 1927, Russell met Barry Fox (later Barry Stevens), who became a well-known Gestalt therapist and writer in later years. Russell and Fox developed an intensive relationship. In Fox's words: "...for three years we were very close." Fox sent her daughter Judith to Beacon Hill School for some time. From 1927 to 1932 Russell wrote 34 letters to Fox.
Upon the death of his elder brother Frank, in 1931, Russell became the 3rd Earl Russell.
Russell's marriage to Dora grew increasingly tenuous, and it reached a breaking point over her having two children with an American journalist, Griffin Barry. They separated in 1932 and finally divorced. On 18 January 1936, Russell married his third wife, an Oxford undergraduate named Patricia ("Peter") Spence, who had been his children's governess since 1930. Russell and Peter had one son, Conrad Sebastian Robert Russell, 5th Earl Russell, who became a prominent historian and one of the leading figures in the Liberal Democrat party.
Russell returned to the London School of Economics to lecture on the science of power in 1937.
During the 1930s, Russell became a close friend and collaborator of V. K. Krishna Menon, then President of the India League, the foremost lobby in the United Kingdom for Indian self-rule. Russell was Chair of the India League from 1932 to 1939.
Russell's political views changed over time, mostly about war. He opposed rearmament against Nazi Germany. In 1937, he wrote in a personal letter: "If the Germans succeed in sending an invading army to England we should do best to treat them as visitors, give them quarters and invite the commander-in-chief to dine with the prime minister." In 1940, he abandoned his appeasement view that avoiding a full-scale world war was more important than defeating Hitler, concluding that Adolf Hitler taking over all of Europe would be a permanent threat to democracy. In 1943, he adopted a stance toward large-scale warfare called "relative political pacifism": "War was always a great evil, but in some particularly extreme circumstances, it may be the lesser of two evils."
Before World War II, Russell taught at the University of Chicago, later moving on to Los Angeles to lecture at the UCLA Department of Philosophy. He was appointed professor at the City College of New York (CCNY) in 1940, but after a public outcry the appointment was annulled by a court judgment that pronounced him "morally unfit" to teach at the college due to his opinions, especially those relating to sexual morality, detailed in "Marriage and Morals" (1929). The matter had been taken to the New York Supreme Court by Jean Kay, who feared that her daughter would be harmed by the appointment, though her daughter was not a student at CCNY. Many intellectuals, led by John Dewey, protested at his treatment. Albert Einstein's oft-quoted aphorism that "great spirits have always encountered violent opposition from mediocre minds" originated in his open letter, dated 19 March 1940, to Morris Raphael Cohen, a professor emeritus at CCNY, supporting Russell's appointment. Dewey and Horace M. Kallen edited a collection of articles on the CCNY affair in "The Bertrand Russell Case". Russell soon joined the Barnes Foundation, lecturing to a varied audience on the history of philosophy; these lectures formed the basis of "A History of Western Philosophy". His relationship with the eccentric Albert C. Barnes soon soured, and he returned to the UK in 1944 to rejoin the faculty of Trinity College.
Russell participated in many broadcasts over the BBC, particularly "The Brains Trust" and the Third Programme, on various topical and philosophical subjects. By this time Russell was world-famous outside academic circles, frequently the subject or author of magazine and newspaper articles, and was called upon to offer opinions on a wide variety of subjects, even mundane ones. En route to one of his lectures in Trondheim, Russell was one of 24 survivors (among a total of 43 passengers) of an aeroplane crash in Hommelvik in October 1948. He said he owed his life to smoking since the people who drowned were in the non-smoking part of the plane. "A History of Western Philosophy" (1945) became a best-seller and provided Russell with a steady income for the remainder of his life.
In 1942, Russell argued in favour of a moderate socialism, capable of overcoming its metaphysical principles, in an inquiry on dialectical materialism, launched by the Austrian artist and philosopher Wolfgang Paalen in his journal "DYN", saying "I think the metaphysics of both Hegel and Marx plain nonsense—Marx's claim to be 'science' is no more justified than Mary Baker Eddy's. This does not mean that I am opposed to socialism."
In 1943, Russell expressed support for Zionism: "I have come gradually to see that, in a dangerous and largely hostile world, it is essential to Jews to have some country which is theirs, some region where they are not suspected aliens, some state which embodies what is distinctive in their culture".
In a speech in 1948, Russell said that if the USSR's aggression continued, it would be morally worse to go to war after the USSR possessed an atomic bomb than before it possessed one, because if the USSR had no bomb the West's victory would come more swiftly and with fewer casualties than if there were atom bombs on both sides. At that time, only the United States possessed an atomic bomb, and the USSR was pursuing an extremely aggressive policy towards the countries in Eastern Europe which were being absorbed into the Soviet Union's sphere of influence. Many understood Russell's comments to mean that Russell approved of a first strike in a war with the USSR, including Nigel Lawson, who was present when Russell spoke of such matters. Others, including Griffin, who obtained a transcript of the speech, have argued that he was merely explaining the usefulness of America's atomic arsenal in deterring the USSR from continuing its domination of Eastern Europe.
However, just after the atomic bombs exploded over Hiroshima and Nagasaki, Russell wrote letters to, and published articles in, newspapers from 1945 to 1948, stating clearly that it was morally justified, and better, to go to war against the USSR using atomic bombs while the United States possessed them and before the USSR did. In September 1949, one week after the USSR tested its first A-bomb, but before this became known, Russell wrote that the USSR would be unable to develop nuclear weapons because, following Stalin's purges, only science based on Marxist principles would be practised in the Soviet Union. After it became known that the USSR had carried out its nuclear bomb tests, Russell changed his position, advocating the total abolition of atomic weapons.
In 1948, Russell was invited by the BBC to deliver the inaugural Reith Lectures—what was to become an annual series of lectures, still broadcast by the BBC. His series of six broadcasts, titled "Authority and the Individual", explored themes such as the role of individual initiative in the development of a community and the role of state control in a progressive society. Russell continued to write about philosophy. He wrote a foreword to "Words and Things" by Ernest Gellner, which was highly critical of the later thought of Ludwig Wittgenstein and of ordinary language philosophy. Gilbert Ryle refused to have the book reviewed in the philosophical journal "Mind", which caused Russell to respond via "The Times". The result was a month-long correspondence in "The Times" between the supporters and detractors of ordinary language philosophy, which was only ended when the paper published an editorial critical of both sides but agreeing with the opponents of ordinary language philosophy.
In the King's Birthday Honours of 9 June 1949, Russell was awarded the Order of Merit, and the following year he was awarded the Nobel Prize in Literature. When he was given the Order of Merit, George VI was affable but slightly embarrassed at decorating a former jailbird, saying, "You have sometimes behaved in a manner that would not do if generally adopted". Russell merely smiled, but afterwards claimed that the reply "That's right, just like your brother" immediately came to mind.
In 1950, Russell attended the inaugural conference for the Congress for Cultural Freedom, a CIA-funded anti-communist organisation committed to the deployment of culture as a weapon during the Cold War. Russell was one of the best known patrons of the Congress, until he resigned in 1956.
In 1952 Russell was divorced by Spence, with whom he had been very unhappy. Conrad, Russell's son by Spence, did not see his father between the time of the divorce and 1968 (at which time his decision to meet his father caused a permanent breach with his mother). Russell married his fourth wife, Edith Finch, soon after the divorce, on 15 December 1952. They had known each other since 1925, and Edith had taught English at Bryn Mawr College near Philadelphia, sharing a house for 20 years with Russell's old friend Lucy Donnelly. Edith remained with him until his death, and, by all accounts, their marriage was a happy, close, and loving one. Russell's eldest son John suffered from serious mental illness, which was the source of ongoing disputes between Russell and his former wife Dora.
In September 1961, at the age of 89, Russell was jailed for seven days in Brixton Prison for "breach of peace" after taking part in an anti-nuclear demonstration in London. The magistrate offered to exempt him from jail if he pledged himself to "good behaviour", to which Russell replied: "No, I won't."
In 1962 Russell played a public role in the Cuban Missile Crisis: in an exchange of telegrams with Soviet leader Nikita Khrushchev, Khrushchev assured him that the Soviet government would not be reckless. Russell sent this telegram to President Kennedy:
YOUR ACTION DESPERATE. THREAT TO HUMAN SURVIVAL. NO CONCEIVABLE JUSTIFICATION. CIVILIZED MAN CONDEMNS IT. WE WILL NOT HAVE MASS MURDER. ULTIMATUM MEANS WAR... END THIS MADNESS.
According to historian Peter Knight, after JFK's assassination, Russell, "prompted by the emerging work of the lawyer Mark Lane in the US ... rallied support from other noteworthy and left-leaning compatriots to form a Who Killed Kennedy Committee in June 1964, members of which included Michael Foot MP, Caroline Benn, the publisher Victor Gollancz, the writers John Arden and J. B. Priestley, and the Oxford history professor Hugh Trevor-Roper." Russell published a highly critical article weeks before the Warren Commission Report was published, setting forth "16 Questions on the Assassination" and equating the Oswald case with the Dreyfus affair of late 19th-century France, in which the state wrongly convicted an innocent man. Russell also criticised the American press for failing to heed any voices critical of the official version.
Bertrand Russell was opposed to war from early on, his opposition to World War I being used as grounds for his dismissal from Trinity College, Cambridge. This incident fused two of his most controversial causes: he had failed to be granted Fellow status, which would have protected him from firing, because he was unwilling either to pretend to be a devout Christian or at least to avoid admitting that he was agnostic.
He later described the resolution of these issues as essential to freedom of thought and expression, citing the incident in Free Thought and Official Propaganda, where he explained that the expression of any idea, even the most obviously "bad", must be protected not only from direct State intervention, but also economic leveraging and other means of being silenced:
Russell spent the 1950s and 1960s engaged in political causes primarily related to nuclear disarmament and opposing the Vietnam War. The 1955 Russell–Einstein Manifesto was a document calling for nuclear disarmament and was signed by eleven of the most prominent nuclear physicists and intellectuals of the time. In 1966–1967, Russell worked with Jean-Paul Sartre and many other intellectual figures to form the Russell Vietnam War Crimes Tribunal to investigate the conduct of the United States in Vietnam. He wrote a great many letters to world leaders during this period.
In 1956, immediately before and during the Suez Crisis, Russell expressed his opposition to European imperialism in the Middle East. He viewed the crisis as another reminder of the pressing need for a more effective mechanism for international governance, and to restrict national sovereignty to places such as the Suez Canal area "where general interest is involved". At the same time the Suez Crisis was taking place, the world was also captivated by the Hungarian Revolution and the subsequent crushing of the revolt by intervening Soviet forces. Russell attracted criticism for speaking out fervently against the Suez war while ignoring Soviet repression in Hungary, to which he responded that he did not criticise the Soviets "because there was no need. Most of the so-called Western World was fulminating". Although he later feigned a lack of concern, at the time he was disgusted by the brutal Soviet response, and on 16 November 1956, he expressed approval for a declaration of support for Hungarian scholars which Michael Polanyi had cabled to the Soviet embassy in London twelve days previously, shortly after Soviet troops had already entered Budapest.
In November 1957 Russell wrote an article addressing US President Dwight D. Eisenhower and Soviet Premier Nikita Khrushchev, urging a summit to consider "the conditions of co-existence". Khrushchev responded that peace could indeed be served by such a meeting. In January 1958 Russell elaborated his views in "The Observer", proposing a cessation of all nuclear-weapons production, with the UK taking the first step by unilaterally suspending its own nuclear-weapons program if necessary, and with Germany "freed from all alien armed forces and pledged to neutrality in any conflict between East and West". US Secretary of State John Foster Dulles replied for Eisenhower. The exchange of letters was published as "The Vital Letters of Russell, Khrushchev, and Dulles".
Russell was asked by "The New Republic", a liberal American magazine, to elaborate his views on world peace. He urged that all nuclear-weapons testing and constant flights by planes armed with nuclear weapons be halted immediately, and negotiations be opened for the destruction of all hydrogen bombs, with the number of conventional nuclear devices limited to ensure a balance of power. He proposed that Germany be reunified and accept the Oder-Neisse line as its border, and that a neutral zone be established in Central Europe, consisting at the minimum of Germany, Poland, Hungary, and Czechoslovakia, with each of these countries being free of foreign troops and influence, and prohibited from forming alliances with countries outside the zone. In the Middle East, Russell suggested that the West avoid opposing Arab nationalism, and proposed the creation of a United Nations peacekeeping force to guard Israel's frontiers to ensure that Israel was prevented from committing aggression and protected from it. He also suggested Western recognition of the People's Republic of China, and that it be admitted to the UN with a permanent seat on the UN Security Council.
He was in contact with Lionel Rogosin while the latter was filming his anti-war film "Good Times, Wonderful Times" in the 1960s. He became a hero to many of the youthful members of the New Left. In early 1963, in particular, Russell became increasingly vocal in his disapproval of the Vietnam War, and felt that the US government's policies there were near-genocidal. In 1963 he became the inaugural recipient of the Jerusalem Prize, an award for writers concerned with the freedom of the individual in society. In 1964 he was one of eleven world figures who issued an appeal to Israel and the Arab countries to accept an arms embargo and international supervision of nuclear plants and rocket weaponry. In October 1965 he tore up his Labour Party card because he suspected Harold Wilson's Labour government was going to send troops to support the United States in Vietnam.
In June 1955 Russell had leased Plas Penrhyn in Penrhyndeudraeth, Merionethshire, Wales and on 5 July of the following year it became his and Edith's principal residence.
Russell published his three-volume autobiography in 1967, 1968, and 1969. Russell made a cameo appearance playing himself in the anti-war Hindi film "Aman", by Mohan Kumar, which was released in India in 1967. This was Russell's only appearance in a feature film.
On 23 November 1969 he wrote to "The Times" newspaper saying that the preparation for show trials in Czechoslovakia was "highly alarming". The same month, he appealed to Secretary General U Thant of the United Nations to support an international war crimes commission to investigate alleged torture and genocide by the United States in South Vietnam during the Vietnam War. The following month, he protested to Alexei Kosygin over the expulsion of Aleksandr Solzhenitsyn from the Soviet Union of Writers.
On 31 January 1970 Russell issued a statement condemning "Israel's aggression in the Middle East", and in particular, Israeli bombing raids being carried out deep in Egyptian territory as part of the War of Attrition. He called for an Israeli withdrawal to the pre-Six-Day War borders. This was Russell's final political statement or act. It was read out at the International Conference of Parliamentarians in Cairo on 3 February 1970, the day after his death.
Russell died of influenza, just after 8 pm on 2 February 1970 at his home in Penrhyndeudraeth. His body was cremated in Colwyn Bay on 5 February 1970 with five people present. In accordance with his will, there was no religious ceremony but one minute's silence; his ashes were scattered over the Welsh mountains later that year. He left an estate valued at £69,423.
In 1980 a memorial to Russell was commissioned by a committee including the philosopher A. J. Ayer. It consists of a bust of Russell in Red Lion Square in London sculpted by Marcelle Quinton.
Lady Katharine Jane Tait, Russell's daughter, founded the Bertrand Russell Society in 1974 to preserve and understand his work. It publishes the "Bertrand Russell Society Bulletin", holds meetings and awards prizes for scholarship. She also authored several essays about her father; as well as a book, "My Father, Bertrand Russell", which was published in 1975. All members receive "Russell: The Journal of Bertrand Russell Studies".
Russell held a number of styles and honours throughout his life.
Russell is generally credited with being one of the founders of analytic philosophy. He was deeply impressed by Gottfried Leibniz (1646–1716), and wrote on every major area of philosophy except aesthetics. He was particularly prolific in the fields of metaphysics, logic and the philosophy of mathematics, the philosophy of language, ethics and epistemology. When Brand Blanshard asked Russell why he did not write on aesthetics, Russell replied that he did not know anything about it, though he hastened to add "but that is not a very good excuse, for my friends tell me it has not deterred me from writing on other subjects".
On ethics, Russell wrote that he was a utilitarian in his youth, yet he later distanced himself from this view.
For the advancement of science and the protection of the right to freedom of expression, Russell advocated "The Will to Doubt": the recognition that all human knowledge is at most a best guess, something one should always bear in mind.
Russell described himself in 1947 as an agnostic, saying: "Therefore, in regard to the Olympic gods, speaking to a purely philosophical audience, I would say that I am an Agnostic. But speaking popularly, I think that all of us would say in regard to those gods that we were Atheists. In regard to the Christian God, I should, I think, take exactly the same line." For most of his adult life, Russell maintained religion to be little more than superstition and, despite any positive effects, largely harmful to people. He believed that religion and the religious outlook serve to impede knowledge and foster fear and dependency, and to be responsible for much of our world's wars, oppression, and misery. He was a member of the Advisory Council of the British Humanist Association and President of Cardiff Humanists until his death.
Political and social activism occupied much of Russell's time for most of his life. Russell remained politically active almost to the end of his life, writing to and exhorting world leaders and lending his name to various causes.
Russell argued for a "scientific society", where war would be abolished, population growth would be limited, and prosperity would be shared. He suggested the establishment of a "single supreme world government" able to enforce peace, claiming that "the only thing that will redeem mankind is co-operation".
Russell was an active supporter of the Homosexual Law Reform Society, being one of the signatories of A. E. Dyson's 1958 letter to "The Times" calling for a change in the law regarding male homosexual practices, which were partly legalised in 1967, when Russell was still alive.
In "Reflections on My Eightieth Birthday" ("Postscript" in his "Autobiography"), Russell wrote: "I have lived in the pursuit of a vision, both personal and social. Personal: to care for what is noble, for what is beautiful, for what is gentle; to allow moments of insight to give wisdom at more mundane times. Social: to see in imagination the society that is to be created, where individuals grow freely, and where hate and greed and envy die because there is nothing to nourish them. These things I believe, and the world, for all its horrors, has left me unshaken".
Like George Orwell, Russell was a champion of freedom of opinion and an opponent of both censorship and indoctrination. In 1928, he wrote: "The fundamental argument for freedom of opinion is the doubtfulness of all our belief ... when the State intervenes to ensure the indoctrination of some doctrine, it does so because there is no conclusive evidence in favour of that doctrine ... It is clear that thought is not free if the profession of certain opinions makes it impossible to make a living." In 1957, he wrote: "'Free thought' means thinking freely ... to be worthy of the name freethinker he must be free of two things: the force of tradition and the tyranny of his own passions."
Below is a selected bibliography of Russell's books in English, sorted by year of first publication:
Russell was the author of more than sixty books and over two thousand articles. Additionally, he wrote many pamphlets, introductions, and letters to the editor. One pamphlet, "'I Appeal unto Caesar': The Case of the Conscientious Objectors", ghostwritten for Margaret Hobhouse, the mother of imprisoned peace activist Stephen Hobhouse, allegedly helped secure the release from prison of hundreds of conscientious objectors.
His works can be found in anthologies and collections, including "The Collected Papers of Bertrand Russell", which McMaster University began publishing in 1983. By March 2017 this collection of his shorter and previously unpublished works included 18 volumes, and several more are in progress. A bibliography in three additional volumes catalogues his publications. The Russell Archives held by McMaster's William Ready Division of Archives and Research Collections possess over 40,000 of his letters.
Primary sources
Secondary sources
Books about Russell's philosophy
Biographical books
Boeing 767
The Boeing 767 is a wide-body airliner developed and manufactured by Boeing Commercial Airplanes.
The airliner was launched as the 7X7 project on July 14, 1978, the prototype first flew on September 26, 1981, and it was certified on July 30, 1982.
The original 767-200 entered service on September 8, 1982, with United Airlines, and the extended-range 767-200ER followed in 1984.
It was stretched into the 767-300 in October 1986, followed by the 767-300ER in 1988, the most popular variant.
The 767-300F, a production freighter version, debuted in October 1995.
It was stretched again into the 767-400ER from September 2000.
To complement the larger 747, it has a seven-abreast cross-section, accommodating the smaller LD2 ULD cargo containers.
The 767 is Boeing's first wide-body twinjet, powered by General Electric CF6, Rolls-Royce RB211, or Pratt & Whitney JT9D turbofans. JT9D engines were eventually replaced by PW4000 engines.
The aircraft has a conventional tail and a supercritical wing for reduced aerodynamic drag.
Its two-crew glass cockpit, a first for a Boeing airliner, was developed jointly with that of the 757, a narrow-body aircraft, allowing a common pilot type rating.
Studies for a higher-capacity 767 in 1986 led Boeing to develop the larger 777 twinjet, introduced in June 1995.
The 767-200 typically seats 216 passengers over 3,900 nmi (7,200 km), while the 767-200ER seats 181 over a 6,590 nmi (12,200 km) range.
The 767-300 typically seats 269 passengers over 3,900 nmi (7,200 km), while the 767-300ER seats 218 over 5,980 nmi (11,070 km).
The 767-300F can haul cargo over 3,225 nmi (6,025 km), and the 767-400ER typically seats 245 passengers over 5,625 nmi (10,415 km).
Military derivatives include the E-767 for surveillance, the KC-767 and KC-46 aerial tankers.
Passenger 767-200s and 767-300s have been converted for cargo use.
Initially used on U.S. transcontinental routes, the 767's range of operations was extended by ETOPS regulations from 1985, and it is frequently used on transatlantic flights.
Boeing has received 1,254 orders from 74 customers, with 1,161 delivered; the remaining orders are for cargo or tanker variants. A total of 742 of these aircraft were in service in July 2018. Delta Air Lines is the largest operator, with 77 aircraft. Competitors have included the Airbus A300, A310, and A330-200.
Its successor, the 787 Dreamliner, entered service in 2011.
In 1970, Boeing's 747 became the first wide-body jetliner to enter service. The 747 was the first passenger jet wide enough to feature a twin-aisle cabin. Two years later, the manufacturer began a development study, code-named 7X7, for a new wide-body aircraft intended to replace the 707 and other early generation narrow-body jets. The aircraft would also provide twin-aisle seating, but in a smaller fuselage than the existing 747, McDonnell Douglas DC-10, and Lockheed L-1011 TriStar wide-bodies. To defray the high cost of development, Boeing signed risk-sharing agreements with Italian corporation Aeritalia and the Civil Transport Development Corporation (CTDC), a consortium of Japanese aerospace companies. This marked the manufacturer's first major international joint venture, and both Aeritalia and the CTDC received supply contracts in return for their early participation. The initial 7X7 was conceived as a short take-off and landing airliner intended for short-distance flights, but customers were unenthusiastic about the concept, leading to its redefinition as a mid-size, transcontinental-range airliner. At this stage the proposed aircraft featured two or three engines, with possible configurations including over-wing engines and a T-tail.
By 1976, a twinjet layout, similar to the one which had debuted on the Airbus A300, became the baseline configuration. The decision to use two engines reflected increased industry confidence in the reliability and economics of new-generation jet powerplants. While airline requirements for new wide-body aircraft remained ambiguous, the 7X7 was generally focused on mid-size, high-density markets. As such, it was intended to transport large numbers of passengers between major cities. Advancements in civil aerospace technology, including high-bypass-ratio turbofan engines, new flight deck systems, aerodynamic improvements, and lighter construction materials were to be applied to the 7X7. Many of these features were also included in a parallel development effort for a new mid-size narrow-body airliner, code-named 7N7, which would become the 757. Work on both proposals proceeded through the airline industry upturn in the late 1970s.
In January 1978, Boeing announced a major extension of its Everett factory—which was then dedicated to manufacturing the 747—to accommodate its new wide-body family. In February 1978, the new jetliner received the 767 model designation, and three variants were planned: a 767-100 with 190 seats, a 767-200 with 210 seats, and a trijet 767MR/LR version with 200 seats intended for intercontinental routes. The 767MR/LR was subsequently renamed 777 for differentiation purposes. The 767 was officially launched on July 14, 1978, when United Airlines ordered 30 of the 767-200 variant, followed by 50 more 767-200 orders from American Airlines and Delta Air Lines later that year. The 767-100 was ultimately not offered for sale, as its capacity was too close to the 757's seating, while the 777 trijet was eventually dropped in favor of standardizing around the twinjet configuration.
In the late 1970s, operating cost replaced capacity as the primary factor in airliner purchases. As a result, the 767's design process emphasized fuel efficiency from the outset. Boeing targeted a 20 to 30 percent cost saving over earlier aircraft, mainly through new engine and wing technology. As development progressed, engineers used computer-aided design for over a third of the 767's design drawings, and performed 26,000 hours of wind tunnel tests. Design work occurred concurrently with the 757 twinjet, leading Boeing to treat both as almost one program to reduce risk and cost. Both aircraft would ultimately receive shared design features, including avionics, flight management systems, instruments, and handling characteristics. Combined development costs were estimated at $3.5 to $4 billion.
Early 767 customers were given the choice of Pratt & Whitney JT9D or General Electric CF6 turbofans, marking the first time that Boeing had offered more than one engine option at the launch of a new airliner. Both jet engine models offered similar maximum thrust. The engines were mounted approximately one-third the length of the wing from the fuselage, similar to previous wide-body trijets. The larger wings were designed using an aft-loaded shape which reduced aerodynamic drag and distributed lift more evenly across their span than any of the manufacturer's previous aircraft. The wings provided higher-altitude cruise performance, added fuel capacity, and expansion room for future stretched variants. The initial 767-200 was designed with sufficient range to fly across North America or across the northern Atlantic.
The 767's fuselage width was set midway between that of the 707 and the 747. While it was narrower than previous wide-body designs, seven-abreast seating with two aisles could be fitted, and the reduced width produced less aerodynamic drag. The fuselage was not wide enough to accommodate two standard LD3 wide-body unit load devices side by side, so a smaller container, the LD2, was created specifically for the 767. Using a conventional tail design also allowed the rear fuselage to be tapered over a shorter section, providing for parallel aisles along the full length of the passenger cabin and eliminating irregular seat rows toward the rear of the aircraft.
The 767 was the first Boeing wide-body to be designed with a two-crew digital glass cockpit. Cathode ray tube (CRT) color displays and new electronics replaced the role of the flight engineer by enabling the pilot and co-pilot to monitor aircraft systems directly. Despite the promise of reduced crew costs, United Airlines initially demanded a conventional three-person cockpit, citing concerns about the risks associated with introducing a new aircraft. The carrier maintained this position until July 1981, when a US presidential task force determined that a crew of two was safe for operating wide-body jets. A three-crew cockpit remained as an option and was fitted to the first production models. Ansett Australia ordered 767s with three-crew cockpits due to union demands; it was the only airline to operate 767s so configured. The 767's two-crew cockpit was also applied to the 757, allowing pilots to operate both aircraft after a short conversion course, and adding incentive for airlines to purchase both types. Although the two aircraft are nominally similar in control design, the 767 handles differently from the 757. The 757's controls are heavy, similar to the 727 and 747, and its control yoke can be rotated 90 degrees in each direction. The 767 has a far lighter control feel in pitch and roll, and its control yoke has approximately two-thirds the rotation travel.
To produce the 767, Boeing formed a network of subcontractors which included domestic suppliers and international contributions from Italy's Aeritalia and Japan's CTDC. The wings and cabin floor were produced in-house, while Aeritalia provided control surfaces, Boeing Vertol made the leading edge for the wings, and Boeing Wichita produced the forward fuselage. The CTDC provided multiple assemblies through its constituent companies, namely Fuji Heavy Industries (wing fairings and gear doors), Kawasaki Heavy Industries (center fuselage), and Mitsubishi Heavy Industries (rear fuselage, doors, and tail). Components were integrated during final assembly at the Everett factory. For expedited production of wing spars, the main structural member of aircraft wings, the Everett factory received robotic machinery to automate the process of drilling holes and inserting fasteners. This method of wing construction expanded on techniques developed for the 747. Final assembly of the first aircraft began in July 1979.
The prototype aircraft, registered N767BA and equipped with JT9D turbofans, rolled out on August 4, 1981. By this time, the 767 program had accumulated 173 firm orders from 17 customers, including Air Canada, All Nippon Airways, Britannia Airways, Transbrasil, and Trans World Airlines (TWA). On September 26, 1981, the prototype took its maiden flight under the command of company test pilots Tommy Edmonds, Lew Wallick, and John Brit. The maiden flight was largely uneventful, save for the inability to retract the landing gear because of a hydraulic fluid leak. The prototype was used for subsequent flight tests.
The 10-month 767 flight test program utilized the first six aircraft built. The first four aircraft were equipped with JT9D engines, while the fifth and sixth were fitted with CF6 engines. The test fleet was largely used to evaluate avionics, flight systems, handling, and performance, while the sixth aircraft was used for route-proving flights. During testing, pilots described the 767 as generally easy to fly, with its maneuverability unencumbered by the bulkiness associated with larger wide-body jets. Following 1,600 hours of flight tests, the JT9D-powered 767-200 received certification from the US Federal Aviation Administration (FAA) and the UK Civil Aviation Authority (CAA) in July 1982. The first delivery occurred on August 19, 1982, to United Airlines. The CF6-powered 767-200 received certification in September 1982, followed by the first delivery to Delta Air Lines on October 25, 1982.
The 767 entered service with United Airlines on September 8, 1982. The aircraft's first commercial flight used a JT9D-powered 767-200 on the Chicago-to-Denver route. The CF6-powered 767-200 commenced service three months later with Delta Air Lines. Upon delivery, early 767s were mainly deployed on domestic routes, including US transcontinental services. American Airlines and TWA began flying the 767-200 in late 1982, while Air Canada, China Airlines, and El Al began operating the aircraft in 1983. The aircraft's introduction was relatively smooth, with few operational glitches and greater dispatch reliability than prior jetliners. In its first year, the 767 logged a 96.1 percent dispatch rate, which exceeded the industry average for new aircraft. Operators reported generally favorable ratings for the twinjet's sound levels, interior comfort, and economic performance. Resolved issues were minor and included the recalibration of a leading edge sensor to prevent false readings, the replacement of an evacuation slide latch, and the repair of a tailplane pivot to match production specifications.
Seeking to capitalize on its new wide-body's potential for growth, Boeing offered an extended-range model, the 767-200ER, in its first year of service. Ethiopian Airlines placed the first order for the type in December 1982. Featuring increased gross weight and greater fuel capacity, the extended-range model could carry heavier payloads over greater distances, and was targeted at overseas customers. The 767-200ER entered service with El Al on March 27, 1984. The type was mainly ordered by international airlines operating medium-traffic, long-distance flights.
In the mid-1980s, the 767 spearheaded the growth of twinjet flights across the northern Atlantic under extended-range twin-engine operational performance standards (ETOPS) regulations, the FAA's safety rules governing transoceanic flights by aircraft with two engines. Before the 767, overwater flight paths of twinjets could be no more than 90 minutes away from diversion airports. In May 1985, the FAA granted its first approval for 120-minute ETOPS flights to 767 operators, on an individual airline basis starting with TWA, provided that the operator met flight safety criteria. This allowed the aircraft to fly overseas routes at up to two hours' distance from land. The larger safety margins were permitted because of the improved reliability demonstrated by the twinjet and its turbofan engines. The FAA lengthened the ETOPS time to 180 minutes for CF6-powered 767s in 1989, making the type the first to be certified under the longer duration, and all available engines received approval by 1993. Regulatory approval spurred the expansion of transoceanic 767 flights and boosted the aircraft's sales.
Forecasting airline interest in larger-capacity models, Boeing announced the stretched 767-300 in 1983 and the extended-range 767-300ER in 1984. Both models offered a 20 percent passenger capacity increase, while the extended-range version was capable of longer flights. Japan Airlines placed the first order for the 767-300 in September 1983. Following its first flight on January 30, 1986, the type entered service with Japan Airlines on October 20, 1986. The 767-300ER completed its first flight on December 9, 1986, but it was not until March 1987 that the first firm order, from American Airlines, was placed. The type entered service with American Airlines on March 3, 1988. The 767-300 and 767-300ER gained popularity after entering service, and came to account for approximately two-thirds of all 767s sold.
After the debut of the first stretched 767s, Boeing sought to address airline requests for greater capacity by proposing larger models, including a partial double-deck version informally named the "Hunchback of Mukilteo" (from a town near Boeing's Everett factory) with a 757 body section mounted over the aft main fuselage. In 1986, Boeing proposed the 767-X, a revised model with extended wings and a wider cabin, but received little interest. By 1988, the 767-X had evolved into an all-new twinjet, which revived the 777 designation. Until the 777's 1995 debut, the 767-300 and 767-300ER remained Boeing's second-largest wide-bodies behind the 747.
Buoyed by a recovering global economy and ETOPS approval, 767 sales accelerated in the mid-to-late 1980s; 1989 was the most prolific year with 132 firm orders. By the early 1990s, the wide-body twinjet had become its manufacturer's annual best-selling aircraft, despite a slight decrease due to economic recession. During this period, the 767 became the most common airliner for transatlantic flights between North America and Europe. By the end of the decade, 767s crossed the Atlantic more frequently than all other aircraft types combined. The 767 also propelled the growth of point-to-point flights which bypassed major airline hubs in favor of direct routes. Taking advantage of the aircraft's lower operating costs and smaller capacity, operators added non-stop flights to secondary population centers, thereby eliminating the need for connecting flights. The increased number of cities receiving non-stop services caused a paradigm shift in the airline industry as point-to-point travel gained prominence at the expense of the traditional hub-and-spoke model.
In February 1990, the first 767 equipped with Rolls-Royce RB211 turbofans was delivered to British Airways. Six months later, the carrier temporarily grounded its entire 767 fleet after discovering cracks in the engine pylons of several aircraft. The cracks were related to the extra weight of the RB211 engines, which are heavier than the other 767 engine options. During the grounding, interim repairs were conducted to alleviate stress on engine pylon components, and a parts redesign in 1991 prevented further cracks. Boeing also performed a structural reassessment, resulting in production changes and modifications to the engine pylons of all 767s in service.
In January 1993, following an order from UPS Airlines, Boeing launched a freighter variant, the 767-300F, which entered service with UPS on October 16, 1995. The 767-300F featured a main deck cargo hold, upgraded landing gear, and a strengthened wing structure. In November 1993, the Japanese government launched the first 767 military derivative when it placed orders for the E-767, an Airborne Early Warning and Control (AWACS) variant based on the 767-200ER. The first two E-767s, featuring extensive modifications to accommodate surveillance radar and other monitoring equipment, were delivered in 1998 to the Japan Self-Defense Forces.
In November 1995, after abandoning development of a smaller version of the 777, Boeing announced that it was revisiting studies for a larger 767. The proposed 767-400X, a second stretch of the aircraft, offered a 12 percent capacity increase versus the 767-300, and featured an upgraded flight deck, enhanced interior, and greater wingspan. The variant was specifically aimed at Delta Air Lines' pending replacement of its aging Lockheed L-1011 TriStars, and faced competition from the A330-200, a shortened derivative of the Airbus A330. In March 1997, Delta Air Lines launched the 767-400ER when it ordered the type to replace its L-1011 fleet. In October 1997, Continental Airlines also ordered the 767-400ER to replace its McDonnell Douglas DC-10 fleet. The type completed its first flight on October 9, 1999, and entered service with Continental Airlines on September 14, 2000.
In the early 2000s, cumulative 767 deliveries approached 900, but new sales declined during an airline industry downturn. In 2001, Boeing dropped plans for a longer-range model, the 767-400ERX, in favor of the proposed Sonic Cruiser, a new jetliner which aimed to fly 15 percent faster while having comparable fuel costs to the 767. The following year, Boeing announced the KC-767 Tanker Transport, a second military derivative of the 767-200ER. Launched with an order in October 2002 from the Italian Air Force, the KC-767 was intended for the dual role of refueling other aircraft and carrying cargo. The Japanese government became the second customer for the type in March 2003. In May 2003, the United States Air Force (USAF) announced its intent to lease KC-767s to replace its aging KC-135 tankers. The plan was suspended in March 2004 amid a conflict of interest scandal, resulting in multiple US government investigations and the departure of several Boeing officials, including Philip Condit, the company's chief executive officer, and chief financial officer Michael Sears. The first KC-767s were delivered in 2008 to the Japan Self-Defense Forces.
In late 2002, after airlines expressed reservations about its emphasis on speed over cost reduction, Boeing halted development of the Sonic Cruiser. The following year, the manufacturer announced the 7E7, a mid-size 767 successor made from composite materials which promised to be 20 percent more fuel efficient. The new jetliner was the first stage of a replacement aircraft initiative called the Boeing Yellowstone Project. Customers embraced the 7E7, later renamed 787 Dreamliner, and within two years it had become the fastest-selling airliner in the company's history. In 2005, Boeing opted to continue 767 production despite record Dreamliner sales, citing a need to provide customers waiting for the 787 with a more readily available option. Subsequently, the 767-300ER was offered to customers affected by 787 delays, including All Nippon Airways and Japan Airlines. Some aging 767s, exceeding 20 years in age, were also kept in service past planned retirement dates due to the delays. To extend the operational lives of older aircraft, airlines increased heavy maintenance procedures, including D-check teardowns and inspections for corrosion, a recurring issue on aging 767s. The first 787s entered service with All Nippon Airways in October 2011, 42 months behind schedule.
In 2007, the 767 received a production boost when UPS and DHL Aviation placed a combined 33 orders for the 767-300F. Renewed freighter interest led Boeing to consider enhanced versions of the 767-200 and 767-300F with increased gross weights, 767-400ER wing extensions, and 777 avionics. Net orders for the 767 declined from 24 in 2008 to just three in 2010. During the same period, operators upgraded aircraft already in service; in 2008, the first 767-300ER retrofitted with blended winglets from Aviation Partners Incorporated debuted with American Airlines. The manufacturer-sanctioned winglets improved fuel efficiency by an estimated 6.5 percent. Other carriers, including All Nippon Airways and Delta Air Lines, also ordered winglet kits.
On February 2, 2011, the 1,000th 767 rolled out, destined for All Nippon Airways. The aircraft was the 91st 767-300ER ordered by the Japanese carrier, and with its completion the 767 became the second wide-body airliner to reach the thousand-unit milestone after the 747. The 1,000th aircraft also marked the last model produced on the original 767 assembly line. Beginning with the 1,001st aircraft, production moved to another area in the Everett factory which occupied about half of the previous floor space. The new assembly line made room for 787 production and aimed to boost manufacturing efficiency by over twenty percent.
At the inauguration of its new assembly line, the 767's order backlog numbered approximately 50, only enough for production to last until 2013. Despite the reduced backlog, Boeing officials expressed optimism that additional orders would be forthcoming. On February 24, 2011, the USAF announced its selection of the KC-767 Advanced Tanker, an upgraded variant of the KC-767, for its KC-X fleet renewal program. The selection followed two rounds of tanker competition between Boeing and Airbus parent EADS, and came eight years after the USAF's original 2003 announcement of its plan to lease KC-767s. The tanker order encompassed 179 aircraft and was expected to sustain 767 production past 2013.
In December 2011, FedEx Express announced a 767-300F order for 27 aircraft to replace its DC-10 freighters, citing the USAF tanker order and Boeing's decision to continue production as contributing factors. FedEx Express agreed to buy 19 more of the -300F variant in June 2012. In June 2015, FedEx said it was accelerating retirements of planes both to reflect demand and to modernize its fleet, recording charges of $276 million. On July 21, 2015, FedEx announced an order for 50 767-300Fs with options on another 50, the largest order for the type. With the announcement, FedEx confirmed that it had firm orders for 106 of the freighters for delivery between 2018 and 2023. In February 2018, UPS announced an order for four more 767-300Fs, increasing the total on order to 63.
With its successor, the Boeing New Midsize Airplane, not planned for introduction until 2025 or later, and the 787 being much larger, Boeing considered restarting passenger 767-300ER production to bridge the gap, with a potential demand for 50 to 60 aircraft. Having to replace its 40 767s, United Airlines requested a price quote for other widebodies. In November 2017, Boeing CEO Dennis Muilenburg cited interest beyond military and freighter uses. However, in early 2018, Boeing Commercial Airplanes VP of marketing Randy Tinseth stated that the company did not intend to resume production of the passenger variant.
In its first-quarter 2018 earnings report, Boeing announced plans to increase 767 production from 2.5 to 3 aircraft per month beginning in January 2020, due to increased demand in the cargo market: FedEx had 56 on order, UPS had four, and an unidentified customer had three on order. This rate could rise to 3.5 per month in July 2020 and 4 per month in January 2021, before decreasing to 3 per month in January 2025 and 2 per month in July 2025.
Boeing is studying a re-engined 767-XF for around 2025, based on the 767-400ER with an extended landing gear to accommodate General Electric GEnx turbofans.
The cargo market is the main target, but a passenger version could be a cheaper alternative to the proposed New Midsize Airplane.
The 767 is a low-wing cantilever monoplane with a conventional tail unit featuring a single fin and rudder. The wings are swept at 31.5 degrees and optimized for a cruising speed of Mach 0.8. Each wing features a supercritical airfoil cross-section and is equipped with six-panel leading-edge slats, single- and double-slotted flaps, inboard and outboard ailerons, and six spoilers. The airframe further incorporates carbon-fiber-reinforced polymer composite wing surfaces, Kevlar fairings and access panels, plus improved aluminum alloys, which together reduce overall weight versus preceding aircraft.
To distribute the aircraft's weight on the ground, the 767 has a retractable tricycle landing gear with four wheels on each main gear and two for the nose gear. The original wing and gear design accommodated the stretched 767-300 without major changes. The 767-400ER features a larger, more widely spaced main gear with 777 wheels, tires, and brakes. To prevent damage if the tail section contacts the runway surface during takeoff, 767-300 and 767-400ER models are fitted with a retractable tailskid. The 767 has left-side exit doors near the front and rear of the aircraft.
In addition to shared avionics and computer technology, the 767 uses the same auxiliary power unit, electric power systems, and hydraulic parts as the 757. A raised cockpit floor and the same forward cockpit windows result in similar pilot viewing angles. Related design and functionality allows 767 pilots to obtain a common type rating to operate the 757 and share the same seniority roster with pilots of either aircraft.
The original 767 flight deck uses six Rockwell Collins CRT screens to display electronic flight instrument system (EFIS) and engine indication and crew alerting system (EICAS) information, allowing pilots to handle monitoring tasks previously performed by the flight engineer. The CRTs replace conventional electromechanical instruments found on earlier aircraft. An enhanced flight management system, improved over versions used on early 747s, automates navigation and other functions, while an automatic landing system facilitates CAT IIIb instrument landings in low-visibility situations. In 1984, the 767 became the first aircraft to receive FAA certification for CAT IIIb landings with minimum visibility. On the 767-400ER, the cockpit layout is simplified further with six Rockwell Collins liquid crystal display (LCD) screens, and adapted for similarities with the 777 and the Next Generation 737. To retain operational commonality, the LCD screens can be programmed to display information in the same manner as earlier 767s. In 2012, Boeing and Rockwell Collins launched a further 787-based cockpit upgrade for the 767, featuring three landscape-format LCD screens that can display two windows each.
The 767 is equipped with three redundant hydraulic systems for operation of control surfaces, landing gear, and utility actuation systems. Each engine powers a separate hydraulic system, and the third system uses electric pumps. A ram air turbine provides power for basic controls in the event of an emergency. An early form of fly-by-wire is employed for spoiler operation, utilizing electric signaling instead of traditional control cables. The fly-by-wire system reduces weight and allows independent operation of individual spoilers.
The 767 features a twin-aisle cabin with a typical configuration of six abreast in business class and seven across in economy. The standard seven abreast, 2–3–2 economy class layout places approximately 87 percent of all seats at a window or aisle. As a result, the aircraft can be largely occupied before center seats need to be filled, and each passenger is no more than one seat from the aisle. It is possible to configure the aircraft with extra seats for up to an eight abreast configuration, but this is less common.
The 767 interior introduced larger overhead bins and more lavatories per passenger than previous aircraft. The bins are wider to accommodate garment bags without folding, and strengthened for heavier carry-on items. A single, large galley is installed near the aft doors, allowing for more efficient meal service and simpler ground resupply. Passenger and service doors are of an overhead plug type that retracts upward, and commonly used doors can be equipped with an electric-assist system.
In 2000, a 777-style interior, known as the Boeing Signature Interior, debuted on the 767-400ER. Subsequently adopted for all new-build 767s, the Signature Interior features even larger overhead bins, indirect lighting, and sculpted, curved panels. The 767-400ER also received larger windows derived from the 777. Older 767s can be retrofitted with the Signature Interior. Some operators have adopted a simpler modification known as the Enhanced Interior, featuring curved ceiling panels and indirect lighting with minimal modification of cabin architecture, as well as aftermarket modifications such as the NuLook 767 package by Heath Tecna.
The 767 has been produced in three fuselage lengths. These debuted in progressively larger form as the 767-200, 767-300, and 767-400ER. Longer-range variants include the 767-200ER and 767-300ER, while cargo models include the 767-300F, a production freighter, and conversions of passenger 767-200 and 767-300 models.
When referring to different variants, Boeing and airlines often collapse the model number (767) and the variant designator (e.g. -200 or -300) into a truncated form such as "762" or "763". After the capacity number, designations may append the range identifier, though -200ER and -300ER are company marketing designations and are not certificated as such. The International Civil Aviation Organization (ICAO) aircraft type designator system uses a similar numbering scheme, but adds a preceding manufacturer letter; all variants based on the 767-200 and 767-300 are classified under the codes "B762" and "B763", while the 767-400ER receives the designation "B764".
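As an illustrative sketch (not an official ICAO tool), the designator rule described above — one code per base capacity number, regardless of range or freighter suffixes — can be expressed as a simple lookup:

```python
# Map a Boeing 767 variant name to its ICAO type designator, following the
# rule stated above: everything based on the 767-200 or 767-300 shares
# "B762"/"B763", while the 767-400ER gets "B764". The function name and
# parsing approach are illustrative assumptions, not part of any standard.
def icao_type_designator(variant: str) -> str:
    base = variant.split("-")[1][:3]  # capacity number: "200", "300", or "400"
    return {"200": "B762", "300": "B763", "400": "B764"}[base]

print(icao_type_designator("767-200ER"))  # B762
print(icao_type_designator("767-300F"))   # B763
print(icao_type_designator("767-400ER"))  # B764
```

Note that suffixes such as "ER" or "F" are simply ignored, matching the text's statement that range identifiers are marketing designations rather than separate certificated types.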
The 767-200 was the original model and entered service with United Airlines in 1982. The type has been used primarily by mainline U.S. carriers for domestic routes between major hub centers, such as Los Angeles to Washington. The 767-200 was the first aircraft to be used on transatlantic ETOPS flights, beginning with TWA on February 1, 1985 under 90-minute diversion rules. Deliveries for the variant totaled 128 aircraft. There were 52 passenger and freighter conversions of the model in commercial service. The type's competitors included the Airbus A300 and A310.
The 767-200 was produced until 1987 when production switched to the extended-range 767-200ER. Some early 767-200s were subsequently upgraded to extended-range specification. In 1998, Boeing began offering 767-200 conversions to 767-200SF (Special Freighter) specification for cargo use, and Israel Aerospace Industries has been licensed to perform cargo conversions since 2005. The conversion process entails the installation of a side cargo door, strengthened main deck floor, and added freight monitoring and safety equipment. The 767-200SF was positioned as a replacement for Douglas DC-8 freighters.
A commercial freighter version of the 767 with wings from the -300 series and an updated flightdeck, the Boeing 767-2C, was first flown on 29 December 2014. A military tanker variant of the 767-2C is being developed for the USAF as the KC-46. Boeing is building two aircraft as commercial freighters, which will be used to obtain Federal Aviation Administration certification; a further two 767-2Cs will be modified as military tankers. Boeing does not yet have customers for the freighter.
The 767-200ER was the first extended-range model and entered service with El Al in 1984. The type's increased range is due to extra fuel capacity and a higher maximum takeoff weight (MTOW). The additional fuel capacity is accomplished by using the center tank's dry bay to carry fuel. The non-ER variant's center tank consists of so-called "cheek tanks": two interconnected halves in each wing root with a dry bay in between. The center tank is also used on the -300ER and -400ER variants.
This version was originally offered with the same engines as the 767-200, while more powerful Pratt & Whitney PW4000 and General Electric CF6 engines later became available. The 767-200ER was the first 767 to complete a non-stop transatlantic journey, and broke the flying distance record for a twinjet airliner on April 17, 1988 with an Air Mauritius flight from Halifax, Nova Scotia to Port Louis, Mauritius. The 767-200ER has been acquired by international operators seeking smaller wide-body aircraft for long-haul routes such as New York to Beijing. Deliveries of the type totaled 121 with no unfilled orders. As of July 2018, 21 examples of passenger and freighter conversion versions were in airline service. The type's main competitors of the time included the Airbus A300-600R and the A310-300.
The 767-300, the first stretched version of the aircraft, entered service with Japan Airlines in 1986. The type features a fuselage extension over the 767-200, achieved by additional sections inserted before and after the wings. Reflecting the growth potential built into the original 767 design, the wings, engines, and most systems were largely unchanged on the 767-300. An optional mid-cabin exit door is positioned ahead of the wings on the left, while more powerful Pratt & Whitney PW4000 and Rolls-Royce RB211 engines later became available. The 767-300's increased capacity has been used on high-density routes within Asia and Europe. The 767-300 was produced from 1986 until 2000. Deliveries for the type totaled 104 aircraft with no unfilled orders remaining. As of July 2018, 34 of the variant were in airline service. The type's main competitor was the Airbus A300.
The 767-300ER, the extended-range version of the 767-300, entered service with American Airlines in 1988. The type's increased range was made possible by greater fuel tankage and a higher MTOW. Design improvements allowed the available MTOW to increase further by 1993. Power is provided by Pratt & Whitney PW4000, General Electric CF6, or Rolls-Royce RB211 engines. The 767-300ER comes in three exit configurations: the baseline configuration has four main cabin doors and four over-wing window exits; the second configuration has six main cabin doors and two over-wing window exits; and the third configuration has six main cabin doors, as well as two smaller doors located behind the wings. Typical routes for the type include Los Angeles to Frankfurt. The combination of increased capacity and range offered by the 767-300ER has been particularly attractive to both new and existing 767 operators. It is the most successful version of the aircraft, with more orders placed than all other variants combined. 767-300ER deliveries stand at 583 with no unfilled orders. There were 376 examples in service. The type's main competitor is the Airbus A330-200.
At its 1990s peak, a new 767-300ER was valued at $85 million, dipping to around $12 million in 2018 for a 1996 build.
The 767-300F, the production freighter version of the 767-300ER, entered service with UPS Airlines in 1995. The 767-300F can hold up to 24 standard pallets on its main deck and up to 30 LD2 unit load devices on the lower deck. The freighter has a main deck cargo door and crew exit, while the lower deck features two starboard-side cargo doors and one port-side cargo door. A general market version with onboard freight-handling systems, refrigeration capability, and crew facilities was delivered to Asiana Airlines on August 23, 1996. 767-300F deliveries stand at 161 with 61 unfilled orders. Airlines operated 222 examples of the freighter variant and freighter conversions in July 2018.
In June 2008, All Nippon Airways took delivery of the first 767-300BCF (Boeing Converted Freighter), a modified passenger-to-freighter model. The conversion work was performed in Singapore by ST Aerospace Services, the first supplier to offer a 767-300BCF program, and involved the addition of a main deck cargo door, strengthened main deck floor, and additional freight monitoring and safety equipment. Since then, Boeing, Israel Aerospace Industries, and Wagner Aeronautical have also offered passenger-to-freighter conversion programs for 767-300 series aircraft.
The 767-400ER, the first Boeing wide-body jet resulting from two fuselage stretches, entered service with Continental Airlines in 2000. The type features a further stretch over the 767-300. The wingspan is also increased through the addition of raked wingtips. The exit configuration uses six main cabin doors and two smaller exit doors behind the wings, similar to certain 767-300ERs. Other differences include an updated cockpit, redesigned landing gear, and the 777-style Signature Interior. Power is provided by uprated General Electric CF6 engines.
The FAA granted approval for the 767-400ER to operate 180-minute ETOPS flights before it entered service. Because its fuel capacity was not increased over preceding models, the 767-400ER has a shorter range than previous extended-range 767s. No 767-400 version was developed.
The longer-range 767-400ERX was offered in July 2000 before being cancelled a year later, leaving the 767-400ER as the sole version of the largest 767. Boeing dropped the 767-400ER and the -200ER from its pricing list in 2014.
A total of 37 767-400ERs were delivered to the variant's two airline customers, Continental Airlines (now merged with United Airlines) and Delta Air Lines, with no unfilled orders. All 37 examples of the -400ER were in service in July 2018. One additional example was produced as a military testbed, and later sold as a VIP transport. The type's closest competitor is the Airbus A330-200.
Versions of the 767 serve in a number of military and government applications, with responsibilities ranging from airborne surveillance and refueling to cargo and VIP transport. Several military 767s have been derived from the 767-200ER, the longest-range version of the aircraft.
In 1986, Boeing announced plans for a partial double-deck Boeing 767 design. The aircraft would have combined the 767 with a Boeing 757 cross section mounted over the rear fuselage. The Boeing 767-X would have also featured extended wings and a wider cabin. The 767-X did not attract enough interest from airlines to launch, and the model was shelved in 1988 in favor of the Boeing 777.
In March 2000, Boeing was to launch the 259-seat 767-400ERX with an initial order for three from Kenya Airways, with deliveries planned for 2004; the type was also proposed to Lauda Air.
Increased gross weight and a tailplane fuel tank would have boosted its range, and GE could offer its CF6-80C2/G2.
Rolls-Royce offered its Trent 600 for the 767-400ERX and the Boeing 747X.
Offered in July, the longer-range -400ERX would have a strengthened wing, fuselage and landing gear for a 15,000 lb (6.8 t) higher MTOW, up to 465,000 lb (210.92 t).
Thrust would rise for better takeoff performance, with the Trent 600 or the General Electric/Pratt & Whitney Engine Alliance GP7172, also offered on the 747X.
Range would increase by 525 nmi (950 km) to 6,150 nmi (11,390 km), with an additional fuel tank of 2,145 gallons (8,120 l) in the horizontal tail.
The 767-400ERX would offer the capacity of the Airbus A330-200 with 3% lower fuel burn and costs.
Boeing cancelled the variant development in 2001.
Kenya Airways then switched its order to the 777-200ER.
In July 2018, 742 aircraft were in airline service: 73 -200s, 632 -300s, and 37 -400s, with 65 -300Fs on order; the largest operators are Delta Air Lines (77), FedEx (60; the largest cargo operator), UPS Airlines (59), United Airlines (51), Japan Airlines (35), and All Nippon Airways (34).
The largest 767 customers by orders have been FedEx Express (128), Delta Air Lines (117), All Nippon Airways (96), American Airlines (88), and United Airlines (82). Delta and United are the only customers of all -200, -300 and -400 passenger variants. In July 2015, FedEx placed a firm order for 50 Boeing 767 freighters with deliveries from 2018 to 2023.
Boeing 767 orders and deliveries (cumulative, by year):
The Boeing 767 has been involved in 60 aviation occurrences, including 19 hull-loss accidents. Seven fatal crashes, including three hijackings, have resulted in a total of 854 occupant fatalities.
The 767's first incident was Air Canada Flight 143, a 767-200, on July 23, 1983. The airplane ran out of fuel in-flight and had to glide with both engines out to an emergency landing at Gimli, Manitoba, Canada. The pilots used the aircraft's ram air turbine to power the hydraulic systems for aerodynamic control. There were no fatalities and only minor injuries. The aircraft was nicknamed the "Gimli Glider" after its landing site. Registered C-GAUN, it continued flying for Air Canada until its retirement in January 2008.
The airliner's first fatal crash, Lauda Air Flight 004, occurred near Bangkok on May 26, 1991, following the in-flight deployment of the left engine thrust reverser on a 767-300ER; none of the 223 aboard survived. As a result of this accident, all 767 thrust reversers were deactivated until a redesign was implemented. Investigators determined that an electronically controlled valve, common to late-model Boeing aircraft, was to blame. A new locking device was installed on all affected jetliners, including 767s.
On October 31, 1999, EgyptAir Flight 990, a 767-300ER, crashed off Nantucket, Massachusetts, in international waters, killing all 217 people on board. The United States National Transportation Safety Board (NTSB) determined the probable cause to be a deliberate action by the first officer; Egypt disputed this conclusion.
On April 15, 2002, Air China Flight 129, a 767-200ER, crashed into a hill amid inclement weather while trying to land at Gimhae International Airport in Busan, South Korea. The crash resulted in the death of 129 of the 166 people on board, and the cause was attributed to pilot error.
The 767 has been involved in six hijackings, three resulting in loss of life, for a combined total of 282 occupant fatalities. On November 23, 1996, Ethiopian Airlines Flight 961, a 767-200ER, was hijacked and crash-landed in the Indian Ocean near the Comoro Islands after running out of fuel, killing 125 of the 175 persons on board; survivors have been rare among instances of land-based aircraft ditching on water. Two 767s were involved in the September 11 attacks on the World Trade Center in 2001, resulting in the collapse of its two main towers. American Airlines Flight 11, a 767-200ER, crashed into the North Tower, killing all 92 people on board, and United Airlines Flight 175, a 767-200, crashed into the South Tower, killing all 65 on board. In addition, more than 2,600 people were killed in the towers or on the ground. A foiled shoe bomb attempt in December 2001 involved an American Airlines 767-300ER.
On November 1, 2011, LOT Polish Airlines Flight 16, a 767-300ER, safely landed at Warsaw Chopin Airport in Warsaw, Poland after a mechanical failure of the landing gear forced an emergency landing with the landing gear retracted. There were no injuries, but the aircraft involved was damaged and subsequently written off. At the time of the incident, aviation analysts speculated that it may have been the first instance of a complete landing gear failure in the 767's service history. Subsequent investigation determined that while a damaged hose had disabled the aircraft's primary landing gear extension system, an otherwise functional backup system was inoperative due to an accidentally deactivated circuit breaker.
In January 2014, the U.S. Federal Aviation Administration issued a directive that ordered inspections of the elevators on more than 400 767s beginning in March 2014; the focus was on fasteners and other parts that can fail and cause the elevators to jam. The issue was first identified in 2000 and has been the subject of several Boeing service bulletins. The inspections and repairs are required to be completed within six years. The aircraft has also had multiple occurrences of "uncommanded escape slide inflation" during maintenance or operations, and during flight. In late 2015, the FAA issued a preliminary directive to address the issue.
On October 28, 2016, American Airlines Flight 383, a 767-300ER with 161 passengers and 9 crew members, aborted takeoff at Chicago O'Hare Airport following an uncontained failure of the right GE CF6-80C2 engine. The engine failure, which hurled fragments over a considerable distance, caused a fuel leak, resulting in a fire under the right wing. Fire and smoke entered the cabin. All passengers and crew evacuated the aircraft, with 20 passengers and one flight attendant sustaining minor injuries while using the evacuation slides.
On February 23, 2019, Atlas Air Flight 3591, a Boeing 767-300ERF air freighter operating for Amazon Air, crashed into Trinity Bay near Houston, Texas, while on descent into George Bush Intercontinental Airport; both pilots and the single passenger were killed.
As new 767s roll off the assembly line, older models have been retired and stored or scrapped. One complete aircraft, N102DA, the first to operate for Delta Air Lines and the twelfth example built, is currently on display. It was withdrawn from use and stored at Hartsfield–Jackson Atlanta International Airport in 2006. The exhibition aircraft, named "The Spirit of Delta" by the employees who helped purchase it in 1982, underwent restoration at the Delta Flight Museum in Atlanta, Georgia. The restoration was completed in 2010. | https://en.wikipedia.org/wiki?curid=4165
Bill Walsh (American football coach)
William Ernest Walsh (November 30, 1931 – July 30, 2007) was an American professional and college football coach. He served as head coach of the San Francisco 49ers and the Stanford Cardinal, during which time he popularized the West Coast offense. After retiring from the 49ers, Walsh worked as a sports broadcaster for several years and then returned as head coach at Stanford for three seasons.
Walsh went 102–63–1 (wins-losses-ties) with the 49ers, winning 10 of his 14 postseason games along with six division titles, three NFC Championship titles, and three Super Bowls. He was named NFL Coach of the Year in 1981 and 1984. In 1993, he was elected to the Pro Football Hall of Fame.
Born in Los Angeles, Walsh played running back in the San Francisco Bay Area for Hayward High School in Hayward.
Walsh played quarterback at the College of San Mateo for two seasons. (Both John Madden and Walsh played and coached at the College of San Mateo early in their careers.) After playing at the College of San Mateo, Walsh transferred to San José State University, where he played tight end and defensive end. He also participated in intercollegiate boxing, winning the golden glove.
Walsh graduated from San Jose State with a bachelor's degree in physical education in 1955. After two years in the U.S. Army participating on their boxing team, Walsh built a championship team at Washington High School in Fremont before becoming an assistant coach at Cal, Stanford and then the Oakland Raiders in 1966.
He served under Bob Bronzan as a graduate assistant coach on the Spartans football coaching staff and graduated with a master's degree in physical education from San Jose State in 1959. His master's thesis was entitled "Flank Formation Football – Stress: Defense".
Following graduation, Walsh coached the football and swim teams at Washington High School in Fremont, California.
Walsh was coaching in Fremont when he interviewed for an assistant coaching position with Marv Levy, who had just been hired as the head coach at the University of California, Berkeley.
"I was very impressed, individually, by his knowledge, by his intelligence, by his personality, and hired him," Levy said. Levy and Walsh, two future NFL Hall of Famers, would never produce a winning season at Cal.
After coaching at Cal, Walsh did a stint at Stanford as an assistant coach, before beginning his pro coaching career.
Walsh began his pro coaching career in 1966 as an assistant with the AFL's Oakland Raiders. As a Raider assistant, Walsh was trained in the vertical passing offense favored by Al Davis, putting Walsh in Davis's mentor Sid Gillman's coaching tree.
In 1967 Walsh was the head coach and general manager of the San Jose Apaches of the Continental Football League (CFL). Walsh led the Apaches to 2nd place in the Pacific Division. Prior to the start of the 1968 CFL season the Apaches ceased all football operations.
In 1968, Walsh moved to the AFL expansion Cincinnati Bengals, joining the staff of legendary coach Paul Brown. It was there that Walsh developed the philosophy now known as the "West Coast Offense", as a matter of necessity. Cincinnati's new quarterback, Virgil Carter, was known for his great mobility and accuracy but lacked a strong arm necessary to throw deep passes. Thus, Walsh modified the vertical passing scheme he had learned during his time with the Raiders, designing a horizontal passing system that relied on quick, short throws, often spreading the ball across the entire width of the field. The new offense was much better suited to Carter's physical abilities; he led the league in pass completion percentage in 1971.
Walsh spent eight seasons as an assistant with the Bengals. Ken Anderson eventually replaced Carter as starting quarterback, and together with star wide receiver Isaac Curtis, produced a consistent, effective offensive attack. Initially, Walsh started out as the wide receivers coach from 1968 to 1970 before also coaching the quarterbacks from 1971 to 1975.
When Brown retired as head coach following the 1975 season and appointed Bill "Tiger" Johnson as his successor, Walsh resigned and served as an assistant coach for Tommy Prothro with the San Diego Chargers in 1976. In a 2006 interview, Walsh claimed that during his tenure with the Bengals, Brown "worked against my candidacy" to be a head coach anywhere in the league. "All the way through I had opportunities, and I never knew about them," Walsh said. "And then when I left him, he called whoever he thought was necessary to keep me out of the NFL."
In 1977, Walsh was hired as the head coach at Stanford where he stayed for two seasons. His two Stanford teams were successful, posting a 9–3 record in 1977 with a win in the Sun Bowl, and 8–4 in 1978 with a win in the Bluebonnet Bowl. His notable players at Stanford included quarterbacks Guy Benjamin and Steve Dils, wide receivers James Lofton and Ken Margerum, linebacker Gordy Ceresino, in addition to running back Darrin Nelson. Walsh was the Pac-8 Conference Coach of the Year in 1977.
In 1979, Walsh was hired as head coach of the San Francisco 49ers by owner Edward J. DeBartolo, Jr. The long-suffering 49ers went 2–14 in 1978, the season before Walsh's arrival and repeated the same dismal record in his first season. But, Walsh got the entire organization to buy into his philosophy and vowed to turn around a miserable situation. He also drafted quarterback Joe Montana from Notre Dame in the third round. Despite their second consecutive 2–14 record, the 49ers were playing more competitive football.
In 1980, Steve DeBerg was the starting quarterback who got San Francisco off to a 3–0 start, but after a 59–14 blowout loss to Dallas in week 6, Walsh promoted Montana to starting QB. In a Sunday game on December 7 against the New Orleans Saints, Montana brought the 49ers back from a 35–7 halftime deficit to win 38–35 in overtime. The 49ers improved to 6–10, but more importantly, Walsh had them making great strides and they were getting better every week.
In 1981, key victories were two wins each over the Los Angeles Rams and the Dallas Cowboys. The Rams were only two seasons removed from a Super Bowl appearance, and had dominated the series with the 49ers since 1967 winning 23, losing 3 and tying 1. San Francisco's two wins over the Rams in 1981 marked the shift of dominance in favor of the 49ers that lasted until 1998 with 30 wins (including 17 consecutively) against only 6 defeats. The 49ers blew out the Cowboys in week 6 of the regular season. On "Monday Night Football" that week, the win was not included in the halftime highlights. Walsh felt that this was because the Cowboys were scheduled to play the Rams the next week in a Sunday night game and that showing the highlights of the 49ers' win would potentially hurt the game's ratings. However, Walsh used this as a motivating factor for his team, who felt they were disrespected. The 49ers finished the regular season with a 13–3 record.
The 49ers faced the Cowboys again that same season in the NFC title game. The game was very close, and in the fourth quarter Walsh called a series of running plays as the 49ers marched down the field against the Cowboys' prevent defense, which had been expecting the 49ers to mainly pass. The 49ers came from behind to win the game on Dwight Clark's touchdown reception, known as The Catch, propelling Walsh to his first Super Bowl. Walsh would later write that the 49ers' two wins over the Rams showed a shift of power in their division, while the wins over the Cowboys showed a shift of power in the conference.
San Francisco won its first championship a year removed from back-to-back two-win seasons. The 49ers won Super Bowl XVI defeating the Cincinnati Bengals 26–21 in Pontiac, Michigan. Under Walsh the team rose from the cellar to the top of the NFL in just two seasons.
The 49ers won Super Bowl championships in 1981, 1984 and 1988 seasons. Walsh served as 49ers head coach for 10 years, and during his tenure he and his coaching staff perfected the style of play known popularly as the West Coast offense. Walsh was nicknamed "The Genius" for both his innovative play calling and design. Walsh would regularly script the first 10-15 offensive plays before the start of each game. In the ten years during which Walsh was the 49ers' head coach, San Francisco scored 3,714 points (24.4 per game), the most of any team in the league during that span.
In addition to Joe Montana, Walsh drafted Ronnie Lott, Charles Haley, and Jerry Rice. He also traded a 2nd and 4th round pick in the 1987 draft for Steve Young. His success with the 49ers was rewarded with his election to the Professional Football Hall of Fame in 1993. Montana, Lott, Haley, Rice and Young were also elected to the Hall of Fame.
Many of Bill Walsh's assistant coaches went on to be head coaches themselves, including George Seifert, Mike Holmgren, Ray Rhodes, and Dennis Green. After Walsh's retirement from the 49ers, Seifert succeeded him as 49ers head coach, and guided San Francisco to victories in Super Bowl XXIV and Super Bowl XXIX. Holmgren won a Super Bowl with the Green Bay Packers, and made three Super Bowl appearances as a head coach: two with the Packers, and another with the Seattle Seahawks. These coaches in turn have their own disciples who have used Walsh's West Coast system, such as former Denver Broncos head coach Mike Shanahan and former Houston Texans head coach Gary Kubiak. Mike Shanahan was an offensive coordinator under George Seifert and went on to win Super Bowl XXXII and Super Bowl XXXIII during his time as head coach of the Denver Broncos. Kubiak was first a quarterback coach with the 49ers, and then offensive coordinator for Shanahan with the Broncos. In 2015, he became the Broncos' head coach and led Denver to victory in Super Bowl 50. Dennis Green trained Tony Dungy, who won a Super Bowl with the Indianapolis Colts, and Brian Billick, whose brother-in-law Mike Smith served as his linebackers coach. Billick won a Super Bowl as head coach of the Baltimore Ravens.
Mike Holmgren trained many of his assistants to become head coaches, including Jon Gruden and Andy Reid. Gruden won a Super Bowl with the Tampa Bay Buccaneers. Reid served as head coach of the Philadelphia Eagles from 1999 to 2012, guiding the Eagles to multiple winning seasons and numerous playoff appearances. Since 2013, Reid has served as head coach of the Kansas City Chiefs; he finally won a Super Bowl when his Chiefs defeated the San Francisco 49ers in Super Bowl LIV. In addition, Marc Trestman, former head coach of the Chicago Bears, served as offensive coordinator under Seifert in the 1990s. Gruden himself would train Mike Tomlin, who led the Pittsburgh Steelers to their sixth Super Bowl championship, and Jim Harbaugh, whose 49ers faced the Baltimore Ravens of his brother John Harbaugh, himself trained by Reid, in Super Bowl XLVII, which marked the Ravens' second championship.
Bill Walsh was viewed as a strong advocate for African-American head coaches in the NFL and NCAA, and his influence helped open head coaching opportunities to African-American coaches. Along with Ray Rhodes and Dennis Green, Tyrone Willingham became the head coach at Stanford, then later at Notre Dame and Washington. One of Mike Shanahan's assistants, Karl Dorrell, went on to be the head coach at UCLA. Walsh directly helped propel Dennis Green into the NFL head coaching ranks through the head coaching job at Stanford.
Many former and current NFL head coaches trace their lineage back to Bill Walsh on his coaching tree, shown below. Walsh, in turn, belonged to the coaching tree of American Football League great and Hall of Fame coach Sid Gillman of the AFL's Los Angeles/San Diego Chargers and Hall of Fame coach Paul Brown.
After leaving the coaching ranks immediately following his team's victory in Super Bowl XXIII, Walsh went to work as a broadcaster for NBC, teaming with Dick Enberg to form the lead broadcasting team, replacing Merlin Olsen.
During his time with NBC, rumors began to surface that Walsh would coach again in the NFL. There were at least two known instances.
First, according to a February 2015 article by Mike Florio of NBC Sports, after a 5–11 season in 1989, the Patriots fired Raymond Berry and unsuccessfully attempted to lure Walsh to Foxborough to become head coach and general manager. When that failed, New England promoted defensive coordinator Rod Rust; the team split its first two games and then lost 14 straight in 1990.
Second, late in the 1990 season, Walsh was rumored to become Tampa Bay's next head coach and general manager after the team fired Ray Perkins and promoted Richard Williamson on an interim basis. Part of the speculation was fueled by the fact that Walsh's contract with NBC, which ran for 1989 and 1990, would soon be up for renewal, to say nothing of the pressure Hugh Culverhouse faced to increase fan support and to fill the seats at Tampa Stadium. However, less than a week after Super Bowl XXV, Walsh not only declined Tampa Bay's offer, but he and NBC agreed on a contract extension. Walsh would continue in his role with NBC for 1991. Meanwhile, after unsuccessfully courting then-recently fired Eagles coach Buddy Ryan or Giants then-defensive coordinator Bill Belichick to man the sidelines for Tampa Bay in 1991, the Bucs stuck with Williamson. Under Williamson's leadership, Tampa Bay won only three games in 1991.
Walsh did return to Stanford as head coach in 1992, leading the Cardinal to a 10–3 record and a Pacific-10 Conference co-championship. Stanford finished the season with an upset victory over Penn State in the Blockbuster Bowl on January 1, 1993 and a #9 ranking in the final AP Poll. In 1994, after consecutive losing seasons, Walsh left Stanford and retired from coaching.
In 1996, Walsh returned to the 49ers as an administrative aide. He served as the team's Vice President and General Manager from 1999 to 2001 and was a special consultant to the team for three years afterwards.
In 2004, Walsh was appointed as special assistant to the athletic director at Stanford. In 2005, after then-athletic director Ted Leland stepped down, Walsh was named interim athletic director. He also acted as a consultant for his alma mater San Jose State University in their search for an Athletic Director and Head Football Coach in 2005.
Walsh was also the author of three books, a motivational speaker, and taught classes at the Stanford Graduate School of Business.
Walsh was a Board Member for the Lott IMPACT Trophy, which is named after Pro Football Hall of Fame defensive back Ronnie Lott, and is awarded annually to college football's Defensive IMPACT Player of the Year. Walsh served as a keynote speaker at the award's banquet.
Bill Walsh died of leukemia on July 30, 2007, at his home in Woodside, California.
Following Walsh's death, the playing field at the former Candlestick Park was renamed "Bill Walsh Field". Additionally, the regular San Jose State versus Stanford football game was renamed the "Bill Walsh Legacy Game".
Walsh is survived by his wife Geri, his son Craig and his daughter Elizabeth. Walsh had another son, Steve, who died in 2002.
Utility knife
A utility knife, sometimes generically called a Stanley knife, is a knife used for general or utility purposes. The utility knife was originally a fixed blade knife with a cutting edge suitable for general work such as cutting hides and cordage, scraping hides, butchering animals, cleaning fish, and other tasks. Craft knives are tools mostly used for crafts. Today, the term "utility knife" also includes small folding or retractable-blade knives suited for use in the general workplace or in the construction industry.
There is also a utility knife for kitchen use, which is between a chef's knife and paring knife in size.
The fixed-blade utility knife was developed some 500,000 years ago, when human ancestors began to make stone knives. These knives were general-purpose tools, designed for cutting and shaping wooden implements, scraping hides, preparing food, and for other utilitarian purposes.
By the 19th century the fixed-blade utility knife had evolved into a steel-bladed outdoors field knife capable of butchering game, cutting wood, and preparing campfires and meals. With the invention of the backspring, pocket-size utility knives were introduced with folding blades and other folding tools designed to increase the utility of the overall design. The folding pocketknife and utility tool is typified by the "Camper" or "Boy Scout" pocketknife, the U.S. folding utility knife, the Swiss Army Knife, and by multi-tools fitted with knife blades. The development of stronger locking blade mechanisms for folding knives—as with the Spanish navaja, the Opinel, and the Buck 110 Folding Hunter—significantly increased the utility of such knives when employed for heavy-duty tasks such as preparing game or cutting through dense or tough materials.
The fixed or folding blade utility knife is popular for both indoor and outdoor use. One of the most popular types of workplace utility knife is the retractable or folding utility knife (also known as a "Stanley knife", "box cutter", "X-Acto knife", or by various other names). These types of utility knives are designed as multi-purpose cutting tools for use in a variety of trades and crafts. Designed to be lightweight and easy to carry and use, utility knives are commonly used in factories, warehouses, construction projects, and other situations where a tool is routinely needed to mark cut lines, trim plastic or wood materials, or to cut tape, cord, strapping, cardboard, or other packaging material.
In British, Australian and New Zealand English, along with Dutch and Austrian German, a utility knife frequently used in the construction industry is known as a "Stanley knife". This name is a generic trademark named after Stanley Works, a manufacturer of such knives. In Israel and Switzerland, these knives are known as "Japanese knives". In Brazil they are known as "estiletes" or "cortadores Olfa" (the latter, being another genericised trademark). In Portugal and Canada they are also known as "X-Acto" (yet another genericised trademark). In India, the Philippines, France, Iraq, Italy, Egypt, and Germany, they are simply called "cutter". In the Flemish region of Belgium it is called "cuttermes(je)" (cutter knife). In general Spanish, they are known as "cortaplumas" (penknife, when it comes to folding blades); in Spain, Mexico, and Costa Rica, they are colloquially known as "cutters"; in Argentina and Uruguay the segmented fixed-blade knives are known as "Trinchetas". In Turkey, they are known as "maket bıçağı" (which literally translates as "model knife").
Other names for the tool are "box cutter" or "boxcutter", "razor blade knife", "razor knife", "carpet knife", "pen knife", "stationery knife", "sheetrock knife", or "drywall knife".
Utility knives may use fixed, folding, or retractable or replaceable blades, and come in a wide variety of lengths and styles suited to the particular set of tasks they are designed to perform. Thus, an outdoors utility knife suited for camping or hunting might use a broad fixed blade, while a utility knife designed for the construction industry might feature a replaceable utility or razor blade for cutting packaging, cutting shingles, marking cut lines, or scraping paint.
Large fixed-blade utility knives are most often employed in an outdoors context, such as fishing, camping, or hunting. Outdoor utility knives typically feature sturdy blades, with edge geometry designed to resist chipping and breakage.
The term "utility knife" may also refer to small fixed-blade knives used for crafts, model-making and other artisanal projects. These small knives feature light-duty blades best suited for cutting thin, lightweight materials. The small, thin blade and specialized handle permit cuts requiring a high degree of precision and control.
The largest construction or workplace utility knives typically feature retractable and replaceable blades, made of either die-cast metal or molded plastic. Some use standard razor blades, others specialized double-ended utility blades. The user can adjust how far the blade extends from the handle, so that, for example, the knife can be used to cut the tape sealing a package without damaging the contents of the package. When the blade becomes dull, it can be quickly reversed or switched for a new one. Spare or used blades are stored in the hollow handle of some models, and can be accessed by removing a screw and opening the handle. Other models feature a quick-change mechanism that allows replacing the blade without tools, as well as a flip-out blade storage tray. The blades for this type of utility knife come in both double- and single-ended versions, and are interchangeable with many, but not all, of the later copies. Specialized blades also exist for cutting string, linoleum, and other materials.
Another style is a snap-off utility knife that contains a long, segmented blade that slides out from the handle. As the endmost edge becomes dull, it can be broken off the remaining blade, exposing the next section, which is sharp and ready for use. The snapping is best accomplished with a blade snapper that is often built in, or with a pair of pliers, and the break occurs at the score lines, where the metal is thinnest. When all of the individual segments are used, the knife may be thrown away or, more often, refilled with a replacement blade. This design was introduced by the Japanese manufacturer Olfa Corporation in 1956 as the world's first snap-off blade; it was inspired by analyzing the sharp cutting edge produced when glass is broken and the way pieces of a chocolate bar break into segments. The sharp cutting edge on these knives is not on the edge where the blade is snapped off; rather, one long edge of the whole blade is sharpened, and there are scored diagonal break-off lines at intervals down the blade. Thus each snapped-off piece is roughly a parallelogram, with each long edge being a breaking edge, and one or both of the short ends being a sharpened edge.
Another utility knife often used for cutting open boxes consists of a simple sleeve around a rectangular handle into which single-edge utility blades can be inserted. The sleeve slides up and down on the handle, holding the blade in place during use and covering the blade when not in use. The blade holder may either retract or fold into the handle, much like a folding-blade pocketknife. The blade holder is designed to expose just enough edge to cut through one layer of corrugated fibreboard, to minimize chances of damaging contents of cardboard boxes.
Most utility knives are not well suited to use as offensive weapons, with the exception of some outdoor-type utility knives employing longer blades. However, even small razor-blade type utility knives may sometimes find use as slashing weapons. The 9/11 Commission Report stated that passengers in cell phone calls reported knives or "box cutters" being used as weapons (as well as Mace or a bomb) in the hijacking of airplanes in the September 11, 2001 terrorist attacks against the United States, though the exact design of the knives used is unknown. Two of the hijackers were known to have purchased Leatherman knives, which feature a 4" slip-joint blade and were not prohibited on U.S. flights at the time. Those knives were not found among the possessions the two hijackers left behind. Similar cutters, including paper cutters, have also been known to be used as lethal weapons.
Small work-type utility knives have also been used to commit robbery and other crimes. In June 2004, a Japanese student was slashed to death with a segmented-type utility knife.
In the United Kingdom, the law was changed (effective 1 October 2007) to raise the age limit for purchasing knives, including utility knives, from 16 to 18.
Bronze
Bronze is an alloy consisting primarily of copper, commonly with about 12–12.5% tin and often with the addition of other metals (such as aluminium, manganese, nickel or zinc) and sometimes non-metals or metalloids such as arsenic, phosphorus or silicon. These additions produce a range of alloys that may be harder than copper alone, or have other useful properties, such as stiffness, ductility, or machinability.
The archeological period in which bronze was the hardest metal in widespread use is known as the Bronze Age. The beginning of the Bronze Age in India and western Eurasia is conventionally dated to the mid-4th millennium BC, and to the early 2nd millennium BC in China; elsewhere it gradually spread across regions. The Bronze Age was followed by the Iron Age starting from about 1300 BC and reaching most of Eurasia by about 500 BC, although bronze continued to be much more widely used than it is in modern times.
Because historical pieces were often made of brasses (copper and zinc) and bronzes with different compositions, modern museum and scholarly descriptions of older objects increasingly use the generalized term "copper alloy" instead.
The word "bronze" (1730–40) is borrowed from Middle French "bronze" (1511), itself borrowed from Italian "bronzo" 'bell metal, brass' (13th century, transcribed in Medieval Latin as "bronzium"); its further origin is disputed.
The discovery of bronze enabled people to create metal objects that were harder and more durable than previously possible. Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper ("Chalcolithic") predecessors. Initially, bronze was made out of copper and arsenic, forming arsenic bronze, or from naturally or artificially mixed ores of copper and arsenic, with the earliest artifacts so far known coming from the Iranian plateau in the 5th millennium BC. It was only later that tin was used, becoming the major non-copper ingredient of bronze in the late 3rd millennium BC.
Tin bronze was superior to arsenic bronze in that the alloying process could be more easily controlled, and the resulting alloy was stronger and easier to cast. Also, unlike arsenic, metallic tin and fumes from tin refining are not toxic. The earliest tin-alloy bronze dates to 4500 BC in a Vinča culture site in Pločnik (Serbia). Other early examples date to the late 4th millennium BC in Egypt, Susa (Iran) and some ancient sites in China, Luristan (Iran) and Mesopotamia (Iraq).
Ores of copper and the far rarer tin are not often found together (exceptions include Cornwall in Britain, one ancient site in Thailand and one in Iran), so serious bronze work has always involved trade. Tin sources and trade in ancient times had a major influence on the development of cultures. In Europe, a major source of tin was the British deposits of ore in Cornwall, which were traded as far as Phoenicia in the eastern Mediterranean.
In many parts of the world, large hoards of bronze artifacts are found, suggesting that bronze also represented a store of value and an indicator of social status. In Europe, large hoards of bronze tools, typically socketed axes (illustrated above), are found, which mostly show no signs of wear. With Chinese ritual bronzes, which are documented in the inscriptions they carry and from other sources, the case is clear. These were made in enormous quantities for elite burials, and also used by the living for ritual offerings.
Though bronze is generally harder than wrought iron, with Vickers hardness of 60–258 vs. 30–80, the Bronze Age gave way to the Iron Age after a serious disruption of the tin trade: the population migrations of around 1200–1100 BC reduced the shipping of tin around the Mediterranean and from Britain, limiting supplies and raising prices. As the art of working in iron improved, iron became cheaper and improved in quality. As cultures advanced from hand-wrought iron to machine-forged iron (typically made with trip hammers powered by water), blacksmiths learned how to make steel. Steel is stronger than bronze and holds a sharper edge longer.
Bronze was still used during the Iron Age, and has continued in use for many purposes to the modern day.
There are many different bronze alloys, but typically modern bronze is 88% copper and 12% tin. Alpha bronze consists of the alpha solid solution of tin in copper. Alpha bronze alloys of 4–5% tin are used to make coins, springs, turbines and blades. Historical "bronzes" are highly variable in composition, as most metalworkers probably used whatever scrap was on hand; the metal of the 12th-century English Gloucester Candlestick is bronze containing a mixture of copper, zinc, tin, lead, nickel, iron, antimony, and arsenic, with an unusually large amount of silver – between 22.5% in the base and 5.76% in the pan below the candle. The proportions of this mixture suggest that the candlestick was made from a hoard of old coins. The Benin Bronzes are in fact brass, and the Romanesque baptismal font at St Bartholomew's Church, Liège is described as both bronze and brass.
In the Bronze Age, two forms of bronze were commonly used: "classic bronze", about 10% tin, was used in casting; and "mild bronze", about 6% tin, was hammered from ingots to make sheets. Bladed weapons were mostly cast from classic bronze, while helmets and armor were hammered from mild bronze.
Commercial bronze (90% copper and 10% zinc) and architectural bronze (57% copper, 3% lead, 40% zinc) are more properly regarded as brass alloys because they contain zinc as the main alloying ingredient. They are commonly used in architectural applications.
Bismuth bronze is a bronze alloy with a composition of 52% copper, 30% nickel, 12% zinc, 5% lead, and 1% bismuth. It is able to hold a good polish and so is sometimes used in light reflectors and mirrors.
Plastic bronze contains a significant quantity of lead, which makes for improved plasticity; it may have been used by the ancient Greeks in their ship construction.
Other bronze alloys include aluminium bronze, phosphor bronze, manganese bronze, bell metal, arsenical bronze, speculum metal and cymbal alloys.
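The percentage recipes quoted above lend themselves to simple arithmetic. As an illustration, a short Python sketch splits a total mass into per-metal masses; the alloy table restates figures from the text, while the `component_masses` helper and the dictionary layout are purely illustrative, not taken from any real metallurgy library:

```python
# Mass fractions for alloys described in the text (modern bronze: 88% Cu / 12% Sn;
# commercial bronze: 90% Cu / 10% Zn; bismuth bronze: 52/30/12/5/1).
ALLOYS = {
    "modern bronze": {"copper": 0.88, "tin": 0.12},
    "commercial bronze": {"copper": 0.90, "zinc": 0.10},
    "bismuth bronze": {"copper": 0.52, "nickel": 0.30, "zinc": 0.12,
                       "lead": 0.05, "bismuth": 0.01},
}

def component_masses(alloy: str, total_kg: float) -> dict:
    """Split a total charge mass into per-metal masses using the alloy's fractions."""
    fractions = ALLOYS[alloy]
    # Sanity check: the mass fractions of a recipe must sum to 1.
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {metal: round(frac * total_kg, 3) for metal, frac in fractions.items()}

print(component_masses("modern bronze", 10.0))
# → {'copper': 8.8, 'tin': 1.2}
```

A foundry charge for 10 kg of modern bronze would thus call for 8.8 kg of copper and 1.2 kg of tin; the same helper works for any of the fixed-ratio alloys listed above.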
Bronzes are typically ductile alloys, considerably less brittle than cast iron. Typically bronze oxidizes only superficially; once a copper oxide (eventually becoming copper carbonate) layer is formed, the underlying metal is protected from further corrosion. This can be seen on statues from the Hellenistic period. However, if copper chlorides are formed, a corrosion-mode called "bronze disease" will eventually completely destroy it. Copper-based alloys have lower melting points than steel or iron and are more readily produced from their constituent metals. They are generally about 10 percent denser than steel, although alloys using aluminium or silicon may be slightly less dense. Bronze is a better conductor of heat and electricity than most steels. The cost of copper-base alloys is generally higher than that of steels but lower than that of nickel-base alloys.
Copper and its alloys have a huge variety of uses that reflect their versatile physical, mechanical, and chemical properties. Some common examples are the high electrical conductivity of pure copper, the low-friction properties of bearing bronze (bronze with a high lead content of 6–8%), the resonant qualities of bell bronze (20% tin, 80% copper), and the resistance to corrosion by seawater of several bronze alloys.
The melting point of bronze varies depending on the ratio of the alloy components. Bronze is usually nonmagnetic, but certain alloys containing iron or nickel may have magnetic properties.
Bronze, or bronze-like alloys and mixtures, were used for coins over a longer period. Bronze was especially suitable for use in boat and ship fittings prior to the wide employment of stainless steel owing to its combination of toughness and resistance to salt water corrosion. Bronze is still commonly used in ship propellers and submerged bearings.
In the 20th century, silicon was introduced as the primary alloying element, creating an alloy with wide application in industry and the major form used in contemporary statuary. Sculptors may prefer silicon bronze because of the ready availability of silicon bronze brazing rod, which allows colour-matched repair of defects in castings. Aluminium is also used for the structural metal aluminium bronze.
Bronze parts are tough and typically used for bearings, clips, electrical connectors and springs.
Bronze also has low friction against dissimilar metals, making it important for cannons prior to modern tolerancing, where iron cannonballs would otherwise stick in the barrel. It is still widely used today for springs, bearings, bushings, automobile transmission pilot bearings, and similar fittings, and is particularly common in the bearings of small electric motors. Phosphor bronze is particularly suited to precision-grade bearings and springs. It is also used in guitar and piano strings.
Unlike steel, bronze struck against a hard surface will not generate sparks, so it (along with beryllium copper) is used to make hammers, mallets, wrenches and other durable tools to be used in explosive atmospheres or in the presence of flammable vapors. Bronze is used to make bronze wool for woodworking applications where steel wool would discolour oak.
Phosphor bronze is used for ships' propellers, musical instruments, and electrical contacts. Bearings are often made of bronze for its friction properties. It can be filled with oil to make the proprietary Oilite and similar material for bearings. Aluminium bronze is hard and wear-resistant, and is used for bearings and machine tool ways.
Bronze is widely used for casting bronze sculptures. Common bronze alloys have the unusual and desirable property of expanding slightly just before they set, thus filling the finest details of a mould. Then, as the bronze cools, it shrinks a little, making it easier to separate from the mould.
The Assyrian king Sennacherib (704–681 BC) claims to have been the first to cast monumental bronze statues (of up to 30 tonnes) using two-part moulds instead of the lost-wax method.
Bronze statues were regarded as the highest form of sculpture in Ancient Greek art, though survivals are few, as bronze was a valuable material in short supply in the Late Antique and medieval periods. Many of the most famous Greek bronze sculptures are known through Roman copies in marble, which were more likely to survive.
In India, bronze sculptures from the Kushana (Chausa hoard) and Gupta periods (Brahma from Mirpur-Khas, Akota Hoard, Sultanganj Buddha) and later periods (Hansi Hoard) have been found. Indian Hindu artisans from the period of the Chola empire in Tamil Nadu used bronze to create intricate statues via the lost-wax casting method with ornate detailing depicting the deities of Hinduism. The art form survives to this day, with many silpis, craftsmen, working in the areas of Swamimalai and Chennai.
In antiquity other cultures also produced works of high art using bronze. For example: in Africa, the bronze heads of the Kingdom of Benin; in Europe, Grecian bronzes typically of figures from Greek mythology; in east Asia, Chinese ritual bronzes of the Shang and Zhou dynasty—more often ceremonial vessels but including some figurine examples. Bronze sculptures, although known for their longevity, still undergo microbial degradation; such as from certain species of yeasts.
Bronze continues into modern times as one of the materials of choice for monumental statuary.
Before it became possible to produce glass with acceptably flat surfaces, bronze was a standard material for mirrors. The reflecting surface was typically made slightly convex so that the whole face could be seen in a small mirror. Bronze was used for this purpose in many parts of the world, probably based on independent discoveries.
Bronze mirrors survive from the Egyptian Middle Kingdom (2040–1750 BC). In Europe, the Etruscans were making bronze mirrors in the sixth century BC, and Greek and Roman mirrors followed the same pattern. Although other materials such as speculum metal had come into use, bronze mirrors were still being made in Japan in the eighteenth century AD.
Bronze is the preferred metal for bells in the form of a high tin bronze alloy known colloquially as bell metal, which is about 23% tin.
Nearly all professional cymbals are made from bronze, which gives a desirable balance of durability and timbre. Several types of bronze are used, commonly B20 bronze, which is roughly 20% tin, 80% copper, with traces of silver, or the tougher B8 bronze made from 8% tin and 92% copper. As the tin content in a bell or cymbal rises, the timbre drops.
Bronze is also used for the windings of steel and nylon strings of various stringed instruments such as the double bass, piano, harpsichord, and guitar. Bronze strings are commonly reserved on pianoforte for the lower pitch tones, as they possess a superior sustain quality to that of high-tensile steel.
Bronzes of various metallurgical properties are widely used in struck idiophones around the world, notably bells, singing bowls, gongs, cymbals, and other idiophones from Asia. Examples include Tibetan singing bowls, temple bells of many sizes and shapes, gongs, Javanese gamelan, and other bronze musical instruments. The earliest bronze archeological finds in Indonesia date from 1–2 BC, including flat plates probably suspended and struck by a wooden or bone mallet. Ancient bronze drums from Thailand and Vietnam date back 2,000 years. Bronze bells from Thailand and Cambodia date back to 3,600 BC.
Some companies are now making saxophones from phosphor bronze (3.5 to 10% tin and up to 1% phosphorus content). Bell bronze/B20 is used to make the tone rings of many professional model banjos. The tone ring is a heavy (usually 3 lbs.) folded or arched metal ring attached to a thick wood rim, over which a skin, or most often, a plastic membrane (or head) is stretched – it is the bell bronze that gives the banjo a crisp powerful lower register and clear bell-like treble register.
Bronze has also been used in coins; most “copper” coins are actually bronze, with about 4 percent tin and 1 percent zinc.
As with coins, bronze has been used in the manufacture of various types of medals for centuries, and in contemporary times bronze medals are awarded for third place in sporting competitions and other events. The latter usage was in part attributed to the choice of gold, silver, and bronze to represent the first three Ages of Man in Greek mythology: the Golden Age, when men lived among the gods; the Silver Age, where youth lasted a hundred years; and the Bronze Age, the era of heroes. This scheme was first adopted at the 1904 Summer Olympics. At the 1896 event, silver was awarded to winners and bronze to runners-up, while in 1900 other prizes were given, not medals.
Benelux
The Benelux Union (; ; ), also known as simply Benelux, is a politico-economic union and formal international intergovernmental cooperation of three neighbouring states in western Europe: Belgium, the Netherlands, and Luxembourg. The name "Benelux" is a portmanteau formed from joining the first few letters of each country's name – Belgium, Netherlands, Luxembourg – and was first used to name the customs agreement that initiated the union (signed in 1944). It is now used more generally to refer to the geographic, economic, and cultural grouping of the three countries.
Cooperation among the governments of Belgium, the Netherlands, and the Grand Duchy of Luxembourg has been a firmly established practice since the introduction of a customs union in 1944, which became operative in 1948 as the Benelux Customs Union. The initial form of economic cooperation expanded steadily over time, leading in 1958 to the signing of the treaty establishing the Benelux Economic Union. Initially, the purpose of cooperation among the three partners was to put an end to customs barriers at their borders and ensure free movement of persons, goods and services among the three countries. It was the first example of international economic integration in Europe since the Second World War. The three countries therefore foreshadowed and provided the model for future European integration, such as the European Coal and Steel Community, the European Economic Community (EEC), and the European Community/European Union (EC/EU). The three partners continue to play this pioneering role. They also launched the Schengen process, initiated in 1985, promoting it from the outset. Benelux cooperation has been constantly adapted and now goes much further than mere economic cooperation, extending to new and topical policy areas connected with security, sustainable development, and the economy. Benelux models its cooperation on that of the European Union and is able to take up and pursue original ideas. The Benelux countries also work together in the so-called Pentalateral Energy Forum, a regional cooperation group originally formed of five parties – the Benelux states, France, and Germany – and later joined by Austria and Switzerland. Formed over a decade ago, the forum brings together the ministers for energy from the participating countries, who together represent some 200 million residents and 40% of the European electricity network. As of November 2019, the Benelux Union has a population of more than 29.55 million.
On 17 June 2008, Belgium (in all its component parts), the Netherlands, and Luxembourg signed a new Benelux treaty in The Hague. The purpose of the Benelux Union is to deepen and expand cooperation among the three countries so that it can continue its role as precursor within the European Union and strengthen and improve cross-border cooperation at every level. Through better cooperation between the countries the Benelux strives to promote the prosperity and welfare of the citizens of Belgium, the Netherlands and Luxembourg.
Benelux works together on the basis of an annual plan embedded in a four-year joint work programme.
Benelux seeks region-to-region cooperation, be it with France and Germany (North Rhine-Westphalia) or beyond with the Baltic States, the Nordic Council, the Visegrad countries, or even further. In 2018 a renewed political declaration was adopted between Benelux and North Rhine-Westphalia to give cooperation a further impetus.
Some examples of recent results of Benelux cooperation: automatic recognition of the level of all diplomas and degrees within the Benelux, a new Benelux Treaty on police cooperation, common road inspections, and a Benelux pilot with digital consignment notes. The Benelux is also committed to working together on adaptation to climate change. On 5 June 2018, the Benelux Treaty marked 60 years of existence. In 2018, a Benelux Youth Parliament was created.
The main institutions of the Union are the Committee of Ministers, the Council of the Union, the General Secretariat, the Interparliamentary Consultative Council and the Benelux Court of Justice, while the Benelux Office for Intellectual Property covers the same territory but is not part of the Benelux Union.
The Benelux General Secretariat is located in Brussels. It is the central platform of the Benelux Union cooperation. It handles the secretariat of the Committee of Ministers, the Council of Benelux Union and the various committees and working parties. The General Secretariat provides day-to-day support for the Benelux cooperation on the substantive, procedural, diplomatic and logistical levels. The Secretary-General is Alain de Muyser. The Deputy Secretary-General NL is Frank Weekers and the Deputy Secretary-General BE is Rudolf Huygelen.
The presidency of the Benelux is held in turn by the three countries for a period of one year. The Netherlands hold the presidency in 2020.
In addition to cooperation based on a Treaty, there is also political cooperation in the Benelux context, including summits of the Benelux government leaders. In 2019 a Benelux summit was held in Luxembourg.
A Benelux Parliament (originally referred to as an "Interparliamentary Consultative Council") was created in 1955. This parliamentary assembly is composed of 21 members of the Dutch parliament, 21 members of the Belgian national and regional parliaments, and 7 members of the Luxembourg parliament. On 20 January 2015, the governments of the three countries, including, as far as Belgium is concerned, the community and regional governments, signed the Treaty of the Benelux Interparliamentary Assembly in Brussels. This treaty entered into force on 1 August 2019, at which point the 1955 Convention on the Consultative Interparliamentary Council for the Benelux expired. The change also reflects daily practice: both internally in the Benelux and in external references, the name Benelux Parliament has been used de facto for a number of years.
In 1944, exiled representatives of the three countries signed the London Customs Convention, the treaty that established the Benelux Customs Union. Ratified in 1947, the treaty was in force from 1948 until it was superseded by the Benelux Economic Union. The treaty establishing the Benelux Economic Union ("Benelux Economische Unie/Union Économique Benelux") was signed on 3 February 1958 in The Hague and came into force on 1 November 1960 to promote the free movement of workers, capital, services, and goods in the region. Under the treaty, the union entails cooperation on economic, financial and social policies.
In 2017 the members of the Benelux, the Baltic Assembly, and three members of the Nordic Council (Sweden, Denmark and Finland), all EU member states, sought to intensify cooperation in the Digital Single Market, and also discussed social matters, the Economic and Monetary Union of the European Union, the European migrant crisis and defence cooperation. Relations with Russia, Turkey and the United Kingdom were also on the agenda.
Since 2008, the Benelux Union has worked together with the German Land (state) of North Rhine-Westphalia.
In 2018, the Benelux Union signed a declaration with France to strengthen cross-border cooperation.
The Benelux Union is a form of intergovernmental cooperation.
The Treaty establishing the Benelux Union explicitly provides that the Benelux Committee of Ministers can resort to four legal instruments (art. 6, paragraph 2, under a), f), g) and h)):
1. Decisions
Decisions are legally binding regulations for implementing the Treaty establishing the Benelux Union or other Benelux treaties.
Their legally binding force concerns the Benelux states (and their sub-state entities), which have to implement them. However, they have no direct effect towards individual citizens or companies (notwithstanding any indirect protection of their rights based on such decisions as a source of international law). Only national provisions implementing a decision can directly create rights and obligations for citizens or companies.
2. Agreements
The Committee of Ministers can draw up agreements, which are then submitted to the Benelux states (and/or their sub-state entities) for signature and subsequent parliamentary ratification. These agreements can deal with any subject matter, also in policy areas that are not yet covered by cooperation in the framework of the Benelux Union.
These are in fact traditional treaties, with the same direct legally binding force towards both authorities and citizens or companies. The negotiations do however take place in the established context of the Benelux working groups and institutions, rather than on an ad hoc basis.
3. Recommendations
Recommendations are non-binding orientations, adopted at ministerial level, which underpin the functioning of the Benelux Union. These (policy) orientations may not be legally binding, but given their adoption at the highest political level and their legal basis vested directly in the Treaty, they do entail a strong moral obligation for any authority concerned in the Benelux countries.
4. Directives
Directives of the Committee of Ministers are mere inter-institutional instructions towards the Benelux Council and/or the Secretariat-General, for which they are binding. This instrument has so far only been used occasionally, basically in order to organise certain activities within a Benelux working group or to give them impetus.
All four instruments require the unanimous approval of the members of the Committee of Ministers (and, in the case of agreements, subsequent signature and ratification at national level).
In 1965, the treaty establishing a Benelux Court of Justice was signed. It entered into force in 1974. The Court, composed of judges from the highest courts of the three States, has to guarantee the uniform interpretation of common legal rules. This international judicial institution is located in Luxembourg.
The Benelux is particularly active in the field of intellectual property. The three countries established a Benelux Trademarks Office and a Benelux Designs Office, both situated in The Hague. In 2005, they concluded a treaty establishing a "Benelux Organisation for Intellectual Property" which replaced both offices upon its entry into force on 1 September 2006. This Organisation is the official body for the registration of trademarks and designs in the Benelux. In addition, it offers the possibility to formally record the existence of ideas, concepts, designs, prototypes and the like.
In 2018, education ministers from all three of Belgium's regions as well as from the Netherlands and Luxembourg signed an agreement to recognise the level of all higher education diplomas between the three countries, a step unique in the EU. Normally, to continue studies or get a job in another country, applicants must have their locally earned degree recognised by the other country, which entails a lot of paperwork, fees and sometimes a months-long wait. In 2015, the Benelux countries agreed to automatically recognise each other's bachelor's and master's diplomas. That recognition is now extended to PhDs and to so-called graduate degrees, which are earned from adult educational institutions. This means that a graduate of any of the three countries can continue their education or seek a job in the other countries without having to get their degree officially recognised.
In 2018, the Belgian Minister of Security and Home Affairs, Jan Jambon, the Belgian Minister of Justice, Koen Geens, the Dutch Minister of Justice and Security, Ferdinand Grapperhaus, the Luxembourg Minister of Homeland Security, Etienne Schneider, and the Luxembourg Minister of Justice, Félix Braz, signed a new Benelux police treaty, which will improve the exchange of information, create more opportunities for cross-border action and facilitate police investigations in the neighbouring country. In 2004, a Treaty on cross-border cooperation between the Benelux police forces was concluded; this has now been completely revised and expanded. The Benelux countries are at the forefront of the European Union in this respect.
This new Treaty will allow direct access to each other's police databases on the basis of hit/no hit. In addition, direct consultation of police databases will be possible during joint operations and in common police stations. It will also be possible to consult population registers within the limits of national legislation. In the future, ANPR (Automatic Number Plate Recognition) camera data, which play an increasingly important role in the fight against crime, can be exchanged between the Benelux countries in accordance with their own applicable law. Police and judicial authorities will also work more closely with local authorities to exchange information on organised crime in a more targeted way (administrative approach) in accordance with national law.
The Treaty makes cross-border pursuit a lot easier and broadens the investigative powers of Benelux police officers. For example, it will be possible to continue a lawful hot pursuit in one's own country across the border, without the thresholds for criminal offences that characterise the current regulation. Another new feature of the Treaty is that a police officer can, under certain conditions, carry out cross-border investigations.
The existing intensive cooperation in the field of police liaison officers, joint patrols and checks as well as the provision of assistance at major events will be maintained. In addition, the possibilities for cross-border escort and surveillance missions and for operating on international trains will be considerably extended.
In the event of a crisis situation, special police units will now be able to act across borders; this can also be used to support important events with a high security risk, such as a NATO Summit.
After approval by the parliaments, and the elaboration of implementation agreements, the new Benelux Police Treaty will enter into force.
The Treaty of Liège entered into force in 2017. As a result, Dutch, Belgian and Luxembourg inspectors may carry out joint inspections of trucks and buses in all three countries. The treaty was signed in 2014 in Liège (Belgium) by the three countries. Pending its entry into force, several major Benelux road transport inspections had already taken place under a transitional regime, although inspectors from neighbouring countries could then only act as observers. Now they can exercise all of their powers.
Co-operation on the basis of this Benelux Treaty leads to a more uniform control of road transport, cost reductions, more honest competition between transport companies and better working conditions for drivers. In addition, this cooperation strengthens general road safety in the three countries.
The Benelux Treaty seeks to intensify cooperation by improving on the existing situation. It provides for intensive harmonisation of controls, the exchange of equipment and the joint training of personnel in order to reduce costs, and it allows inspectors of one country to participate in inspections in another Benelux country while exercising all their powers, so that the expertise of each country's specialists can be drawn upon. In doing so, the countries are fully committed to road safety for citizens and create a level playing field, so that entrepreneurs inside and outside the Benelux must comply with the same rules of control.
The application of the Treaty of Liège allows the three Benelux countries to play the role of forerunners in Europe. In addition, the treaty expressly provides for the possibility of accession of other countries.
By June 2019, a total of 922 vehicles had already been subject to joint Benelux inspections.
The Treaty between the Benelux countries establishing the Benelux Economic Union was limited to a period of 50 years. During the following years, and even more so after the creation of the European Union, the Benelux cooperation focused on developing other fields of activity within a constantly changing international context.
At the end of the 50 years, the governments of the three Benelux countries decided to renew the agreement, taking into account the new aspects of the Benelux-cooperation – such as security – and the new federal government structure of Belgium. The original establishing treaty, set to expire in 2010, was replaced by a new legal framework (called the Treaty revising the Treaty establishing the Benelux Economic Union), which was signed on 17 June 2008.
The new treaty has no set time limit, and the name "Benelux Economic Union" changed to "Benelux Union" to reflect the broad scope of the union. The main objectives of the treaty are the continuation and enlargement of the cooperation between the three member states within a larger European context. The renewed treaty explicitly provides for the possibility that the Benelux countries will cooperate with other European member states or with regional cooperation structures. The new Benelux cooperation focuses on three main topics: internal market and economic union, sustainability, and justice and internal affairs. The number of structures in the renewed Treaty has been reduced and thus simplified. Five Benelux institutions remain: the Benelux Committee of Ministers, the Benelux Council, the Benelux Parliament, the Benelux Court of Justice and the Benelux General Secretariat. Besides these five institutions, the Benelux Organisation for Intellectual Property is also present in this Treaty as an independent organisation.
Benelux Committee of Ministers:
The Committee of Ministers is the supreme decision-making body of the Benelux. It includes at least one representative at ministerial level from the three countries. Its composition varies according to its agenda. The ministers determine the orientations and priorities of Benelux cooperation. The presidency of the Committee rotates between the three countries on an annual basis.
Benelux Council:
The Council is composed of senior officials from the relevant ministries. Its composition varies according to its agenda. The Council's main task is to prepare the dossiers for the ministers.
Benelux Interparliamentary Consultative Council:
The Benelux Parliament comprises 49 representatives from the parliaments of Belgium, the Netherlands and Luxembourg. Its members inform and advise their respective governments on all Benelux matters.
Benelux Court of Justice:
The Benelux Court of Justice is an international court. Its mission is to promote uniformity in the application of Benelux legislation. When faced with difficulty interpreting a common Benelux legal rule, national courts must seek an interpretive ruling from the Benelux Court, which subsequently renders a binding decision. The members of the Court are appointed from among the judges of the 'Cour de cassation' of Belgium, the 'Hoge Raad' of the Netherlands and the 'Cour de cassation' of Luxembourg.
Benelux General Secretariat:
The General Secretariat, which is based in Brussels, forms the cooperation platform of the Benelux Union. It acts as the secretariat of the Committee of Ministers, the Council and various commissions and working groups. Because the General Secretariat operates under strict neutrality, it is perfectly placed to build bridges between the various partners and stakeholders. The General Secretariat has years of expertise in the area of Benelux cooperation and is familiar with the policy agreements and differences between the three countries. Building on what has already been achieved, the General Secretariat puts its knowledge, network and experience at the service of partners and stakeholders who endorse its mission. It initiates, supports and monitors cooperation results in the areas of economy, sustainability and security. In a greatly enlarged European Union, Benelux cooperation is a source of inspiration for Europe.
Boston Herald
The Boston Herald is an American daily newspaper whose primary market is Boston, Massachusetts, and its surrounding area. It was founded in 1846 and is one of the oldest daily newspapers in the United States. It has been awarded eight Pulitzer Prizes in its history, including four for editorial writing and three for photography before it was converted to tabloid format in 1981. The "Herald" was named one of the "10 Newspapers That 'Do It Right'" in 2012 by "Editor & Publisher".
In December 2017, the "Herald" filed for bankruptcy. On February 14, 2018, Digital First Media successfully bid $11.9 million to purchase the company in a bankruptcy auction; the acquisition was completed on March 19, 2018. As of August 2018, the paper had approximately 110 total employees, compared to about 225 before the sale.
The history of the "Herald" can be traced back through two lineages, the "Daily Advertiser" and the old "Boston Herald", and two media moguls, William Randolph Hearst and Rupert Murdoch.
The original "Boston Herald" was founded in 1846 by a group of Boston printers jointly under the name of John A. French & Company. The paper was published as a single two-sided sheet, selling for one cent. Its first editor, William O. Eaton, just 22 years old, said "The "Herald" will be independent in politics and religion; liberal, industrious, enterprising, critically concerned with literacy and dramatic matters, and diligent in its mission to report and analyze the news, local and global."
In 1847, the "Boston Herald" absorbed the Boston "American Eagle" and the Boston "Daily Times".
In October 1917, John H. Higgins, the publisher and treasurer of the "Boston Herald", bought out its next-door neighbor "The Boston Journal" and created "The Boston Herald and Boston Journal".
Even earlier than the "Herald", the weekly "American Traveler" was founded in 1825 as a bulletin for stagecoach listings.
The "Boston Evening Traveler" was founded in 1845. It was the successor to the weekly "American Traveler" and the semi-weekly "Boston Traveler". In 1912, the "Herald" acquired the "Traveler", continuing to publish both under their own names. For many years, the newspaper was controlled by many of the investors in United Shoe Machinery Co. After a newspaper strike in 1967, Herald-Traveler Corp. suspended the afternoon "Traveler" and absorbed the evening edition into the Herald to create the "Boston Herald Traveler".
The "Boston Daily Advertiser" was established in 1813 in Boston by Nathan Hale. The paper grew to prominence throughout the 19th century, taking over other Boston area papers. In 1832 The Advertiser took over control of "The Boston Patriot", and then in 1840 it took over and absorbed "The Boston Gazette". The paper was purchased by William Randolph Hearst in 1917. In 1920 the "Advertiser" was merged with "The Boston Record", initially the combined newspaper was called the "Boston Advertiser" however when the combined newspaper became an illustrated tabloid in 1921 it was renamed "The Boston American". Hearst Corp. continued using the name "Advertiser" for its Sunday paper until the early 1970s.
On September 3, 1884, "The Boston Evening Record" was started by the "Boston Advertiser" as a campaign newspaper. The "Record" was so popular that it was made a permanent publication.
In 1904, William Randolph Hearst began publishing his own newspaper in Boston called "The American". Hearst ultimately ended up purchasing the "Daily Advertiser" in 1917. By 1938, the "Daily Advertiser" had changed to the "Daily Record", and "The American" had become the "Sunday Advertiser". A third paper owned by Hearst, called the "Afternoon Record", which had been renamed the "Evening American", merged in 1961 with the "Daily Record" to form the "Record American". The "Sunday Advertiser" and "Record American" would ultimately be merged in 1972 into "The Boston Herald Traveler" a line of newspapers that stretched back to the old "Boston Herald".
In 1946, Herald-Traveler Corporation acquired Boston radio station WHDH. Two years later, WHDH-FM was licensed, and on November 26, 1957, WHDH-TV made its début as an ABC affiliate on channel 5. In 1961, WHDH-TV's affiliation switched to CBS. For years afterwards, Herald-Traveler Corp. operated channel 5 under temporary authority from the Federal Communications Commission, owing to controversy over luncheon meetings the newspaper's chief executive purportedly had with John C. Doerfer, chairman of the FCC between 1957 and 1960, who had served as a commissioner during the original licensing process. (Some Boston broadcast historians accuse "The Boston Globe" of being covertly behind the proceeding as a sort of vendetta for not getting a license; the "Herald Traveler" was Republican in sympathies, while the "Globe" then had a firm policy of not endorsing political candidates, although Doerfer's history at the FCC also lent suspicion.) The FCC ordered comparative hearings, and in 1969 a competing applicant, Boston Broadcasters, Inc., was granted a construction permit to replace WHDH-TV on channel 5. Herald-Traveler Corp. fought the decision in court (by this time, revenues from channel 5 were all but keeping the newspaper afloat), but its final appeal ran out in 1972, and on March 19 WHDH-TV was forced to surrender channel 5 to the new WCVB-TV.
Without a television station to subsidize the newspaper, the "Herald Traveler" was no longer able to remain in business, and the newspaper was sold to Hearst Corporation, which published the rival all-day newspaper, the "Record American". The two papers were merged to become an all-day paper called the "Boston Herald Traveler and Record American" in the morning and "Record-American and Boston Herald Traveler" in the afternoon. The first editions published under the new combined name were those of June 19, 1972. The afternoon edition was soon dropped and the unwieldy name shortened to "Boston Herald American", with the Sunday edition called the "Sunday Herald Advertiser". The "Herald American" was printed in broadsheet format, and failed to target a particular readership; where the "Record American" had been a typical city tabloid, the "Herald Traveler" was a Republican paper.
The "Herald American" converted to tabloid format in September 1981, but Hearst faced steep declines in circulation and advertising. The company announced it would close the "Herald American"—making Boston a one-newspaper town—on December 3, 1982. When the deadline came, Australian media baron Rupert Murdoch was negotiating to buy the paper and save it. He closed on the deal after 30 hours of talks with Hearst and newspaper unions—and five hours after Hearst had sent out notices to newsroom employees telling them they were terminated. The newspaper announced its own survival the next day with a full-page headline: "You Bet We're Alive!"
Murdoch changed the paper's name back to the "Boston Herald". The "Herald" continued to grow, expanding its coverage and increasing its circulation until 2001, when nearly all newspapers fell victim to declining circulations and revenue.
In February 1994, Murdoch's News Corporation was forced to sell the paper, in order that its subsidiary Fox Television Stations could legally consummate its purchase of Fox affiliate WFXT (Channel 25) because Massachusetts Senator Ted Kennedy included language in an appropriations barring one company from owning a newspaper and television station in the same market. Patrick J. Purcell, who was the publisher of the "Boston Herald" and a former News Corporation executive, purchased the "Herald" and established it as an independent newspaper. Several years later, Purcell would give the "Herald" a suburban presence it never had by purchasing the money-losing Community Newspaper Company from Fidelity Investments. Although the companies merged under the banner of Herald Media, Inc., the suburban papers maintained their distinct editorial and marketing identity.
After years of operating profits at Community Newspaper and losses at the "Herald", Purcell in 2006 sold the suburban chain to newspaper conglomerate Liberty Group Publishing of Illinois, which soon after changed its name to GateHouse Media. The deal, which also saw GateHouse acquiring "The Patriot Ledger" and "The Enterprise" respectively in south suburban Quincy and Brockton, netted $225 million for Purcell, who vowed to use the funds to clear the "Herald"'s debt and reinvest in the paper.
On August 5, 2013, the "Herald" launched an internet radio station named Boston Herald Radio which includes radio shows by much of the Herald staff. The station's morning lineup is simulcast on 830 AM WCRN from 10 a.m. to noon Eastern time.
In December 2017, the "Herald" announced plans to sell itself to GateHouse Media after filing for chapter 11 bankruptcy protection. The deal was scheduled to be completed by February 2018, with the new company streamlining and having layoffs in coming months. However, in early January 2018, another potential buyer, Revolution Capital Group of Los Angeles, filed a bid with the federal bankruptcy court; the "Herald" reported in a press release that "the court requires BHI [Boston Herald, Inc.] to hold an auction to allow all potential buyers an opportunity to submit competing offers."
In February 2018, the acquisition of the "Herald" by Digital First Media for almost $12 million was approved by the bankruptcy court judge in Delaware. The new owner, DFM, said it would keep 175 of the approximately 240 employees the "Herald" had when it sought bankruptcy protection in December 2017. The acquisition was completed on March 19, 2018.
The Herald and parent DFM were criticized for ending the ten-year printing contract with competitor "The Boston Globe", moving printing from Taunton, Massachusetts, to Rhode Island and its "dehumanizing cost-cutting efforts" in personnel. In June, some design and advertising layoffs were expected, with work moving to a sister paper, "The Denver Post". The "consolidation" took effect in August, with nine jobs eliminated.
In late August 2018, it was announced that the "Herald" would move its offices from Boston's Seaport District to Braintree, Massachusetts, in late November or early December.
The Boston Herald Newspapers in Education (NIE) program provides teachers with classroom newspapers and educational materials designed to help students of all ages and abilities excel. This is made possible through donations from Herald readers and other sponsors. The "Boston Herald" is available in two formats: the print edition and the online e-Edition. The website can be found at http://bostonheraldnie.com/
Cyan
Cyan is a greenish-blue color. It is evoked by light with a predominant wavelength between 490 and 520 nm, between the wavelengths of green and blue.
In the subtractive color system, or CMYK color model, whose inks can be overlaid to produce all colors in paint and color printing, cyan is one of the primary colors, along with magenta, yellow, and black. In the additive color system, or RGB color model, used to create all the colors on a computer or television display, cyan is made by mixing equal amounts of green and blue light. Cyan is the complement of red; it can be made by the removal of red from white light. Mixing red light and cyan light at the right intensity will make white light.
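The additive relationships described here can be sketched with simple channel arithmetic. This is an illustrative example, not part of the article, representing colors as (R, G, B) tuples with channels in the 0–255 range:

```python
def mix_additive(*colors):
    """Combine lights by summing each channel, clamped to 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

def complement(color):
    """Remove a color from white light (255, 255, 255)."""
    return tuple(255 - c for c in color)

green = (0, 255, 0)
blue = (0, 0, 255)
red = (255, 0, 0)

cyan = mix_additive(green, blue)  # equal amounts of green and blue light
print(cyan)                       # (0, 255, 255)
print(complement(red))            # removing red from white also yields cyan
print(mix_additive(red, cyan))    # red + cyan light gives white: (255, 255, 255)
```

The clamp in `mix_additive` mirrors how display channels saturate; at these fully saturated primaries the sums never exceed 255, so clamping only matters for intermediate mixes.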
The web color cyan is synonymous with aqua. Other colors in the cyan color range are teal, turquoise, electric blue, aquamarine, and others described as blue-green.
Its name is derived from the Ancient Greek κύανος, transliterated "kyanos", meaning "dark blue, dark blue enamel, Lapis lazuli". It was formerly known as "cyan blue" or cyan-blue, and its first recorded use as a color name in English was in 1879. Further origins of the color name can be traced back to a dye produced from the cornflower ("Centaurea cyanus").
In most languages, 'cyan' is not a basic color term and it phenomenologically appears as a greenish vibrant hue of blue to most English speakers. Other English terms for this "borderline" hue region include "blue green", "aqua", and "turquoise".
The web color cyan shown at right is a secondary color in the RGB color model, which uses combinations of red, green and blue light to create all the colors on computer and television displays. In X11 colors, this color is called both cyan and aqua. In the HTML color list, this same color is called aqua.
The web colors are more vivid than the cyan used in the CMYK color system, and the web colors cannot be accurately reproduced on a printed page. To reproduce the web color cyan in inks, it is necessary to add some white ink to the printer's cyan below, so when it is reproduced in printing, it is not a primary subtractive color. It is called "aqua" (a name in use since 1598) because it is a color commonly associated with water, such as the appearance of the water at a tropical beach.
Cyan is also one of the common inks used in four-color printing, along with magenta, yellow, and black; this set of colors is referred to as CMYK. In printing, the cyan ink is sometimes known as printer's cyan, process cyan, or process blue.
While both the additive secondary and the subtractive primary are called "cyan", they can be substantially different from one another. Cyan printing ink is typically more saturated than the RGB secondary cyan, depending on what RGB color space and ink are considered. That is, process cyan is usually outside the RGB gamut, and there is no fixed conversion from CMYK primaries to RGB. Different formulations are used for printer's ink, so there can be variations in the printed color that is pure cyan ink. This is because real-world subtractive (unlike additive) color mixing does not consistently produce the same result when mixing apparently identical colors, since the specific frequencies filtered out to produce that color affect how it interacts with other colors. Phthalocyanine blue is one such commonly used pigment. A typical formulation of "process cyan" is shown in the color box at right.
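The lack of a fixed conversion can be illustrated with the naive nominal mapping between CMYK and RGB. The sketch below uses the standard textbook approximation, which real process inks do not obey; that gap is exactly why printed cyan differs from the RGB secondary:

```python
def cmyk_to_rgb(c, m, y, k):
    """Naive nominal conversion: c, m, y, k in 0-1; returns RGB in 0-255."""
    return tuple(round(255 * (1 - x) * (1 - k)) for x in (c, m, y))

def rgb_to_cmyk(r, g, b):
    """Inverse naive conversion with gray-component (K) extraction."""
    c, m, y = ((1 - x / 255) for x in (r, g, b))
    k = min(c, m, y)
    if k == 1:  # pure black
        return (0.0, 0.0, 0.0, 1.0)
    return tuple((x - k) / (1 - k) for x in (c, m, y)) + (k,)

print(cmyk_to_rgb(1, 0, 0, 0))   # nominal process cyan maps to (0, 255, 255)
print(rgb_to_cmyk(0, 255, 255))  # web cyan maps back to (1.0, 0.0, 0.0, 0.0)
```

Under this formula, 100% cyan ink and the web color cyan would coincide; in practice, measured process cyan ink lands at a less saturated sRGB point (values around (0, 174, 239) are often quoted), which no per-channel formula of this kind can reproduce.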
Cream
Cream is a dairy product composed of the higher-fat layer skimmed from the top of milk before homogenization. In un-homogenized milk, the fat, which is less dense, eventually rises to the top. In the industrial production of cream, this process is accelerated by using centrifuges called "separators". In many countries, it is sold in several grades depending on the total butterfat content. It can be dried to a powder for shipment to distant markets, and contains high levels of saturated fat.
Cream skimmed from milk may be called "sweet cream" to distinguish it from cream skimmed from whey, a by-product of cheese-making. Whey cream has a lower fat content and tastes more salty, tangy and "cheesy". In many countries, cream is usually sold partially fermented: sour cream, crème fraîche, and so on. Both forms have many culinary uses in sweet, bitter, salty and tangy dishes.
Cream produced by cattle (particularly Jersey cattle) grazing on natural pasture often contains some natural carotenoid pigments derived from the plants they eat; this gives it a slightly yellow tone, hence the name of the yellowish-white color: cream. This is also the origin of butter's yellow color. Cream from goat's milk, water buffalo milk, or from cows fed indoors on grain or grain-based pellets, is white.
Cream is used as an ingredient in many foods, including ice cream, many sauces, soups, stews, puddings, and some custard bases, and is also used for cakes. Whipped cream is served as a topping on ice cream sundaes, milkshakes, lassi, eggnog, sweet pies, strawberries, blueberries or peaches. Irish cream is an alcoholic liqueur which blends cream with whiskey, and often honey, wine, or coffee. Cream is also used in Indian curries such as masala dishes.
Cream (usually light/single cream or half and half) is often added to coffee in the US and Canada.
Both single and double cream (see Types for definitions) can be used in cooking. Double cream or full-fat crème fraîche are often used when cream is added to a hot sauce, to prevent any problem with it separating or "splitting". Double cream can be thinned with milk to make an approximation of single cream.
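The thinning described above is simple mixing arithmetic. The sketch below is illustrative only, using assumed round figures (UK-style double cream at about 48% fat, single cream at about 18%, whole milk at about 4%) rather than values from the article:

```python
def cream_fraction(target_fat, cream_fat, milk_fat=0.04):
    """Weight fraction of cream in a cream/milk blend that hits target_fat."""
    return (target_fat - milk_fat) / (cream_fat - milk_fat)

# How much double cream (~48% fat) is needed, diluted with whole milk,
# to approximate single cream (~18% fat)?
p = cream_fraction(target_fat=0.18, cream_fat=0.48)
print(round(p, 3))  # 0.318: roughly 1 part double cream to 2 parts milk
```

The formula is just the lever rule for a two-component mix: the blend's fat content is the weighted average of the two ingredients' fat contents.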
The French word "crème" denotes not only dairy cream, but also other thick liquids such as sweet and savory custards, which are normally made with milk, not cream.
Different grades of cream are distinguished by their fat content, whether they have been heat-treated, whipped, and so on. In many jurisdictions, there are regulations for each type.
The Australia New Zealand Food Standards Code (Standard 2.5.2) defines cream as a milk product comparatively rich in fat, in the form of an emulsion of fat-in-skim milk, which can be obtained by separation from milk. Cream must contain no less than 350 g/kg (35%) milk fat.
Manufacturers' labels may distinguish between different fat contents; a general guideline is as follows:
Canadian cream definitions are similar to those used in the United States, except for "light cream", which is very low-fat cream, usually with 5 or 6 percent butterfat. Specific product characteristics are generally uniform throughout Canada, but names vary by both geographic and linguistic area and by manufacturer: "coffee cream" may be 10 or 18 percent cream and "half-and-half" ("crème légère") may be 3, 5, 6 or 10 percent, all depending on location and brand.
Regulations allow cream to contain acidity regulators and stabilizers. For whipping cream, allowed additives include skim milk powder (≤ 0.25%), glucose solids (≤ 0.1%), calcium sulphate (≤ 0.005%), and xanthan gum (≤ 0.02%). The content of milk fat in canned cream must be displayed as a percentage followed by "milk fat", "B.F", or "M.F".
In France, the use of the term "cream" for food products is defined by decree 80-313 of 23 April 1980. It specifies the minimum rate of milk fat (12%) as well as the rules for pasteurisation or UHT sterilisation. The mention "crème fraîche" (fresh cream) can only be used for pasteurised creams packaged at the production site within 24 hours of pasteurisation. Although food additives complying with French and European law are allowed, usually none will be found in plain "crèmes" and "crèmes fraîches" apart from lactic ferments (some low-cost creams, or near-creams, can contain thickening agents, but rarely). Fat content is commonly shown as "XX% M.G." ("matière grasse").
Russia, as well as other EAC countries, legally separates cream into two classes: normal (10–34% butterfat) and heavy (35–58%), but the industry has largely standardized around the following types:
In Sweden, cream is usually sold as:
Mellangrädde (27%) is, nowadays, a less common variant.
Gräddfil (usually 12%) and crème fraîche (usually around 35%) are two common sour cream products.
In Switzerland, the types of cream are legally defined as follows:
Sour cream and crème fraîche (German: Sauerrahm, Crème fraîche; French: crème acidulée, crème fraîche; Italian: panna acidula, crème fraîche) are defined as cream soured by bacterial cultures.
Thick cream (German: verdickter Rahm; French: crème épaissie; Italian: panna addensata) is defined as cream thickened using thickening agents.
In the United Kingdom, the types of cream are legally defined as follows:
In the United States, cream is usually sold as:
Most cream products sold in the United States at retail contain the minimum permissible fat content for their product type, e.g., "half and half" almost always contains only 10.5% butterfat. Not all grades are defined by all jurisdictions, and the exact fat content ranges vary. The above figures, except for "manufacturer's cream", are based on the Code of Federal Regulations, Title 21, Part 131.
Cream may have thickening agents and stabilizers added. Thickeners include sodium alginate, carrageenan, gelatine, sodium bicarbonate, tetrasodium pyrophosphate, and alginic acid.
Other processing may be carried out. For example, cream has a tendency to produce oily globules (called "feathering") when added to coffee. The stability of the cream may be increased by increasing the non-fat solids content, which can be done by partial demineralisation and addition of sodium caseinate, although this is expensive.
Butter is made by churning cream to separate the butterfat from the buttermilk. This can be done by hand or by machine.
Whipped cream is made by whisking or mixing air into cream with more than 30% fat, turning the liquid cream into a soft solid. Nitrous oxide, from whipped-cream chargers, may also be used to make whipped cream.
Sour cream, common in many countries including the U.S., Canada and Australia, is cream (12 to 16% or more milk fat) that has been subjected to a bacterial culture that produces lactic acid (0.5%+), which sours and thickens it.
Crème fraîche (28% milk fat) is slightly soured with bacterial culture, but not as sour or as thick as sour cream. Mexican crema (or cream espesa) is similar to crème fraîche.
Smetana is a heavy (15–40% milk fat) Central and Eastern European sweet or sour cream.
Rjome or rømme is Norwegian sour cream containing 35% milk fat, similar to Icelandic sýrður rjómi.
Clotted cream, common in the United Kingdom, is made through a process that starts by slowly heating whole milk to produce a very high-fat (55%) product. This is similar to Indian malai.
Reduced cream is a cream product used in New Zealand to make Kiwi dip.
Some non-edible substances are called creams due to their consistency: shoe cream is runny, unlike regular waxy shoe polish; hand/body 'creme' or "skin cream" is meant for moisturizing the skin.
Regulations in many jurisdictions restrict the use of the word "cream" for foods. Words such as "creme", "kreme", "creame", or "whipped topping" (e.g., Cool Whip) are often used for products which cannot legally be called cream, though in some jurisdictions even these spellings may be disallowed, for example under the doctrine of "idem sonans". Oreo and Hydrox cookies are a type of sandwich cookie in which two biscuits have a soft, sweet filling between them that is called "crème filling." In some cases, foods can be described as cream although they do not contain predominantly milk fats; for example in Britain "ice cream" does not have to be a dairy product (although it must be labelled "contains non-milk fat"), and salad cream is the customary name for a condiment that has been produced since the 1920s.
In other languages, cognates of "cream" are also sometimes used for non-food products, such as fogkrém (Hungarian for toothpaste), or Sonnencreme (German for suntan lotion).
Nutrition chart for heavy cream | https://en.wikipedia.org/wiki?curid=6109 |
Chemical vapor deposition
Chemical vapor deposition (CVD) is a vacuum deposition method used to produce high quality, high-performance, solid materials. The process is often used in the semiconductor industry to produce thin films.
In typical CVD, the wafer (substrate) is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit. Frequently, volatile by-products are also produced, which are removed by gas flow through the reaction chamber.
Microfabrication processes widely use CVD to deposit materials in various forms, including: monocrystalline, polycrystalline, amorphous, and epitaxial. These materials include: silicon (dioxide, carbide, nitride, oxynitride), carbon (fiber, nanofibers, nanotubes, diamond and graphene), fluorocarbons, filaments, tungsten, titanium nitride and various high-k dielectrics.
CVD is practiced in a variety of formats. These processes generally differ in the means by which chemical reactions are initiated.
Most modern CVD is either LPCVD or UHVCVD.
CVD is commonly used to deposit conformal films and augment substrate surfaces in ways that more traditional surface modification techniques cannot. CVD is extremely useful in the process of atomic layer deposition for depositing extremely thin layers of material. A variety of applications for such films exist. Gallium arsenide is used in some integrated circuits (ICs) and photovoltaic devices. Amorphous polysilicon is used in photovoltaic devices. Certain carbides and nitrides confer wear-resistance. Polymerization by CVD, perhaps the most versatile of all applications, allows for super-thin coatings which possess some very desirable qualities, such as lubricity, hydrophobicity and weather-resistance, to name a few. The CVD of metal-organic frameworks, a class of crystalline nanoporous materials, has recently been demonstrated. Recently scaled up as an integrated cleanroom process for depositing large-area substrates, these films are anticipated to find applications in gas sensing and as low-k dielectrics.
CVD techniques are advantageous for membrane coatings as well, such as those in desalination or water treatment, as these coatings can be sufficiently uniform (conformal) and thin that they do not clog membrane pores.
Polycrystalline silicon is deposited from trichlorosilane (SiHCl3) or silane (SiH4), using the following reactions:
SiHCl3 → Si + Cl2 + HCl
SiH4 → Si + 2 H2
This reaction is usually performed in LPCVD systems, with either pure silane feedstock, or a solution of silane with 70–80% nitrogen. Temperatures between 600 and 650 °C and pressures between 25 and 150 Pa yield a growth rate between 10 and 20 nm per minute. An alternative process uses a hydrogen-based solution. The hydrogen reduces the growth rate, but the temperature is raised to 850 or even 1050 °C to compensate. Polysilicon may be grown directly with doping, if gases such as phosphine, arsine or diborane are added to the CVD chamber. Diborane increases the growth rate, but arsine and phosphine decrease it.
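The growth-rate figures above lend themselves to a quick back-of-the-envelope calculation. The sketch below is illustrative only: it assumes a constant, linear growth rate, which real LPCVD processes only approximate.

```python
def deposition_time_minutes(thickness_nm, rate_nm_per_min):
    """Time to grow a film of the given thickness at a constant growth rate."""
    return thickness_nm / rate_nm_per_min

# A 500 nm polysilicon film at the 10-20 nm/min rates cited above:
slow = deposition_time_minutes(500, 10)  # 50.0 minutes
fast = deposition_time_minutes(500, 20)  # 25.0 minutes
```

At the cited rates, a film of a few hundred nanometres therefore takes on the order of tens of minutes to grow.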
Silicon dioxide (usually called simply "oxide" in the semiconductor industry) may be deposited by several different processes. Common source gases include silane and oxygen, dichlorosilane (SiCl2H2) and nitrous oxide (N2O), or tetraethylorthosilicate (TEOS; Si(OC2H5)4). The reactions are as follows:
SiH4 + O2 → SiO2 + 2 H2
SiCl2H2 + 2 N2O → SiO2 + 2 N2 + 2 HCl
Si(OC2H5)4 → SiO2 + by-products
The choice of source gas depends on the thermal stability of the substrate; for instance, aluminium is sensitive to high temperature. Silane deposits between 300 and 500 °C, dichlorosilane at around 900 °C, and TEOS between 650 and 750 °C, resulting in a layer of "low-temperature oxide" (LTO). However, silane produces a lower-quality oxide than the other methods (lower dielectric strength, for instance), and it deposits nonconformally. Any of these reactions may be used in LPCVD, but the silane reaction is also done in APCVD. CVD oxide invariably has lower quality than thermal oxide, but thermal oxidation can only be used in the earliest stages of IC manufacturing.
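As a sanity check on reaction stoichiometry, such as the silane oxidation route SiH4 + O2 → SiO2 + 2 H2, the atom counts on each side of a reaction can be compared programmatically. This is an illustrative sketch; the simple parser handles only formulas without parentheses.

```python
from collections import Counter
import re

def parse_formula(formula):
    """Count atoms in a simple formula such as 'SiH4' (no parentheses)."""
    atoms = Counter()
    for element, count in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        atoms[element] += int(count) if count else 1
    return atoms

def is_balanced(reactants, products):
    """True if both sides of a reaction carry identical atom totals.
    Each side is a list of (stoichiometric coefficient, formula) pairs."""
    def totals(side):
        t = Counter()
        for coeff, formula in side:
            for element, n in parse_formula(formula).items():
                t[element] += coeff * n
        return t
    return totals(reactants) == totals(products)

# Silane oxidation: SiH4 + O2 -> SiO2 + 2 H2
print(is_balanced([(1, 'SiH4'), (1, 'O2')], [(1, 'SiO2'), (2, 'H2')]))  # True
```

The same check applies to the dichlorosilane and TEOS routes, given a parser that understands their formulas.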
Oxide may also be grown with impurities (alloying or "doping"). This may have two purposes. During further process steps that occur at high temperature, the impurities may diffuse from the oxide into adjacent layers (most notably silicon) and dope them. Oxides containing 5–15% impurities by mass are often used for this purpose. In addition, silicon dioxide alloyed with phosphorus pentoxide ("P-glass") can be used to smooth out uneven surfaces. P-glass softens and reflows at temperatures above 1000 °C. This process requires a phosphorus concentration of at least 6%, but concentrations above 8% can corrode aluminium. Phosphorus is deposited from phosphine gas and oxygen:
4 PH3 + 5 O2 → 2 P2O5 + 6 H2
Glasses containing both boron and phosphorus (borophosphosilicate glass, BPSG) undergo viscous flow at lower temperatures; around 850 °C is achievable with glasses containing around 5 weight % of both constituents, but stability in air can be difficult to achieve. Phosphorus oxide in high concentrations interacts with ambient moisture to produce phosphoric acid. Crystals of BPO4 can also precipitate from the flowing glass on cooling; these crystals are not readily etched in the standard reactive plasmas used to pattern oxides, and will result in circuit defects in integrated circuit manufacturing.
Besides these intentional impurities, CVD oxide may contain byproducts of the deposition. TEOS produces a relatively pure oxide, whereas silane introduces hydrogen impurities, and dichlorosilane introduces chlorine.
Lower temperature deposition of silicon dioxide and doped glasses from TEOS using ozone rather than oxygen has also been explored (350 to 500 °C). Ozone glasses have excellent conformality but tend to be hygroscopic – that is, they absorb water from the air due to the incorporation of silanol (Si-OH) in the glass. Infrared spectroscopy and mechanical strain as a function of temperature are valuable diagnostic tools for diagnosing such problems.
Silicon nitride is often used as an insulator and chemical barrier in manufacturing ICs. The following two reactions deposit silicon nitride from the gas phase:
3 SiH4 + 4 NH3 → Si3N4 + 12 H2
3 SiCl2H2 + 4 NH3 → Si3N4 + 6 HCl + 6 H2
Silicon nitride deposited by LPCVD contains up to 8% hydrogen. It also experiences strong tensile stress, which may crack films thicker than 200 nm. However, it has higher resistivity and dielectric strength than most insulators commonly available in microfabrication (1016 Ω·cm and 10 MV/cm, respectively).
Another two reactions may be used in plasma to deposit SiNH:
2 SiH4 + N2 → 2 SiNH + 3 H2
SiH4 + NH3 → SiNH + 3 H2
These films have much less tensile stress, but worse electrical properties (resistivity 106 to 1015 Ω·cm, and dielectric strength 1 to 5 MV/cm).
CVD for tungsten is achieved from tungsten hexafluoride (WF6), which may be deposited in two ways:
WF6 → W + 3 F2
WF6 + 3 H2 → W + 6 HF
Other metals, notably aluminium and copper, can be deposited by CVD. Commercially cost-effective CVD for copper has long been unavailable, although volatile sources exist, such as Cu(hfac)2. Copper is typically deposited by electroplating. Aluminium can be deposited from triisobutylaluminium (TIBAL) and related organoaluminium compounds.
CVD for molybdenum, tantalum, titanium, and nickel is widely used. These metals can form useful silicides when deposited onto silicon. Mo, Ta and Ti are deposited by LPCVD from their pentachlorides. Nickel, molybdenum, and tungsten can be deposited at low temperatures from their carbonyl precursors. In general, for an arbitrary metal "M", the chloride deposition reaction is as follows:
2 MCl5 + 5 H2 → 2 M + 10 HCl
whereas the carbonyl decomposition reaction can happen spontaneously under thermal treatment or acoustic cavitation and is as follows:
M(CO)n → M + n CO
The decomposition of metal carbonyls is often violently precipitated by moisture or air, where oxygen reacts with the metal precursor to form metal or metal oxide along with carbon dioxide.
Niobium(V) oxide layers can be produced by the thermal decomposition of niobium(V) ethoxide with the loss of diethyl ether according to the equation:
2 Nb(OC2H5)5 → Nb2O5 + 5 C2H5OC2H5
Many variations of CVD can be utilized to synthesize graphene. Although many advancements have been made, the processes listed below are not commercially viable yet.
The most popular carbon source that is used to produce graphene is methane gas. One of the less popular choices is petroleum asphalt, notable for being inexpensive but more difficult to work with.
Although methane is the most popular carbon source, hydrogen is required during the preparation process to promote carbon deposition on the substrate. If the flow ratio of methane to hydrogen is not appropriate, it will cause undesirable results. During the growth of graphene, methane provides the carbon source, while hydrogen provides H atoms that etch away amorphous carbon and improve the quality of the graphene. However, excessive H atoms can also etch the graphene itself, destroying the integrity of the crystal lattice and degrading its quality. Therefore, optimizing the flow rates of methane and hydrogen during growth improves the quality of the graphene.
The use of a catalyst is a viable way to change the physical process of graphene production. Notable examples include iron nanoparticles, nickel foam, and gallium vapor. These catalysts can either be used in situ during graphene buildup or situated at some distance away from the deposition area. Some catalysts require an additional step to remove them from the sample material.
The direct growth of high-quality, large single-crystalline domains of graphene on a dielectric substrate is of vital importance for applications in electronics and optoelectronics. Combining the advantages of both catalytic CVD and the ultra-flat dielectric substrate, gaseous catalyst-assisted CVD paves the way for synthesizing high-quality graphene for device applications while avoiding the transfer process.
Physical conditions such as surrounding pressure, temperature, carrier gas, and chamber material play a big role in production of graphene.
Most systems use LPCVD with pressures ranging from 1 to 1500 Pa. However, some still use APCVD. Low pressures are used more commonly as they help prevent unwanted reactions and produce more uniform thickness of deposition on the substrate.
On the other hand, temperatures used range from 800 to 1050 °C. High temperatures translate to an increase in the rate of reaction. Caution has to be exercised, however, as high temperatures pose greater hazards in addition to greater energy costs.
Hydrogen gas and inert gases such as argon are flowed into the system. These gases act as a carrier, enhancing surface reaction and improving reaction rate, thereby increasing deposition of graphene onto the substrate.
Standard quartz tubing and chambers are used in CVD of graphene. Quartz is chosen because it has a very high melting point and is chemically inert. In other words, quartz does not interfere with any physical or chemical reactions regardless of the conditions.
Raman spectroscopy, X-ray spectroscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) are used to examine and characterize the graphene samples.
Raman spectroscopy is used to characterize and identify the graphene particles; X-ray spectroscopy is used to characterize chemical states; TEM is used to provide fine details regarding the internal composition of graphene; SEM is used to examine the surface and topography.
Sometimes, atomic force microscopy (AFM) is used to measure local properties such as friction and magnetism.
Cold wall CVD technique can be used to study the underlying surface science involved in graphene nucleation and growth as it allows unprecedented control of process parameters like gas flow rates, temperature and pressure as demonstrated in a recent study. The study was carried out in a home-built vertical cold wall system utilizing resistive heating by passing direct current through the substrate. It provided conclusive insight into a typical surface-mediated nucleation and growth mechanism involved in two-dimensional materials grown using catalytic CVD under conditions sought out in the semiconductor industry.
In spite of graphene's exciting electronic and thermal properties, it is unsuitable as a transistor for future digital devices, due to the absence of a bandgap between the conduction and valence bands. This makes it impossible to switch between on and off states with respect to electron flow. Scaling things down, graphene nanoribbons of less than 10 nm in width do exhibit electronic bandgaps and are therefore potential candidates for digital devices. Precise control over their dimensions, and hence electronic properties, however, represents a challenging goal, and the ribbons typically possess rough edges that are detrimental to their performance.
CVD can be used to produce a synthetic diamond by creating the circumstances necessary for carbon atoms in a gas to settle on a substrate in crystalline form. CVD of diamonds has received much attention in the materials sciences because it allows many new applications that had previously been considered too expensive. CVD diamond growth typically occurs under low pressure (1–27 kPa; 0.145–3.926 psi; 7.5–203 Torr) and involves feeding varying amounts of gases into a chamber, energizing them and providing conditions for diamond growth on the substrate. The gases always include a carbon source, and typically include hydrogen as well, though the amounts used vary greatly depending on the type of diamond being grown. Energy sources include hot filament, microwave power, and arc discharges, among others. The energy source is intended to generate a plasma in which the gases are broken down and more complex chemistries occur. The actual chemical process for diamond growth is still under study and is complicated by the very wide variety of diamond growth processes used.
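The pressure window above is quoted in three unit systems; converting between them is simple arithmetic. A small illustrative sketch, using the standard factors 1 kPa ≈ 7.5006 Torr and 1 psi ≈ 6.8948 kPa:

```python
def kpa_to_torr(kpa):
    """Convert kilopascals to Torr (1 kPa is about 7.5006 Torr)."""
    return kpa * 7.5006

def kpa_to_psi(kpa):
    """Convert kilopascals to pounds per square inch (1 psi is about 6.8948 kPa)."""
    return kpa / 6.8948

# The 1-27 kPa diamond-growth window from the text:
print(kpa_to_torr(1), kpa_to_torr(27))  # roughly 7.5 and 203 Torr
print(kpa_to_psi(1), kpa_to_psi(27))    # roughly 0.145 and 3.92 psi
```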
Using CVD, films of diamond can be grown over large areas of substrate with control over the properties of the diamond produced. In the past, when high pressure high temperature (HPHT) techniques were used to produce a diamond, the result was typically very small free-standing diamonds of varying sizes. With CVD, diamond growth areas of greater than fifteen centimeters (six inches) in diameter have been achieved, and much larger areas are likely to be successfully coated with diamond in the future. Improving this process is key to enabling several important applications.
The growth of diamond directly on a substrate allows the addition of many of diamond's important qualities to other materials. Since diamond has the highest thermal conductivity of any bulk material, layering diamond onto high heat producing electronics (such as optics and transistors) allows the diamond to be used as a heat sink. Diamond films are being grown on valve rings, cutting tools, and other objects that benefit from diamond's hardness and exceedingly low wear rate. In each case the diamond growth must be carefully done to achieve the necessary adhesion onto the substrate. Diamond's very high scratch resistance and thermal conductivity, combined with a lower coefficient of thermal expansion than Pyrex glass, a coefficient of friction close to that of Teflon (polytetrafluoroethylene) and strong lipophilicity would make it a nearly ideal non-stick coating for cookware if large substrate areas could be coated economically.
CVD growth allows one to control the properties of the diamond produced. In the area of diamond growth, the word "diamond" is used as a description of any material primarily made up of sp3-bonded carbon, and there are many different types of diamond included in this. By regulating the processing parameters—especially the gases introduced, but also including the pressure the system is operated under, the temperature of the diamond, and the method of generating plasma—many different materials that can be considered diamond can be made. Single crystal diamond can be made containing various dopants. Polycrystalline diamond consisting of grain sizes from several nanometers to several micrometers can be grown. Some polycrystalline diamond grains are surrounded by thin, non-diamond carbon, while others are not. These different factors affect the diamond's hardness, smoothness, conductivity, optical properties and more.
Commercially, mercury cadmium telluride is of continuing interest for detection of infrared radiation. Consisting of an alloy of CdTe and HgTe, this material can be prepared from the dimethyl derivatives of the respective elements. | https://en.wikipedia.org/wiki?curid=6111 |
CN Tower
The CN Tower is a concrete communications and observation tower located in Downtown Toronto, Ontario, Canada. Built on the former Railway Lands, it was completed in 1976. Its name "CN" originally referred to Canadian National, the railway company that built the tower. Following the railway's decision to divest non-core freight railway assets prior to the company's privatization in 1995, it transferred the tower to the Canada Lands Company, a federal Crown corporation responsible for real estate development.
The CN Tower held the record for the world's tallest free-standing structure for 32 years until 2007 when it was surpassed by the Burj Khalifa, and was the world's tallest tower until 2009 when it was surpassed by the Canton Tower. It is now the ninth tallest free-standing structure in the world and remains the tallest free-standing structure on land in the Western Hemisphere. In 1995, the CN Tower was declared one of the modern Seven Wonders of the World by the American Society of Civil Engineers. It also belongs to the World Federation of Great Towers.
It is a signature icon of Toronto's skyline and attracts more than two million international visitors annually.
On March 14, 2020, the CN Tower was closed due to the COVID-19 pandemic, and the closure has since been extended indefinitely.
The original concept of the CN Tower originated in 1968 when the Canadian National Railway wanted to build a large TV and radio communication platform to serve the Toronto area, as well as demonstrate the strength of Canadian industry and CN in particular. These plans evolved over the next few years, and the project became official in 1972.
The tower would have been part of Metro Centre (see CityPlace), a large development south of Front Street on the Railway Lands, a large railway switching yard that was being made redundant by newer yards outside the city. Key project team members were NCK Engineering as structural engineer; John Andrews Architects; Webb, Zerafa, Menkes, Housden Architects; Foundation Building Construction; and Canron (Eastern Structural Division).
As Toronto grew rapidly during the late 1960s and early 1970s, multiple skyscrapers were constructed in the downtown core, most notably First Canadian Place. The reflective nature of the new buildings compromised the quality of broadcast signals, necessitating new, much taller antennas.
At the time, most data communications took place over point-to-point microwave links, whose dish antennae covered the roofs of large buildings. As each new skyscraper was added to the downtown, former line-of-sight links were no longer possible. CN intended to rent "hub" space for microwave links, visible from almost any building in the Toronto area.
The original plan for the tower envisioned a tripod consisting of three independent cylindrical "pillars" linked at various heights by structural bridges. Had it been built, this design would have been considerably shorter, with the metal antenna located roughly where the concrete section between the main level and the SkyPod lies today. As the design effort continued, it evolved into the current design with a single continuous hexagonal core to the SkyPod, with three support legs blended into the hexagon below the main level, forming a large Y-shape structure at the ground level.
The idea for the main level in its current form evolved around this time, but the Space Deck (later renamed SkyPod) was not part of the plans until some time later. One engineer in particular felt that visitors would feel the higher observation deck would be worth paying extra for, and the costs in terms of construction were not prohibitive. It was also some time around this point that it was realized that the tower could become the world's tallest structure, and plans were changed to incorporate subtle modifications throughout the structure to this end.
The CN Tower was built by Canada Cement Company (also known as the Cement Foundation Company of Canada at the time), a subsidiary of Sweden's Skanska, a global project-development and construction group.
Construction began on February 6, 1973, with massive excavations at the tower base for the foundation. By the time the foundation was complete, a huge volume of earth and shale had been removed, and a thick base of concrete reinforced with rebar and steel cable had been built. This portion of the construction was fairly rapid, with only four months needed between the start and the foundation being ready for construction on top.
To create the main support pillar, workers constructed a hydraulically raised slipform at the base. This was a fairly unprecedented engineering feat on its own, consisting of a large metal platform that raised itself on jacks as the concrete below set. Concrete was poured Monday to Friday (and not continuously) by a small team of people until February 22, 1974, at which time the tower had already become the tallest structure in Canada, surpassing the recently built Inco Superstack in Sudbury, which was built using similar methods.
The tower's concrete was all mixed on-site in order to ensure batch consistency. Throughout the pour, the vertical accuracy of the tower was maintained by comparing the slipform's location to massive plumb bobs hanging from it, observed by small telescopes from the ground. Over its entire height, the tower deviates from true vertical only slightly.
In August 1974, construction of the main level commenced. Using 45 hydraulic jacks attached to cables strung from a temporary steel crown anchored to the top of the tower, twelve giant steel and wooden bracket forms were slowly raised, ultimately taking about a week to crawl up to their final position. These forms were used to create the brackets that support the main level, as well as a base for the construction of the main level itself. The Space Deck (currently named SkyPod) was built of concrete poured into a wooden frame attached to rebar at the lower level deck, and then reinforced with a large steel compression band around the outside.
The antenna was originally to be raised by crane as well, but during construction the Sikorsky S-64 Skycrane helicopter became available when the United States Army sold one to civilian operators. The helicopter, named "Olga", was first used to remove the crane, and then flew the antenna up in 36 sections.
The flights of the antenna pieces were a minor tourist attraction of their own, and the schedule was printed in the local newspapers. Use of the helicopter saved months of construction time, with this phase taking only three and a half weeks instead of the planned six months. The tower was topped off on April 2, 1975, after 26 months of construction, officially capturing the height record from Moscow's Ostankino Tower.
Two years into the construction, plans for Metro Centre were scrapped, leaving the tower isolated on the Railway Lands in what was then a largely abandoned light-industrial space. This caused serious problems for tourists to access the tower. Ned Baldwin, project architect with John Andrews, wrote at the time that "All of the logic which dictated the design of the lower accommodation has been upset," and that "Under such ludicrous circumstances Canadian National would hardly have chosen this location to build."
The CN Tower opened to the public on June 26, 1976. The construction costs were repaid within fifteen years. Canadian National Railway sold the tower prior to the company's privatization in 1995, when it decided to divest all operations not directly related to its core freight shipping businesses.
From the mid-1970s to the mid-1980s, the CN Tower was practically the only development along Front Street West; it was still possible to see Lake Ontario from the foot of the CN Tower due to the expansive parking lots and lack of development in the area at the time. As the area around the tower was developed, particularly with the completion of the Metro Toronto Convention Centre (north building) in 1984 and SkyDome in 1989 (renamed Rogers Centre in 2005), the former Railway Lands were redeveloped and the tower became the centre of a newly developing entertainment area. Access was greatly improved with the construction of the SkyWalk in 1989, which connected the tower and SkyDome to the nearby Union Station railway and subway station, and, in turn, to the city's PATH underground pedestrian system. By the mid-1990s, it was the centre of a thriving tourist district. The entire area continues to be an area of intense building, notably a boom in condominium construction in the first quarter of the 21st century, as well as the 2013 opening of the Ripley's Aquarium by the base of the tower.
The CN Tower consists of several substructures. The main portion of the tower is a hollow concrete hexagonal pillar containing the stairwells and power and plumbing connections. The tower's six elevators are located in the three inverted angles created by the tower's hexagonal shape (two elevators per angle). Each of the three elevator shafts is lined with glass, allowing for views of the city as the glass-windowed elevators make their way through the tower. The stairwell was originally located in one of these angles (the one facing north), but was moved into the central hollow of the tower; the tower's new fifth and sixth elevators were placed in the hexagonal angle that once contained the stairwell. On top of the main concrete portion of the tower is a tall metal broadcast antenna, carrying television and radio signals. There are three visitor areas: the Glass Floor and Outdoor Observation Terrace, both at the same elevation; the Indoor Lookout Level (formerly known as the "Indoor Observation Level") above them; and the higher SkyPod (formerly known as the "Space Deck"), just below the metal antenna. The hexagonal shape is visible between the two areas; however, below the main deck, three large supporting legs give the tower the appearance of a large tripod.
The main deck level is seven storeys, some of which are open to the public. Below the public areas is a large white donut-shaped radome containing the structure's UHF transmitters. The glass floor and outdoor observation deck sit below the indoor lookout level. The glass floor can withstand substantial pressure; its thermal glass units consist of a pane of laminated glass, an airspace, and a second pane of laminated glass. In 2008, one elevator was upgraded to add a glass floor panel, believed to have the highest vertical rise of any elevator equipped with this feature. Above the Horizons Cafe and the lookout level is the 360 Restaurant, a revolving restaurant that completes a full rotation once every 72 minutes. When the tower first opened, it also featured a disco named Sparkles (at the Indoor Observation Level), billed as the highest disco and dance floor in the world.
The SkyPod was once the highest public observation deck in the world until it was surpassed by the Shanghai World Financial Center in 2008.
A metal staircase reaches the main deck level after 1,776 steps, and the SkyPod above after 2,579 steps; it is the tallest metal staircase on Earth. These stairs are intended for emergency use only and are not open to the public, except for twice per year for charity stair-climb events. The average climber takes approximately 30 minutes to climb to the base of the radome, but the fastest climb on record is 7 minutes and 52 seconds in 1989 by Brendan Keenoy, an Ontario Provincial Police officer. In 2002, Canadian Olympian and Paralympic champion Jeff Adams climbed the stairs of the tower in a specially designed wheelchair. The stairs were originally on one of the three sides of the tower (facing north), with a glass view, but these were later replaced with the third elevator pair and the stairs were moved to the inside of the core. Top climbs on the new, windowless stairwell used since around 2003 have generally been over ten minutes.
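The step counts and climb times above imply a pace that is easy to work out. A rough arithmetic sketch (assuming both the record and the average climb covered the 1,776 steps to the main deck level):

```python
def steps_per_second(steps, minutes, seconds):
    """Average climbing pace over a timed stair climb."""
    return steps / (minutes * 60 + seconds)

record = steps_per_second(1776, 7, 52)   # about 3.8 steps per second
average = steps_per_second(1776, 30, 0)  # about 1.0 step per second
```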
On August 1, 2011, the CN Tower opened the EdgeWalk, an amusement in which thrill-seekers can walk on and around the roof of the main pod of the tower at , which is directly above the 360 Restaurant. It is the world's highest full-circle, hands-free walk. Visitors are tethered to an overhead rail system and walk around the edge of the CN Tower's main pod above the 360 Restaurant on a metal floor. The attraction is closed throughout the winter and during periods of electrical storms and high winds.
One of the notable guests who visited EdgeWalk was Canadian comedian Rick Mercer, featured in the first episode of the ninth season of his CBC Television news satire show, "Rick Mercer Report". There, he was accompanied by Canadian pop singer Jann Arden. The episode aired on April 10, 2013.
A freezing rain storm on March 2, 2007, resulted in a layer of ice several centimetres thick forming on the side of the tower and other downtown buildings. The sun thawed the ice, and winds of up to blew some of it away from the structure. There were fears that cars and windows of nearby buildings would be smashed by large chunks of ice. In response, police closed some streets surrounding the tower. During morning rush hour on March 5 of the same year, police expanded the area of closed streets to include the Gardiner Expressway away from the tower as increased winds blew the ice farther away, as far north as King Street West, away, where a taxicab window was shattered. Subsequently, on March 6, 2007, the Gardiner Expressway reopened after winds abated.
On April 16, 2018, falling ice from the CN Tower punctured the roof of the nearby Rogers Centre, causing the Toronto Blue Jays to postpone the game that day to the following day as a doubleheader; this was the third doubleheader held at the Rogers Centre. On April 20, the CN Tower reopened.
In August 2000, a fire broke out at the Ostankino Tower in Moscow killing three people and causing extensive damage. The fire was blamed on poor maintenance and outdated equipment. The failure of the fire-suppression systems and the lack of proper equipment for firefighters allowed the fire to destroy most of the interior and spark fears the tower might even collapse.
The Ostankino Tower was completed nine years before the CN Tower and is only shorter. The parallels between the towers led to some concern that the CN Tower could be at risk of a similar tragedy. However, Canadian officials subsequently stated that it is "highly unlikely" that a similar disaster could occur at the CN Tower, as it has important safeguards that were not present in the Ostankino Tower. Specifically, officials cited:
Officials also noted that the CN Tower has an excellent safety record, although there was an electrical fire in the antenna on August 16, 2017 — the tower's first fire. Moreover, other supertall structures built between 1967 and 1976 — such as the Willis Tower (formerly the Sears Tower), the World Trade Center (until its destruction on September 11, 2001), the Fernsehturm Berlin, the Aon Center, 875 North Michigan Avenue (formerly the John Hancock Center), and First Canadian Place — also have excellent safety records, which suggests that the Ostankino Tower accident was a rare safety failure, and that the likelihood of similar events occurring at other supertall structures is extremely low.
The CN Tower was originally lit at night with incandescent lights, which were removed in 1997 because they were inefficient and expensive to repair. In June 2007, the tower was outfitted with 1,330 super-bright LED lights inside the elevator shafts, shooting over the main pod and upward to the top of the tower's mast to light the tower from dusk until 2 a.m. The official opening ceremony took place on June 28 before the Canada Day holiday weekend.
The tower changes its lighting scheme on holidays and to commemorate major events. After the 95th Grey Cup in Toronto, the tower was lit in green and white to represent the colours of the Grey Cup champion Saskatchewan Roughriders. From sundown on August 27, 2011, to sunrise the following day, the tower was lit in orange, the official colour of the New Democratic Party (NDP), to commemorate the death of federal NDP leader and leader of the official opposition Jack Layton. When former South African president Nelson Mandela died, the tower was lit in the colours of the South African flag. When former federal finance minister under Stephen Harper's Conservatives Jim Flaherty died, the tower was lit in green to reflect his Irish Canadian heritage. On the night of the attacks on Paris on November 13, 2015, the tower displayed the colours of the French flag.
Programmed from a desktop computer with a wireless network interface controller in Burlington, Ontario, the LEDs use less energy to light than the previous incandescent lights (10% less energy than the dimly lit version and 60% less than the brightly lit version). The estimated cost to use the LEDs is $1,000 per month.
During the spring and autumn bird migration seasons, the lights would be turned off to comply with the voluntary Fatal Light Awareness Program, which "encourages buildings to dim unnecessary exterior lighting to mitigate bird mortality during spring and summer migration."
The CN Tower is the tallest freestanding structure in the Western Hemisphere. As of 2013, there are only two other freestanding structures in the Western Hemisphere which exceed in height; the Willis Tower in Chicago, which stands at when measured to its pinnacle; and the topped-out One World Trade Center in New York City, which has a pinnacle height of , or approximately shorter than the CN Tower. Due to the symbolism of the number 1776 (the year of the signing of the United States Declaration of Independence), the height of One World Trade Center is unlikely to be increased. The proposed Chicago Spire was expected to exceed the height of the CN Tower, but its construction was halted early due to financial difficulties amid the Great Recession, and was eventually cancelled in 2010.
"Guinness World Records" has called the CN Tower "the world's tallest self-supporting tower" and "the world's tallest free-standing tower".
Chain rule
In calculus, the chain rule is a formula to compute the derivative of a composite function. That is, if f and g are differentiable functions, then the chain rule expresses the derivative of their composite f ∘ g — the function which maps x to f(g(x)) — in terms of the derivatives of f and g and the product of functions as follows: (f ∘ g)′ = (f′ ∘ g) · g′.
Alternatively, by letting h = f ∘ g (equivalently, h(x) = f(g(x)) for all x), one can also write the chain rule in Lagrange's notation, as follows: h′(x) = f′(g(x)) g′(x).
The chain rule may also be rewritten in Leibniz's notation in the following way. If a variable z depends on the variable y, which itself depends on the variable x (i.e., y and z are dependent variables), then z, via the intermediate variable y, depends on x as well. In that case, the chain rule states that: dz/dx = dz/dy · dy/dx.
More precisely, to indicate the point at which each derivative is evaluated: dz/dx|_x = dz/dy|_{y(x)} · dy/dx|_x.
The versions of the chain rule in the Lagrange and the Leibniz notation are equivalent, in the sense that if z = f(y) and y = g(x), so that z = f(g(x)), then dz/dx = f′(g(x)) g′(x) and dz/dy · dy/dx = f′(y) g′(x) = f′(g(x)) g′(x).
Intuitively, the chain rule states that knowing the instantaneous rate of change of "z" relative to "y" and that of "y" relative to "x" allows one to calculate the instantaneous rate of change of "z" relative to "x". As put by George F. Simmons: "if a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 × 4 = 8 times as fast as the man."
In integration, the counterpart to the chain rule is the substitution rule.
The chain rule seems to have first been used by Gottfried Wilhelm Leibniz. He used it to calculate the derivative of √(a + bz + cz²) as the composite of the square root function and the function a + bz + cz². He first mentioned it in a 1676 memoir (with a sign error in the calculation). The common notation of the chain rule is due to Leibniz. Guillaume de l'Hôpital used the chain rule implicitly in his "Analyse des infiniment petits". The chain rule does not appear in any of Leonhard Euler's analysis books, even though they were written over a hundred years after Leibniz's discovery.
Suppose that a skydiver jumps from an aircraft. Assume that t seconds after his jump, his height above sea level in meters is given by a function g(t). One model for the atmospheric pressure at a height h is a second function f(h). These two functions can be differentiated and combined in various ways to produce the following data:
Here, the chain rule gives a method for computing (f ∘ g)′(t) in terms of f′ and g′. While it is always possible to directly apply the definition of the derivative to compute the derivative of a composite function, this is usually very difficult. The utility of the chain rule is that it turns a complicated derivative into several easy derivatives.
The chain rule states that, under appropriate conditions, (f ∘ g)′(t) = f′(g(t)) · g′(t).
In this example, this equals
In the statement of the chain rule, f and g play slightly different roles because f′ is evaluated at g(t), whereas g′ is evaluated at t. This is necessary to make the units work out correctly.
For example, suppose that we want to compute the rate of change in atmospheric pressure ten seconds after the skydiver jumps. This is (f ∘ g)′(10) and has units of pascals per second. The factor g′(10) in the chain rule is the velocity of the skydiver ten seconds after his jump, and it is expressed in meters per second. f′(g(10)) is the change in pressure with respect to height at the height g(10) and is expressed in pascals per meter. The product of f′(g(10)) and g′(10) therefore has the correct units of pascals per second.
Here, notice that it is not possible to evaluate f′ anywhere else. For instance, the 10 in the problem represents ten seconds, while the expression f′(10) would represent the change in pressure at a height of ten meters, which is not what we wanted. Similarly, while g′(10) has a unit of meters per second, the expression f′(g′(10)) would represent the change in pressure at a height of −98 meters, which is again not what we wanted. However, g(10) is 3020 meters above sea level, the height of the skydiver ten seconds after his jump, and this has the correct units for an input to f′.
The simplest form of the chain rule is for real-valued functions of one real variable. It states that if g is a function that is differentiable at a point c (i.e. the derivative g′(c) exists) and f is a function that is differentiable at g(c), then the composite function f ∘ g is differentiable at c, and the derivative is (f ∘ g)′(c) = f′(g(c)) · g′(c).
The rule is sometimes abbreviated as (f ∘ g)′ = (f′ ∘ g) · g′.
If y = f(u) and u = g(x), then this abbreviated form is written in Leibniz notation as: dy/dx = dy/du · du/dx.
The points where the derivatives are evaluated may also be stated explicitly: dy/dx|_x = dy/du|_{u(x)} · du/dx|_x.
Carrying the same reasoning further, given n functions f_1, …, f_n with the composite function f_1 ∘ (f_2 ∘ ⋯ (f_{n−1} ∘ f_n)), if each function f_i is differentiable at its immediate input, then the composite function is also differentiable, by repeated application of the chain rule, where the derivative is (in Leibniz's notation):
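The one-variable rule above can be checked numerically. The sketch below uses arbitrary illustrative choices (f = sin and g(x) = x², not taken from the original text) and compares the chain-rule derivative of f(g(x)) with a finite-difference estimate.

```python
import math

# Illustrative functions (assumptions for this sketch, not from the article).
f, f_prime = math.sin, math.cos
g, g_prime = (lambda x: x ** 2), (lambda x: 2 * x)

def composite(x):
    return f(g(x))

def chain_rule_derivative(x):
    # (f o g)'(x) = f'(g(x)) * g'(x)
    return f_prime(g(x)) * g_prime(x)

def numeric_derivative(h, x, eps=1e-6):
    # Central finite difference, used only as an independent check.
    return (h(x + eps) - h(x - eps)) / (2 * eps)

x0 = 1.3
assert abs(chain_rule_derivative(x0) - numeric_derivative(composite, x0)) < 1e-5
```

The same comparison works for any differentiable pair of functions, which is a convenient way to sanity-check a hand-computed derivative.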
It may be possible to apply the chain rule even when there are no formulas for the functions which are being differentiated. This can happen when the derivatives are measured directly. Suppose that a car is driving up a tall mountain. The car's speedometer measures its speed directly. If the grade is known, then the rate of ascent can be calculated using trigonometry. Standard models for the Earth's atmosphere imply that the temperature drops at a roughly constant rate per kilometer ascended (called the lapse rate). To find the temperature drop per hour, we can apply the chain rule. Let the function g(t) be the altitude of the car at time t, and let the function f(h) be the temperature h kilometers above sea level. f and g are not known exactly: for example, the altitude where the car starts is not known and the temperature on the mountain is not known. However, their derivatives are known: g′ is the measured rate of ascent, and f′ is the lapse rate. The chain rule states that the derivative of the composite function is the product of the derivative of f and the derivative of g: (f ∘ g)′(t) = f′(g(t)) · g′(t).
One of the reasons why this computation is possible is because f′ is a constant function. A more accurate description of how the temperature near the car varies over time would require an accurate model of how the temperature varies at different altitudes. This model may not have a constant derivative. To compute the temperature change in such a model, it would be necessary to know g and not just g′, because without knowing g it is not possible to know where to evaluate f′.
The chain rule can be applied to composites of more than two functions. To take the derivative of a composite of more than two functions, notice that the composite of f, g, and h (in that order) is the composite of f with g ∘ h. The chain rule states that to compute the derivative of f ∘ g ∘ h, it is sufficient to compute the derivative of f and the derivative of g ∘ h. The derivative of f can be calculated directly, and the derivative of g ∘ h can be calculated by applying the chain rule again.
For concreteness, consider the function
This can be decomposed as the composite of three functions:
Their derivatives are:
The chain rule states that the derivative of their composite at the point is:
In Leibniz notation, this is:
or for short,
The derivative function is therefore:
Another way of computing this derivative is to view the composite function as the composite of f ∘ g and h. Applying the chain rule in this manner would yield:
This is the same as what was computed above. This should be expected because (f ∘ g) ∘ h = f ∘ (g ∘ h).
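The worked example's formulas did not survive extraction. As a hypothetical stand-in, the sketch below assumes the composite y = e^(sin(x²)), applies the chain rule twice, and checks the result numerically.

```python
import math

# Assumed illustrative composite: y = exp(sin(x^2)), i.e. f = exp, g = sin,
# h(x) = x^2, composed as f(g(h(x))).  Not taken from the original text.
def y(x):
    return math.exp(math.sin(x ** 2))

def dy_dx(x):
    # Chain rule applied twice: f'(g(h(x))) * g'(h(x)) * h'(x)
    return math.exp(math.sin(x ** 2)) * math.cos(x ** 2) * 2 * x

x0, eps = 0.6, 1e-6
numeric = (y(x0 + eps) - y(x0 - eps)) / (2 * eps)
assert abs(dy_dx(x0) - numeric) < 1e-5
```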
Sometimes, it is necessary to differentiate an arbitrarily long composition of the form f_1 ∘ f_2 ∘ ⋯ ∘ f_n. In this case, define f_{a..b} = f_a ∘ f_{a+1} ∘ ⋯ ∘ f_b, where f_{a..a} = f_a and f_{a..b}(x) = x when b < a. Then the chain rule takes the form Df_{1..n} = (Df_1 ∘ f_{2..n}) (Df_2 ∘ f_{3..n}) ⋯ (Df_{n−1} ∘ f_{n..n}) Df_n
or, in the Lagrange notation,
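The repeated chain rule for a long composition can be evaluated with a running product of derivative factors, as in this sketch (the three functions are arbitrary illustrative choices, not from the original text):

```python
import math

# Composite f1 o f2 o f3, with the derivative computed as a running product
# of f_i'(x_i), where x_i is the value fed into f_i.  Illustrative choices.
funcs  = [math.sin, math.exp, lambda x: x ** 2]   # outermost first
derivs = [math.cos, math.exp, lambda x: 2 * x]

def composite(x):
    for fn in reversed(funcs):        # apply the innermost function first
        x = fn(x)
    return x

def derivative(x):
    prod = 1.0
    for fn, dfn in zip(reversed(funcs), reversed(derivs)):
        prod *= dfn(x)                # derivative evaluated at the current input
        x = fn(x)                     # then advance the partial composition
    return prod

x0, eps = 0.3, 1e-6
numeric = (composite(x0 + eps) - composite(x0 - eps)) / (2 * eps)
assert abs(derivative(x0) - numeric) < 1e-4
```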
The chain rule can be used to derive some well-known differentiation rules. For example, the quotient rule is a consequence of the chain rule and the product rule. To see this, write the function f(x)/g(x) as the product f(x) · 1/g(x). First apply the product rule: d/dx [f(x)/g(x)] = f′(x) · 1/g(x) + f(x) · d/dx [1/g(x)].
To compute the derivative of 1/g(x), notice that it is the composite of g with the reciprocal function, that is, the function that sends x to 1/x. The derivative of the reciprocal function is −1/x². By applying the chain rule, the last expression becomes: f′(x) · 1/g(x) + f(x) · (−g′(x)/g(x)²) = (f′(x) g(x) − f(x) g′(x)) / g(x)²,
which is the usual formula for the quotient rule.
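A quick numeric sanity check of this derivation, with illustrative choices f = sin and g(x) = x² + 1 (assumed for the example):

```python
import math

# Quotient rule recovered from the chain and product rules (sketch).
# Write f/g = f * (1/g); the reciprocal has derivative -1/x**2, so by the
# chain rule (1/g)'(x) = -g'(x)/g(x)**2.
f, fp = math.sin, math.cos
g, gp = (lambda x: x ** 2 + 1), (lambda x: 2 * x)

def quotient_derivative(x):
    # Product rule on f * (1/g), chain rule on the reciprocal factor.
    return fp(x) * (1 / g(x)) + f(x) * (-gp(x) / g(x) ** 2)

def standard_formula(x):
    return (fp(x) * g(x) - f(x) * gp(x)) / g(x) ** 2

x0 = 0.7
assert abs(quotient_derivative(x0) - standard_formula(x0)) < 1e-12
```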
Suppose that y = g(x) has an inverse function. Call its inverse function f so that we have x = f(y). There is a formula for the derivative of f in terms of the derivative of g. To see this, note that f and g satisfy the formula f(g(x)) = x.
And because the functions f(g(x)) and x are equal, their derivatives must be equal. The derivative of x is the constant function with value 1, and the derivative of f(g(x)) is determined by the chain rule. Therefore, we have that: f′(g(x)) g′(x) = 1.
To express f′ as a function of an independent variable y, we substitute f(y) for x wherever it appears. Then we can solve for f′: f′(y) = 1 / g′(f(y)).
For example, consider the function g(x) = e^x. It has an inverse f(y) = ln y. Because g′(x) = e^x, the above formula says that f′(y) = 1/e^(ln y) = 1/y.
This formula is true whenever g is differentiable and its inverse f is also differentiable. This formula can fail when one of these conditions is not true. For example, consider g(x) = x³. Its inverse is f(y) = y^(1/3), which is not differentiable at zero. If we attempt to use the above formula to compute the derivative of f at zero, then we must evaluate 1/g′(f(0)). Since f(0) = 0 and g′(0) = 0, we must evaluate 1/0, which is undefined. Therefore, the formula fails in this case. This is not surprising because f is not differentiable at zero.
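The inverse-function rule can be illustrated with the exp/log pair discussed above; this is a sketch, not part of the original article:

```python
import math

# Inverse-function rule sketch: if f is the inverse of g, then
# f'(y) = 1 / g'(f(y)).  Illustrated with g = exp, f = log.
y0 = 2.5
known = 1 / y0                          # the known derivative of log at y0
via_rule = 1 / math.exp(math.log(y0))   # 1 / g'(f(y0)) with g' = exp
assert abs(known - via_rule) < 1e-12

# A finite-difference check of the same value.
eps = 1e-6
numeric = (math.log(y0 + eps) - math.log(y0 - eps)) / (2 * eps)
assert abs(numeric - via_rule) < 1e-6
```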
Faà di Bruno's formula generalizes the chain rule to higher derivatives. Assuming that y = f(u) and u = g(x), then the first few derivatives are:
One proof of the chain rule begins with the definition of the derivative: (f ∘ g)′(a) = lim_{x→a} [f(g(x)) − f(g(a))] / (x − a).
Assume for the moment that g(x) does not equal g(a) for any x near a. Then the previous expression is equal to the product of two factors: lim_{x→a} [f(g(x)) − f(g(a))] / [g(x) − g(a)] · [g(x) − g(a)] / (x − a).
If g oscillates near a, then it might happen that no matter how close one gets to a, there is always an even closer x such that g(x) equals g(a). For example, this happens near the point a = 0 for g(x) = x² sin(1/x) (extended by g(0) = 0). Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function Q as follows: Q(y) = [f(y) − f(g(a))] / [y − g(a)] when y ≠ g(a), and Q(g(a)) = f′(g(a)).
We will show that the difference quotient for f ∘ g is always equal to: Q(g(x)) · [g(x) − g(a)] / (x − a).
Whenever g(x) is not equal to g(a), this is clear because the factors of g(x) − g(a) cancel. When g(x) equals g(a), then the difference quotient for f ∘ g is zero because f(g(x)) equals f(g(a)), and the above product is zero because it equals f′(g(a)) times zero. So the above product is always equal to the difference quotient, and to show that the derivative of f ∘ g at a exists and to determine its value, we need only show that the limit as x goes to a of the above product exists and determine its value.
To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are Q(g(x)) and [g(x) − g(a)] / (x − a). The latter is the difference quotient for g at a, and because g is differentiable at a by assumption, its limit as x tends to a exists and equals g′(a).
As for Q(g(x)), notice that Q is defined wherever f is. Furthermore, f is differentiable at g(a) by assumption, so Q is continuous at g(a), by the definition of the derivative. The function g is continuous at a because it is differentiable at a, and therefore Q ∘ g is continuous at a. So its limit as x goes to a exists and equals Q(g(a)), which is f′(g(a)).
This shows that the limits of both factors exist and that they equal f′(g(a)) and g′(a), respectively. Therefore, the derivative of f ∘ g at a exists and equals f′(g(a)) g′(a).
Another way of proving the chain rule is to measure the error in the linear approximation determined by the derivative. This proof has the advantage that it generalizes to several variables. It relies on the following equivalent definition of differentiability at a point: A function "g" is differentiable at "a" if there exists a real number "g"′("a") and a function "ε"("h") that tends to zero as "h" tends to zero, and furthermore g(a + h) − g(a) = g′(a) h + ε(h) h.
Here the left-hand side represents the true difference between the value of "g" at "a" and at "a" + "h", whereas the right-hand side represents the approximation determined by the derivative plus an error term.
In the situation of the chain rule, such a function "ε" exists because "g" is assumed to be differentiable at "a". Again by assumption, a similar function also exists for "f" at "g"("a"). Calling this function "η", we have f(g(a) + k) − f(g(a)) = f′(g(a)) k + η(k) k.
The above definition imposes no constraints on "η"(0), even though it is assumed that "η"("k") tends to zero as "k" tends to zero. If we set "η"(0) = 0, then "η" is continuous at 0.
Proving the theorem requires studying the difference f(g(a + h)) − f(g(a)) as "h" tends to zero. The first step is to substitute for g(a + h) using the definition of differentiability of "g" at "a": f(g(a + h)) − f(g(a)) = f(g(a) + g′(a) h + ε(h) h) − f(g(a)).
The next step is to use the definition of differentiability of "f" at "g"("a"). This requires a term of the form f(g(a) + k) for some "k". In the above equation, the correct "k" varies with "h". Set k_h = g′(a) h + ε(h) h and the right hand side becomes f(g(a) + k_h) − f(g(a)). Applying the definition of the derivative gives: f(g(a) + k_h) − f(g(a)) = f′(g(a)) k_h + η(k_h) k_h.
To study the behavior of this expression as "h" tends to zero, expand "k""h". After regrouping the terms, the right-hand side becomes: f′(g(a)) g′(a) h + [f′(g(a)) ε(h) + η(k_h) g′(a) + η(k_h) ε(h)] h.
Because "ε"("h") and "η"("k""h") tend to zero as "h" tends to zero, the first two bracketed terms tend to zero as "h" tends to zero. Applying the same theorem on products of limits as in the first proof, the third bracketed term also tends to zero. Because the above expression is equal to the difference f(g(a + h)) − f(g(a)), by the definition of the derivative f ∘ g is differentiable at "a" and its derivative is f′(g(a)) g′(a).
The role of "Q" in the first proof is played by "η" in this proof. They are related by the equation: Q(y) = f′(g(a)) + η(y − g(a)).
The need to define "Q" at "g"("a") is analogous to the need to define "η" at zero.
Constantin Carathéodory's alternative definition of the differentiability of a function can be used to give an elegant proof of the chain rule.
Under this definition, a function "f" is differentiable at a point "a" if and only if there is a function "q", continuous at "a" and such that f(x) − f(a) = q(x)(x − a). There is at most one such function, and if "f" is differentiable at "a" then f′(a) = q(a).
Given the assumptions of the chain rule and the fact that differentiable functions and compositions of continuous functions are continuous, we have that there exist functions "q", continuous at "g"("a"), and "r", continuous at "a", and such that
f(g(x)) − f(g(a)) = q(g(x))(g(x) − g(a))
and
g(x) − g(a) = r(x)(x − a).
Therefore,
f(g(x)) − f(g(a)) = q(g(x)) r(x)(x − a),
but the function given by h(x) = q(g(x)) r(x) is continuous at "a", and we get, for this "a",
(f(g(a)))′ = q(g(a)) r(a) = f′(g(a)) g′(a).
A similar approach works for continuously differentiable (vector-)functions of many variables. This method of factoring also allows a unified approach to stronger forms of differentiability, when the derivative is required to be Lipschitz continuous, Hölder continuous, etc. Differentiation itself can be viewed as the polynomial remainder theorem (the little Bézout theorem, or factor theorem), generalized to an appropriate class of functions.
If y = f(x) and x = g(t), then choosing an infinitesimal Δt we compute the corresponding Δx = g(t + Δt) − g(t) and then the corresponding Δy = f(x + Δx) − f(x), so that Δy/Δt = (Δy/Δx)(Δx/Δt), and applying the standard part we obtain dy/dt = (dy/dx)(dx/dt),
which is the chain rule.
The generalization of the chain rule to multi-variable functions is rather technical. However, it is simpler to write in the case of functions of the form f(g_1(x), …, g_k(x)).
As this case occurs often in the study of functions of a single variable, it is worth describing it separately.
For writing the chain rule for a function of the form f(g_1(x), …, g_k(x)), one needs the partial derivatives of f with respect to its k arguments. The usual notations for partial derivatives involve names for the arguments of the function. As these arguments are not named in the above formula, it is simpler and clearer to denote by D_i f the derivative of f with respect to its ith argument, and by D_i f(z) the value of this derivative at z. With this notation, the chain rule is
d/dx f(g_1(x), …, g_k(x)) = Σ_{i=1}^{k} (d/dx g_i(x)) · D_i f(g_1(x), …, g_k(x)).
If the function f is addition, that is, if f(u, v) = u + v, then D_1 f = D_2 f = 1 (the constant function 1). Thus, the chain rule gives
d/dx (g(x) + h(x)) = g′(x) + h′(x).
For multiplication f(u, v) = uv, the partials are D_1 f(u, v) = v and D_2 f(u, v) = u. Thus,
d/dx (g(x) h(x)) = g′(x) h(x) + g(x) h′(x).
The case of exponentiation f(u, v) = u^v is slightly more complicated, as D_1 f(u, v) = v u^(v−1) and, as u^v = e^(v ln u), D_2 f(u, v) = u^v ln u. It follows that
d/dx (g(x)^h(x)) = h(x) g(x)^(h(x)−1) g′(x) + g(x)^h(x) ln(g(x)) h′(x).
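These partial-derivative computations can be checked numerically. The sketch below verifies the exponentiation case with illustrative choices g(x) = x² + 1 and h = sin (assumed for the example, with g chosen so that g(x) > 0):

```python
import math

# Sketch of the f(g(x), h(x)) rule applied to exponentiation g(x)**h(x):
# d/dx g**h = h*g**(h-1)*g' + g**h*ln(g)*h'   (requires g(x) > 0).
g, gp = (lambda x: x * x + 1), (lambda x: 2 * x)   # illustrative, positive
h, hp = math.sin, math.cos

def derivative(x):
    gv, hv = g(x), h(x)
    return hv * gv ** (hv - 1) * gp(x) + gv ** hv * math.log(gv) * hp(x)

x0, eps = 0.8, 1e-6
numeric = (g(x0 + eps) ** h(x0 + eps) - g(x0 - eps) ** h(x0 - eps)) / (2 * eps)
assert abs(derivative(x0) - numeric) < 1e-4
```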
The simplest way for writing the chain rule in the general case is to use the total derivative, which is a linear transformation that captures all directional derivatives in a single formula. Consider differentiable functions f : R^m → R^k and g : R^n → R^m, and a point a in R^n. Let D_a g denote the total derivative of g at a and D_{g(a)} f denote the total derivative of f at g(a). These two derivatives are linear transformations R^n → R^m and R^m → R^k, respectively, so they can be composed. The chain rule for total derivatives is that their composite is the total derivative of f ∘ g at a: D_a(f ∘ g) = D_{g(a)} f ∘ D_a g,
or for short, D(f ∘ g) = Df ∘ Dg.
The higher-dimensional chain rule can be proved using a technique similar to the second proof given above.
Because the total derivative is a linear transformation, the functions appearing in the formula can be rewritten as matrices. The matrix corresponding to a total derivative is called a Jacobian matrix, and the composite of two derivatives corresponds to the product of their Jacobian matrices. From this perspective the chain rule therefore says: J_{f ∘ g}(a) = J_f(g(a)) J_g(a)
or for short,
That is, the Jacobian of a composite function is the product of the Jacobians of the composed functions (evaluated at the appropriate points).
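A small numeric sketch of the Jacobian statement, with illustrative maps g(x, y) = (xy, x + y) and f(u, v) = (sin u, uv) (assumptions for the example, not from the text):

```python
import math

# Multivariable chain rule sketch: the Jacobian of f o g equals the product
# of the Jacobians of f and g, evaluated at the appropriate points.
def g(x, y):
    return (x * y, x + y)

def Jg(x, y):                    # analytic Jacobian of g
    return [[y, x], [1.0, 1.0]]

def f(u, v):
    return (math.sin(u), u * v)

def Jf(u, v):                    # analytic Jacobian of f
    return [[math.cos(u), 0.0], [v, u]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def numeric_jacobian(h, x, y, eps=1e-6):
    # Central differences, one input coordinate at a time.
    cols = []
    for dx, dy in ((eps, 0.0), (0.0, eps)):
        hi, lo = h(x + dx, y + dy), h(x - dx, y - dy)
        cols.append([(hi[i] - lo[i]) / (2 * eps) for i in range(2)])
    return [[cols[j][i] for j in range(2)] for i in range(2)]

x0, y0 = 0.4, 1.1
analytic = matmul(Jf(*g(x0, y0)), Jg(x0, y0))
numeric = numeric_jacobian(lambda a, b: f(*g(a, b)), x0, y0)
assert all(abs(analytic[i][j] - numeric[i][j]) < 1e-4
           for i in range(2) for j in range(2))
```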
The higher-dimensional chain rule is a generalization of the one-dimensional chain rule. If "k", "m", and "n" are 1, so that f : R → R and g : R → R, then the Jacobian matrices of "f" and "g" are 1 × 1. Specifically, they are: J_g(a) = (g′(a)) and J_f(g(a)) = (f′(g(a))).
The Jacobian of "f" ∘ "g" is the product of these 1 × 1 matrices, so it is f′(g(a))⋅g′(a), as expected from the one-dimensional chain rule. In the language of linear transformations, "D""a"("g") is the function which scales a vector by a factor of "g"′("a") and "D""g"("a")("f") is the function which scales a vector by a factor of "f"′("g"("a")). The chain rule says that the composite of these two linear transformations is the linear transformation D_a(f ∘ g), and therefore it is the function that scales a vector by "f"′("g"("a"))⋅"g"′("a").
Another way of writing the chain rule is used when "f" and "g" are expressed in terms of their components as y = f(u) = (f_1(u), …, f_k(u)) and u = g(x) = (g_1(x), …, g_m(x)). In this case, the above rule for Jacobian matrices is usually written as:
The chain rule for total derivatives implies a chain rule for partial derivatives. Recall that when the total derivative exists, the partial derivative in the "i"th coordinate direction is found by multiplying the Jacobian matrix by the "i"th basis vector. By doing this to the formula above, we find:
Since the entries of the Jacobian matrix are partial derivatives, we may simplify the above formula to get:
More conceptually, this rule expresses the fact that a change in the "x""i" direction may change all of "g"1 through "gm", and any of these changes may affect "f".
In the special case where k = 1, so that "f" is a real-valued function, this formula simplifies even further: ∂y/∂x_i = Σ_{ℓ=1}^{m} (∂y/∂u_ℓ)(∂u_ℓ/∂x_i).
This can be rewritten as a dot product. Recalling that u = (g_1, …, g_m), the partial derivative ∂u/∂x_i is also a vector, and the chain rule says that: ∂y/∂x_i = ∇y · ∂u/∂x_i.
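A sketch of the dot-product form, with an illustrative scalar field and curve (both assumed for the example, not from the text):

```python
import math

# Gradient form sketch: for a real-valued f and a curve g(t) in the plane,
# (f o g)'(t) = grad f(g(t)) . g'(t).
def f(u, v):
    return u * u + math.sin(v)

def grad_f(u, v):
    return (2 * u, math.cos(v))

def g(t):
    return (math.cos(t), t * t)

def g_prime(t):
    return (-math.sin(t), 2 * t)

t0 = 0.9
u0, v0 = g(t0)
analytic = sum(a * b for a, b in zip(grad_f(u0, v0), g_prime(t0)))
eps = 1e-6
numeric = (f(*g(t0 + eps)) - f(*g(t0 - eps))) / (2 * eps)
assert abs(analytic - numeric) < 1e-5
```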
Given u(x, y) = x² + 2y where x(r, t) = r sin(t) and y(r, t) = sin²(t), determine the value of ∂u/∂r and ∂u/∂t using the chain rule:
∂u/∂r = (∂u/∂x)(∂x/∂r) + (∂u/∂y)(∂y/∂r) = (2x)(sin t) + (2)(0) = 2r sin²(t), and
∂u/∂t = (∂u/∂x)(∂x/∂t) + (∂u/∂y)(∂y/∂t) = (2x)(r cos t) + (2)(2 sin t cos t) = 2r² sin(t) cos(t) + 4 sin(t) cos(t).
Faà di Bruno's formula for higher-order derivatives of single-variable functions generalizes to the multivariable case. If y = f(u) is a function of u = g(x) as above, then the second derivative of f ∘ g is:
All extensions of calculus have a chain rule. In most of these, the formula remains the same, though the meaning of that formula may be vastly different.
One generalization is to manifolds. In this situation, the chain rule represents the fact that the derivative of is the composite of the derivative of "f" and the derivative of "g". This theorem is an immediate consequence of the higher dimensional chain rule given above, and it has exactly the same formula.
The chain rule is also valid for Fréchet derivatives in Banach spaces. The same formula holds as before. This case and the previous one admit a simultaneous generalization to Banach manifolds.
In abstract algebra, the derivative is interpreted as a morphism of modules of Kähler differentials. A ring homomorphism of commutative rings determines a morphism of Kähler differentials which sends an element "dr" to "d"("f"("r")), the exterior differential of "f"("r"). The formula holds in this context as well.
The common feature of these examples is that they are expressions of the idea that the derivative is part of a functor. A functor is an operation on spaces and functions between them. It associates to each space a new space and to each function between two spaces a new function between the corresponding new spaces. In each of the above cases, the functor sends each space to its tangent bundle and it sends each function to its derivative. For example, in the manifold case, the derivative sends a "C""r"-manifold to a "C""r"−1-manifold (its tangent bundle) and a "C""r"-function to its total derivative. There is one requirement for this to be a functor, namely that the derivative of a composite must be the composite of the derivatives. This is exactly the formula .
There are also chain rules in stochastic calculus. One of these, Itō's lemma, expresses the composite of an Itō process (or more generally a semimartingale) "dX""t" with a twice-differentiable function "f". In Itō's lemma, the derivative of the composite function depends not only on "dX""t" and the derivative of "f" but also on the second derivative of "f". The dependence on the second derivative is a consequence of the non-zero quadratic variation of the stochastic process, which broadly speaking means that the process can move up and down in a very rough way. This variant of the chain rule is not an example of a functor because the two functions being composed are of different types.
P versus NP problem
The P versus NP problem is a major unsolved problem in computer science. It asks whether every problem whose solution can be quickly verified can also be solved quickly.
It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.
The informal term "quickly", used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function on the size of the input to the algorithm (as opposed to, say, exponential time). The general class of questions for which some algorithm can provide an answer in polynomial time is called "class P" or just "P". For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it is possible to verify the answer quickly. The class of questions for which an answer can be "verified" in polynomial time is called NP, which stands for "nondeterministic polynomial time".
An answer to the P = NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If it turned out that P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time.
Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields.
Consider Sudoku, a game where the player is given a partially filled-in grid of numbers and attempts to complete the grid following certain rules. Given an incomplete Sudoku grid, of any size, is there at least one legal solution? Any proposed solution is easily verified, and the time to check a solution grows slowly (polynomially) as the grid gets bigger. However, all known algorithms for finding solutions take, for difficult examples, time that grows exponentially as the grid gets bigger. So, Sudoku is in NP (quickly checkable) but does not seem to be in P (quickly solvable). Thousands of other problems seem similar, in that they are fast to check but slow to solve. Researchers have shown that many of the problems in NP have the extra property that a fast solution to any one of them could be used to build a quick solution to any other problem in NP, a property called NP-completeness. Decades of searching have not yielded a fast solution to any of these problems, so most scientists suspect that none of these problems can be solved quickly. This, however, has never been proven.
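The asymmetry described above can be made concrete: verifying a completed 9 × 9 Sudoku grid takes only a quick scan of rows, columns, and boxes. The sketch below builds a known-valid grid from a cyclic-shift pattern (an illustrative construction, not from the text) and checks it.

```python
# NP-style verification sketch: checking a full 9x9 Sudoku grid is fast
# (polynomial in the grid size), even though *finding* a solution appears hard.
def is_valid_sudoku(grid):
    """True if every row, column, and 3x3 box holds 1..9 exactly once."""
    units = []
    units += [grid[r] for r in range(9)]                          # rows
    units += [[grid[r][c] for r in range(9)] for c in range(9)]   # columns
    units += [[grid[br + r][bc + c] for r in range(3) for c in range(3)]
              for br in range(0, 9, 3) for bc in range(0, 9, 3)]  # boxes
    return all(sorted(u) == list(range(1, 10)) for u in units)

# A known-valid grid built from a cyclic-shift pattern.
solution = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
assert is_valid_sudoku(solution)

bad = [row[:] for row in solution]
bad[0][0], bad[0][1] = bad[0][1], bad[0][0]  # row 0 stays valid...
assert not is_valid_sudoku(bad)              # ...but two columns now clash
```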
The travelling salesman problem (also called the travelling salesperson problem or TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research. Exact algorithms are practical only for small numbers of cities (on the order of 10 to 15); as the number of cities grows, algorithms must fall back on approximation and heuristic methods.
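An exhaustive-search sketch makes the scaling problem visible: an exact solver of this kind must consider (n − 1)! tours, which is feasible only for a handful of cities. The distance matrix below is an arbitrary illustrative example.

```python
from itertools import permutations

# Brute-force TSP sketch: exact answer by trying all (n-1)! tours.
# Illustrative symmetric distance matrix for 4 cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def shortest_tour(dist):
    n = len(dist)
    best_len, best = float("inf"), None
    for perm in permutations(range(1, n)):        # fix city 0 as the start
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best = length, tour
    return best_len, best

length, tour = shortest_tour(dist)
assert length == 18    # e.g. the tour 0 -> 1 -> 3 -> 2 -> 0
```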
The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973) and is considered by many to be the most important open problem in computer science.
Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, where he speculated that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical) this would imply what is now called P ≠ NP, since a proposed key can easily be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is "deterministic" (given the computer's present state and any inputs, there is only one possible action that the computer might take) and "sequential" (it performs actions one after the other).
In this theory, the class P consists of all those "decision problems" (defined below) that can be solved on a deterministic sequential machine in an amount of time that is polynomial in the size of the input; the class NP consists of all those decision problems whose positive solutions can be verified in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably the biggest open question in theoretical computer science concerns the relationship between those two classes:
Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing: in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, 99% of the 2019 respondents believed P ≠ NP.
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are a set of problems to each of which any other NP-problem can be reduced in polynomial time and whose solution may still be verified in polynomial time. That is, any NP problem can be transformed into any of the NP-complete problems. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
NP-hard problems are those at least as hard as NP problems, i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP, i.e., they need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so "any" instance of "any" problem in NP can be transformed mechanically into an instance of the Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many such NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems have been shown to be NP-complete, and no fast algorithm for any of them is known.
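Checking a proposed satisfying assignment for a Boolean formula is straightforward, which is what places satisfiability in NP. A minimal sketch of such a check, using an illustrative clause encoding (positive integers for variables, negative integers for their negations; the encoding and names are assumptions for this example):

```python
def satisfies(cnf, assignment):
    """Verify a truth assignment against a CNF formula in linear time.
    Each clause is a list of ints: k means variable k, -k means NOT k."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf)

# (x1 OR NOT x2) AND (x2 OR x3), a toy formula
cnf = [[1, -2], [2, 3]]
print(satisfies(cnf, {1: True, 2: False, 3: True}))    # True
print(satisfies(cnf, {1: False, 2: False, 3: False}))  # False
```

Verification is a single pass over the formula; it is finding such an assignment among the 2^n possibilities that no known algorithm does in polynomial time.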
Based on the definition alone it is not obvious that NP-complete problems exist; however, a trivial and contrived NP-complete problem can be formulated as follows: given a description of a Turing machine "M" guaranteed to halt in polynomial time, does there exist a polynomial-size input that "M" will accept? It is in NP because (given an input) it is simple to check whether "M" accepts the input by simulating "M"; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine "M" that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem".
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have "exponential" running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^("p"("n"))) time, where "p"("n") is a polynomial function of "n". A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an "N" × "N" board and similar problems for other board games.
The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length "n" has a runtime of at least 2^(2^("cn")) for some constant "c". Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.
It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?" Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of the solutions immediately tells whether at least one solution exists: it does exactly when the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example, linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems.
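One concrete illustration: satisfiability of a formula in disjunctive normal form is decidable in linear time (any single non-contradictory term witnesses satisfiability), yet counting its satisfying assignments, the problem #DNF, is #P-complete. The sketch below contrasts the easy decision with a brute-force count; the literal encoding (positive/negative integers) is an illustrative assumption:

```python
from itertools import product

def dnf_satisfiable(dnf):
    """DNF satisfiability is easy: one term without a contradictory
    literal pair (k and -k) can always be satisfied."""
    return any(not any(-lit in term for lit in term) for term in dnf)

def count_dnf_models(dnf, nvars):
    """Counting satisfying assignments (#DNF) is #P-complete;
    this brute force enumerates all 2^nvars assignments."""
    count = 0
    for bits in product([False, True], repeat=nvars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if any(all(assign[abs(lit)] == (lit > 0) for lit in term) for term in dnf):
            count += 1
    return count

dnf = [[1, 2], [-1, 3]]  # (x1 AND x2) OR (NOT x1 AND x3)
print(dnf_satisfiable(dnf))      # True, decided in linear time
print(count_dnf_models(dnf, 3))  # 4, found here only by exponential enumeration
```

The decision procedure never looks at assignments at all, while no polynomial-time counting method is known unless unlikely complexity collapses occur.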
In 1975, Richard E. Ladner showed that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks, has run time 2^("O"(√("n" log "n"))) for graphs with "n" vertices.
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding, given integers "n" and "k", whether "n" has a nontrivial factor less than "k". No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes expected time "O"(exp((64"n"/9)^(1/3) (log "n")^(2/3))) to factor an "n"-bit integer. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.
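The decision version of factoring can be sketched with trial division; this runs in time exponential in the bit-length of the input, so it illustrates the problem statement rather than an efficient algorithm (the function name is hypothetical):

```python
def has_factor_below(n, k):
    """Decision version of factoring: does n have a nontrivial factor
    less than k? Trial division is exponential in the bit-length of n,
    so this is only practical for small inputs."""
    return any(n % d == 0 for d in range(2, min(k, n)))

print(has_factor_below(91, 10))  # True: 7 divides 91
print(has_factor_below(91, 7))   # False: 91 = 7 * 13, smallest factor is 7
print(has_factor_below(97, 97))  # False: 97 is prime
```

Answering all such queries for varying "k" recovers the smallest prime factor by binary search, which is why the decision problem captures the difficulty of factoring itself.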
All of the above discussion has assumed that P means "easy" and "not in P" means "hard", an assumption known as "Cobham's thesis". It is a common and reasonably accurate assumption in complexity theory; however, it has some caveats.
First, it is not always true in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. For example, the problem of deciding whether a graph "G" contains "H" as a minor, where "H" is fixed, can be solved in a running time of "O"("n"²), where "n" is the number of vertices in "G". However, the big O notation hides a constant that depends superexponentially on "H": the constant is greater than 2↑↑(2↑↑(2↑↑("h"/2))) (using Knuth's up-arrow notation), where "h" is the number of vertices in "H".
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to tackling the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity it runs on par with the best known polynomial-time algorithms.
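The knapsack problem mentioned above is a good illustration: its standard dynamic-programming solution runs in O("n" · "W") time for capacity "W", which is pseudo-polynomial (exponential in the bit-length of "W") and therefore consistent with NP-completeness, yet fast on many practical instances. A minimal sketch:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming in O(n * capacity) time.
    Pseudo-polynomial: the running time is exponential in the number of
    bits of `capacity`, so NP-completeness is not contradicted."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

With the numbers in this toy instance the table has only 51 entries, which is exactly the kind of structure real-world inputs often exhibit even though worst-case instances remain hard.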
Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.
According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.
It is also intuitively argued that the existence of problems that are hard to solve but for which the solutions are easy to verify matches real-world experience.
On the other hand, some researchers believe that there is overconfidence in believing P ≠ NP and that researchers should explore proofs of P = NP as well. For example, in 2002 these statements were made:
One of the reasons the problem attracts so much attention is the consequences of the answer. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.
A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. It is also possible that a proof would not lead directly to efficient methods, perhaps if the proof is non-constructive, or the size of the bounding polynomial is too big to be efficient in practice. The consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.
Cryptography, for example, relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including:
These would need to be modified or replaced by information-theoretically secure solutions not inherently based on P-NP inequivalence.
On the other hand, there are enormous positive consequences that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as some types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; if these problems were efficiently solvable it could spur considerable advances in life sciences and biotechnology.
But such changes may pale in significance compared to the revolution an efficient method for solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics:
Similarly, Stephen Cook says
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method that is guaranteed to find proofs to theorems, should one exist of a "reasonable" size, would essentially end this struggle.
Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof:
A proof that showed that P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would nevertheless represent a very significant advance in computational complexity theory and provide guidance for future research. It would allow one to show in a formal way that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.
Also P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.
Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are not powerful enough to answer the question, thus suggesting that novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, each of which is known to be insufficient to prove that P ≠ NP:
- "Relativizing proofs": Baker, Gill, and Solovay showed that the P versus NP question relativizes in both directions, so no proof that treats machines as black boxes with oracle access can resolve it.
- "Natural proofs": Razborov and Rudich showed that, if one-way functions exist, no "natural" proof of circuit lower bounds can separate P from NP.
- "Algebrizing proofs": Aaronson and Wigderson showed that techniques that remain valid under algebraic extensions of oracles also cannot resolve the question.
These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.
These barriers have also led some computer scientists to suggest that the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). The interpretation of an independence result could be that either no polynomial-time algorithm exists for any NP-complete problem, and such a proof cannot be constructed in (e.g.) ZFC, or that polynomial-time algorithms for NP-complete problems may exist, but it is impossible to prove in ZFC that such algorithms are correct. However, if it can be shown, using techniques of the sort that are currently known to be applicable, that the problem cannot be decided even with much weaker assumptions extending the Peano axioms (PA) for integer arithmetic, then there would necessarily exist nearly-polynomial-time algorithms for every problem in NP. Therefore, if one believes (as most complexity theorists do) that not all problems in NP have efficient algorithms, it would follow that proofs of independence using those techniques cannot be possible. Additionally, this result implies that proving independence from PA or ZFC using currently known techniques is no easier than proving the existence of efficient algorithms for all problems in NP.
While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger maintains a list that, as of 2018, contains 62 purported proofs of P = NP, 50 proofs of P ≠ NP, 2 proofs the problem is unprovable, and one proof that it is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have since been refuted.
The P = NP problem can be restated in terms of the expressibility of certain classes of logical statements, as a result of work in descriptive complexity.
Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P can be expressed in first-order logic with the addition of a suitable least fixed-point combinator. Effectively, this, in combination with the order, allows the definition of recursive functions. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).
No algorithm for any NP-complete problem is known to run in polynomial time. However, there are algorithms known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). These algorithms do not qualify as polynomial time because their running time on rejecting instances is not polynomial. The following algorithm, attributed to Levin, is such an example. It correctly accepts the NP-complete language SUBSET-SUM, and it runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
If, and only if, P = NP, then this is a polynomial-time algorithm accepting an NP-complete language. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a "semi-algorithm").
This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is "b" bits long, the above algorithm will try at least 2^"b" − 1 other programs first.
Conceptually speaking, a "decision problem" is a problem that takes as input some string "w" over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that can produce the correct answer for any input string of length "n" in at most "cn"^"k" steps, where "k" and "c" are constants independent of the input string, then we say that the problem can be solved in "polynomial time" and we place it in the class P. Formally, P is defined as the set of all languages that can be decided by a deterministic polynomial-time Turing machine. That is,

P = { "L" : "L" = "L"("M") for some deterministic polynomial-time Turing machine "M" }

where

"L"("M") = { "w" ∈ Σ* : "M" accepts "w" }

and a deterministic polynomial-time Turing machine is a deterministic Turing machine "M" that satisfies the following two conditions:
- "M" halts on all inputs "w"; and
- there exists "k" such that the maximum number of steps "M" takes on any input of length "n" is in "O"("n"^"k").
NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach to define NP is to use the concept of "certificate" and "verifier". Formally, NP is defined as the set of languages over a finite alphabet that have a verifier that runs in polynomial time, where the notion of "verifier" is defined as follows.
Let "L" be a language over a finite alphabet, Σ.
"L" ∈ NP if, and only if, there exists a binary relation formula_12 and a positive integer "k" such that the following two conditions are satisfied:
A Turing machine that decides "LR" is called a "verifier" for "L" and a "y" such that ("x", "y") ∈ "R" is called a "certificate of membership" of "x" in "L".
In general, a verifier does not have to be polynomial-time. However, for "L" to be in NP, there must be a verifier that runs in polynomial time.
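As a concrete example, SUBSET-SUM has a linear-time verifier: the certificate is the claimed subset itself. A sketch, where encoding the certificate as a list of indices is an illustrative choice:

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier: the certificate is a claimed subset,
    given as a list of indices into `numbers`. SUBSET-SUM is in NP
    because this check runs in linear time."""
    if len(set(certificate)) != len(certificate):          # no repeated indices
        return False
    if not all(0 <= i < len(numbers) for i in certificate):  # indices in range
        return False
    return sum(numbers[i] for i in certificate) == target

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))  # True: 4 + 5 = 9
print(verify_subset_sum(nums, 9, [0, 1]))  # False: 3 + 34 != 9
```

The certificate is short (polynomial in the input size) and the check is fast, which is all the definition of NP requires; nothing about the verifier helps in finding a valid certificate.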
Let COMPOSITE = { "x" ∈ N : "x" = "pq" for integers "p", "q" > 1 }.
Clearly, the question of whether a given "x" is a composite is equivalent to the question of whether "x" is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).
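A sketch of such a verification: the certificate for membership in COMPOSITE is a claimed nontrivial factor, checked with a single division (polynomial in the bit-length of the input; the function name is illustrative):

```python
def verify_composite(x, certificate):
    """COMPOSITE is in NP: a nontrivial factor is a certificate that can
    be checked with one division, polynomial in the bit-length of x."""
    return 1 < certificate < x and x % certificate == 0

print(verify_composite(221, 13))  # True: 221 = 13 * 17
print(verify_composite(221, 2))   # False: 2 does not divide 221
```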
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.
There are many equivalent ways of describing NP-completeness.
Let "L" be a language over a finite alphabet Σ.
"L" is NP-complete if, and only if, the following two conditions are satisfied:
Alternatively, if "L" ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to "L", then "L" is NP-complete. This is a common way of proving some new problem is NP-complete.
The film "Travelling Salesman", by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.
In the sixth episode of "The Simpsons" seventh season "Treehouse of Horror VI", the equation P=NP is seen shortly after Homer accidentally stumbles into the "third dimension".
In the second episode of season 2 of "Elementary", "Solve for X" revolves around Sherlock and Watson investigating the murders of mathematicians who were attempting to solve P versus NP.
Charles Sanders Peirce
Charles Sanders Peirce (September 10, 1839 – April 19, 1914) was an American philosopher, logician, mathematician, and scientist who is sometimes known as "the father of pragmatism". He was educated as a chemist and employed as a scientist for thirty years. Today he is appreciated largely for his contributions to logic, mathematics, philosophy, scientific methodology, and semiotics, and for his founding of pragmatism.
An innovator in mathematics, statistics, philosophy, research methodology, and various sciences, Peirce considered himself, first and foremost, a logician. He made major contributions to logic, but logic for him encompassed much of that which is now called epistemology and philosophy of science. He saw logic as the formal branch of semiotics, of which he is a founder, which foreshadowed the debate among logical positivists and proponents of philosophy of language that dominated 20th-century Western philosophy. Additionally, he defined the concept of abductive reasoning, as well as rigorously formulated mathematical induction and deductive reasoning. As early as 1886 he saw that logical operations could be carried out by electrical switching circuits. The same idea was used decades later to produce digital computers.
In 1934, the philosopher Paul Weiss called Peirce "the most original and versatile of American philosophers and America's greatest logician". "Webster's Biographical Dictionary" said in 1943 that Peirce was "now regarded as the most original thinker and greatest logician of his time".
Peirce was born at 3 Phillips Place in Cambridge, Massachusetts. He was the son of Sarah Hunt Mills and Benjamin Peirce, himself a professor of astronomy and mathematics at Harvard University and perhaps the first serious research mathematician in America. At age 12, Charles read his older brother's copy of Richard Whately's "Elements of Logic", then the leading English-language text on the subject. So began his lifelong fascination with logic and reasoning. He went on to earn a Bachelor of Arts degree and a Master of Arts degree (1862) from Harvard. In 1863 the Lawrence Scientific School awarded him a Bachelor of Science degree, Harvard's first "summa cum laude" chemistry degree. His academic record was otherwise undistinguished. At Harvard, he began lifelong friendships with Francis Ellingwood Abbot, Chauncey Wright, and William James. One of his Harvard instructors, Charles William Eliot, formed an unfavorable opinion of Peirce. This proved fateful, because Eliot, while President of Harvard (1869–1909—a period encompassing nearly all of Peirce's working life), repeatedly vetoed Peirce's employment at the university.
Peirce suffered from his late-teens onward from a nervous condition then known as "facial neuralgia", which would today be diagnosed as trigeminal neuralgia. His biographer, Joseph Brent, says that when in the throes of its pain "he was, at first, almost stupefied, and then aloof, cold, depressed, extremely suspicious, impatient of the slightest crossing, and subject to violent outbursts of temper". Its consequences may have led to the social isolation of his later life.
Between 1859 and 1891, Peirce was intermittently employed in various scientific capacities by the United States Coast Survey and its successor, the United States Coast and Geodetic Survey, where he enjoyed his highly influential father's protection until the latter's death in 1880. That employment exempted Peirce from having to take part in the American Civil War; it would have been very awkward for him to do so, as the Boston Brahmin Peirces sympathized with the Confederacy. At the Survey, he worked mainly in geodesy and gravimetry, refining the use of pendulums to determine small local variations in the Earth's gravity. He was elected a resident fellow of the American Academy of Arts and Sciences in January 1867. The Survey sent him to Europe five times, first in 1871 as part of a group sent to observe a solar eclipse. There, he sought out Augustus De Morgan, William Stanley Jevons, and William Kingdon Clifford, British mathematicians and logicians whose turn of mind resembled his own. From 1869 to 1872, he was employed as an assistant in Harvard's astronomical observatory, doing important work on determining the brightness of stars and the shape of the Milky Way. On April 20, 1877 he was elected a member of the National Academy of Sciences. Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain frequency, the kind of definition employed from 1960 to 1983.
During the 1880s, Peirce's indifference to bureaucratic detail waxed while his Survey work's quality and timeliness waned. Peirce took years to write reports that he should have completed in months. Meanwhile, he wrote entries, ultimately thousands, during 1883–1909 on philosophy, logic, science, and other subjects for the encyclopedic "Century Dictionary". In 1885, an investigation by the Allison Commission exonerated Peirce, but led to the dismissal of Superintendent Julius Hilgard and several other Coast Survey employees for misuse of public funds. In 1891, Peirce resigned from the Coast Survey at Superintendent Thomas Corwin Mendenhall's request.
In 1879, Peirce was appointed lecturer in logic at Johns Hopkins University, which had strong departments in areas that interested him, such as philosophy (Royce and Dewey completed their Ph.D.s at Hopkins), psychology (taught by G. Stanley Hall and studied by Joseph Jastrow, who coauthored a landmark empirical study with Peirce), and mathematics (taught by J. J. Sylvester, who came to admire Peirce's work on mathematics and logic). His "Studies in Logic by Members of the Johns Hopkins University" (1883) contained works by himself and Allan Marquand, Christine Ladd, Benjamin Ives Gilman, and Oscar Howard Mitchell, several of whom were his graduate students. Peirce's nontenured position at Hopkins was the only academic appointment he ever held.
Brent documents something Peirce never suspected, namely that his efforts to obtain academic employment, grants, and scientific respectability were repeatedly frustrated by the covert opposition of a major Canadian-American scientist of the day, Simon Newcomb. Peirce's efforts may also have been hampered by what Brent characterizes as "his difficult personality". In contrast, Keith Devlin believes that Peirce's work was too far ahead of his time to be appreciated by the academic establishment of the day and that this played a large role in his inability to obtain a tenured position.
Peirce's personal life undoubtedly worked against his professional success. After his first wife, Harriet Melusina Fay ("Zina"), left him in 1875, Peirce, while still legally married, became involved with Juliette, whose last name (given variously as Froissy and Pourtalai) and nationality (she spoke French) remain uncertain. When his divorce from Zina became final in 1883, he married Juliette. That year, Newcomb pointed out to a Johns Hopkins trustee that Peirce, while a Hopkins employee, had lived and traveled with a woman to whom he was not married; the ensuing scandal led to his dismissal in January 1884. Over the years Peirce sought academic employment at various universities without success. He had no children by either marriage.
In 1887 Peirce spent part of his inheritance from his parents to buy of rural land near Milford, Pennsylvania, which never yielded an economic return. There he had an 1854 farmhouse remodeled to his design. The Peirces named the property "Arisbe". There they lived with few interruptions for the rest of their lives, Charles writing prolifically, much of it unpublished to this day (see Works). Living beyond their means soon led to grave financial and legal difficulties. He spent much of his last two decades unable to afford heat in winter and subsisting on old bread donated by the local baker. Unable to afford new stationery, he wrote on the verso side of old manuscripts. An outstanding warrant for assault and unpaid debts led to his being a fugitive in New York City for a while. Several people, including his brother James Mills Peirce and his neighbors, relatives of Gifford Pinchot, settled his debts and paid his property taxes and mortgage.
Peirce did some scientific and engineering consulting and wrote much for meager pay, mainly encyclopedic dictionary entries, and reviews for "The Nation" (with whose editor, Wendell Phillips Garrison, he became friendly). He did translations for the Smithsonian Institution, at its director Samuel Langley's instigation. Peirce also did substantial mathematical calculations for Langley's research on powered flight. Hoping to make money, Peirce tried inventing. He began but did not complete several books. In 1888, President Grover Cleveland appointed him to the Assay Commission.
From 1890 on, he had a friend and admirer in Judge Francis C. Russell of Chicago, who introduced Peirce to editor Paul Carus and owner Edward C. Hegeler of the pioneering American philosophy journal "The Monist", which eventually published at least 14 articles by Peirce. He wrote many texts in James Mark Baldwin's "Dictionary of Philosophy and Psychology" (1901–1905); half of those credited to him appear to have actually been written by Christine Ladd-Franklin under his supervision. He applied in 1902 to the newly formed Carnegie Institution for a grant to write a systematic book describing his life's work. The application was doomed; his nemesis, Newcomb, served on the Carnegie Institution executive committee, and its president had been president of Johns Hopkins at the time of Peirce's dismissal.
The one who did the most to help Peirce in these desperate times was his old friend William James, who dedicated his "Will to Believe" (1897) to Peirce and arranged for Peirce to be paid to give two series of lectures at or near Harvard (1898 and 1903). Most important, each year from 1907 until James's death in 1910, James wrote to his friends in the Boston intelligentsia to request financial aid for Peirce; the fund continued even after James died. Peirce reciprocated by designating James's eldest son as his heir should Juliette predecease him. It has been believed that this was also why Peirce used "Santiago" ("St. James" in English) as a middle name, but he appeared in print as early as 1890 as Charles Santiago Peirce. (See Charles Santiago Sanders Peirce for discussion and references).
Peirce died destitute in Milford, Pennsylvania, twenty years before his widow. Juliette Peirce kept the urn with Peirce's ashes at Arisbe. In 1934, Pennsylvania Governor Gifford Pinchot arranged for Juliette's burial in Milford Cemetery. The urn with Peirce's ashes was interred with Juliette.
Peirce grew up in a home where white supremacy was taken for granted, and Southern slavery was considered natural.
His father described himself as a secessionist until the outbreak of the Civil War, after which he became a Union partisan, providing donations to the Sanitary Commission, the leading Northern war charity. No members of the Peirce family volunteered or enlisted. Peirce shared his father's views and liked to use the following syllogism to illustrate the unreliability of traditional forms of logic:
All Men are equal in their political rights.
Negroes are Men.
Therefore, negroes are equal in political rights to whites.
Bertrand Russell (1959) wrote "Beyond doubt [...] he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever". Russell and Whitehead's "Principia Mathematica", published from 1910 to 1913, does not mention Peirce (Peirce's work was not widely known until later). A. N. Whitehead, while reading some of Peirce's unpublished manuscripts soon after arriving at Harvard in 1924, was struck by how Peirce had anticipated his own "process" thinking. (On Peirce and process metaphysics, see Lowe 1964). Karl Popper viewed Peirce as "one of the greatest philosophers of all times". Yet Peirce's achievements were not immediately recognized. His imposing contemporaries William James and Josiah Royce admired him, and Cassius Jackson Keyser at Columbia and C. K. Ogden wrote about Peirce with respect, but to no immediate effect.
The first scholar to give Peirce his considered professional attention was Royce's student Morris Raphael Cohen, the editor of an anthology of Peirce's writings entitled "Chance, Love, and Logic" (1923) and the author of the first bibliography of Peirce's scattered writings. John Dewey studied under Peirce at Johns Hopkins. From 1916 onward, Dewey's writings repeatedly mention Peirce with deference. His 1938 "Logic: The Theory of Inquiry" is much influenced by Peirce. The publication of the first six volumes of "Collected Papers" (1931–1935), the most important event to date in Peirce studies and one that Cohen made possible by raising the needed funds, did not prompt an outpouring of secondary studies. The editors of those volumes, Charles Hartshorne and Paul Weiss, did not become Peirce specialists. Early landmarks of the secondary literature include the monographs by Buchler (1939), Feibleman (1946), and Goudge (1950), the 1941 PhD thesis by Arthur W. Burks (who went on to edit volumes 7 and 8), and the studies edited by Wiener and Young (1952). The Charles S. Peirce Society was founded in 1946. Its "Transactions", an academic quarterly specializing in Peirce's pragmatism and American philosophy, has appeared since 1965. (See Phillips 2014, 62 for discussion of Peirce and Dewey relative to transactionalism.)
In 1949, while doing unrelated archival work, the historian of mathematics Carolyn Eisele (1902–2000) chanced on an autograph letter by Peirce. So began her forty years of research on Peirce, “the mathematician and scientist,” culminating in Eisele (1976, 1979, 1985). Beginning around 1960, the philosopher and historian of ideas Max Fisch (1900–1995) emerged as an authority on Peirce (Fisch, 1986). He includes many of his relevant articles in a survey (Fisch 1986: 422–48) of the impact of Peirce's thought through 1983.
Peirce has gained an international following, marked by university research centers devoted to Peirce studies and pragmatism in Brazil (CeneP/CIEP), Finland (HPRC and ), Germany (Wirth's group, Hoffman's and Otte's group, and Deuser's and Härle's group), France (L'I.R.S.C.E.), Spain (GEP), and Italy (CSP). His writings have been translated into several languages, including German, French, Finnish, Spanish, and Swedish. Since 1950, there have been French, Italian, Spanish, British, and Brazilian Peirce scholars of note. For many years, the North American philosophy department most devoted to Peirce was the University of Toronto, thanks in part to the leadership of Thomas Goudge and David Savan. In recent years, U.S. Peirce scholars have clustered at Indiana University – Purdue University Indianapolis, home of the Peirce Edition Project (PEP), and at Pennsylvania State University.
In recent years, Peirce's trichotomy of signs has been exploited by a growing number of practitioners for marketing and design tasks.
Peirce's reputation rests largely on academic papers published in American scientific and scholarly journals such as "Proceedings of the American Academy of Arts and Sciences", the "Journal of Speculative Philosophy", "The Monist", "Popular Science Monthly", the "American Journal of Mathematics", "Memoirs of the National Academy of Sciences", "The Nation", and others. See Articles by Peirce, published in his lifetime for an extensive list with links to them online. The only full-length book (neither extract nor pamphlet) that Peirce authored and saw published in his lifetime was "Photometric Researches" (1878), a 181-page monograph on the applications of spectrographic methods to astronomy. While at Johns Hopkins, he edited "Studies in Logic" (1883), containing chapters by himself and his graduate students. Besides lectures during his years (1879–1884) as lecturer in Logic at Johns Hopkins, he gave at least nine series of lectures, many now published; see Lectures by Peirce.
After Peirce's death, Harvard University obtained from Peirce's widow the papers found in his study, but did not microfilm them until 1964. Only after Richard Robin (1967) catalogued this "Nachlass" did it become clear that Peirce had left approximately 1,650 unpublished manuscripts, totaling over 100,000 pages, mostly still unpublished except on microfilm. On the vicissitudes of Peirce's papers, see Houser (1989). Reportedly the papers remain in unsatisfactory condition.
The first published anthology of Peirce's articles was the one-volume "Chance, Love and Logic: Philosophical Essays", edited by Morris Raphael Cohen, 1923, still in print. Other one-volume anthologies were published in 1940, 1957, 1958, 1972, 1994, and 2009, most still in print. The main posthumous editions of Peirce's works in their long trek to light, often multi-volume, and some still in print, have included:
1931–1958: "Collected Papers of Charles Sanders Peirce" (CP), 8 volumes, includes many published works, along with a selection of previously unpublished work and a smattering of his correspondence. This long-time standard edition drawn from Peirce's work from the 1860s to 1913 remains the most comprehensive survey of his prolific output from 1893 to 1913. It is organized thematically, but texts (including lecture series) are often split up across volumes, while texts from various stages in Peirce's development are often combined, requiring frequent visits to editors' notes. Edited (1–6) by Charles Hartshorne and Paul Weiss and (7–8) by Arthur Burks, in print and online.
1975–1987: "Charles Sanders Peirce: Contributions to" The Nation, 4 volumes, includes Peirce's more than 300 reviews and articles published 1869–1908 in "The Nation". Edited by Kenneth Laine Ketner and James Edward Cook, online.
1976: "The New Elements of Mathematics by Charles S. Peirce", 4 volumes in 5, included many previously unpublished Peirce manuscripts on mathematical subjects, along with Peirce's important published mathematical articles. Edited by Carolyn Eisele, back in print.
1977: "Semiotic and Significs: The Correspondence between C. S. Peirce and Victoria Lady Welby" (2nd edition 2001), included Peirce's entire correspondence (1903–1912) with Victoria, Lady Welby. Peirce's other published correspondence is largely limited to the 14 letters included in volume 8 of the "Collected Papers", and the 20-odd pre-1890 items included so far in the "Writings". Edited by Charles S. Hardwick with James Cook, out of print.
1982–now: "Writings of Charles S. Peirce, A Chronological Edition" (W), Volumes 1–6 & 8, of a projected 30. The limited coverage, and defective editing and organization, of the "Collected Papers" led Max Fisch and others in the 1970s to found the Peirce Edition Project (PEP), whose mission is to prepare a more complete critical chronological edition. Only seven volumes have appeared to date, but they cover the period from 1859 to 1892, when Peirce carried out much of his best-known work. "Writings of Charles S. Peirce", 8 was published in November 2010; and work continues on "Writings of Charles S. Peirce", 7, 9, and 11. In print and online.
1985: "Historical Perspectives on Peirce's Logic of Science: A History of Science", 2 volumes. Auspitz has said, "The extent of Peirce's immersion in the science of his day is evident in his reviews in the "Nation" [...] and in his papers, grant applications, and publishers' prospectuses in the history and practice of science", referring latterly to "Historical Perspectives". Edited by Carolyn Eisele, back in print.
1992: "Reasoning and the Logic of Things" collects in one place Peirce's 1898 series of lectures invited by William James. Edited by Kenneth Laine Ketner, with commentary by Hilary Putnam, in print.
1992–1998: "The Essential Peirce" (EP), 2 volumes, is an important recent sampler of Peirce's philosophical writings. Edited (1) by Nathan Hauser and Christian Kloesel and (2) by "Peirce Edition Project" editors, in print.
1997: "Pragmatism as a Principle and Method of Right Thinking" collects Peirce's 1903 Harvard "Lectures on Pragmatism" in a study edition, including drafts, of Peirce's lecture manuscripts, which had been previously published in abridged form; the lectures now also appear in "The Essential Peirce", 2. Edited by Patricia Ann Turisi, in print.
2010: "Philosophy of Mathematics: Selected Writings" collects important writings by Peirce on the subject, many not previously in print. Edited by Matthew E. Moore, in print.
Peirce's most important work in pure mathematics was in logical and foundational areas. He also worked on linear algebra, matrices, various geometries, topology and Listing numbers, Bell numbers, graphs, the four-color problem, and the nature of continuity.
He worked on applied mathematics in economics, engineering, and map projections (such as the Peirce quincuncial projection), and was especially active in probability and statistics.
Peirce made a number of striking discoveries in formal logic and foundational mathematics, nearly all of which came to be appreciated only long after he died:
In 1860 he suggested a cardinal arithmetic for infinite numbers, years before any work by Georg Cantor (who completed his dissertation in 1867) and without access to Bernard Bolzano's 1851 (posthumous) "Paradoxien des Unendlichen".
The Peirce arrow ↓, symbol for "(neither) ... nor ...", also called the "Quine dagger".
In 1880–1881 he showed how Boolean algebra could be carried out by repeated application of a single sufficient binary operation (logical NOR), anticipating Henry M. Sheffer by 33 years. (See also De Morgan's Laws.)
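The functional completeness of NOR that Peirce demonstrated can be illustrated in a few lines. This is a sketch in modern Python, not Peirce's own notation: NOT, OR, and AND are defined purely by repeated NOR and checked against the built-in connectives.

```python
def nor(a, b):
    """The Peirce arrow: true only when neither input is true."""
    return not (a or b)

def not_(a):
    # NOT a  ==  a NOR a
    return nor(a, a)

def or_(a, b):
    # a OR b  ==  NOT (a NOR b)
    return nor(nor(a, b), nor(a, b))

def and_(a, b):
    # a AND b  ==  (NOT a) NOR (NOT b), by De Morgan's laws
    return nor(nor(a, a), nor(b, b))

# Exhaustive check over all truth-value combinations
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
```

The same construction works with Sheffer's NAND; NOR and NAND are the only two binary operations that are each sufficient on their own.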
In 1881 he set out the axiomatization of natural number arithmetic, a few years before Richard Dedekind and Giuseppe Peano. In the same paper Peirce gave, years before Dedekind, the first purely cardinal definition of a finite set in the sense now known as "Dedekind-finite", and implied by the same stroke an important formal definition of an infinite set (Dedekind-infinite), as a set that can be put into a one-to-one correspondence with one of its proper subsets.
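In modern notation (a paraphrase, not Peirce's 1881 symbolism), the definition he anticipated can be stated as follows:

```latex
% A set S is Dedekind-infinite iff it maps one-to-one onto a proper part of itself;
% Dedekind-finite means no such map exists.
S \text{ is Dedekind-infinite}
  \iff \exists\, f \colon S \to S \ \text{injective with}\ f(S) \subsetneq S .
```

The natural numbers are Dedekind-infinite, since $n \mapsto n + 1$ is an injection of $\mathbb{N}$ onto its proper subset $\mathbb{N} \setminus \{0\}$.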
In 1885 he distinguished between first-order and second-order quantification. In the same paper he set out what can be read as the first (primitive) axiomatic set theory, anticipating Zermelo by about two decades (Brady 2000, pp. 132–33).
In 1886, he saw that Boolean calculations could be carried out via electrical switches, anticipating Claude Shannon by more than 50 years.
By the later 1890s he was devising existential graphs, a diagrammatic notation for the predicate calculus. Based on them are John F. Sowa's conceptual graphs and Sun-Joo Shin's diagrammatic reasoning.
Peirce wrote drafts for an introductory textbook, with the working title "The New Elements of Mathematics", that presented mathematics from an original standpoint. Those drafts and many other of his previously unpublished mathematical manuscripts finally appeared in "The New Elements of Mathematics by Charles S. Peirce" (1976), edited by mathematician Carolyn Eisele.
Peirce agreed with Auguste Comte in regarding mathematics as more basic than philosophy and the special sciences (of nature and mind). Peirce classified mathematics into three subareas: (1) mathematics of logic, (2) discrete series, and (3) pseudo-continua (as he called them, including the real numbers) and continua. Influenced by his father Benjamin, Peirce argued that mathematics studies purely hypothetical objects and is not just the science of quantity but is more broadly the science which draws necessary conclusions; that mathematics aids logic, not vice versa; and that logic itself is part of philosophy and is the science "about" drawing conclusions necessary and otherwise.
Mathematical logic and foundations, some noted articles
Beginning with his first paper on the "Logic of Relatives" (1870), Peirce extended the theory of relations that Augustus De Morgan had just recently awakened from its Cinderella slumbers. Much of the mathematics of relations now taken for granted was "borrowed" from Peirce, not always with all due credit; on that and on how the young Bertrand Russell, especially his "Principles of Mathematics" and "Principia Mathematica", did not do Peirce justice, see Anellis (1995). In 1918 the logician C. I. Lewis wrote, "The contributions of C.S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century." Beginning in 1940, Alfred Tarski and his students rediscovered aspects of Peirce's larger vision of relational logic, developing the perspective of relation algebra.
Relational logic gained applications. In mathematics, it influenced the abstract analysis of E. H. Moore and the lattice theory of Garrett Birkhoff. In computer science, the relational model for databases was developed with Peircean ideas in work of Edgar F. Codd, who was a doctoral student of Arthur W. Burks, a Peirce scholar. In economics, relational logic was used by Frank P. Ramsey, John von Neumann, and Paul Samuelson to study preferences and utility and by Kenneth J. Arrow in "Social Choice and Individual Values", following Arrow's association with Tarski at City College of New York.
On Peirce and his contemporaries Ernst Schröder and Gottlob Frege, Hilary Putnam (1982) documented that Frege's work on the logic of quantifiers had little influence on his contemporaries, although it was published four years before the work of Peirce and his student Oscar Howard Mitchell. Putnam found that mathematicians and logicians learned about the logic of quantifiers through the independent work of Peirce and Mitchell, particularly through Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation" (1885), published in the premier American mathematical journal of the day, and cited by Peano and Schröder, among others, who ignored Frege. They also adopted and modified Peirce's notations, typographical variants of those now used. Peirce apparently was ignorant of Frege's work, despite their overlapping achievements in logic, philosophy of language, and the foundations of mathematics.
Peirce's work on formal logic had admirers besides Ernst Schröder.
A philosophy of logic, grounded in his categories and semiotic, can be extracted from Peirce's writings and, along with Peirce's logical work more generally, is exposited and defended in Hilary Putnam (1982); the Introduction in Nathan Houser "et al." (1997); and Randall Dipert's chapter in Cheryl Misak (2004).
Continuity and synechism are central in Peirce's philosophy: "I did not at first suppose that it was, as I gradually came to find it, the master-Key of philosophy".
From a mathematical point of view, he embraced infinitesimals and worked long on the mathematics of continua. He long held that the real numbers constitute a pseudo-continuum; that a true continuum is the real subject matter of "analysis situs" (topology); and that a true continuum of instants exceeds—and within any lapse of time has room for—any Aleph number (any infinite "multitude" as he called it) of instants.
In 1908 Peirce wrote that he found that a true continuum might have or lack such room. Jérôme Havenel (2008): "It is on 26 May 1908, that Peirce finally gave up his idea that in every continuum there is room for whatever collection of any multitude. From now on, there are different kinds of continua, which have different properties."
Manhattan
Manhattan, often referred to by residents of the New York City area as the City, is the most densely populated of the five boroughs of New York City, and is coextensive with the County of New York, one of the original counties of the U.S. state of New York. Manhattan serves as the city's economic and administrative center, cultural identifier, and historical birthplace. The borough consists mostly of Manhattan Island, bounded by the Hudson, East, and Harlem rivers, as well as several small adjacent islands. Manhattan additionally contains Marble Hill, a small neighborhood now on the U.S. mainland, connected to the Bronx by landfill and separated from the rest of Manhattan by the Harlem River. Manhattan Island is divided into three informally bounded components, each aligned with the borough's long axis: Lower, Midtown, and Upper Manhattan.
Manhattan has been described as the cultural, financial, media, and entertainment capital of the world, and the borough hosts the United Nations Headquarters. Anchored by Wall Street in the Financial District of Lower Manhattan, New York City has been called both the most economically powerful city and the leading financial center of the world, and Manhattan is home to the world's two largest stock exchanges by total market capitalization: the New York Stock Exchange and NASDAQ. Many multinational media conglomerates are based in Manhattan, and the borough has been the setting for numerous books, films, and television shows. Manhattan real estate is among the most expensive in the world, with the value of Manhattan Island, including real estate, estimated to exceed US$3 trillion in 2013; median residential property sale prices in Manhattan approximated US as of 2018, with Fifth Avenue in Midtown Manhattan commanding the highest retail rents in the world, at US per year in 2017.
Manhattan traces its origins to a trading post founded by colonists from the Dutch Republic in 1624 on Lower Manhattan; the post was named New Amsterdam in 1626. Manhattan is historically documented to have been purchased by Dutch colonists from Native Americans in 1626 for 60 guilders, which equals roughly $ in current terms. The territory and its surroundings came under English control in 1664 and were renamed New York after King Charles II of England granted the lands to his brother, the Duke of York. New York, based in present-day Manhattan, served as the capital of the United States from 1785 until 1790. The Statue of Liberty greeted millions of immigrants as they came to America by ship in the late 19th century and is a world symbol of the United States and its ideals of liberty and peace. Manhattan became a borough during the consolidation of New York City in 1898.
New York County is the United States' second-smallest county by land area (larger only than Kalawao County, Hawaii), and is also the most densely populated U.S. county. It is also one of the most densely populated areas in the world, with a census-estimated 2019 population of 1,628,706 living in a land area of , or 72,918 residents per square mile (28,154/km2), higher than the density of any individual U.S. city. On business days, the influx of commuters increases this number to over 3.9 million, or more than 170,000 people per square mile (65,600/km2). Manhattan has the third-largest population of New York City's five boroughs, after Brooklyn and Queens, and is the smallest borough in terms of land area. If each borough were ranked as a city, Manhattan would rank as the sixth-most populous in the U.S.
Many districts and landmarks in Manhattan are well known, as New York City received a record 62.8 million tourists in 2017, and in 2013 Manhattan hosted three of the world's 10 most-visited tourist attractions: Times Square, Central Park, and Grand Central Terminal. The borough hosts many prominent bridges, such as the Brooklyn, Manhattan, Williamsburg, Queensboro, Triborough, and George Washington Bridges; tunnels such as the Holland and Lincoln Tunnels; skyscrapers such as the Empire State Building, Chrysler Building, and One World Trade Center; and parks, such as Central Park. Chinatown incorporates the highest concentration of Chinese people in the Western Hemisphere, and the Stonewall Inn in Greenwich Village, part of the Stonewall National Monument, is considered the birthplace of the modern gay rights movement. The City of New York was founded at the southern tip of Manhattan, and the borough houses New York City Hall, the seat of the city's government. Numerous colleges and universities are located in Manhattan, including Columbia University, New York University, Cornell Tech, Weill Cornell Medical College, and Rockefeller University, which have been ranked among the top 40 in the world.
The name "Manhattan" derives from the Munsee Lenape language term "manaháhtaan" (where "manah-" means "gather", "-aht-" means "bow", and "-aan" is an abstract element used to form verb stems). The Lenape word has been translated as "the place where we get bows" or "place for gathering the (wood to make) bows". According to a Munsee tradition recorded in the 19th century, the island was named so for a grove of hickory trees at the lower end that was considered ideal for the making of bows.
It was first recorded in writing as "Manna-hata", in the 1609 logbook of Robert Juet, an officer on Henry Hudson's yacht "Halve Maen" ("Half Moon"). A 1610 map depicts the name as Manna-hata, twice, on both the west and east sides of the Mauritius River (later named the Hudson River). Alternative folk etymologies include "island of many hills", "the island where we all became intoxicated" and simply "island", as well as a phrase descriptive of the whirlpool at Hell Gate.
The area that is now Manhattan was long inhabited by the Lenape Native Americans. In 1524, Florentine explorer Giovanni da Verrazzano – sailing in service of King Francis I of France – became the first documented European to visit the area that would become New York City. He entered the tidal strait now known as The Narrows and named the land around Upper New York Harbor "New Angoulême", in reference to the family name of King Francis I that was derived from Angoulême in France; he sailed far enough into the harbor to sight the Hudson River, which he referred to in his report to the French king as a "very big river"; and he named the "Bay of Santa Margarita" – what is now Upper New York Bay – after Marguerite de Navarre, the elder sister of the king.
It was not until the voyage of Henry Hudson, an Englishman who worked for the Dutch East India Company, that the area was mapped. Hudson came across Manhattan Island and the native people living there in 1609, and continued up the river that would later bear his name, the Hudson River, until he arrived at the site of present-day Albany.
A permanent European presence in New Netherland began in 1624, with the founding of a Dutch fur trading settlement on Governors Island. In 1625, construction was started on the citadel of Fort Amsterdam on Manhattan Island, later called New Amsterdam ("Nieuw Amsterdam"), in what is now Lower Manhattan. The 1625 establishment of Fort Amsterdam at the southern tip of Manhattan Island is recognized as the birth of New York City.
According to a letter by Pieter Janszoon Schagen, Peter Minuit and Dutch colonists acquired Manhattan on May 24, 1626, from unnamed Native American people, who are believed to have been Canarsee Indians of the Lenape, in exchange for traded goods worth 60 guilders, often said to be worth US$24. The figure of 60 guilders comes from a letter by a representative of the Dutch Estates General and member of the board of the Dutch West India Company, Pieter Janszoon Schagen, to the Estates General in November 1626. In 1846, New York historian John Romeyn Brodhead converted the figure of Fl 60 (or 60 guilders) to US$24, apparently by equating the dollar with the rijksdaalder, a coin worth 2.5 guilders (60 / 2.5 = 24). "[A] variable-rate myth being a contradiction in terms, the purchase price remains forever frozen at twenty-four dollars," as Edwin G. Burrows and Mike Wallace remarked in their history of New York. Sixty guilders in 1626 was valued at approximately $1,000 in 2006, according to the Institute for Social History of Amsterdam. Based on the price of silver, "Straight Dope" author Cecil Adams calculated an equivalent of $72 in 1992. Historians James and Michelle Nevius revisited the issue in 2014, suggesting that using the prices of beer and brandy as monetary equivalencies, the price Minuit paid would have the purchasing power of somewhere between $2,600 and $15,600 in current dollars. According to the writer Nathaniel Benchley, Minuit conducted the transaction with Seyseys, chief of the Canarsee Native Americans, who were willing to accept valuable merchandise in exchange for the island that was mostly controlled by the Weckquaesgeeks, a band of the Wappinger.
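The arithmetic behind the "$24 myth" and the spread of later estimates can be made concrete. This sketch only restates figures already given above; it is not a currency-history calculation of its own.

```python
# Brodhead's 1846 conversion: he divided 60 guilders by 2.5, apparently
# treating the U.S. dollar as equal to a rijksdaalder (a 2.5-guilder coin).
guilders = 60
guilders_per_rijksdaalder = 2.5
brodhead_dollars = guilders / guilders_per_rijksdaalder  # 24.0

# Later equivalences quoted in the text give very different modern values,
# depending entirely on which price is used as the yardstick:
estimates_usd = {
    "Brodhead 1846 (dollar = rijksdaalder)": brodhead_dollars,
    "Inst. for Social History, 2006 (price level)": 1000,
    "Cecil Adams 1992 (silver price)": 72,
    "Nevius & Nevius 2014 (beer/brandy, low)": 2600,
    "Nevius & Nevius 2014 (beer/brandy, high)": 15600,
}
for label, value in estimates_usd.items():
    print(f"{label}: ${value:,.0f}")
```

The two-orders-of-magnitude spread is the point Burrows and Wallace make: any "current" value depends on an arbitrary choice of equivalence.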
In 1647, Peter Stuyvesant was appointed as the last Dutch Director-General of the colony. New Amsterdam was formally incorporated as a city on February 2, 1653. In 1664, the English conquered New Netherland and renamed it "New York" after the English Duke of York and Albany, the future King James II. The Dutch, under Director General Stuyvesant, successfully negotiated with the English to produce 24 articles of provisional transfer, which sought to retain for the extant citizens of New Netherland their previously attained liberties (including freedom of religion) under new colonial English rulers.
The Dutch Republic regained the city in August 1673, renaming it "New Orange". New Netherland was ultimately ceded to the English in November 1674 through the Treaty of Westminster.
Manhattan was at the heart of the New York Campaign, a series of major battles in the early American Revolutionary War. The Continental Army was forced to abandon Manhattan after the Battle of Fort Washington on November 16, 1776. The city, greatly damaged by the Great Fire of New York during the campaign, became the British military and political center of operations in North America for the remainder of the war. The military center for the colonists was established in New Jersey. British occupation lasted until November 25, 1783, when George Washington returned to Manhattan, as the last British forces left the city.
From January 11, 1785, to the fall of 1788, New York City was the fifth of five capitals of the United States under the Articles of Confederation, with the Continental Congress meeting at New York City Hall (then at Fraunces Tavern). New York was the first capital under the newly enacted Constitution of the United States, from March 4, 1789, to August 12, 1790, at Federal Hall. Federal Hall was also the site where the United States Supreme Court met for the first time, where the United States Bill of Rights was drafted and ratified, and where the Northwest Ordinance was adopted, establishing measures for adding new states to the Union.
New York grew as an economic center, first as a result of Alexander Hamilton's policies and practices as the first Secretary of the Treasury and, later, with the opening of the Erie Canal in 1825, which connected the Atlantic port to the vast agricultural markets of the Midwestern United States and Canada. By 1810, New York City, then confined to Manhattan, had surpassed Philadelphia as the largest city in the United States.
Tammany Hall, a Democratic Party political machine, began to grow in influence with the support of many of the immigrant Irish, culminating in the election of the first Tammany mayor, Fernando Wood, in 1854. Tammany Hall dominated local politics for decades. Central Park, which opened to the public in 1858, became the first landscaped public park in an American city.
New York City played a complex role in the American Civil War. The city's strong commercial ties to the southern United States existed for many reasons, including the industrial power of the Hudson River, which allowed trade with stops such as the West Point Foundry, one of the great manufacturing operations in the early United States; and the city's Atlantic Ocean ports, rendering New York City the American powerhouse in terms of industrial trade between the northern and southern United States. New York's growing immigrant population, which had originated largely from Germany and Ireland, began in the late 1850s to include waves of Italians and Central and Eastern European Jews arriving en masse. Anger about conscription, and resentment of those who could afford to pay $300 to avoid service, fed hostility to Lincoln's war policies and paranoia about free Blacks taking the poor immigrants' jobs, culminating in the three-day New York Draft Riots of July 1863. These intense wartime riots are counted among the worst incidents of civil disorder in American history, with an estimated 119 participants and passersby killed.
The rate of immigration from Europe grew steeply after the Civil War, and Manhattan became the first stop for millions seeking a new life in the United States, a role acknowledged by the dedication of the Statue of Liberty on October 28, 1886, a gift from the people of France. The new European immigration brought further social upheaval. Its tenements packed with poorly paid laborers from dozens of nations, the city became a hotbed of revolution (including anarchists and communists, among others), syndicalism, racketeering, and unionization.
In 1883, the opening of the Brooklyn Bridge established a road connection to Brooklyn, across the East River. In 1874 the western portion of the present Bronx County was transferred to New York County from Westchester County, and in 1895 the remainder of the present Bronx County was annexed. In 1898, when New York City consolidated with three neighboring counties to form "the City of Greater New York", Manhattan and the Bronx, though still one county, were established as two separate boroughs. On January 1, 1914, the New York State Legislature created Bronx County and New York County was reduced to its present boundaries.
The construction of the New York City Subway, which opened in 1904, helped bind the new city together, as did additional bridges to Brooklyn. In the 1920s Manhattan experienced large arrivals of African-Americans as part of the Great Migration from the southern United States, and the Harlem Renaissance, part of a larger boom time in the Prohibition era that included new skyscrapers competing for the skyline. New York City became the most populous city in the world in 1925, overtaking London, which had reigned for a century. The white share of Manhattan's population declined from 98.7% in 1900 to 58.3% by 1990.
On March 25, 1911, the Triangle Shirtwaist Factory fire in Greenwich Village killed 146 garment workers. The disaster eventually led to overhauls of the city's fire department, building codes, and workplace regulations.
The period between the World Wars saw the election of reformist mayor Fiorello La Guardia and the fall of Tammany Hall after 80 years of political dominance. As the city's demographics stabilized, labor unionization brought new protections and affluence to the working class, and the city's government and infrastructure underwent a dramatic overhaul under La Guardia. Despite the Great Depression, some of the world's tallest skyscrapers were completed in Manhattan during the 1930s, including numerous Art Deco masterpieces that are still part of the city's skyline, most notably the Empire State Building, the Chrysler Building, and 30 Rockefeller Plaza.
Returning World War II veterans created a postwar economic boom, which led to the development of huge housing developments targeted at returning veterans, the largest being Peter Cooper Village-Stuyvesant Town, which opened in 1947. In 1951–1952, the United Nations relocated to a new headquarters on the East Side of Manhattan.
The Stonewall riots were a series of spontaneous, violent demonstrations by members of the gay community against a police raid that took place in the early morning hours of June 28, 1969, at the Stonewall Inn in the Greenwich Village neighborhood of Lower Manhattan. They are widely considered to constitute the single most important event leading to the gay liberation movement and the modern fight for LGBT rights.
In the 1970s, job losses due to industrial restructuring caused New York City, including Manhattan, to suffer from economic problems and rising crime rates. While a resurgence in the financial industry greatly improved the city's economic health in the 1980s, New York's crime rate continued to increase through the decade and into the beginning of the 1990s.
The 1980s saw a rebirth of Wall Street, and Manhattan reclaimed its role at the center of the worldwide financial industry. The 1980s also saw Manhattan at the heart of the AIDS crisis, with Greenwich Village at its epicenter. The organizations Gay Men's Health Crisis (GMHC) and AIDS Coalition to Unleash Power (ACT UP) were founded to advocate on behalf of those stricken with the disease.
By the 1990s, crime rates had started to drop dramatically due to revised police strategies, improving economic opportunities, gentrification, and new residents, both American transplants and new immigrants from Asia and Latin America. Murder rates that had reached 2,245 in 1990 plummeted to 537 by 2008, and the crack epidemic and its associated drug-related violence came under greater control. The outflow of population turned around, as the city once again became the destination of immigrants from around the world, joining with low interest rates and Wall Street bonuses to fuel the growth of the real estate market. Important new sectors, such as Silicon Alley, emerged in Manhattan's economy.
On September 11, 2001, two of four hijacked planes were flown into the Twin Towers of the original World Trade Center, and the towers subsequently collapsed. 7 World Trade Center collapsed due to fires and structural damage caused by heavy debris falling from the collapse of the Twin Towers. The other buildings within the World Trade Center complex were damaged beyond repair and soon after demolished. The collapse of the Twin Towers caused extensive damage to other surrounding buildings and skyscrapers in Lower Manhattan, and resulted in the deaths of 2,606 people, in addition to those on the planes. Since 2001, most of Lower Manhattan has been restored, although there has been controversy surrounding the rebuilding. Many rescue workers and area residents developed life-threatening illnesses that have since contributed to some of their deaths. A memorial at the site was opened to the public on September 11, 2011, and the museum opened in 2014. In 2014, the new One World Trade Center, at and formerly known as the Freedom Tower, became the tallest building in the Western Hemisphere, while other skyscrapers were under construction at the site.
The Occupy Wall Street protests in Zuccotti Park in the Financial District of Lower Manhattan began on September 17, 2011, receiving global attention and spawning the Occupy movement against social and economic inequality worldwide.
On October 29 and 30, 2012, Hurricane Sandy caused extensive destruction in the borough, ravaging portions of Lower Manhattan with record-high storm surge from New York Harbor, severe flooding, and high winds, causing power outages for hundreds of thousands of city residents and leading to gasoline shortages and disruption of mass transit systems. The storm and its profound impacts have prompted discussion of constructing seawalls and other coastal barriers around the shorelines of the borough and the metropolitan area to minimize the risk of destructive consequences from another such event in the future. Around 15 percent of the borough is considered to be in flood-risk zones.
On October 31, 2017, a terrorist took a rental pickup truck and deliberately drove down a bike path alongside the West Side Highway in Lower Manhattan, killing eight people and injuring a dozen others before crashing into a school bus.
The borough consists of Manhattan Island, Marble Hill, and several small islands, including Randalls Island and Wards Island, and Roosevelt Island in the East River, and Governors Island and Liberty Island to the south in New York Harbor.
According to the United States Census Bureau, New York County has a total area of , of which is land and (32%) is water. The northern segment of Upper Manhattan represents a geographic panhandle. Manhattan Island is in area, long and wide, at its widest (near 14th Street). Icebergs are often compared in size to the area of Manhattan.
Manhattan Island is loosely divided into Downtown (Lower Manhattan), Midtown (Midtown Manhattan), and Uptown (Upper Manhattan), with Fifth Avenue dividing Manhattan lengthwise into its East Side and West Side. Manhattan Island is bounded by the Hudson River to the west and the East River to the east. To the north, the Harlem River divides Manhattan Island from the Bronx and the mainland United States.
Early in the 19th century, landfill was used to expand Lower Manhattan from the natural Hudson shoreline at Greenwich Street to West Street. When building the World Trade Center in 1968, 1.2 million cubic yards (917,000 m3) of material was excavated from the site. Rather than dumping the spoil at sea or in landfills, the fill material was used to expand the Manhattan shoreline across West Street, creating Battery Park City. The result was a 700-foot (210-m) extension into the river, running six blocks or , covering , providing a riverfront esplanade and over of parks; Hudson River Park was subsequently opened in stages beginning in 1998.
One neighborhood of New York County, Marble Hill, is contiguous with the U.S. mainland. Marble Hill was at one time part of Manhattan Island, but the Harlem River Ship Canal, dug in 1895 to improve navigation on the Harlem River, cut it off from the rest of Manhattan, leaving it an island between the Bronx and Manhattan Island. Before World War I, the section of the original Harlem River channel separating Marble Hill from The Bronx was filled in, and Marble Hill became part of the mainland.
Marble Hill is one example of how Manhattan's land has been considerably altered by human intervention. The borough has seen substantial land reclamation along its waterfronts since Dutch colonial times, and much of the natural variation in its topography has been evened out.
In New York Harbor, there are three smaller islands:
Other smaller islands, in the East River, include (from north to south):
The bedrock underlying much of Manhattan is a mica schist known as "Manhattan schist" of the Manhattan Prong physiographic region. It is a strong, competent metamorphic rock that was created when Pangaea formed. It is well suited for the foundations of tall buildings. In Central Park, outcrops of Manhattan schist occur and Rat Rock is one rather large example.
Geologically, a predominant feature of the substrata of Manhattan is that the underlying bedrock base of the island rises considerably closer to the surface near Midtown Manhattan, dips down lower between 29th Street and Canal Street, then rises toward the surface again in Lower Manhattan. It has been widely believed that the depth to bedrock was the primary underlying reason for the clustering of skyscrapers in the Midtown and Financial District areas, and their absence over the intervening territory between these two areas. However, research has shown that economic factors played a bigger part in the locations of these skyscrapers.
According to the United States Geological Survey, an updated analysis of seismic hazard in July 2014 revealed a "slightly lower hazard for tall buildings" in Manhattan than previously assessed. Scientists estimated this lessened risk based upon a lower likelihood than previously thought of slow shaking near New York City, which would be more likely to cause damage to taller structures from an earthquake in the vicinity of the city.
Manhattan's many neighborhoods are not named according to any particular convention. Some are geographical (the Upper East Side) or ethnically descriptive (Little Italy). Others are acronyms, such as TriBeCa (for "TRIangle BElow CAnal Street") or SoHo ("SOuth of HOuston"), or the far more recent vintages NoLIta ("NOrth of Little ITAly") and NoMad ("NOrth of MADison Square Park"). Harlem is a name from the Dutch colonial era after Haarlem, a city in the Netherlands. Alphabet City comprises Avenues A, B, C, and D, to which its name refers. Some have simple folkloric names, such as Hell's Kitchen, alongside their more official but lesser-used title (in this case, Clinton).
Some neighborhoods, such as SoHo, which is mixed use, are known for upscale shopping as well as residential use. Others, such as Greenwich Village, the Lower East Side, Alphabet City and the East Village, have long been associated with the Bohemian subculture. Chelsea is one of several Manhattan neighborhoods with large gay populations and has become a center of both the international art industry and New York's nightlife. Washington Heights is a primary destination for immigrants from the Dominican Republic. Chinatown has the highest concentration of people of Chinese descent outside of Asia. Koreatown is roughly bounded by 6th and Madison Avenues, between 31st and 33rd Streets, where Hangul (한글) signage is ubiquitous. Rose Hill features a growing number of Indian restaurants and spice shops along a stretch of Lexington Avenue between 25th and 30th Streets which has become known as "Curry Hill". Since 2010, a "Little Australia" has emerged and is growing in Nolita, Lower Manhattan.
In Manhattan, "uptown" means north (more precisely north-northeast, which is the direction the island and its street grid system are oriented) and "downtown" means south (south-southwest). This usage differs from that of most American cities, where "downtown" refers to the central business district. Manhattan has two central business districts, the Financial District at the southern tip of the island, and Midtown Manhattan. The term "uptown" also refers to the northern part of Manhattan above 72nd Street and "downtown" to the southern portion below 14th Street, with "Midtown" covering the area in between, though definitions can be rather fluid depending on the situation.
Fifth Avenue roughly bisects Manhattan Island and acts as the demarcation line for east/west designations (e.g., East 27th Street, West 42nd Street); street addresses start at Fifth Avenue and increase heading away from Fifth Avenue, at a rate of 100 per block on most streets. South of Waverly Place, Fifth Avenue terminates and Broadway becomes the east/west demarcation line. Although the grid does start with 1st Street, just north of Houston Street (the southernmost street divided into west and east portions; pronounced HOW-stin), the grid does not fully take hold until north of 14th Street, where nearly all east-west streets are numerically identified; the numbers increase from south to north, up to 220th Street, the highest-numbered street on the island. Streets in Midtown are usually one-way, with the few exceptions generally being the busiest cross-town thoroughfares (14th, 23rd, 34th, and 42nd Streets, for example), which are bidirectional across the width of Manhattan Island. The rule of thumb is that odd-numbered streets run west, while even-numbered streets run east.
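The grid conventions above can be captured as a toy model (the function names are illustrative, the set of two-way streets is a simplification, and real signage has many exceptions):

```python
# Rule-of-thumb model of Manhattan's numbered cross-street conventions.
# The two-way set lists only the major thoroughfares named above.
MAJOR_TWO_WAY = {14, 23, 34, 42}

def crosstown_direction(street_number):
    """Return the usual traffic direction for a numbered cross street:
    odd-numbered streets generally run westbound, even-numbered eastbound,
    while a handful of major thoroughfares are bidirectional."""
    if street_number in MAJOR_TWO_WAY:
        return "two-way"
    return "westbound" if street_number % 2 else "eastbound"

def blocks_from_fifth(house_number):
    """Approximate distance from Fifth Avenue, in blocks, for an
    east/west street address that rises by roughly 100 per block."""
    return house_number // 100

print(crosstown_direction(27))  # odd -> westbound
print(crosstown_direction(34))  # major crosstown street -> two-way
print(blocks_from_fifth(350))   # about three blocks from Fifth Avenue
```

This is only the rule of thumb the text describes, not an authoritative street directory.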
Under the Köppen climate classification, using the isotherm, New York City features a humid subtropical climate ("Cfa"), and is thus the northernmost major city on the North American continent with this categorization. The suburbs to the immediate north and west lie in the transitional zone between humid subtropical and humid continental climates ("Dfa"). The city averages 234 days with at least some sunshine annually. The city lies in the USDA 7b plant hardiness zone.
Winters are cold and damp, and prevailing wind patterns that blow offshore temper the moderating effects of the Atlantic Ocean; yet the Atlantic and the partial shielding from colder air by the Appalachians keep the city warmer in the winter than inland North American cities at similar or lesser latitudes such as Pittsburgh, Cincinnati, and Indianapolis. The daily mean temperature in January, the area's coldest month, is ; temperatures usually drop to several times per winter, and reach several days in the coldest winter month. Spring and autumn are unpredictable and can range from chilly to warm, although they are usually mild with low humidity. Summers are typically warm to hot and humid, with a daily mean temperature of in July. Nighttime conditions are often exacerbated by the urban heat island phenomenon, while daytime temperatures exceed on average of 17 days each summer and in some years exceed . Extreme temperatures have ranged from , recorded on February 9, 1934, up to on July 9, 1936.
Summer evening temperatures are elevated by the urban heat island effect, which causes heat absorbed during the day to be radiated back at night, raising temperatures by as much as when winds are slow. Manhattan receives of precipitation annually, which is relatively evenly spread throughout the year. Average winter snowfall between 1981 and 2010 has been ; this varies considerably from year to year.
At the 2010 Census, there were 1,585,873 people living in Manhattan, an increase of 3.2% since 2000. Since 2010, Manhattan's population was estimated by the Census Bureau to have increased 2.7% to 1,628,706, representing 19.5% of New York City's population of 8,336,817 and 8.4% of New York State's population of 19,745,289. As of the 2017 Census estimates, the population density of New York County was around 72,918 people per square mile (28,154/km²), the highest population density of any county in the United States. In 1910, at the height of European immigration to New York, Manhattan's population density reached a peak of 101,548 people per square mile (39,208/km²).
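The paired density figures above can be sanity-checked with a simple unit conversion (assuming 1 square mile = 2.589988 km²):

```python
KM2_PER_SQ_MI = 2.589988  # square kilometres per square mile

def per_km2(per_sq_mi):
    """Convert a population density from people/sq mi to people/km^2."""
    return per_sq_mi / KM2_PER_SQ_MI

# The metric densities quoted in the text agree with this conversion:
print(round(per_km2(72_918)))   # 2017 estimate -> 28154
print(round(per_km2(101_548)))  # 1910 peak -> 39208
```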
In 2006, the New York City Department of City Planning projected that Manhattan's population would increase by 289,000 people between 2000 and 2030, an increase of 18.8% over the period. However, since then, Lower Manhattan has been experiencing a baby boom, well above the overall birth rate in Manhattan, with the area south of Canal Street witnessing 1,086 births in 2010, 12% greater than 2009 and over twice the number born in 2001. The Financial District alone has witnessed growth in its population to approximately 43,000, nearly double the 23,000 recorded at the 2000 Census. The southern tip of Manhattan became the fastest growing part of New York City between 1990 and 2014.
According to the 2009 American Community Survey, the average household size was 2.11, and the average family size was 3.21. Approximately 59.4% of the population over the age of 25 have a bachelor's degree or higher. Approximately 27.0% of the population is foreign-born, and 61.7% of the population over the age of 5 speak only English at home. People of Irish ancestry make up 7.8% of the population, while Italian Americans make up 6.8% of the population. German Americans and Russian Americans make up 7.2% and 6.2% of the population respectively.
Manhattan is one of the highest-income places in the United States with a population greater than one million. , Manhattan's cost of living was the highest in the United States, but the borough also contained the country's most profound level of income inequality. Manhattan is also the United States county with the highest per capita income, being the sole county whose per capita income exceeded $100,000 in 2010. However, 2011–2015 Census data for New York County recorded a per capita income (in 2015 dollars) of $64,993, with the median household income at $72,871, and poverty at 17.6%. In 2012, "The New York Times" reported that inequality was higher than in most developing countries, stating, "The wealthiest fifth of Manhattanites made more than 40 times what the lowest fifth reported, a widening gap (it was 38 times the year before) surpassed by only a few developing countries".
In 2010 statistics, the largest religious group in Manhattan was the Archdiocese of New York, with 323,325 Catholics worshipping at 109 parishes, followed by 64,000 Orthodox Jews with 77 congregations, an estimated 42,545 Muslims with 21 congregations, 42,502 non-denominational adherents with 54 congregations, 26,178 TEC Episcopalians with 46 congregations, 25,048 ABC-USA Baptists with 41 congregations, 24,536 Reform Jews with 10 congregations, 23,982 Mahayana Buddhists with 35 congregations, 10,503 PC-USA Presbyterians with 30 congregations, and 10,268 RCA Presbyterians with 10 congregations. Altogether, 44.0% of the population was claimed as members by religious congregations, although members of historically African-American denominations were underrepresented due to incomplete information. In 2014, Manhattan had 703 religious organizations, the seventeenth most out of all US counties.
, 59.98% (902,267) of Manhattan residents, aged five and older, spoke only English at home, while 23.07% (347,033) spoke Spanish, 5.33% (80,240) Chinese, 2.03% (30,567) French, 0.78% (11,776) Japanese, 0.77% (11,517) Russian, 0.72% (10,788) Korean, 0.70% (10,496) German, 0.66% (9,868) Italian, 0.64% (9,555) Hebrew, and 0.48% (7,158) spoke African languages at home. In total, 40.02% (602,058) of Manhattan's population, aged five and older, spoke a language other than English at home.
Points of interest on Manhattan Island include the American Museum of Natural History, Broadway and the Theater District, Bryant Park, Central Park, Chinatown, the Chrysler Building, Columbia University, the Empire State Building, Flatiron Building, Fulton Center, Grand Central Terminal, Harlem and Spanish Harlem, the High Line, Koreatown, Lincoln Center for the Performing Arts, Little Italy, Madison Square Garden, Museum Mile on Fifth Avenue, including the Metropolitan Museum of Art, the New York Stock Exchange on Wall Street, New York University and the Washington Square Arch in Greenwich Village, Penn Station, Port Authority Bus Terminal, Rockefeller Center (including Radio City Music Hall), South Street Seaport, Stonewall Inn, The Battery, Times Square, Trump Tower, World Trade Center, including the National September 11 Museum and One World Trade Center.
There are also numerous iconic bridges across rivers that connect to Manhattan Island, as well as an emerging number of supertall skyscrapers. The Statue of Liberty rests on a pedestal on Liberty Island, an exclave of Manhattan, and part of Ellis Island is also an exclave of Manhattan. The borough has many energy-efficient green office buildings, such as the Hearst Tower, the rebuilt 7 World Trade Center, and the Bank of America Tower—the first skyscraper designed to attain a Platinum LEED Certification.
The skyscraper, which has shaped Manhattan's distinctive skyline, has been closely associated with New York City's identity since the end of the 19th century. From 1890 to 1973, the title of world's tallest building resided continually in Manhattan (with a gap between 1901 and 1908, when the title was held by Philadelphia City Hall), with nine different buildings holding the title. The New York World Building on Park Row was the first to take the title in 1890, standing until 1955, when it was demolished to construct a new ramp to the Brooklyn Bridge. The nearby Park Row Building, with its 29 stories standing high, took the title in 1899. The 41-story Singer Building, constructed in 1908 as the headquarters of the eponymous sewing machine manufacturer, stood high until 1967, when it became the tallest building ever demolished. The Metropolitan Life Insurance Company Tower, standing at the foot of Madison Avenue, wrested the title in 1909, with a tower reminiscent of St Mark's Campanile in Venice. The Woolworth Building, with its distinctive Gothic architecture, took the title in 1913, topping off at . Structures such as the Equitable Building of 1915, which rises vertically forty stories from the sidewalk, prompted the passage of the 1916 Zoning Resolution, requiring new buildings to incorporate setbacks receding progressively at a defined angle from the street as they rose, in order to preserve a view of the sky at street level.
The Roaring Twenties saw a race to the sky, with three separate buildings pursuing the world's tallest title in the span of a year. As the stock market soared in the days before the Wall Street Crash of 1929, two developers publicly competed for the crown. At , 40 Wall Street, completed in May 1930 in only eleven months as the headquarters of the Bank of Manhattan, seemed to have secured the title. At Lexington Avenue and 42nd Street, auto executive Walter Chrysler and his architect William Van Alen developed plans to build the structure's trademark spire in secret, pushing the Chrysler Building to and making it the tallest in the world when it was completed in 1930. Both buildings were soon surpassed with the May 1931 completion of the 102-story Empire State Building, with its Art Deco tower reaching at the top of the building. The high pinnacle was later added, bringing the total height of the building to .
The former Twin Towers of the World Trade Center were located in Lower Manhattan. At , the 110-story buildings were the world's tallest from 1972 until they were surpassed in 1974 by the construction of the Willis Tower (then known as the Sears Tower) in Chicago. One World Trade Center, a replacement for the Twin Towers of the World Trade Center, is currently the tallest building in the Western Hemisphere.
In 1961, the Pennsylvania Railroad unveiled plans to tear down the old Penn Station and replace it with a new Madison Square Garden and office building complex. Organized protests were aimed at preserving the McKim, Mead & White-designed structure completed in 1910, widely considered a masterpiece of the Beaux-Arts style and one of the architectural jewels of New York City. Despite these efforts, demolition of the structure began in October 1963. The loss of Penn Station—called "an act of irresponsible public vandalism" by historian Lewis Mumford—led directly to the enactment in 1965 of a local law establishing the New York City Landmarks Preservation Commission, which is responsible for preserving the "city's historic, aesthetic, and cultural heritage". The historic preservation movement triggered by Penn Station's demise has been credited with the retention of some one million structures nationwide, including nearly 1,000 in New York City. In 2017, a multibillion-dollar rebuilding plan was unveiled to restore the historic grandeur of Penn Station, in the process of upgrading the landmark's status as a critical transportation hub.
Parkland composes 17.8% of the borough, covering a total of . Central Park, the largest park, comprising 30% of Manhattan's parkland, is bordered on the north by West 110th Street (Central Park North), on the west by Eighth Avenue (Central Park West), on the south by West 59th Street (Central Park South), and on the east by Fifth Avenue. Central Park, designed by Frederick Law Olmsted and Calvert Vaux, offers extensive walking tracks, two ice-skating rinks, a wildlife sanctuary, and several lawns and sporting areas, as well as 21 playgrounds and a road from which automobile traffic is banned. While much of the park looks natural, it is almost entirely landscaped, and the construction of Central Park in the 1850s was one of the era's most massive public works projects, with some 20,000 workers crafting the topography to create the English-style pastoral landscape Olmsted and Vaux envisioned.
The remaining 70% of Manhattan's parkland includes 204 playgrounds, 251 Greenstreets, 371 basketball courts, and many other amenities. The next-largest park in Manhattan is Hudson River Park, which stretches along the Hudson River and comprises . Other major parks include:
Manhattan is the economic engine of New York City, with its 2.3 million workers in 2007, drawn from the entire New York metropolitan area, accounting for almost two-thirds of all jobs in New York City. In the first quarter of 2014, the average weekly wage in Manhattan (New York County) was $2,749, representing the highest total among large counties in the United States. Manhattan's workforce is overwhelmingly focused on white-collar professions, with manufacturing nearly extinct. Manhattan also has the highest per capita income of any county in the United States.
In 2010, Manhattan's daytime population swelled to 3.94 million, with commuters adding a net 1.48 million people to the population, along with visitors, tourists, and commuting students. The commuter influx of 1.61 million workers coming into Manhattan was the largest of any county or city in the country, and was more than triple the 480,000 commuters who headed into second-ranked Washington, D.C.
Manhattan's most important economic sector lies in its role as the headquarters for the U.S. financial industry, metonymously known as Wall Street. The borough's securities industry, enumerating 163,400 jobs in August 2013, continues to form the largest segment of the city's financial sector and an important economic engine for Manhattan, accounting in 2012 for 5 percent of private sector jobs in New York City, 8.5 percent (US$3.8 billion) of the city's tax revenue, and 22 percent of the city's total wages, including an average salary of US$360,700. Wall Street investment banking fees in 2012 totaled approximately US$40 billion, while in 2013, senior New York City bank officers who manage risk and compliance functions earned as much as US$324,000 annually.
Lower Manhattan is home to the New York Stock Exchange (NYSE), on Wall Street, and the NASDAQ, at 165 Broadway, representing the world's largest and second largest stock exchanges, respectively, when measured both by overall share trading value and by total market capitalization of their listed companies in 2013. The NYSE American (formerly the American Stock Exchange, AMEX), New York Board of Trade, and the New York Mercantile Exchange (NYMEX) are also located downtown. In July 2013, NYSE Euronext, the operator of the New York Stock Exchange, took over the administration of the London interbank offered rate from the British Bankers Association.
New York City is home to the most corporate headquarters of any city in the United States, the overwhelming majority based in Manhattan. Manhattan contained over 500 million square feet (46.5 million m2) of office space in 2018, making it the largest office market in the United States, while Midtown Manhattan, with 400 million square feet (37.2 million m2) in 2018, is the largest central business district in the world. New York City's role as the top global center for the advertising industry is metonymously reflected as "Madison Avenue".
Silicon Alley, centered in Manhattan, has evolved into a metonym for the sphere encompassing the New York City metropolitan region's high tech industries, including the Internet, new media, telecommunications, digital media, software development, biotechnology, game design, financial technology ("fintech"), and other fields within information technology that are supported by the area's entrepreneurship ecosystem and venture capital investments. In 2015, Silicon Alley generated over US$7.3 billion in venture capital investment, most based in Manhattan, as well as in Brooklyn, Queens, and elsewhere in the region. High technology startup companies and employment are growing in Manhattan and across New York City, bolstered by the city's emergence as a global node of creativity and entrepreneurship, social tolerance, and environmental sustainability, as well as New York's position as the leading Internet hub and telecommunications center in North America, including its proximity to several transatlantic fiber optic trunk lines, the city's intellectual capital, and its extensive outdoor wireless connectivity. Verizon Communications, headquartered at 140 West Street in Lower Manhattan, was in the final stages in 2014 of completing a US$3 billion fiber-optic telecommunications upgrade throughout New York City. As of October 2014, New York City hosted 300,000 employees in the tech sector, with a significant proportion in Manhattan. The technology sector has been expanding across Manhattan since 2010.
The biotechnology sector is also growing in Manhattan based upon the city's strength in academic scientific research and public and commercial financial support. By mid-2014, Accelerator, a biotech investment firm, had raised more than US$30 million from investors, including Eli Lilly and Company, Pfizer, and Johnson & Johnson, for initial funding to create biotechnology startups at the Alexandria Center for Life Science, which encompasses more than on East 29th Street and promotes collaboration among scientists and entrepreneurs at the center and with nearby academic, medical, and research institutions. The New York City Economic Development Corporation's Early Stage Life Sciences Funding Initiative and venture capital partners, including Celgene, General Electric Ventures, and Eli Lilly, committed a minimum of US$100 million to help launch 15 to 20 ventures in life sciences and biotechnology. In 2011, Mayor Michael R. Bloomberg announced his choice of Cornell University and Technion-Israel Institute of Technology to build a US$2 billion graduate school of applied sciences on Roosevelt Island, Manhattan, with the goal of transforming New York City into the world's premier technology capital.
Tourism is vital to Manhattan's economy, and the borough's landmarks are the focus of New York City's visitors, who set an eighth consecutive annual record of approximately 62.8 million in 2017. According to The Broadway League, shows on Broadway sold approximately US$1.27 billion worth of tickets in the 2013–2014 season, an increase of 11.4% from US$1.139 billion in the 2012–2013 season; attendance in 2013–2014 stood at 12.21 million, representing a 5.5% increase from the 2012–2013 season's 11.57 million. As of June 2016, Manhattan had nearly 91,500 hotel rooms, a 26% increase from 2010.
Real estate is a major force in Manhattan's economy, and indeed the city's, as the total value of all New York City property was assessed at US$914.8 billion for the 2015 fiscal year. Manhattan has perennially been home to some of the nation's, as well as the world's, most valuable real estate, including the Time Warner Center, which had the highest-listed market value in the city in 2006 at US$1.1 billion, to be subsequently surpassed in October 2014 by the Waldorf Astoria New York, which became the most expensive hotel ever sold after being purchased by the Anbang Insurance Group, based in China, for . When 450 Park Avenue was sold on July 2, 2007, for US$510 million, about US$1,589 per square foot (US$17,104/m²), it broke the barely month-old record for an American office building of US$1,476 per square foot (US$15,887/m²) based on the sale of 660 Madison Avenue. In 2014, Manhattan was home to six of the top ten zip codes in the United States by median housing price. In 2019, the most expensive home sale ever in the United States occurred in Manhattan, at a selling price of US$238 million, for a penthouse apartment overlooking Central Park.
Manhattan had approximately 520 million square feet (48.1 million m²) of office space in 2013, making it the largest office market in the United States. Midtown Manhattan is the largest central business district in the nation based on office space, while Lower Manhattan is the third-largest (after Chicago's Loop).
Manhattan is served by the major New York City daily news publications, including "The New York Times", "New York Daily News", and "New York Post", which are all headquartered in the borough. The nation's largest newspaper by circulation, "The Wall Street Journal", is also based there. Other daily newspapers include "AM New York" and "The Villager". "The New York Amsterdam News", based in Harlem, is one of the leading African American weekly newspapers in the United States. "The Village Voice", historically the largest alternative newspaper in the United States, announced in 2017 that it would cease publication of its print edition and convert to a fully digital venture.
The television industry developed in Manhattan and is a significant employer in the borough's economy. The four major American broadcast networks, ABC, CBS, NBC, and Fox, as well as Univision, are all headquartered in Manhattan, as are many cable channels, including MSNBC, MTV, Fox News, HBO, and Comedy Central. WLIB began broadcasts geared toward the African-American community in 1949 and became New York City's first black-owned radio station in 1971. WQHT, also known as "Hot 97", claims to be the premier hip-hop station in the United States. WNYC, comprising an AM and FM signal, has the largest public radio audience in the nation and is the most-listened-to commercial or non-commercial radio station in Manhattan. WBAI, with news and information programming, is one of the few socialist radio stations operating in the United States.
The Manhattan Neighborhood Network, founded in 1971, is the oldest public-access cable television channel in the United States; it offers eclectic local programming that ranges from a jazz hour to discussion of labor issues to foreign-language and religious programming. NY1, Time Warner Cable's local news channel, is known for its beat coverage of City Hall and state politics.
Education in Manhattan is provided by a vast number of public and private institutions. Public schools in the borough are operated by the New York City Department of Education, the largest public school system in the United States. Charter schools include Success Academy Harlem 1 through 5, Success Academy Upper West, and Public Prep.
Some notable New York City public high schools are located in Manhattan, including Beacon High School, Stuyvesant High School, Fiorello H. LaGuardia High School, High School of Fashion Industries, Eleanor Roosevelt High School, NYC Lab School, Manhattan Center for Science and Mathematics, Hunter College High School, and High School for Math, Science and Engineering at City College. Bard High School Early College, a hybrid school created by Bard College, serves students from around the city.
Many private preparatory schools are also situated in Manhattan, including the Upper East Side's Brearley School, Dalton School, Browning School, Spence School, Chapin School, Nightingale-Bamford School, Convent of the Sacred Heart, Hewitt School, Saint David's School, Loyola School, and Regis High School. The Upper West Side is home to the Collegiate School and Trinity School. The borough is also home to Manhattan Country School, Trevor Day School, and the United Nations International School.
Based on data from the 2011–2015 American Community Survey, 59.9% of Manhattan residents over age 25 have a bachelor's degree. As of 2005, about 60% of residents were college graduates and some 25% had earned advanced degrees, giving Manhattan one of the nation's densest concentrations of highly educated people.
Manhattan has various colleges and universities, including Columbia University (and its affiliate Barnard College), Cooper Union, Marymount Manhattan College, New York Institute of Technology, New York University (NYU), The Juilliard School, Pace University, Berkeley College, The New School, Yeshiva University, and a campus of Fordham University. Other schools include Bank Street College of Education, Boricua College, Jewish Theological Seminary of America, Manhattan School of Music, Metropolitan College of New York, Parsons School of Design, School of Visual Arts, Touro College, and Union Theological Seminary. Several other private institutions maintain a Manhattan presence, among them Mercy College, St. John's University, The College of New Rochelle, The King's College, and Pratt Institute. Cornell Tech is developing on Roosevelt Island.
The City University of New York (CUNY), the municipal college system of New York City, is the largest urban university system in the United States, serving more than 226,000 degree students and a roughly equal number of adult, continuing and professional education students. A third of college graduates in New York City graduate from CUNY, with the institution enrolling about half of all college students in New York City. CUNY senior colleges located in Manhattan include: Baruch College, City College of New York, Hunter College, John Jay College of Criminal Justice, and the CUNY Graduate Center (graduate studies and doctorate granting institution). The only CUNY community college located in Manhattan is the Borough of Manhattan Community College. The State University of New York is represented by the Fashion Institute of Technology, State University of New York State College of Optometry, and Stony Brook University – Manhattan.
Manhattan is a world center for training and education in medicine and the life sciences. The city as a whole receives the second-highest amount of annual funding from the National Institutes of Health among all U.S. cities, the bulk of which goes to Manhattan's research institutions, including Memorial Sloan-Kettering Cancer Center, Rockefeller University, Mount Sinai School of Medicine, Columbia University College of Physicians and Surgeons, Weill Cornell Medical College, and New York University School of Medicine.
Manhattan is served by the New York Public Library, which has the largest collection of any public library system in the country. The five units of the Central Library—Mid-Manhattan Library, 53rd Street Library, the New York Public Library for the Performing Arts, Andrew Heiskell Braille and Talking Book Library, and the Science, Industry and Business Library—are all located in Manhattan. More than 35 other branch libraries are located in the borough.
Manhattan is the borough most closely associated with New York City by non-residents; regionally, residents within the New York City metropolitan area, including natives of New York City's boroughs outside Manhattan, will often describe a trip to Manhattan as "going to the City". Journalist Walt Whitman characterized the streets of Manhattan as being traversed by "hurrying, feverish, electric crowds".
Manhattan has been the scene of many important American cultural movements. In 1912, about 20,000 workers, a quarter of them women, marched upon Washington Square Park to commemorate the Triangle Shirtwaist Factory fire, which killed 146 workers on March 25, 1911. Many of the women wore fitted tucked-front blouses like those manufactured by the Triangle Shirtwaist Company, a clothing style that became the working woman's uniform and a symbol of women's liberation, reflecting the alliance of labor and suffrage movements.
The Harlem Renaissance in the 1920s established the African-American literary canon in the United States and introduced writers Langston Hughes and Zora Neale Hurston. Manhattan's vibrant visual art scene in the 1950s and 1960s was a center of the American pop art movement, which gave birth to such giants as Jasper Johns and Roy Lichtenstein. The downtown pop art movement of the late 1970s included artist Andy Warhol and clubs like Serendipity 3 and Studio 54, where he socialized.
Broadway theatre is often considered the highest professional form of theatre in the United States. Plays and musicals are staged in one of the 39 larger professional theatres with at least 500 seats, almost all in and around Times Square. Off-Broadway theatres feature productions in venues with 100–500 seats. Lincoln Center for the Performing Arts, anchoring Lincoln Square on the Upper West Side of Manhattan, is home to 12 influential arts organizations, including the Metropolitan Opera, New York City Opera, New York Philharmonic, and New York City Ballet, as well as the Vivian Beaumont Theater, the Juilliard School, Jazz at Lincoln Center, and Alice Tully Hall. Performance artists displaying diverse skills are ubiquitous on the streets of Manhattan.
Manhattan is also home to some of the most extensive art collections in the world, both contemporary and classical art, including the Metropolitan Museum of Art, the Museum of Modern Art (MoMA), the Frick Collection, the Whitney Museum of American Art, and the Frank Lloyd Wright-designed Guggenheim Museum. The Upper East Side has many art galleries, and the downtown neighborhood of Chelsea is known for its more than 200 art galleries that are home to modern art from both upcoming and established artists. Many of the world's most lucrative art auctions are held in Manhattan.
Manhattan is the center of LGBT culture in New York City. The borough is widely acclaimed as the cradle of the modern LGBTQ rights movement, with its inception at the June 1969 Stonewall Riots in Greenwich Village, Lower Manhattan – widely considered to constitute the single most important event leading to the gay liberation movement and the modern fight for LGBT rights in the United States. Multiple gay villages have developed, spanning the length of the borough from the Lower East Side, East Village, and Greenwich Village, through Chelsea and Hell's Kitchen, uptown to Morningside Heights. The annual New York City Pride March (or gay pride parade) traverses southward down Fifth Avenue and ends at Greenwich Village; the Manhattan parade rivals the São Paulo Gay Pride Parade as the largest pride parade in the world, attracting tens of thousands of participants and millions of sidewalk spectators each June. Stonewall 50 – WorldPride NYC 2019 was the largest international Pride celebration in history, produced by Heritage of Pride and enhanced through a partnership with the I ❤ NY program's LGBT division, commemorating the 50th anniversary of the Stonewall uprising, with 150,000 participants and five million spectators attending in Manhattan alone.
The borough has a place in several American idioms. The phrase "New York minute" conveys an extremely short time such as an instant, sometimes in hyperbolic form, as in "perhaps faster than you would believe is possible", referring to the rapid pace of life in Manhattan. The expression "melting pot", describing the densely populated immigrant neighborhoods of the Lower East Side, was popularized by Israel Zangwill's 1908 play "The Melting Pot", an adaptation of William Shakespeare's "Romeo and Juliet" that Zangwill set in New York City. The iconic Flatiron Building is said to have been the source of the phrase "23 skidoo", meaning to scram, from what police officers would shout at men who tried to catch glimpses of women's dresses blown up by the winds swirling around the triangular building. The "Big Apple" dates back to the 1920s, when a reporter heard the term used by New Orleans stablehands to refer to New York City's horse racetracks and named his racing column "Around The Big Apple". Jazz musicians adopted the term to refer to the city as the world's jazz capital, and a 1970s ad campaign by the New York Convention and Visitors Bureau helped popularize the term. Manhattan, Kansas, a city of 53,000 people, was named by New York investors after the borough and is nicknamed the "Little Apple".
Manhattan is well known for its street parades, which celebrate a broad array of themes, including holidays, nationalities, human rights, and major league sports team championship victories. The majority of higher-profile parades in New York City are held in Manhattan. The primary orientation of the annual street parades is typically from north to south, marching along major avenues. The annual Macy's Thanksgiving Day Parade is the world's largest parade, beginning alongside Central Park and processing southward to the flagship Macy's Herald Square store; the parade is viewed on telecasts worldwide and draws millions of spectators in person. Other notable parades include the annual St. Patrick's Day Parade in March, the New York City Pride Parade in June, the Greenwich Village Halloween Parade in October, and numerous parades commemorating the independence days of many nations. Ticker-tape parades celebrating championships won by sports teams, as well as other heroic accomplishments, march northward along the Canyon of Heroes on Broadway from Bowling Green to City Hall Park in Lower Manhattan. New York Fashion Week, held at various locations in Manhattan, is a high-profile semiannual event featuring models displaying the latest wardrobes created by prominent fashion designers worldwide before those fashions proceed to the retail marketplace.
Manhattan is home to the NBA's New York Knicks and the NHL's New York Rangers, both of which play their home games at Madison Square Garden, the only major professional sports arena in the borough. The Garden was also home to the WNBA's New York Liberty through the 2017 season, but that team's primary home is now the Westchester County Center in White Plains, New York. The New York Jets proposed a West Side Stadium for their home field, but the proposal was eventually defeated in June 2005, and they now play at MetLife Stadium in East Rutherford, New Jersey.
Manhattan is the only borough in New York City that does not have a professional baseball franchise. The Bronx has the Yankees (American League) and Queens has the Mets (National League) of Major League Baseball. The Minor League Baseball Brooklyn Cyclones, affiliated with the Mets, play in Brooklyn, while the Staten Island Yankees, affiliated with the Yankees, play in Staten Island. However, three of the four major league baseball teams to play in New York City played in Manhattan. The original New York Giants baseball team played in the various incarnations of the Polo Grounds at 155th Street and Eighth Avenue from their inception in 1883—except for 1889, when they split their time between Jersey City and Staten Island, and when they played in Hilltop Park in 1911—until they headed to California with the Brooklyn Dodgers after the 1957 season. The New York Yankees began their franchise as the Highlanders, named for Hilltop Park, where they played from their creation in 1903 until 1912. The team moved to the Polo Grounds with the 1913 season, where they were officially christened the "New York Yankees", remaining there until they moved across the Harlem River in 1923 to Yankee Stadium. The New York Mets played in the Polo Grounds in 1962 and 1963, their first two seasons, before Shea Stadium was completed in 1964. After the Mets departed, the Polo Grounds was demolished in April 1964, replaced by public housing.
The first national college-level basketball championship, the National Invitation Tournament, was held in New York in 1938 and remains in the city. The New York Knicks started play in 1946 as one of the National Basketball Association's original teams, playing their first home games at the 69th Regiment Armory, before making Madison Square Garden their permanent home. The New York Liberty of the WNBA shared the Garden with the Knicks from their creation in 1997 as one of the league's original eight teams through the 2017 season, after which the team moved nearly all of its home schedule to White Plains in Westchester County. Rucker Park in Harlem is a playground court, famed for its "streetball" style of play, where many NBA athletes have played in the summer league.
Although both of New York City's football teams play today across the Hudson River in MetLife Stadium in East Rutherford, New Jersey, both teams started out playing in the Polo Grounds. The New York Giants played side-by-side with their baseball namesakes from the time they entered the National Football League in 1925, until crossing over to Yankee Stadium in 1956. The New York Jets, originally known as the "Titans of New York", started out in 1960 at the Polo Grounds, staying there for four seasons before joining the Mets in Queens at Shea Stadium in 1964.
The New York Rangers of the National Hockey League have played in the various locations of Madison Square Garden since the team's founding in the 1926–1927 season. The Rangers were predated by the New York Americans, who started play in the Garden the previous season, lasting until the team folded after the 1941–1942 NHL season, a season it played in the Garden as the "Brooklyn Americans".
The New York Cosmos of the North American Soccer League played their home games at Downing Stadium for two seasons, starting in 1974. The playing pitch and facilities at Downing Stadium were in unsatisfactory condition, however, and as the team's popularity grew, the Cosmos left for Yankee Stadium and then Giants Stadium. Downing Stadium was demolished in 2002 to make way for the $45 million, 4,754-seat Icahn Stadium, which includes an Olympic-standard 400-meter running track and, as part of Pelé's and the Cosmos' legacy, a FIFA-approved floodlit soccer pitch that hosts matches between the 48 youth teams of a Manhattan soccer club.
Since New York City's consolidation in 1898, Manhattan has been governed by the New York City Charter, which has provided for a strong mayor–council system since its revision in 1989. The centralized New York City government is responsible for public education, correctional institutions, libraries, public safety, recreational facilities, sanitation, water supply, and welfare services in Manhattan.
The office of Borough President was created in the consolidation of 1898 to balance centralization with local authority. Each borough president had a powerful administrative role derived from having a vote on the New York City Board of Estimate, which was responsible for creating and approving the city's budget and proposals for land use. In 1989, the Supreme Court of the United States declared the Board of Estimate unconstitutional because Brooklyn, the most populous borough, had no greater effective representation on the Board than Staten Island, the least populous borough, a violation of the Fourteenth Amendment's Equal Protection Clause pursuant to the high court's 1964 "one man, one vote" decision.
Since 1990, the largely powerless Borough President has acted as an advocate for the borough at the mayoral agencies, the City Council, the New York state government, and corporations. Manhattan's current Borough President is Gale Brewer, elected as a Democrat in November 2013 with 82.9% of the vote. Brewer replaced Scott Stringer, who went on to become New York City Comptroller.
Cyrus Vance Jr., a Democrat, has been the District Attorney of New York County since 2010. Manhattan has ten City Council members, the third largest contingent among the five boroughs. It also has twelve administrative districts, each served by a local Community Board. Community Boards are representative bodies that field complaints and serve as advocates for local residents.
As the host of the United Nations, the borough is home to the world's largest international consular corps, comprising 105 consulates, consulates general and honorary consulates. It is also the home of New York City Hall, the seat of New York City government housing the Mayor of New York City and the New York City Council. The mayor's staff and thirteen municipal agencies are located in the nearby Manhattan Municipal Building, completed in 1914, one of the largest governmental buildings in the world.
The Democratic Party holds most public offices. Registered Republicans are a minority in the borough, constituting 9.88% of the electorate; they exceed 20% of the electorate only in the neighborhoods of the Upper East Side and the Financial District. Democrats accounted for 68.41% of those registered to vote, while 17.94% of voters were unaffiliated.
No Republican has won the presidential election in Manhattan since 1924, when Calvin Coolidge won a plurality of the New York County vote over Democrat John W. Davis, 41.20%–39.55%. Warren G. Harding was the most recent Republican presidential candidate to win a majority of the Manhattan vote, with 59.22% of the 1920 vote. In the 2004 presidential election, Democrat John Kerry received 82.1% of the vote in Manhattan and Republican George W. Bush received 16.7%. The borough is the most important source of funding for presidential campaigns in the United States; in 2004, it was home to six of the top seven ZIP codes in the nation for political contributions. The top ZIP code, 10021 on the Upper East Side, generated the most money for all presidential candidates, including both Kerry and Bush, during the 2004 election.
In 2018, four Democrats represented Manhattan in the United States House of Representatives.
The United States Postal Service operates post offices in Manhattan. The James Farley Post Office at 421 Eighth Avenue in Midtown Manhattan, between 31st Street and 33rd Street, is New York City's main post office. Both the United States District Court for the Southern District of New York and United States Court of Appeals for the Second Circuit are located in Lower Manhattan's Foley Square, and the U.S. Attorney and other federal offices and agencies maintain locations in that area.
Starting in the mid-19th century, the United States became a magnet for immigrants seeking to escape poverty in their home countries. After arriving in New York, many new arrivals ended up living in squalor in the slums of the Five Points neighborhood, an area between Broadway and the Bowery, northeast of New York City Hall. By the 1820s, the area was home to many gambling dens and brothels, and was known as a dangerous place to go. In 1842, Charles Dickens visited the area and was appalled at the horrendous living conditions he had seen. The area was so notorious that it even caught the attention of Abraham Lincoln, who visited the area before his Cooper Union speech in 1860. The predominantly Irish Five Points Gang was one of the country's first major organized crime entities.
As Italian immigration grew in the early 20th century many joined ethnic gangs, including Al Capone, who got his start in crime with the Five Points Gang. The Mafia (also known as "Cosa Nostra") first developed in the mid-19th century in Sicily and spread to the East Coast of the United States during the late 19th century following waves of Sicilian and Southern Italian emigration. Lucky Luciano established Cosa Nostra in Manhattan, forming alliances with other criminal enterprises, including the Jewish mob, led by Meyer Lansky, the leading Jewish gangster of that period. From 1920–1933, Prohibition helped create a thriving black market in liquor, upon which the Mafia was quick to capitalize.
As in the whole of New York City, Manhattan experienced a sharp increase in crime during the 1960s and 1970s. Since 1990, crime in Manhattan has plummeted in all categories tracked by the CompStat profile. A borough that saw 503 murders in 1990 has seen a drop of nearly 88% to 62 in 2008 and has continued to decline since then. Robbery and burglary are down by more than 80% during the period, and auto theft has been reduced by more than 93%. In the seven major crime categories tracked by the system, overall crime has declined by more than 75% since 1990, and year-to-date statistics through May 2009 show continuing declines. Based on 2005 data, New York City has the lowest crime rate among the ten largest cities in the United States.
During Manhattan's early history, wood construction and poor access to water supplies left the city vulnerable to fires. In 1776, shortly after the Continental Army evacuated Manhattan and left it to the British, a massive fire broke out destroying one-third of the city and some 500 houses.
The rise of immigration near the turn of the 20th century left major portions of Manhattan, especially the Lower East Side, densely packed with recent arrivals, crammed into unhealthy and unsanitary housing. Tenements were usually five stories high, constructed on the then-typical lots, with "cockroach landlords" exploiting the new immigrants. By 1929, stricter fire codes and the increased use of elevators in residential buildings were the impetus behind a new housing code that effectively ended the tenement as a form of new construction, though many tenement buildings survive today on the East Side of the borough.
Manhattan offers a wide array of public and private housing options. There were 852,575 housing units in 2013 at an average density of 37,345 per square mile (14,419/km²). Only 20.3% of Manhattan residents lived in owner-occupied housing, the second-lowest rate of all counties in the nation, behind the Bronx. Although the city of New York has the highest average cost for rent in the United States, it simultaneously hosts a higher average income per capita; as a result, rent is a lower percentage of annual income than in several other American cities.
Manhattan's real estate market for luxury housing continues to be among the most expensive in the world, and Manhattan residential property continues to have the highest sale price per square foot in the United States. Manhattan's apartments cost $, compared to San Francisco housing at $, Boston housing at , and Los Angeles housing at $.
Manhattan is unique in the U.S. for its intense use of public transportation and low rate of private car ownership. While 88% of Americans nationwide drive to their jobs, and only 5% use public transport, mass transit is the dominant form of travel for residents of Manhattan, with 72% of borough residents using public transport to get to work and only 18% driving. According to the 2000 United States Census, 77.5% of Manhattan households do not own a car.
In 2008, Mayor Michael Bloomberg proposed a congestion pricing system to regulate entering Manhattan south of 60th Street. The state legislature rejected the proposal in June 2008.
The New York City Subway, the largest subway system in the world by number of stations, is the primary means of travel within the city, linking every borough except Staten Island. There are 151 subway stations in Manhattan, out of the stations. A second subway, the PATH system, connects six stations in Manhattan to northern New Jersey. Passengers pay fares with pay-per-ride MetroCards, which are valid on all city buses and subways, as well as on PATH trains. There are 7-day and 30-day MetroCards that allow unlimited trips on all subways (except PATH) and MTA bus routes (except for express buses). The PATH QuickCard is being phased out, having been replaced by the SmartLink. The MTA is testing "smart card" payment systems to replace the MetroCard. Commuter rail services operating to and from Manhattan are the Long Island Rail Road (LIRR), which connects Manhattan and other New York City boroughs to Long Island; the Metro-North Railroad, which connects Manhattan to Upstate New York and Southwestern Connecticut; and NJ Transit trains, which run to various points in New Jersey.
The US$11.1 billion East Side Access project, which will bring LIRR trains to Grand Central Terminal, is under construction and is scheduled to open in 2022; this project will create a new train tunnel beneath the East River, connecting the East Side of Manhattan with Long Island City, Queens. Four multi-billion-dollar projects were completed in the mid-2010s: the $1.4 billion Fulton Center in November 2014, the $2.4 billion 7 Subway Extension in September 2015, the $4 billion World Trade Center Transportation Hub in March 2016, and Phase 1 of the $4.5 billion Second Avenue Subway in January 2017.
MTA New York City Transit offers a wide variety of local buses within Manhattan under the brand New York City Bus. An extensive network of express bus routes serves commuters and other travelers heading into Manhattan. The bus system served 784 million passengers citywide in 2011, the highest bus ridership in the nation and more than double that of the second-place Los Angeles system.
The Roosevelt Island Tramway, one of two commuter cable car systems in North America, whisks commuters between Roosevelt Island and Manhattan in less than five minutes, and has been serving the island since 1978. (The other system in North America is the Portland Aerial Tram.)
The Staten Island Ferry, which runs 24 hours a day, 365 days a year, annually carries over 21 million passengers on the run between Manhattan and Staten Island. Each weekday, five vessels transport about 65,000 passengers on 109 boat trips. The ferry has been fare-free since 1997, when the then-50-cent fare was eliminated. In February 2015, Mayor Bill de Blasio announced that the city government would begin NYC Ferry to extend ferry transportation to traditionally underserved communities in the city. The first routes of NYC Ferry opened in 2017. All of the system's routes have termini in Manhattan, and the Lower East Side and Soundview routes also have intermediate stops on the East River.
The metro region's commuter rail lines converge at Penn Station and Grand Central Terminal, on the west and east sides of Midtown Manhattan, respectively. They are the two busiest rail stations in the United States. About one-third of users of mass transit and two-thirds of railway passengers in the country live in New York and its suburbs. Amtrak provides inter-city passenger rail service from Penn Station to Boston, Philadelphia, Baltimore, and Washington, D.C.; Upstate New York and New England; cross-Canadian border service to Toronto and Montreal; and destinations in the Southern and Midwestern United States.
New York's iconic yellow taxicabs, which number 13,087 city-wide and must have the requisite medallion authorizing the pickup of street hails, are ubiquitous in the borough. Various private transportation network companies provide significant competition for cab drivers in Manhattan.
Manhattan also has tens of thousands of bicycle commuters.
The Commissioners' Plan of 1811 called for twelve numbered avenues running north and south roughly parallel to the shore of the Hudson River, each 100 feet (30 m) wide, with First Avenue on the east side and Twelfth Avenue on the west side. There are several intermittent avenues east of First Avenue, including four additional lettered avenues running from Avenue A eastward to Avenue D in an area now known as Alphabet City in Manhattan's East Village. The numbered streets in Manhattan run east-west and are generally 60 feet (18 m) wide, with about 200 feet (61 m) between each pair of streets. With each combined street and block adding up to about 260 feet (79 m), there are almost exactly 20 blocks per mile. The typical block in Manhattan is 250 by 600 feet (76 by 183 m).
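The blocks-per-mile figure follows from simple arithmetic, sketched below; the 260-foot street-to-street spacing is the commonly cited round number for the grid, not an official survey value:

```python
# Rough grid arithmetic for north-south travel in Manhattan. The 260-foot
# street-to-street spacing (roadway plus block) is an assumed approximation.
FEET_PER_MILE = 5280
STREET_TO_STREET_FT = 260

blocks_per_mile = FEET_PER_MILE / STREET_TO_STREET_FT
print(round(blocks_per_mile, 1))  # prints 20.3 -- almost exactly 20 blocks per mile
```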
According to the original Commissioners' Plan, there were 155 numbered crosstown streets, but the grid was later extended up to the northernmost corner of Manhattan, where the last numbered street is 220th Street. The numbering system continues even in the Bronx, north of Manhattan, despite the grid plan being less regular in that borough; its last numbered street is 263rd Street. Fifteen crosstown streets were designated as 100 feet (30 m) wide, including 34th, 42nd, 57th and 125th Streets, which became some of the borough's most significant transportation and shopping venues. Broadway is the most notable of many exceptions to the grid, starting at Bowling Green in Lower Manhattan and continuing north into the Bronx at Manhattan's northern tip. In much of Midtown Manhattan, Broadway runs at a diagonal to the grid, creating major named intersections at Union Square (Park Avenue South/Fourth Avenue and 14th Street), Madison Square (Fifth Avenue and 23rd Street), Herald Square (Sixth Avenue and 34th Street), Times Square (Seventh Avenue and 42nd Street), and Columbus Circle (Eighth Avenue/Central Park West and 59th Street).
"Crosstown traffic" refers primarily to vehicular traffic between Manhattan's East Side and West Side. The trip is notoriously frustrating for drivers because of heavy congestion on the narrow local streets laid out by the Commissioners' Plan of 1811; the absence of express roads other than the Trans-Manhattan Expressway at the far north end of Manhattan Island; and the very limited crosstown automobile travel permitted within Central Park, which was prohibited entirely south of 72nd Street within the park beginning in 2018 to improve pedestrian safety. Proposals in the mid-1900s to build express roads through the city's densest neighborhoods, namely the Mid-Manhattan Expressway and Lower Manhattan Expressway, did not go forward. Unlike the rest of the United States, New York State prohibits right or left turns on red in cities with a population greater than one million, to reduce traffic collisions and increase pedestrian safety. In New York City, therefore, all turns at red lights are illegal unless a sign permitting such maneuvers is present, significantly shaping traffic patterns in Manhattan.
Another consequence of the strict grid plan of most of Manhattan, and the grid's skew of approximately 28.9 degrees, is a phenomenon sometimes referred to as Manhattanhenge (by analogy with Stonehenge). On separate occasions in late May and early July, the sunset is aligned with the street grid lines, with the result that the sun is visible at or near the western horizon from street level. A similar phenomenon occurs with the sunrise in January and December.
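The alignment dates can be estimated from the grid's skew with basic spherical astronomy. The sketch below is a simplification: it ignores atmospheric refraction and the sun's angular size (both of which shift the real dates slightly), and the latitude and skew values used are assumptions.

```python
import math

LATITUDE = 40.78   # approximate latitude of Manhattan, degrees (assumed)
GRID_SKEW = 28.9   # grid's rotation east of true north, degrees

def alignment_declination(lat=LATITUDE, skew=GRID_SKEW):
    """Solar declination at which geometric sunset lines up with the streets."""
    azimuth = 270.0 + skew  # direction a cross street points at its western end
    # At altitude 0 (geometric sunset): sin(declination) = cos(azimuth) * cos(latitude)
    sin_dec = math.cos(math.radians(azimuth)) * math.cos(math.radians(lat))
    return math.degrees(math.asin(sin_dec))

def solar_declination(day):
    """Common approximation of the sun's declination, good to about a degree."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day + 10)))

def alignment_days():
    """Days of the year (1-365) whose declination comes closest to the target,
    one before the June solstice and one after."""
    target = alignment_declination()
    before = min(range(1, 172), key=lambda d: abs(solar_declination(d) - target))
    after = min(range(172, 366), key=lambda d: abs(solar_declination(d) - target))
    return before, after
```

With these inputs the required declination comes out near 21.5 degrees, which the approximation places in late May and mid-July, consistent with the observed sunset alignments.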
The FDR Drive and Harlem River Drive, both designed by controversial New York master planner Robert Moses, comprise a single, long limited-access parkway skirting the east side of Manhattan along the East River and Harlem River south of Dyckman Street. The Henry Hudson Parkway is the corresponding parkway on the West Side north of 57th Street.
Being primarily an island, Manhattan is linked to New York City's outer boroughs by numerous bridges of various sizes. Manhattan has fixed highway connections with New Jersey to its west by way of the George Washington Bridge, the Holland Tunnel, and the Lincoln Tunnel, and with three of the four other New York City boroughs: the Bronx to the northeast, and Brooklyn and Queens (both on Long Island) to the east and south. Its only direct connection with the fifth New York City borough, Staten Island, is the Staten Island Ferry across New York Harbor, which is free of charge. The ferry terminal is located near Battery Park at Manhattan's southern tip. It is also possible to travel on land to Staten Island by way of Brooklyn, via the Verrazzano-Narrows Bridge.
The George Washington Bridge, the world's busiest motor vehicle bridge, connects Washington Heights, in Upper Manhattan, to Bergen County, in New Jersey. There are numerous bridges to the Bronx across the Harlem River, and five (listed north to south)—the Triborough (known officially as the Robert F. Kennedy Bridge), Ed Koch Queensboro (also known as the 59th Street Bridge), Williamsburg, Manhattan, and Brooklyn Bridges—that cross the East River to connect Manhattan to Long Island.
Several tunnels also link Manhattan Island to New York City's outer boroughs and New Jersey. The Lincoln Tunnel, which carries 120,000 vehicles a day under the Hudson River between New Jersey and Midtown Manhattan, is the busiest vehicular tunnel in the world. The tunnel was built instead of a bridge to allow unfettered passage of large passenger and cargo ships that sail through New York Harbor and up the Hudson River to Manhattan's piers. The Holland Tunnel, connecting Lower Manhattan to Jersey City, New Jersey, was the world's first mechanically ventilated vehicular tunnel. The Queens–Midtown Tunnel, built to relieve congestion on the bridges connecting Manhattan with Queens and Brooklyn, was the largest non-federal project of its time when it was completed in 1940; President Franklin D. Roosevelt was the first person to drive through it. The Brooklyn–Battery Tunnel runs underneath Battery Park and connects the Financial District at the southern tip of Manhattan to Red Hook in Brooklyn.
Several ferry services operate between New Jersey and Manhattan. These ferries mainly serve midtown (at W. 39th St.), Battery Park City (WFC at Brookfield Place), and Wall Street (Pier 11).
Manhattan has three public heliports: the East 34th Street Heliport (also known as the Atlantic Metroport) at East 34th Street, owned by New York City and run by the New York City Economic Development Corporation (NYCEDC); the Port Authority Downtown Manhattan/Wall Street Heliport, owned by the Port Authority of New York and New Jersey and run by the NYCEDC; and the West 30th Street Heliport, a privately owned heliport that is owned by the Hudson River Park Trust. US Helicopter offered regularly scheduled helicopter service connecting the Downtown Manhattan Heliport with John F. Kennedy International Airport in Queens and Newark Liberty International Airport in New Jersey, before going out of business in 2009.
Gas and electric service is provided by Consolidated Edison to all of Manhattan. Con Edison's electric business traces its roots back to Thomas Edison's Edison Electric Illuminating Company, the first investor-owned electric utility. The company started service on September 4, 1882, using one generator at Edison's Pearl Street Station to provide 110 volts direct current (DC) to 59 customers with 800 light bulbs, in a one-square-mile area of Lower Manhattan. Con Edison operates the world's largest district steam system, an extensive network of steam pipes providing steam for heating, hot water, and air conditioning to some 1,800 Manhattan customers. Cable service is provided by Time Warner Cable and telephone service is provided by Verizon Communications, although AT&T is available as well.
The natural gas supply delivered to the borough doubled when a new gas pipeline opened on November 1, 2013.
The New York City Department of Sanitation is responsible for garbage removal. The bulk of the city's trash is ultimately disposed of at mega-dumps in Pennsylvania, Virginia, South Carolina and Ohio (via transfer stations in New Jersey, Brooklyn and Queens), a practice in place since the 2001 closure of the Fresh Kills Landfill on Staten Island. A small amount of trash processed at transfer sites in New Jersey is sometimes incinerated at waste-to-energy facilities. Like New York City, New Jersey and much of the rest of Greater New York rely on exporting their trash to far-flung areas.
New York City has the country's largest fleet of clean-air diesel-hybrid and compressed natural gas buses, which also operates in Manhattan. It also has some of the first hybrid taxis, most of which operate in Manhattan.
There are many hospitals in Manhattan, including two of the 25 largest in the United States (as of 2017).
New York City is supplied with drinking water by the protected Catskill Mountains watershed. As a result of the watershed's integrity and undisturbed natural water filtration system, New York is one of only four major cities in the United States the majority of whose drinking water is pure enough not to require purification by water treatment plants. Manhattan, surrounded by two brackish rivers, historically had a limited supply of fresh water. To satisfy its growing population, the City of New York acquired land in adjacent Westchester County and constructed the old Croton Aqueduct system there, which went into service in 1842 and was superseded by the new Croton Aqueduct, which opened in 1890. The Croton system was interrupted in 2008 for the construction of a US$3.2 billion water purification plant in the Croton Watershed north of the city, expected to supply an estimated 290 million gallons daily when completed, an addition of roughly 20% to the city's water availability, with this addition going to Manhattan and the Bronx. Water comes to Manhattan through Tunnels 1 and 2, completed in 1917 and 1935 respectively, and in the future through Tunnel No. 3, begun in 1970.
The address algorithm of Manhattan refers to the formulas used to estimate the closest east–west cross street for building numbers on north–south avenues. It is commonly noted in telephone directories, New York City travel guides, and MTA Manhattan bus maps.
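As an illustration of how such a formula works: the classic rule of thumb is to drop the last digit of the building number, halve the result, and add a key number specific to the avenue. The two key numbers below are commonly cited examples and should be verified against a published table; real tables cover every avenue and include exceptions (Fifth Avenue, for instance, uses different keys by address range).

```python
# Key numbers here are illustrative values taken from commonly published
# guides; treat them as assumptions rather than authoritative data.
AVENUE_KEYS = {
    "Third Avenue": 10,
    "Lexington Avenue": 22,
}

def nearest_cross_street(house_number, avenue, keys=AVENUE_KEYS):
    """Drop the last digit, halve, then add the avenue's key number."""
    if avenue not in keys:
        raise ValueError(f"no key number on file for {avenue!r}")
    return house_number // 10 // 2 + keys[avenue]

# Example: 350 Lexington Avenue -> 35 -> 17 -> 17 + 22 = 39th Street
```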
Lynn Margulis
Lynn Margulis (born Lynn Petra Alexander; March 5, 1938 – November 22, 2011) was an American evolutionary theorist, biologist, science author, educator, and science popularizer, and was the primary modern proponent for the significance of symbiosis in evolution. Historian Jan Sapp has said that "Lynn Margulis's name is as synonymous with symbiosis as Charles Darwin's is with evolution." In particular, Margulis transformed and fundamentally framed current understanding of the evolution of cells with nuclei – an event Ernst Mayr called "perhaps the most important and dramatic event in the history of life" – by proposing it to have been the result of symbiotic mergers of bacteria. Margulis was also the co-developer of the Gaia hypothesis with the British chemist James Lovelock, proposing that the Earth functions as a single self-regulating system, and was the principal defender and promulgator of the five kingdom classification of Robert Whittaker.
Throughout her career, Margulis' work could arouse intense objection (one grant application elicited the response, "Your research is crap, do not bother to apply again"), and her formative paper, "On the Origin of Mitosing Cells", appeared in 1967 after being rejected by about fifteen journals. Still a junior faculty member at Boston University at the time, her theory that cell organelles such as mitochondria and chloroplasts were once independent bacteria was largely ignored for another decade, becoming widely accepted only after it was powerfully substantiated through genetic evidence. Margulis was elected a member of the US National Academy of Sciences in 1983. President Bill Clinton presented her the National Medal of Science in 1999. The Linnean Society of London awarded her the Darwin-Wallace Medal in 2008.
Called "Science's Unruly Earth Mother", a "vindicated heretic", or a scientific "rebel", Margulis was a strong critic of neo-Darwinism. Her position sparked lifelong debate with leading neo-Darwinian biologists, including Richard Dawkins, George C. Williams, and John Maynard Smith. Margulis' work on symbiosis and her endosymbiotic theory had important predecessors, going back to the mid-19th century – notably Andreas Franz Wilhelm Schimper, Konstantin Mereschkowski, Boris Kozo-Polyansky (1890–1957), and Ivan Wallin – and Margulis not only promoted greater recognition for their contributions, but personally oversaw the first English translation of Kozo-Polyansky's "Symbiogenesis: A New Principle of Evolution", which appeared the year before her death. Many of her major works, particularly those intended for a general readership, were collaboratively written with her son Dorion Sagan.
In 2002, "Discover" magazine recognized Margulis as one of the 50 most important women in science.
Lynn Margulis was born in Chicago, to a Jewish, Zionist family. Her parents were Morris Alexander and Leona Wise Alexander. She was the eldest of four daughters. Her father was an attorney who also ran a company that made road paints. Her mother operated a travel agency. She entered the Hyde Park Academy High School in 1952, describing herself as a bad student who frequently had to stand in the corner.
A precocious child, she was accepted at the University of Chicago Laboratory Schools at the age of fifteen. In 1957, at age 19, she earned a BA from the University of Chicago in Liberal Arts. She joined the University of Wisconsin to study biology under Hans Ris and Walter Plaut, her supervisor, and graduated in 1960 with an MS in genetics and zoology. (Her first publication was with Plaut, on the genetics of "Euglena", published in 1958 in the "Journal of Protozoology".) She then pursued research at the University of California, Berkeley, under the zoologist Max Alfert. Before she could complete her dissertation, she was offered a research associateship and then a lectureship at Brandeis University in Massachusetts in 1964. It was while working there that she obtained her PhD from the University of California, Berkeley, in 1965. Her thesis was "An Unusual Pattern of Thymidine Incorporation in "Euglena"." In 1966 she moved to Boston University, where she taught biology for twenty-two years. She was initially an Adjunct Assistant Professor, then was appointed Assistant Professor in 1967. She was promoted to Associate Professor in 1971, to full Professor in 1977, and to University Professor in 1986. In 1988 she was appointed Distinguished Professor of Botany at the University of Massachusetts at Amherst, and she became Distinguished Professor of Biology in 1993. In 1997 she transferred to the Department of Geosciences at Amherst to become Distinguished Professor of Geosciences "with great delight", the post she held until her death.
Margulis married astronomer Carl Sagan in 1957 soon after she got her bachelor's degree. Sagan was then a graduate student in physics at the University of Chicago. Their marriage ended in 1964, just before she completed her PhD. They had two sons, Dorion Sagan, who later became a popular science writer and her collaborator, and Jeremy Sagan, software developer and founder of Sagan Technology. In 1967, she married Thomas N. Margulis, a crystallographer. They had a son named Zachary Margulis-Ohnuma, a New York City criminal defense lawyer, and a daughter Jennifer Margulis, teacher and author. They divorced in 1980. She commented, "I quit my job as a wife twice," and, "it’s not humanly possible to be a good wife, a good mother, and a first-class scientist. No one can do it — something has to go." In the 2000s she had a relationship with fellow biologist Ricardo Guerrero. Her sister Joan Alexander married Nobel Laureate Sheldon Lee Glashow; another sister, Sharon, married mathematician Daniel Kleitman.
She was a religious agnostic, and a staunch evolutionist. But she rejected the modern evolutionary synthesis, and said: "I remember waking up one day with an epiphanous revelation: I am not a neo-Darwinist! I recalled an earlier experience, when I realized that I wasn't a humanistic Jew. Although I greatly admire Darwin's contributions and agree with most of his theoretical analysis and I am a Darwinist, I am not a neo-Darwinist." She argued that "Natural selection eliminates and maybe maintains, but it doesn't create", and maintained that symbiosis was the major driver of evolutionary change.
In 2013, Margulis was listed as having been a member of the Advisory Council of the National Center for Science Education.
Margulis died on November 22, 2011, at home in Amherst, Massachusetts, five days after suffering a hemorrhagic stroke. In accordance with her wishes, she was cremated and her ashes were scattered in her favorite research areas near her home.
In 1966, as a young faculty member at Boston University, Margulis wrote a theoretical paper titled "On the Origin of Mitosing Cells". The paper, however, was "rejected by about fifteen scientific journals," she recalled. It was finally accepted by the "Journal of Theoretical Biology" and is considered today a landmark in modern endosymbiotic theory. Weathering constant criticism of her ideas for decades, Margulis became famous for her tenacity in pushing her theory forward despite the opposition she faced at the time. The descent of mitochondria from bacteria and of chloroplasts from cyanobacteria was experimentally demonstrated in 1978 by Robert Schwartz and Margaret Dayhoff. This formed the first experimental evidence for the symbiogenesis theory. The endosymbiotic theory of organelle genesis became widely accepted in the early 1980s, after the genetic material of mitochondria and chloroplasts was found to differ significantly from the nuclear DNA of the host cell.
In 1995, English evolutionary biologist Richard Dawkins had this to say about Lynn Margulis and her work:
I greatly admire Lynn Margulis's sheer courage and stamina in sticking by the endosymbiosis theory, and carrying it through from being an unorthodoxy to an orthodoxy. I'm referring to the theory that the eukaryotic cell is a symbiotic union of primitive prokaryotic cells. This is one of the great achievements of twentieth-century evolutionary biology, and I greatly admire her for it.
Margulis opposed competition-oriented views of evolution, stressing the importance of symbiotic or cooperative relationships between species.
She later formulated a theory that proposed symbiotic relationships between organisms of different phyla or kingdoms as the driving force of evolution, and explained genetic variation as occurring mainly through transfer of nuclear information between bacterial cells or viruses and eukaryotic cells. Her organelle genesis ideas are now widely accepted, but the proposal that symbiotic relationships explain most genetic variation is still something of a fringe idea.
Margulis also held a negative view of certain interpretations of Neo-Darwinism that she felt were excessively focused on competition between organisms, as she believed that history would ultimately judge them as comprising "a minor twentieth-century religious sect within the sprawling religious persuasion of Anglo-Saxon Biology."
She wrote that proponents of the standard theory "wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him ... Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk."
Margulis initially sought out the advice of Lovelock for her own research: she explained that, "In the early seventies, I was trying to align bacteria by their metabolic pathways. I noticed that all kinds of bacteria produced gases. Oxygen, hydrogen sulfide, carbon dioxide, nitrogen, ammonia—more than thirty different gases are given off by the bacteria whose evolutionary history I was keen to reconstruct. Why did every scientist I asked believe that atmospheric oxygen was a biological product but the other atmospheric gases—nitrogen, methane, sulfur, and so on—were not? 'Go talk to Lovelock,' at least four different scientists suggested. Lovelock believed that the gases in the atmosphere were biological."
Margulis met with Lovelock, who explained his Gaia hypothesis to her, and very soon they began an intense collaborative effort on the concept. One of the earliest significant publications on Gaia was a 1974 paper co-authored by Lovelock and Margulis, which succinctly defined the hypothesis as follows: "The notion of the biosphere as an active adaptive control system able to maintain the Earth in homeostasis we are calling the 'Gaia hypothesis.'"
Like other early presentations of Lovelock's idea, the 1974 Lovelock–Margulis paper seemed to give living organisms complete agency in creating planetary self-regulation; later, as the idea matured, this planetary-scale self-regulation came to be understood as an emergent property of the Earth system, life and its physical environment taken together. When climatologist Stephen Schneider convened the 1989 American Geophysical Union Chapman Conference on Gaia, James Kirchner introduced the distinction between "strong Gaia" and "weak Gaia", after which Margulis was sometimes, incorrectly, associated with "weak Gaia". Her essay "Gaia Is a Tough Bitch" dates from 1995 and stated her own distinction from Lovelock as she saw it: primarily, she did not like the metaphor of Earth as a single organism because, she said, "No organism eats its own waste." In her 1998 book "Symbiotic Planet", Margulis explored the relationship between Gaia and her work on symbiosis.
Since 1969, life on Earth had been classified into five kingdoms, as introduced by Robert Whittaker. Margulis became both the scheme's most important supporter and one of its critics: while endorsing it broadly, she was the first to recognize the limitations of Whittaker's classification of microbes. Later discoveries of new organisms, such as the archaea, and the emergence of molecular taxonomy challenged the concept, and by the mid-2000s most scientists agreed that there are more than five kingdoms. Margulis nonetheless became the most important defender of the five-kingdom classification. She rejected the three-domain system introduced by Carl Woese in 1990, which gained wide acceptance, and instead introduced a modified classification by which all life forms, including the newly discovered, could be integrated into the classical five kingdoms. According to her scheme, the main problem group, the archaea, falls under the kingdom Prokaryotae alongside bacteria (in contrast to the three-domain system, which treats archaea as a higher taxon than kingdom, or the six-kingdom system, which holds that it is a separate kingdom). Her concept is given in detail in her book "Five Kingdoms", written with Karlene V. Schwartz. It has been suggested that it is mainly because of Margulis that the five-kingdom system survives.
It has been suggested that the initial rejection of Margulis' work on the endosymbiotic theory, and the controversial nature of it as well as of Gaia theory, made her identify throughout her career with scientific mavericks, outsiders and unaccepted theories generally. In the last decade of her life, while key components of her life's work began to be understood as fundamental to a modern scientific viewpoint – the widespread adoption of Earth System Science and the incorporation of key parts of endosymbiotic theory into biology curricula worldwide – Margulis if anything became more embroiled in controversy, not less. Journalist John Wilson explained this by saying that Lynn Margulis "defined herself by oppositional science," and in the commemorative collection of essays "Lynn Margulis: The Life and Legacy of a Scientific Rebel", commentators again and again depict her as a modern embodiment of the "scientific rebel", akin to Freeman Dyson's 1995 essay, "The Scientist as Rebel", a tradition Dyson saw embodied in Benjamin Franklin, and which he believed to be essential to good science. At times, Margulis could make highly provocative comments in interviews that appeared to support her most strident critics' condemnation. The following describes three of these controversies.
In 2009, via a then-standard publication process known as "communicated submission" (which bypassed traditional peer review), she was instrumental in getting the "Proceedings of the National Academy of Sciences" ("PNAS") to publish a paper by Donald I. Williamson rejecting "the Darwinian assumption that larvae and their adults evolved from a single common ancestor." Williamson's paper provoked immediate response from the scientific community, including a countering paper in "PNAS". Conrad Labandeira of the Smithsonian National Museum of Natural History said, "If I was reviewing [Williamson's paper] I would probably opt to reject it," he says, "but I'm not saying it's a bad thing that this is published. What it may do is broaden the discussion on how metamorphosis works and ... [on] ... the origin of these very radical life cycles." But Duke University insect developmental biologist Fred Nijhout said that the paper was better suited for the "National Enquirer" than the National Academy. In September it was announced that "PNAS" would eliminate communicated submissions in July 2010. "PNAS" stated that the decision had nothing to do with the Williamson controversy.
In 2009 Margulis and seven others authored a position paper concerning research on the viability of round body forms of some spirochetes, "Syphilis, Lyme disease, & AIDS: Resurgence of 'the great imitator'?", which states that, "Detailed research that correlates life histories of symbiotic spirochetes to changes in the immune system of associated vertebrates is sorely needed," and urging the "reinvestigation of the natural history of mammalian, tick-borne, and venereal transmission of spirochetes in relation to impairment of the human immune system." The paper went on to suggest "that the possible direct causal involvement of spirochetes and their round bodies to symptoms of immune deficiency be carefully and vigorously investigated".
In a "Discover Magazine" interview published less than six months before her death, Margulis explained to writer Dick Teresi her reason for interest in the topic of the 2009 "AIDS" paper: "I'm interested in spirochetes only because of our ancestry. I'm not interested in the diseases," and stated that she had called them "symbionts" because both the spirochete which causes syphilis ("Treponema") and the spirochete which causes Lyme disease ("Borrelia") only retain about 20% of the genes they would need to live freely, outside of their human hosts.
However, in the "Discover Magazine" interview Margulis said that "the set of symptoms, or syndrome, presented by syphilitics overlaps completely with another syndrome: AIDS," and also noted that Kary Mullis said that "he went looking for a reference substantiating that HIV causes AIDS and discovered, 'There is no such document.' "
This provoked a widespread supposition that Margulis had been an "AIDS denialist." Notably, Jerry Coyne reacted on his "Why Evolution is True" blog against what he interpreted as Margulis' belief "that AIDS is really syphilis, not viral in origin at all." Seth Kalichman, a social psychologist who studies behavioral and social aspects of AIDS, cited her 2009 paper as an example of AIDS denialism "flourishing", and asserted that her "endorsement of HIV/AIDS denialism defies understanding."
Margulis argued that the September 11 attacks were a "false-flag operation, which has been used to justify the wars in Afghanistan and Iraq as well as unprecedented assaults on ... civil liberties." She claimed that there was "overwhelming evidence that the three buildings [of the World Trade Center] collapsed by controlled demolition."
Grand Teton National Park
Grand Teton National Park is an American national park in northwestern Wyoming. At approximately 310,000 acres (1,300 km2), the park includes the major peaks of the Teton Range as well as most of the northern sections of the valley known as Jackson Hole. Grand Teton National Park is only about 10 miles (16 km) south of Yellowstone National Park, to which it is connected by the National Park Service-managed John D. Rockefeller, Jr. Memorial Parkway. Along with surrounding national forests, these three protected areas constitute the almost 18,000,000-acre (7,300,000 ha) Greater Yellowstone Ecosystem, one of the world's largest intact mid-latitude temperate ecosystems.
The human history of the Grand Teton region dates back at least 11,000 years, when the first nomadic hunter-gatherer Paleo-Indians began migrating into the region during warmer months pursuing food and supplies. In the early 19th century, the first white explorers encountered the eastern Shoshone natives. Between 1810 and 1840, the region attracted fur trading companies that vied for control of the lucrative beaver pelt trade. U.S. Government expeditions to the region commenced in the mid-19th century as an offshoot of exploration in Yellowstone, with the first permanent white settlers in Jackson Hole arriving in the 1880s.
Efforts to preserve the region as a national park began in the late 19th century, and in 1929 Grand Teton National Park was established, protecting the Teton Range's major peaks. The valley of Jackson Hole remained in private ownership until the 1930s, when conservationists led by John D. Rockefeller, Jr. began purchasing land in Jackson Hole to be added to the existing national park. Against public opinion and with repeated Congressional efforts to repeal the measures, much of Jackson Hole was set aside for protection as Jackson Hole National Monument in 1943. The monument was abolished in 1950 and most of the monument land was added to Grand Teton National Park.
Grand Teton National Park is named for Grand Teton, the tallest mountain in the Teton Range. The naming of the mountains is attributed to early 19th-century French-speaking trappers—"les trois tétons" (the three teats) was later anglicized and shortened to "Tetons". At 13,775 feet (4,199 m), Grand Teton rises abruptly more than 7,000 feet (2,100 m) above Jackson Hole, almost 850 feet (260 m) higher than Mount Owen, the second-highest summit in the range. The park has numerous lakes, including Jackson Lake, as well as streams of varying length and the upper main stem of the Snake River. Though in a state of recession, a dozen small glaciers persist at the higher elevations near the highest peaks in the range. Some of the rocks in the park are the oldest found in any American national park and have been dated at nearly 2.7 billion years.
Grand Teton National Park is an almost pristine ecosystem and the same species of flora and fauna that have existed since prehistoric times can still be found there. More than 1,000 species of vascular plants, dozens of species of mammals, 300 species of birds, more than a dozen fish species and a few species of reptiles and amphibians inhabit the park. Due to various changes in the ecosystem, some of them human-induced, efforts have been made to provide enhanced protection to some species of native fish and the increasingly threatened whitebark pine.
Grand Teton National Park is a popular destination for mountaineering, hiking, fishing and other forms of recreation. There are more than 1,000 drive-in campsites and over 200 miles (320 km) of hiking trails that provide access to backcountry camping areas. Noted for world-renowned trout fishing, the park is one of the few places to catch Snake River fine-spotted cutthroat trout. Grand Teton has several National Park Service-run visitor centers, and privately operated concessions for motels, lodges, gas stations and marinas.
Paleo-Indian presence in what is now Grand Teton National Park dates back more than 11,000 years. The climate of the Jackson Hole valley at that time was colder and more alpine than the semi-arid climate found today, and the first humans were migratory hunter-gatherers who spent the summer months in Jackson Hole and wintered in the valleys west of the Teton Range. Along the shores of Jackson Lake, fire pits, tools and what are thought to have been fishing weights have been discovered. One of the tools found is of a type associated with the Clovis culture, and tools from this cultural period date back at least 11,500 years. Some of the tools are made of obsidian which chemical analysis indicates came from sources near present-day Teton Pass, south of Grand Teton National Park. Though obsidian was also available north of Jackson Hole, virtually all the obsidian spear points found are from a source to the south, indicating that the main seasonal migratory route for the Paleo-Indians ran from this direction. Elk, which winter on the National Elk Refuge at the southern end of Jackson Hole and migrate northwest into higher elevations during spring and summer, follow a similar pattern to this day. From 11,000 to about 500 years ago, there is little evidence of change in the migratory patterns amongst the Native American groups in the region and no evidence of any permanent human settlement.
When white American explorers first entered the region in the first decade of the 19th century, they encountered the eastern tribes of the Shoshone people. Most of the Shoshone living in the mountain vastness of the greater Yellowstone region continued to travel on foot, while other groups of Shoshone residing at lower elevations made limited use of horses. The mountain-dwelling Shoshone were known as "Sheep-eaters", or "Tukudika" as they referred to themselves, since a staple of their diet was bighorn sheep. The Shoshone continued to follow the same migratory pattern as their predecessors and have been documented as having a close spiritual relationship with the Teton Range. A number of stone enclosures on some of the peaks, including one on the upper slopes of Grand Teton (known simply as "The Enclosure"), are thought to have been used by Shoshone during vision quests. The Teton and Yellowstone region Shoshone relocated to the Wind River Indian Reservation after it was established in 1868. The reservation is situated southeast of Jackson Hole on land that was selected by Chief Washakie.
The Lewis and Clark Expedition (1804–1806) passed well north of the Grand Teton region. During their return trip from the Pacific Ocean, expedition member John Colter was given an early discharge so he could join two fur trappers who were heading west in search of beaver pelts. Colter was later hired by Manuel Lisa to lead fur trappers and to explore the region around the Yellowstone River. During the winter of 1807/08 Colter passed through Jackson Hole and was the first Caucasian to see the Teton Range. Lewis and Clark expedition co-leader William Clark produced a map based on the previous expedition and included the explorations of John Colter in 1807, apparently based on discussions between Clark and Colter when the two met in St. Louis, Missouri in 1810. Another map attributed to William Clark indicates John Colter entered Jackson Hole from the northeast, crossing the Continental Divide at either Togwotee Pass or Union Pass and left the region after crossing Teton Pass, following the well established Native American trails. In 1931, the Colter Stone, a rock carved in the shape of a head with the inscription "John Colter" on one side and the year "1808" on the other, was discovered in a field in Tetonia, Idaho, which is west of Teton Pass. The Colter Stone has not been authenticated to have been created by John Colter and may have been the work of later expeditions to the region.
John Colter is widely considered the first mountain man and, like those who came to the Jackson Hole region over the next 30 years, he was there primarily for the profitable fur trapping; the region was rich with the highly sought-after pelts of beaver and other fur-bearing animals. Between 1810 and 1812, the Astorians traveled through Jackson Hole, crossing Teton Pass as they headed east in 1812. After 1810, American and British fur trading companies were in competition for control of the North American fur trade, and American sovereignty over the region was not secured until the signing of the Oregon Treaty in 1846. One party employed by the British North West Company and led by explorer Donald Mackenzie entered Jackson Hole from the west in 1818 or 1819. The Tetons, as well as the valley west of the Teton Range known today as Pierre's Hole, may have been named by French-speaking Iroquois or French Canadian trappers who were part of Mackenzie's party. Earlier parties had referred to the most prominent peaks of the Teton Range as the Pilot Knobs. The French trappers' "les trois tétons" (the three breasts) was later shortened to the Tetons.
Formed in the mid-1820s, the Rocky Mountain Fur Company partnership included Jedediah Smith, William Sublette and David Edward Jackson, or "Davey Jackson". Jackson oversaw the trapping operations in the Teton region between 1826 and 1830. Sublette named the valley east of the Teton Range "Jackson's Hole" (later simply Jackson Hole) for Davey Jackson. As the demand for beaver fur declined and the various regions of the American West became depleted of beaver due to overtrapping, the American fur trading companies folded; however, individual mountain men continued to trap beaver in the region until about 1840. From the mid-1840s until 1860, Jackson Hole and the Teton Range were generally devoid of all but the small populations of Native American tribes that had already been there. Most overland migration routes, such as the Oregon and Mormon Trails, crossed over South Pass, well to the south of the Teton Range, and Caucasian influence in the Teton region was minimal until the U.S. Government commenced organized explorations.
The first U.S. Government sponsored expedition to enter Jackson Hole was the 1859–60 Raynolds Expedition. Led by U.S. Army Captain William F. Raynolds and guided by mountain man Jim Bridger, it included naturalist F. V. Hayden, who later led other expeditions to the region. The expedition had been charged with exploring the Yellowstone region, but encountered difficulties crossing mountain passes due to snow. Bridger ended up guiding the expedition south over Union Pass then following the Gros Ventre River drainage to the Snake River and leaving the region over Teton Pass. Organized exploration of the region was halted during the American Civil War but resumed when F. V. Hayden led the well-funded Hayden Geological Survey of 1871. In 1872, Hayden oversaw explorations in Yellowstone, while a branch of his expedition known as the Snake River Division was led by James Stevenson and explored the Teton region. Along with Stevenson was photographer William Henry Jackson who took the first photographs of the Teton Range. The Hayden Geological Survey named many of the mountains and lakes in the region. The explorations by early mountain men and subsequent expeditions failed to identify any sources of economically viable mineral wealth. Nevertheless, small groups of prospectors set up claims and mining operations on several of the creeks and rivers. By 1900 all organized efforts to retrieve minerals had been abandoned.
Though the Teton Range was never permanently inhabited, pioneers began settling the Jackson Hole valley to the east of the range in 1884. These earliest homesteaders were mostly single men who endured long winters, short growing seasons and rocky soils that were hard to cultivate. The region was mostly suited to the cultivation of hay and to cattle ranching. By 1890, Jackson Hole had an estimated permanent population of 60. Menor's Ferry was built in 1892 near present-day Moose, Wyoming, to provide access for wagons to the west side of the Snake River. Ranching increased significantly from 1900 to 1920, but a series of agriculture-related economic downturns in the early 1920s left many ranchers destitute. Beginning in the 1920s, the automobile provided faster and easier access to areas of natural beauty, and old military roads into Jackson Hole over Teton and Togwotee Passes were improved to accommodate the increased vehicle traffic. In response to the increased tourism, dude ranches were established, some new and some converted from existing cattle ranches, so urbanized travelers could experience the life of a cattleman.
To the north of Jackson Hole, Yellowstone National Park had been established in 1872, and by the close of the 19th century, conservationists wanted to expand the boundaries of that park to include at least the Teton Range. By 1907, in an effort to regulate water flow for irrigation purposes, the U.S. Bureau of Reclamation had constructed a log crib dam at the Snake River outlet of Jackson Lake. This dam failed in 1910, and a new concrete Jackson Lake Dam replaced it by 1911. The dam was further enlarged in 1916, raising lake waters as part of the Minidoka Project, designed to provide irrigation for agriculture in the state of Idaho. Further dam construction plans for other lakes in the Teton Range alarmed Yellowstone National Park superintendent Horace Albright, who sought to block such efforts. Jackson Hole residents were opposed to an expansion of Yellowstone, but were more in favor of the establishment of a separate national park which would include the Teton Range and six lakes at the base of the mountains. After congressional approval, President Calvin Coolidge signed the executive order establishing Grand Teton National Park on February 26, 1929.
The valley of Jackson Hole remained primarily in private ownership when John D. Rockefeller, Jr. and his wife visited the region in the late 1920s. Horace Albright and Rockefeller discussed ways to preserve Jackson Hole from commercial exploitation, and in consequence, Rockefeller started buying Jackson Hole properties through the Snake River Land Company for the purpose of later turning them over to the National Park Service. In 1930, this plan was revealed to the residents of the region and was met with strong disapproval. Congressional efforts to prevent the expansion of Grand Teton National Park ended up putting the Snake River Land Company's holdings in limbo. By 1942, Rockefeller had become increasingly impatient that his purchased property might never be added to the park, and wrote to Secretary of the Interior Harold L. Ickes that he was considering selling the land to another party. Secretary Ickes recommended to President Franklin Roosevelt that the Antiquities Act, which permitted Presidents to set aside land for protection without the approval of Congress, be used to establish a national monument in Jackson Hole. Roosevelt created the Jackson Hole National Monument in 1943, using the land donated by the Snake River Land Company and adding additional property from Teton National Forest. The monument and park were adjacent to each other and both were administered by the National Park Service, but the monument designation brought no funding allotment and provided a lower level of resource protection than the park. Members of Congress repeatedly attempted to have the new national monument abolished.
After the end of World War II national public sentiment was in favor of adding the monument to the park, and though there was still much local opposition, the monument and park were combined in 1950. In recognition of John D. Rockefeller, Jr.'s efforts to establish and then expand Grand Teton National Park, a parcel of land between Grand Teton and Yellowstone National Parks was added to the National Park Service in 1972. This land and the road from the southern boundary of the park to West Thumb in Yellowstone National Park was named the John D. Rockefeller, Jr. Memorial Parkway. The Rockefeller family owned the JY Ranch, which bordered Grand Teton National Park to the southwest. In November 2007, the Rockefeller family transferred ownership of the ranch to the park for the establishment of the Laurance S. Rockefeller Preserve, which was dedicated on June 21, 2008.
During the last 25 years of the 19th century, the mountains of the Teton Range became a focal point for explorers wanting to claim first ascents of the peaks. However, white explorers may not have been the first to climb many of the peaks, and the first ascent of even the formidable Grand Teton itself might have been achieved long before written history documented it. Native American relics remain, including "The Enclosure", an obviously man-made structure located about below the summit of Grand Teton at a point near the Upper Saddle (). Nathaniel P. Langford and James Stevenson, both members of the Hayden Geological Survey of 1872, found The Enclosure during their early attempt to summit Grand Teton. Langford claimed that he and Stevenson climbed Grand Teton, but was vague as to whether they had made it to the summit. Their reported obstacles and sightings were never corroborated by later parties, and Langford and Stevenson likely did not get much further than The Enclosure. The first substantiated ascent of Grand Teton was made by William O. Owen, Frank Petersen, John Shive and Franklin Spencer Spalding on August 11, 1898. Owen had made two previous attempts on the peak and, after publishing several accounts of this first ascent, discredited any claim that Langford and Stevenson had ever reached beyond The Enclosure in 1872. The disagreement over which party first reached the top of Grand Teton may be the greatest controversy in the history of American mountaineering. After 1898, no other ascents of Grand Teton were recorded until 1923.
By the mid-1930s, more than a dozen climbing routes had been established on Grand Teton, including the northeast ridge in 1931 by Glenn Exum, who teamed up with fellow noted climber Paul Petzoldt to found the Exum Mountain Guides that same year. Of the other major peaks in the Teton Range, all were climbed by the late 1930s, including Mount Moran in 1922 and Mount Owen in 1930 by Fritiof Fryxell and others after numerous previous attempts had failed. Both Middle and South Teton were first climbed on the same day, August 29, 1923, by a group of climbers led by Albert R. Ellingwood. New routes on the peaks were explored as safety equipment and skills improved, and eventually climbs rated above 5.9 on the Yosemite Decimal System difficulty scale were established on Grand Teton. The classic climb following the route first pioneered by Owen, known as the Owen-Spalding route, is rated at 5.4 due to a combination of concerns beyond the gradient alone. Rock climbing and bouldering had become popular in the park by the mid-20th century. In the late 1950s, gymnast John Gill came to the park and started climbing large boulders near Jenny Lake. Gill approached climbing from a gymnastics perspective and, while in the Tetons, became the first known climber in history to use gymnastic chalk to improve handholds and to keep hands dry while climbing. During the latter decades of the 20th century, extremely difficult cliffs were explored, including some in Death Canyon, and by the mid-1990s, 800 different climbing routes had been documented for the various peaks and canyon cliffs.
Grand Teton National Park is one of the ten most visited national parks in the U.S., with an annual average of 2.75 million visitors in the period from 2007 to 2016 and 3.27 million visitors in 2016. The National Park Service is a federal agency of the United States Department of the Interior and manages both Grand Teton National Park and the John D. Rockefeller, Jr. Memorial Parkway. Grand Teton National Park has an average of 100 permanent and 180 seasonal employees. The park also manages 27 concession contracts that provide services such as lodging, restaurants, mountaineering guides, dude ranching, fishing and a boat shuttle on Jenny Lake. The National Park Service works closely with other federal agencies such as the U.S. Forest Service, the U.S. Fish and Wildlife Service, the Bureau of Reclamation, and, in consequence of Jackson Hole Airport's presence in the park, the Federal Aviation Administration. Initial construction of the airstrip north of the town of Jackson was completed in the 1930s. When Jackson Hole National Monument was designated, the airport was inside it, and after the monument and park were combined, Jackson Hole Airport became the only commercial airport within an American national park. Jackson Hole Airport has some of the strictest noise abatement regulations of any airport in the U.S., including night flight curfews and overflight restrictions, with pilots expected to approach and depart the airport along the east, south or southwest flight corridors. As of 2010, 110 privately owned inholdings, many belonging to the state of Wyoming, were located within Grand Teton National Park. Efforts to purchase or trade these inholdings for other federal lands are ongoing, and through partnerships with other entities the park hoped to raise 10 million dollars to acquire private inholdings by 2016.
In December 2016, the Antelope Flats Parcel, consisting of 640 acres owned by the State of Wyoming as part of state school trust lands, was purchased and transferred to Grand Teton National Park. The purchase price amounted to 46 million dollars: 23 million allocated from the Land and Water Conservation Fund and the remaining 23 million raised in private funds from 5,421 donors. The proceeds of the sale will benefit Wyoming public schools. Grand Teton National Park is still in negotiations with Wyoming for the purchase of the Kelly Parcel, which totals an additional 640 acres. Moulton Ranch Cabins, a one-acre inholding along historic Mormon Row, was sold to the Grand Teton National Park Foundation in 2018. In 2020, the National Park Service, in partnership with the Conservation Fund, acquired a 35-acre parcel within Grand Teton National Park, located near the Granite Canyon Entrance Station.
Grand Teton National Park is located in the northwestern region of the U.S. state of Wyoming. To the north the park is bordered by the John D. Rockefeller, Jr. Memorial Parkway, which is administered by Grand Teton National Park. The scenic highway with the same name passes from the southern boundary of Grand Teton National Park to West Thumb in Yellowstone National Park. Grand Teton National Park covers approximately , while the John D. Rockefeller, Jr. Memorial Parkway includes . Most of the Jackson Hole valley and virtually all the major mountain peaks of the Teton Range are within the park. The Jedediah Smith Wilderness of Caribou-Targhee National Forest lies along the western boundary and includes the western slopes of the Teton Range. To the northeast and east lie the Teton Wilderness and Gros Ventre Wilderness of Bridger-Teton National Forest. The National Elk Refuge is to the southeast, and migrating herds of elk winter there. Privately owned land borders the park to the south and southwest. Grand Teton National Park, along with Yellowstone National Park, surrounding National Forests and related protected areas constitute the () Greater Yellowstone Ecosystem. The Greater Yellowstone Ecosystem spans across portions of three states and is one of the largest intact mid-latitude ecosystems remaining on Earth. By road, Grand Teton National Park is from Salt Lake City, Utah and from Denver, Colorado.
The youngest mountain range in the Rocky Mountains, the Teton Range began forming between 6 and 9 million years ago. It runs roughly north to south and rises from the floor of Jackson Hole without any foothills along a by 7- to 9-mile-wide (11 to 14 km) active fault-block mountain front. The range tilts westward, rising abruptly above the Jackson Hole valley to the east but more gradually into Teton Valley to the west. A series of earthquakes along the Teton Fault slowly displaced the western side of the fault upward and the eastern side downward, at an average of of displacement every 300–400 years. Most of the displacement of the fault occurred in the last 2 million years. While the fault has experienced earthquakes of up to magnitude 7.5 since it formed, it has been relatively quiescent in historical times, with only a few earthquakes of magnitude 5.0 or greater known to have occurred since 1850.
In addition to Grand Teton, another nine peaks are over above sea level. Eight of these peaks between Avalanche and Cascade Canyons make up the often-photographed Cathedral Group. The most prominent peak north of Cascade Canyon is the monolithic Mount Moran () which rises above Jackson Lake. To the north of Mount Moran, the range eventually merges into the high altitude Yellowstone Plateau. South of the central Cathedral Group the Teton Range tapers off near Teton Pass and blends into the Snake River Range.
West-to-east-trending canyons provide easier access on foot into the heart of the range, as no vehicular roads traverse the range except at Teton Pass, south of the park. Carved by a combination of glacial activity and numerous streams, the canyons reach their lowest points along the eastern margin of the range at Jackson Hole. Flowing from higher to lower elevations, the glaciers created more than a dozen U-shaped valleys throughout the range. Cascade Canyon is sandwiched between Mount Owen and Teewinot Mountain to the south and Symmetry Spire to the north, and is situated immediately west of Jenny Lake. From north to south, Webb, Moran, Paintbrush, Cascade, Death and Granite Canyons slice through the Teton Range.
Jackson Hole is a by 6- to 13-mile-wide (10 to 21 km) graben valley with an average elevation of ; its lowest point is near the southern park boundary at . The valley sits east of the Teton Range and is vertically displaced downward , making the Teton Fault and its parallel twin on the east side of the valley normal faults, with the Jackson Hole block being the hanging wall and the Teton Mountain block being the footwall. Grand Teton National Park contains the major part of both blocks. Erosion of the range provided sediment to the valley, so the topographic relief is only . Jackson Hole is comparatively flat, with only a modest increase in altitude from south to north; however, a few isolated buttes such as Blacktail Butte and hills including Signal Mountain dot the valley floor. In addition to a few outcroppings, the Snake River has eroded terraces into Jackson Hole. Southeast of Jackson Lake, glacial depressions known as kettles are numerous. The kettles were formed when ice buried under gravel outwash from ice sheets melted as the glaciers retreated.
Most of the lakes in the park were formed by glaciers, and the largest of these lakes are located at the base of the Teton Range. In the northern section of the park lies Jackson Lake, the largest lake in the park at in length, wide and deep. Though Jackson Lake is natural, the Jackson Lake Dam was constructed at its outlet before the creation of the park, and the lake level was consequently raised almost . East of the Jackson Lake Lodge lie Emma Matilda and Two Ocean Lakes. South of Jackson Lake, Leigh, Jenny, Bradley, Taggart and Phelps Lakes rest at the outlets of the canyons which lead into the Teton Range. Within the Teton Range, small alpine lakes in cirques are common, with more than 100 scattered throughout the high country. Lake Solitude, located at an elevation of , is in a cirque at the head of the North Fork of Cascade Canyon. Other high altitude lakes can be found at over in elevation, and a few, such as Icefloe Lake, remain clogged with ice for much of the year. The park is not noted for large waterfalls; however, Hidden Falls, just west of Jenny Lake, is easy to reach after a short hike.
From its headwaters on Two Ocean Plateau in Yellowstone National Park, the Snake River flows north to south through the park, entering Jackson Lake near the boundary of Grand Teton National Park and John D. Rockefeller, Jr. Memorial Parkway. The Snake River then flows through the spillways of the Jackson Lake Dam and from there southward through Jackson Hole, exiting the park just west of the Jackson Hole Airport. The largest lakes in the park all drain either directly or by tributary streams into the Snake River. Major tributaries which flow into the Snake River include Pacific Creek and Buffalo Fork near Moran and the Gros Ventre River at the southern border of the park. Through the comparatively level Jackson Hole valley, the Snake River descends an average of , while other streams descending from the mountains to the east and west have higher gradients due to increased slope. The Snake River creates braids and channels in sections where the gradients are lower and in steeper sections, erodes and undercuts the cobblestone terraces once deposited by glaciers.
The major peaks of the Teton Range were carved into their current shapes by long vanished glaciers. Commencing 250,000–150,000 years ago, the Tetons went through several periods of glaciation with some areas of Jackson Hole covered by glaciers thick. This heavy glaciation is unrelated to the uplift of the range itself and is instead part of a period of global cooling known as the Quaternary glaciation. Beginning with the Buffalo Glaciation and followed by the Bull Lake and then the Pinedale glaciation, which ended roughly 15,000 years ago, the landscape was greatly impacted by glacial activity. During the Pinedale glaciation, the landscape visible today was created as glaciers from the Yellowstone Plateau flowed south and formed Jackson Lake, while smaller glaciers descending from the Teton Range pushed rock moraines out from the canyons and left behind lakes near the base of the mountains. The peaks themselves were carved into horns and arêtes and the canyons were transformed from water-eroded V-shapes to glacier-carved U-shaped valleys. Approximately a dozen glaciers currently exist in the park, but they are not ancient as they were all reestablished sometime between 1400 and 1850 AD during the Little Ice Age. Of these more recent glaciers, the largest is Teton Glacier, which sits below the northeast face of Grand Teton. Teton Glacier is long and wide, and nearly surrounded by the tallest summits in the range. Teton Glacier is also the best studied glacier in the range, and researchers concluded in 2005 that the glacier could disappear in 30 to 75 years. West of the Cathedral Group near Hurricane Pass, Schoolroom Glacier is tiny but has well defined terminal and lateral moraines, a small proglacial lake and other typical glacier features in close proximity to each other.
Grand Teton National Park has some of the most ancient rocks found in any American national park. The oldest rocks dated so far are 2,680 ± 12 million years old, though even older rocks are believed to exist in the park. Formed during the Archean Eon (4 to 2.5 billion years ago), these metamorphic rocks include gneiss, schist and amphibolites. Metamorphic rocks are the most common types found in the northern and southern sections of the Teton Range. 2,545 million years ago, the metamorphic rocks were intruded by igneous granitic rocks, which are now visible in the central Tetons including Grand Teton and the nearby peaks. The light colored granites of the central Teton Range contrast with the darker metamorphic gneiss found on the flanks of Mount Moran to the north. Magma intrusions of diabase rocks 765 million years ago left dikes that can be seen on the east face of Mount Moran and Middle Teton. Granite and pegmatite intrusions also worked their way into fissures in the older gneiss. Precambrian rocks in Jackson Hole are buried deep under comparatively recent Tertiary volcanic and sedimentary deposits, as well as Pleistocene glacial deposits.
By the close of the Precambrian, the region was intermittently submerged under shallow seas, and for 500 million years various types of sedimentary rocks were formed. During the Paleozoic (542 to 251 million years ago) sandstone, shale, limestone and dolomite were deposited. Though most of these sedimentary rocks have since eroded away from the central Teton Range, they are still evident on the northern, southern and western flanks of the range. One notable exception is the sandstone Flathead Formation which continues to cap Mount Moran. Sedimentary layering of rocks in Alaska Basin, which is on the western border of Grand Teton National Park, chronicles a 120-million-year period of sedimentary deposition. Fossils found in the sedimentary rocks in the park include algae, brachiopods and trilobites. Sedimentary deposition continued during the Mesozoic (250–66 million years ago) and the coal seams found in the sedimentary rock strata indicate the region was densely forested during that era. Numerous coal seams of in thickness are interspersed with siltstone, claystone and other sedimentary rocks. During the late Cretaceous, a volcanic arc west of the region deposited fine grained ash that later formed into bentonite, an important mineral resource.
From the end of the Mesozoic to present, the region went through a series of uplifts and erosional sequences. Commencing 66 million years ago the Laramide orogeny was a period of mountain-building and erosion in western North America that created the ancestral Rocky Mountains. This cycle of uplift and erosion left behind one of the most complete non-marine Cenozoic rock sequences found in North America. Conglomerate rocks composed of quartzite and interspersed with mudstone and sandstones were deposited during erosion from a now vanished mountain range that existed to the northwest of the current Teton Range. These deposits also have trace quantities of gold and mercury. During the Eocene and Oligocene, volcanic eruptions from the ancestral Absaroka Range buried the region under various volcanic deposits. Sedimentary basins developed in the region due to drop faulting, creating an ancestral Jackson Hole and by the Pliocene (10 million years ago), an ancestral Jackson Lake known as Teewinot Lake. During the Quaternary, landslides, erosion and glacial activity deposited soils and rock debris throughout the Snake River valley of Jackson Hole and left behind terminal moraines which impound the current lakes. The most recent example of rapid alteration to the landscape occurred in 1925 just east of the park, when the Gros Ventre landslide was triggered by spring melt from a heavy snowpack as well as heavy rain.
Grand Teton National Park and the surrounding region host over 1,000 species of vascular plants. With an altitude variance of over , the park has a number of different ecological zones, including alpine tundra; the Rocky Mountains subalpine zone, where spruce-fir forests are dominant; and the valley floor, where a mixed conifer and deciduous forest zone occupies regions with better soils, intermixed with sagebrush plains atop alluvial deposits. Additionally, wetlands near some lakes and in the valley floor adjacent to rivers and streams cover large expanses, especially along the Snake River near Oxbow Bend near Moran and at Willow Flats near the Jackson Lake Lodge. Altitude, available soils, wildfire incidence, avalanches and human activities have a direct impact on the types of plant species in an immediate area. An area where these various niches overlap is known as an ecotone.
The range of altitude in Grand Teton National Park affects the types of plant species found at various elevations. In the alpine zone above the tree line, which in Grand Teton National Park is at approximately , tundra conditions prevail. In this treeless region, hundreds of species of grass, wildflower, moss and lichen are found. In the subalpine region from the tree line to the base of the mountains, whitebark pine, limber pine, subalpine fir and Engelmann spruce are dominant. On the valley floor, lodgepole pine is most common, but Rocky Mountain Douglas-fir and blue spruce inhabit drier areas, while aspen, cottonwood, alder and willow are more commonly found around lakes, streams and wetlands. However, the tablelands above the Snake River channel are mostly sagebrush plains, which in terms of acreage are the most widespread habitat in the park. The sagebrush plains, or flats, host 100 species of grasses and wildflowers. Slightly more elevated sections of the plains in the northern part of Jackson Hole form forest islands, one obvious example being Timbered Island. In this ecotone, forested islands surrounded by sagebrush expanses provide shelter for various animal species during the day and nearby grasses for nighttime foraging.
While the flora of Grand Teton National Park is generally healthy, the whitebark pine, and to a lesser degree the lodgepole pine, are considered at risk. In the case of the whitebark pine, an invasive species of fungus known as white pine blister rust weakens the tree, making it more susceptible to destruction from endemic mountain pine beetles. Whitebark pines generally thrive at elevations above and produce large seeds that are high in fat content and an important food source for various species such as the grizzly bear, red squirrel and Clark's nutcracker. The species is considered to be a keystone and a foundation species; keystone in that its "ecological role (is) disproportionately large relative to its abundance" and foundation in that it has a paramount role that "defines ecosystem structure, function, and process". Whitebark pine has generally had a lower incidence of blister rust infection throughout the Greater Yellowstone Ecosystem than in other regions such as Glacier National Park and the Cascade Range. The incidence of blister rust on whitebark pines in Yellowstone National Park is slightly lower than in Grand Teton. Though blister rust is not in itself the cause of increased mortality, its weakening effect on trees allows native pine beetles to more easily infest the trees, increasing mortality. While general practice in national parks is to allow nature to take its course, the alarming trend of increased disease and mortality of the vital whitebark pine trees has sparked a collaborative effort amongst various government entities to intervene to protect the species.
Sixty-one species of mammals have been recorded in Grand Teton National Park. This includes the gray wolf, which had been extirpated from the region by the early 1900s but migrated into Grand Teton National Park from adjacent Yellowstone National Park after the species had been reintroduced there. The re-establishment of the wolves has ensured that every indigenous mammal species now exists in the park. In addition to gray wolves, another 17 species of carnivores reside within Grand Teton National Park including grizzlies and the more commonly seen American black bear. Relatively common sightings of coyote, river otter, marten and badger and occasional sightings of cougar, lynx and wolverine are reported annually. A number of rodent species exist including yellow-bellied marmot, least chipmunk, muskrat, beaver, Uinta ground squirrel and porcupine; other small mammals include the pika, the snowshoe hare and six species of bats.
Of the larger mammals the most common are elk, which exist in the thousands. Their migration route between the National Elk Refuge and Yellowstone National Park runs through Grand Teton National Park, so while easily seen at any time of year, they are most numerous in the spring and fall. Other ungulates in the park include moose, bison, and pronghorn, the fastest land mammal in the western hemisphere. The park's moose tend to stay near waterways and wetlands. Between 100 and 125 bighorn sheep dwell in the alpine and rocky zones of the peaks.
Over 300 species of birds have been sighted in the park including the calliope hummingbird, the smallest bird species in North America, as well as the trumpeter swan, North America's largest waterfowl. In addition to trumpeter swans, another 30 species of waterfowl have been recorded including blue-winged teal, common merganser, American wigeon and the colorful but reclusive harlequin duck, which is occasionally spotted in Cascade Canyon. Both bald and golden eagles have been reported, along with other birds of prey such as the osprey, red-tailed hawk and American kestrel, and occasional sightings of the peregrine falcon. Of the 14 species of owls reported, the most common is the great horned owl, though the boreal owl and great grey owl are also seen occasionally. A dozen species of woodpeckers have been reported, as have a similar number of species of warblers, plovers and gulls. The vocal and gregarious black-billed magpie frequents campgrounds while Steller's jay and Clark's nutcracker are found in the backcountry. The sage-covered plains of Jackson Hole are favored areas for sage grouse, Brewer's sparrow and sage thrashers, while the wetlands are frequented by great blue heron, American white pelican, sandhill crane and, on rare occasions, its endangered relative, the whooping crane.
The Snake River fine-spotted cutthroat trout (or "Snake River cutthroat trout") is the only native trout species in Grand Teton National Park. It is also the only subspecies of cutthroat trout that is exclusively native to large streams and rivers. Researchers have not been able to identify any genetic differences between the Snake River fine-spotted cutthroat trout and the Yellowstone cutthroat trout, though in appearance the Snake River subspecies has much smaller spots which cover a greater portion of the body, and the two subspecies inhabit different ecological niches. The Snake River fine-spotted cutthroat trout was identified by some researchers as a separate subspecies by the mid-1990s, and is managed as a distinct subspecies by the state of Wyoming, but is not yet recognized as such by the neighboring states of Idaho and Montana. The Snake River fine-spotted cutthroat trout is found only in the Snake River and its tributaries below the Jackson Lake dam to the Palisades Reservoir in Idaho. Non-native species of trout such as the rainbow trout and lake trout were introduced by the Wyoming Game and Fish Department or migrated out of Yellowstone. Today five trout species inhabit park waters. Native fish species include the mountain whitefish, longnose dace and mountain sucker, while non-native species include the Utah chub and Arctic grayling.
Only four species of reptiles are documented in the park: three species of snakes, which are the wandering garter snake, the less commonly seen valley garter snake and the rubber boa, as well as one lizard species, the northern sagebrush lizard, first reported in 1992. None of the species are venomous. Six amphibian species have been documented including the Columbia spotted frog, boreal chorus frog, tiger salamander and the increasingly rare boreal toad and northern leopard frog. The sixth amphibian species, the bullfrog, was introduced. An estimated 10,000 insect species frequent the park; they pollinate plants, provide a food source for birds, fish, mammals and other animals, and help in the decomposition of wood. In one example of the importance of insects to the ecosystem, army cutworm moths die in huge numbers after mating and provide a high-fat, high-protein diet for bears and other predators. One study concluded that when this moth species is most available, bears consume 40,000 moths per day, roughly 20,000 kcal/day.
Grand Teton National Park permits hunting of elk in an effort to keep the population of that species regulated. This provision was included in the legislation that combined Jackson Hole National Monument and Grand Teton National Park in 1950. While some national parks in Alaska permit subsistence hunting by indigenous natives and a few other National Park Service managed areas allow hunting under highly regulated circumstances, hunting in American national parks is not generally allowed. In Grand Teton National Park, hunters are required to obtain Wyoming hunting licenses and be deputized as park rangers. Hunting is restricted to areas east of the Snake River; north of Moran, Wyoming, it is permitted only east of U.S. Route 89. Proponents of continuing the elk hunt, which occurs in the fall, argue that without it the elk herd would become overpopulated, leading to vegetation degradation from overgrazing. Opponents point to the increase of predators such as the wolf and grizzly bear in Grand Teton National Park, which they argue renders the annual hunt unnecessary, and note that the hunt exposes hunters to attacks by grizzly bears as the bears become accustomed to feeding on remains left behind from the hunt.
The role of wildfire is an important one for plant and animal species diversity. Many tree species have evolved to germinate mainly after a wildfire. Regions of the park that have experienced wildfire in historical times have greater species diversity after reestablishment than regions that have not been influenced by fire. Though the Yellowstone fires of 1988 had minimal impact on Grand Teton National Park, studies conducted before and reaffirmed after that event concluded that the suppression of natural wildfires during the middle part of the 20th century decreased plant species diversity and natural regeneration of plant communities. One study conducted 15 years before the 1988 Yellowstone National Park fires concluded that human suppression of wildfire had adversely impacted aspen groves and other forest types. The majority of conifer species in Grand Teton National Park are heavily dependent on wildfire, and this is particularly true of the lodgepole pine. Though extremely hot canopy or crown fires tend to kill lodgepole pine seeds, lower severity surface fires usually result in higher post-wildfire regeneration of this species. Reflecting a better understanding of the role wildfire plays in the environment, the National Park Service and other land management agencies have developed Fire Management Plans that provide a strategy for wildfire management intended to enhance the natural ecosystem.
Grand Teton National Park is more than air distance from any major urban or industrial area, and localized human activities have generally had a very low environmental impact on the surrounding region. However, levels of ammonium and nitrogen have been trending slightly upwards due to deposition from rain and snow that is believed to originate from regional agricultural activities. Additionally, slight increases in mercury and pesticides have been detected in snow and some alpine lakes. Ozone and haze may be impacting overall visibility levels. Grand Teton National Park, in partnership with other agencies, erected the first air quality monitoring station in the park in 2011. The station is designed to check for various pollutants as well as ozone levels and weather.
A 2005 study of the water of Jackson, Jenny and Taggart Lakes indicated that all three of these lakes had virtually pristine water quality. Of the three lakes, only on Taggart Lake are motorized boats prohibited, yet little difference in water quality was detected in the three lakes. In a study published in 2002, the Snake River was found to have better overall water quality than other river systems in Wyoming, and low levels of pollution from anthropogenic sources.
According to the Köppen climate classification system, Grand Teton National Park has a subarctic climate with cool summers and year-round precipitation ("Dfc"). The plant hardiness zone at Jenny Lake Visitor Center is 4a, with an average annual extreme minimum temperature of -28.3 °F (-33.5 °C).
Grand Teton National Park is a popular destination for mountain and rock climbers partly because the mountains are easily accessible by road. Trails are well marked and routes to the summits of most peaks are long established, and for the experienced and fit, most peaks can be climbed in one day. The highest maintained trails climb from the floor of Jackson Hole over to mountain passes that are sometimes called saddles or divides. From these passes, the climbs follow routes that require varying skill levels. Climbers do not need a permit but are encouraged to voluntarily register their climbing plans with the National Park Service and inform associates of their itinerary. Any climb requiring an overnight stay in the backcountry does require a permit. Climbers are essentially on their own to determine their own skill levels and are encouraged to not take unnecessary risks. The Exum Mountain Guides, which is considered one of the finest mountaineering guide services in the U.S., as well as the Jackson Hole Mountain Guides, offer instruction and climbing escorts for those who are less experienced or unfamiliar with various routes.
An average of 4,000 climbers per year attempt to summit Grand Teton, and most ascend Garnet Canyon to a mountain pass called the Lower Saddle, which is between Grand Teton and Middle Teton. From the Lower Saddle, climbers often follow the Owen-Spalding or Exum Ridge routes to the top of Grand Teton, though there are 38 distinct routes to the summit. The north face route to the summit of Grand Teton is a world-renowned climb involving a dozen distinct pitches and is rated grade 5.8 in difficulty for the vertical ascent. On a connecting ridge just north of Grand Teton lies Mount Owen, and though lower in altitude, this peak is considered more difficult to ascend. Middle Teton is another popular climb that is most easily summited from a saddle between it and South Teton. Well north of Grand Teton lies Mount Moran, which is farther from trailheads and more difficult to access and ascend. The Direct South Buttress of Mount Moran provides a vertical mile of climbing that was considered the most difficult climb in the U.S. when first accomplished in 1953. Other popular climbing destinations include Buck Mountain, Symmetry Spire, Mount Saint John, Mount Wister, Teewinot Mountain and Nez Perce Peak, and each mountain has at least six established routes to its summit.
Grand Teton National Park has five front-country vehicular access campgrounds. The largest are the Colter Bay and Gros Ventre campgrounds, each with 350 campsites that can accommodate large recreational vehicles. Lizard Creek and Signal Mountain campgrounds have 60 and 86 campsites respectively, while the smaller Jenny Lake campground has only 49 tent-only sites. Additionally, full hookups for recreational vehicles are available at the concessionaire-managed 112 campsites at Colter Bay Village and another 100 at Flagg Ranch in the John D. Rockefeller Memorial Parkway. Though all front-country campgrounds are open only from late spring to late fall, primitive winter camping is permitted at Colter Bay near the visitor center.
All campsites accessible only on foot or by horseback are considered backcountry campsites and they are available by permit only, but camping is allowed in most of these backcountry zones year-round. The National Park Service has a combination of specific sites and zones for backcountry camping with a set carrying capacity of overnight stays per zone to protect the resources from overcrowding. Open fires are not permitted in the backcountry and all food must be stored in an Interagency Grizzly Bear Committee approved bear-resistant container. As of 2012, only four brands of bear-resistant containers had been approved for use in the Grand Teton National Park backcountry. Additionally, hikers may use an approved bear spray to deter aggressive bears.
The park has of hiking trails, ranging in difficulty from easy to strenuous. The easiest hiking trails are located in the valley, where the altitude changes are generally minimal. In the vicinity of Colter Bay Village, the Hermitage Point Trail is long and considered easy. Several other trails link Hermitage Point with Emma Matilda Lake and Two Ocean Lake Trails, also considered to be relatively easy hikes in the Jackson Lake Lodge area. Other easy hikes include the Valley Trail which runs from Trapper Lake in the north to the south park boundary near Teton Village and the Jenny Lake Trail which circles the lake. Ranging from moderate to strenuous in difficulty, trails leading into the canyons are rated based on distance and more importantly on the amount of elevation change. The greatest elevation change is found on the Paintbrush Canyon, Alaska Basin and Garnet Canyon Trails, where elevation increases of over are typical. Horses and pack animals are permitted on almost all trails in the park; however, there are only five designated backcountry camping locations for pack animals and these campsites are far from the high mountain passes. Bicycles are limited to vehicle roadways only and the park has widened some roads to provide a safer biking experience. A paved multi-use pathway opened in 2009 and provides non-motorized biking access from the town of Jackson to South Jenny Lake.
Grand Teton National Park allows boating on all the lakes in Jackson Hole, but motorized boats can only be used on Jackson and Jenny Lakes. While there is no maximum horsepower limit on Jackson Lake (though there is a noise restriction), Jenny Lake is restricted to 10 horsepower. Only non-motorized boats are permitted on Bearpaw, Bradley, Emma Matilda, Leigh, Phelps, String, Taggart and Two Ocean Lakes. There are four designated boat launches on Jackson Lake and one on Jenny Lake. Additionally, sailing, windsurfing and water skiing are allowed only on Jackson Lake, and no jet skis are permitted on any of the park waterways. All boats are required to comply with various safety regulations including personal flotation devices for each passenger. Only non-motorized watercraft are permitted on the Snake River. All other waterways in the park are off limits to boating, including all alpine lakes and tributary streams of the Snake River.
In 2010, Grand Teton National Park started requiring all boats to display an Aquatic Invasive Species decal issued by the Wyoming Game and Fish Department or a Yellowstone National Park boat permit. In an effort to keep the park waterways free of various invasive species such as the Zebra mussel and whirling disease, boaters are expected to abide by certain regulations including displaying a self-certification of compliance on the dashboard of any vehicle attached to an empty boat trailer.
Grand Teton National Park fisheries are managed by the Wyoming Game and Fish Department, and a Wyoming state fishing license is required to fish all waterways in Grand Teton National Park. The creel limit for trout is restricted to six per day, including no more than three cutthroat trout with none longer than , while the maximum length of other trout species may not exceed , except those taken from Jackson Lake, where the maximum allowable length is . There are also restrictions on seasonal access to certain areas as well as the types of bait and fishing tackle permitted.
Visitors may snowshoe and cross-country ski and are not restricted to trails. The Teton Park Road between the Taggart Lake trailhead and Signal Mountain Campground is closed to vehicular traffic during the winter, and this section of the road is groomed for skiing and snowshoeing traffic. The park service offers guided snowshoe tours daily from the main headquarters located in Moose, Wyoming. Overnight camping is allowed in the winter backcountry with a permit, and visitors should inquire about avalanche dangers.
The only location in Grand Teton National Park where snowmobiles are permitted is on Jackson Lake. The National Park Service requires that all snowmobiles use "Best Available Technology" (BAT) and lists various models of snowmobiles that are permitted, all of which are deemed to provide the least amount of air pollution and maximize noise abatement. All snowmobiles must be less than 10 years old and have odometer readings of less than . Additionally, snowmobiles may be used only to access ice fishing locations. Snowmobile access was formerly permitted between Moran Junction and Flagg Ranch adjacent to the John D. Rockefeller, Jr. Memorial Parkway so that travelers using the Continental Divide Snowmobile Trail could traverse between Bridger-Teton National Forest and Yellowstone National Park. However, in 2009, winter use planners closed this route since unguided snowmobile access into Yellowstone National Park was also discontinued.
The Craig Thomas Discovery and Visitor Center adjacent to the park headquarters at Moose, Wyoming, is open year-round. Opened in 2007 to replace an old, inadequate visitor center, the facility is named for the late U.S. Senator Craig Thomas and was designed by the architectural firm Bohlin Cywinski Jackson. It was financed with a combination of federal grants and private donations. An adjoining 154-seat auditorium was opened to the public in April 2011. To the north at Colter Bay Village on Jackson Lake, the Colter Bay Visitor Center & Indian Arts Museum is open from the beginning of May to early October. The museum has housed the David T. Vernon Indian Arts Exhibit since 1972. The Colter Bay Visitor Center was built in 1956 and was determined in 2005 to be substandard for the proper care and display of the Indian arts collection. During the winter of 2011–2012, a $150,000 renovation project was completed at the center, and a portion of the arts collection was made available for viewing when the center opened for the season in May 2012.
South of Moose on the Moose–Wilson Road, the Laurance S. Rockefeller Preserve Center is located on land that was privately owned by Laurance S. Rockefeller and is situated on Phelps Lake. Donated to Grand Teton National Park and opened to the public in 2008, the property was once part of the JY Ranch, the first dude ranch in Jackson Hole. At Jenny Lake, the Jenny Lake Visitor Center is open from mid-May to mid-September. This visitor center is within the Jenny Lake Ranger Station Historic District and is the same structure photographer Harrison Crandall had constructed as an art studio in the 1920s.
Contracted through the National Park Service, various concessionaire entities manage lodging facilities inside the park. The largest such facility is the Jackson Lake Lodge, which is managed by the Grand Teton Lodge Company. Located near Jackson Lake Dam, the Jackson Lake Lodge has a total of 385 rooms, meeting facilities, a retail shop and a restaurant. The Grand Teton Lodge Company also manages the Jenny Lake Lodge, which consists of cabins and a restaurant, and Colter Bay Village, which has cabins, a restaurant, a grocery store, a laundry and a marina. South of Jackson Lake Dam, the Signal Mountain Lodge is managed by Forever Resorts and provides cabins, a marina, a gas station and a restaurant. The American Alpine Club has hostel dormitory-style accommodations primarily reserved for mountain climbers at the Grand Teton Climber's Ranch. Adjacent to the Snake River in Moose, Wyoming, Dornan's is an inholding on private land which has year-round cabin accommodations and related facilities. Lodging is also available at the Triangle X Ranch, another private inholding in the park and the last remaining dude ranch within park boundaries.
Marfan syndrome
Marfan syndrome (MFS) is a genetic disorder that affects the connective tissue. Those with the condition tend to be tall and thin, with long arms, legs, fingers and toes. They also typically have flexible joints and scoliosis. The most serious complications involve the heart and aorta, with an increased risk of mitral valve prolapse and aortic aneurysm. The lungs, eyes, bones, and the covering of the spinal cord are also commonly affected. The severity of the symptoms of MFS is variable.
MFS is caused by a mutation in "FBN1", one of the genes that make fibrillin, which results in abnormal connective tissue. It is an autosomal dominant disorder. About 75% of the time, the condition is inherited from a parent with the condition, while 25% of the time it is a new mutation. Diagnosis is often based on the Ghent criteria.
There is no known cure for MFS. Many of those with the disorder have a normal life expectancy with proper treatment. Management often includes the use of beta blockers such as propranolol or atenolol or, if they are not tolerated, calcium channel blockers or ACE inhibitors. Surgery may be required to repair the aorta or replace a heart valve. Avoiding strenuous exercise is recommended for those with the condition.
About 1 in 5,000 to 1 in 10,000 people have MFS. Rates of the condition are similar between races and in different regions of the world. It is named after French pediatrician Antoine Marfan, who first described it in 1896.
More than 30 different signs and symptoms are variably associated with Marfan syndrome. The most prominent of these affect the skeletal, cardiovascular, and ocular systems, but all fibrous connective tissue throughout the body can be affected.
Most of the readily visible signs are associated with the skeletal system. Many individuals with Marfan syndrome grow to above-average height, and some have disproportionately long, slender limbs with thin, weak wrists and long fingers and toes. Besides affecting height and limb proportions, people with Marfan syndrome may have abnormal lateral curvature of the spine (scoliosis), thoracic lordosis, abnormal indentation (pectus excavatum) or protrusion (pectus carinatum) of the sternum, abnormal joint flexibility, a high-arched palate with crowded teeth and an overbite, flat feet, hammer toes, stooped shoulders, and unexplained stretch marks on the skin. It can also cause pain in the joints, bones, and muscles. Some people with Marfan have speech disorders resulting from symptomatic high palates and small jaws. Early osteoarthritis may occur. Other signs include limited range of motion in the hips due to the femoral head protruding into abnormally deep hip sockets.
In Marfan syndrome, the health of the eye can be affected in many ways, but the principal change is partial lens dislocation, where the lens is shifted out of its normal position. This occurs because of weakness in the ciliary zonules, the connective tissue strands which suspend the lens within the eye. The mutations responsible for Marfan syndrome weaken the zonules and cause them to stretch. The inferior zonules are most frequently stretched, resulting in the lens shifting upwards and outwards, but it can shift in other directions as well. Nearsightedness (myopia) and blurred vision are common due to connective tissue defects in the eye. Farsightedness can also result, particularly if the lens is highly subluxated. Subluxation (partial dislocation) of the lens can be detected clinically in about 60% of people with Marfan syndrome by the use of a slit-lamp biomicroscope. If the lens subluxation is subtle, then imaging with high-resolution ultrasound biomicroscopy might be used.
Other signs and symptoms affecting the eye include increased length along an axis of the globe, myopia, corneal flatness, strabismus, exotropia, and esotropia. Those with MFS are also at a high risk for early glaucoma and early cataracts.
The most serious signs and symptoms associated with Marfan syndrome involve the cardiovascular system: undue fatigue, shortness of breath, heart palpitations, racing heartbeats, or chest pain radiating to the back, shoulder, or arm. Cold arms, hands, and feet can also be linked to MFS because of inadequate circulation. A heart murmur, an abnormal reading on an ECG, or symptoms of angina can warrant further investigation. The signs of regurgitation from prolapse of the mitral or aortic valves (which control the flow of blood through the heart) result from cystic medial degeneration of the valves, which is commonly associated with MFS (see mitral valve prolapse, aortic regurgitation). However, the major sign that would lead a doctor to consider an underlying condition is a dilated aorta or an aortic aneurysm. Sometimes, no heart problems are apparent until the weakening of the connective tissue (cystic medial degeneration) in the ascending aorta causes an aortic aneurysm or aortic dissection, a surgical emergency. An aortic dissection is most often fatal and presents with pain radiating down the back, giving a tearing sensation.
Because underlying connective tissue abnormalities cause MFS, the incidence of dehiscence of a prosthetic mitral valve is increased. Care should be taken to attempt repair of damaged heart valves rather than replacement.
Pulmonary symptoms are not a major feature of MFS, but spontaneous pneumothorax is common. In spontaneous unilateral pneumothorax, air escapes from a lung and occupies the pleural space between the chest wall and a lung. The lung becomes partially compressed or collapsed. This can cause pain, shortness of breath, cyanosis, and, if not treated, death. Other possible pulmonary manifestations of MFS include sleep apnea and idiopathic obstructive lung disease. Pathologic changes in the lungs have been described such as cystic changes, emphysema, pneumonia, bronchiectasis, bullae, apical fibrosis and congenital malformations such as middle lobe hypoplasia.
Dural ectasia, the weakening of the connective tissue of the dural sac encasing the spinal cord, can result in a loss of quality of life. It can be present for a long time without producing any noticeable symptoms. Symptoms that can occur are lower back pain, leg pain, abdominal pain, other neurological symptoms in the lower extremities, or headaches, symptoms which usually diminish when lying flat. On X-ray, however, dural ectasia is not often visible in the early stages. A worsening of symptoms might warrant an MRI of the lower spine. Dural ectasia that has progressed to this stage would appear in an MRI as a dilated pouch wearing away at the lumbar vertebrae. Other spinal issues associated with MFS include degenerative disc disease, spinal cysts, and dysfunction of the autonomic nervous system.
Each parent with the condition has a 50% risk of passing the genetic defect on to any child due to its autosomal dominant nature. Most individuals with MFS have another affected family member. In fact, about 75% of cases are inherited. On the other hand, about 15–30% of all cases are due to "de novo" genetic mutations; such spontaneous mutations occur in about one in 20,000 births. Marfan syndrome is also an example of dominant negative mutation and haploinsufficiency. It is associated with variable expressivity; incomplete penetrance has not been definitively documented.
Marfan syndrome is caused by mutations in the "FBN1" gene on chromosome 15, which encodes fibrillin 1, a glycoprotein component of the extracellular matrix. Fibrillin-1 is essential for the proper formation of the extracellular matrix, including the biogenesis and maintenance of elastic fibers. The extracellular matrix is critical not only for the structural integrity of connective tissue but also serves as a reservoir for growth factors. Elastic fibers are found throughout the body, but are particularly abundant in the aorta, ligaments and the ciliary zonules of the eye; consequently, these areas are among the worst affected.
A transgenic mouse has been created carrying a single copy of a mutant fibrillin-1, a mutation similar to that found in the human gene known to cause MFS. This mouse strain recapitulates many of the features of the human disease and promises to provide insights into the pathogenesis of the disease. Reducing the level of normal fibrillin 1 causes a Marfan-related disease in mice.
Transforming growth factor beta (TGF-β) plays an important role in MFS. Fibrillin-1 directly binds a latent form of TGF-β, keeping it sequestered and unable to exert its biological activity. The simplest model suggests reduced levels of fibrillin-1 allow TGF-β levels to rise due to inadequate sequestration. Although how elevated TGF-β levels are responsible for the specific pathology seen with the disease is not proven, an inflammatory reaction releasing proteases that slowly degrade the elastic fibers and other components of the extracellular matrix is known to occur. The importance of the TGF-β pathway was confirmed with the discovery of the similar Loeys–Dietz syndrome involving the "TGFβR2" gene on chromosome 3, a receptor protein of TGF-β. Marfan syndrome has often been confused with Loeys–Dietz syndrome, because of the considerable clinical overlap between the two pathologies.
Marfanoid–progeroid–lipodystrophy syndrome (MPL), also referred to as Marfan lipodystrophy syndrome (MFLS), is a variant of MFS in which Marfan symptoms are accompanied by features usually associated with neonatal progeroid syndrome (also referred to as Wiedemann–Rautenstrauch syndrome) in which the levels of white adipose tissue are reduced. Since 2010, evidence has been accumulating that MPL is caused by mutations near the 3'-terminus of the "FBN1" gene. It has been shown that these people are also deficient in asprosin, a gluco-regulatory protein hormone which is the C-terminal cleavage product of profibrillin. The levels of asprosin seen in these people were lower than expected for a heterozygous genotype, consistent with a dominant negative effect.
Diagnostic criteria for MFS were agreed upon internationally in 1996. However, Marfan syndrome is often difficult to diagnose in children, as they typically do not show symptoms until reaching pubescence. A diagnosis is based on family history and a combination of major and minor indicators of the disorder, rare in the general population, that occur in one individual: for example, four skeletal signs with one or more signs in another body system, such as ocular and cardiovascular, in the same individual. The following conditions may result from MFS, but may also occur in people without any known underlying disorder.
In 2010, the Ghent nosology was revised, and new diagnostic criteria superseded the previous agreement made in 1996. The seven new criteria can lead to a diagnosis:
In the absence of a family history of MFS:
In the presence of a family history of MFS (as defined above):
The thumb sign (Steinberg's sign) is elicited by asking the person to flex the thumb as far as possible and then close the fingers over it. A positive thumb sign is where the entire distal phalanx is visible beyond the ulnar border of the hand, caused by a combination of hypermobility of the thumb as well as a thumb which is longer than usual.
The wrist sign (Walker-Murdoch sign) is elicited by asking the person to curl the thumb and fingers of one hand around the other wrist. A positive wrist sign is where the little finger and the thumb overlap, caused by a combination of thin wrists and long fingers.
Many other disorders can produce the same type of body characteristics as Marfan syndrome. Genetic testing and evaluating other signs and symptoms can help to differentiate these. The following are some of the disorders that can manifest as "marfanoid":
There is no cure for Marfan syndrome, but life expectancy has increased significantly over the last few decades and is now similar to that of the average person.
Regular checkups are recommended to monitor the health of the heart valves and the aorta. Marfan syndrome is treated by addressing each issue as it arises and, in particular, by preventive medication, even for young children, to slow progression of aortic dilation. The goal of this treatment strategy is to slow the progression of aortic dilation and prevent damage to the heart valves by eliminating heart arrhythmias, minimizing the heart rate, and lowering the person's blood pressure.
The American Heart Association made the following recommendations for people with Marfan syndrome with no or mild aortic dilation:
Management often includes the use of beta blockers such as propranolol or, if they are not tolerated, calcium channel blockers or ACE inhibitors. Beta blockers are used to reduce the stress exerted on the aorta and to decrease aortic dilation.
If the dilation of the aorta progresses to a significant-diameter aneurysm, causes a dissection or a rupture, or leads to failure of the aortic or another valve, then surgery (possibly a composite aortic valve graft or valve-sparing aortic root replacement) becomes necessary. Although aortic graft surgery (or any vascular surgery) is a serious undertaking, it is generally successful if undertaken on an elective basis.
Utility
Within economics, the concept of utility is used to model worth or value. Its usage has evolved significantly over time. The term was introduced initially as a measure of pleasure or satisfaction within the theory of utilitarianism by moral philosophers such as Jeremy Bentham and John Stuart Mill. The term has been adapted and reapplied within neoclassical economics, which dominates modern economic theory, as a utility function that represents a consumer's preference ordering over a choice set. Utility has thus become a more abstract concept that is not necessarily solely based on the satisfaction or pleasure received.
Consider a set of alternatives facing an individual, and over which the individual has a preference ordering. A utility function is able to represent those preferences if it is possible to assign a real number to each alternative in such a way that "alternative a" is assigned a number greater than "alternative b" if, and only if, the individual prefers "alternative a" to "alternative b". In this situation, an individual who selects the most preferred alternative available is necessarily also selecting the alternative that maximises the associated utility function. In general economic terms, a utility function measures preferences concerning a set of goods and services. Utility is often correlated with words such as happiness, satisfaction, and welfare, which are hard to measure mathematically; economists therefore use consumption baskets of preferences in order to measure these abstract, non-quantifiable ideas.
Gérard Debreu precisely defined the conditions required for a preference ordering to be representable by a utility function. For a finite set of alternatives these require only that the preference ordering is complete (so the individual is able to determine which of any two alternatives is preferred, or that they are equally preferred), and that the preference order is transitive.
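Debreu's finite-set conditions can be checked mechanically on a small example. In this sketch the alternatives and the underlying tastes are invented, and weak preference is encoded via a rank table; counting how many alternatives each option weakly beats then yields a representing utility function:

```python
from itertools import combinations

# A preference ordering on a finite set, encoded as weak preference:
# prefers(a, b) is True iff a is at least as good as b. The tastes
# below are invented for illustration.
alternatives = ["apple", "banana", "cherry"]
rank = {"apple": 2, "banana": 1, "cherry": 3}

def prefers(a, b):
    return rank[a] >= rank[b]

# Completeness: for every pair, at least one direction holds.
complete = all(prefers(a, b) or prefers(b, a)
               for a, b in combinations(alternatives, 2))

# Transitivity: a >= b and b >= c together imply a >= c.
transitive = all(prefers(a, c)
                 for a in alternatives for b in alternatives for c in alternatives
                 if prefers(a, b) and prefers(b, c))

# With both properties, "how many alternatives does a weakly beat?"
# defines a utility function that represents the ordering.
utility = {a: sum(prefers(a, b) for b in alternatives) for a in alternatives}
assert complete and transitive
assert utility["cherry"] > utility["apple"] > utility["banana"]
```

Any other assignment of numbers preserving the same ranking would represent the ordering equally well, which is exactly the ordinal point made below.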
Utility is usually applied by economists in such constructs as the indifference curve, which plots the combinations of commodities that an individual or a society would accept to maintain a given level of satisfaction. Utility and indifference curves are used by economists to understand the underpinnings of demand curves, which are half of the supply and demand analysis that is used to analyze the workings of goods markets.
Individual utility and social utility can be construed as the value of a utility function and a social welfare function respectively. When coupled with production or commodity constraints, under some assumptions these functions can be used to analyze Pareto efficiency, such as illustrated by Edgeworth boxes in contract curves. Such efficiency is a central concept in welfare economics.
In finance, utility is applied to generate an individual's price for an asset called the indifference price. Utility functions are also related to risk measures, with the most common example being the entropic risk measure.
In the field of artificial intelligence, utility functions are used to convey the value of various outcomes to intelligent agents. This allows the agents to plan actions with the goal of maximizing the utility (or "value") of available choices.
It was recognized that utility could not be measured or observed directly, so instead economists devised a way to infer underlying relative utilities from observed choice. These 'revealed preferences', as they were named by Paul Samuelson, were revealed e.g. in people's willingness to pay: "Utility is taken to be correlative to Desire or Want. It has been already argued that desires cannot be measured directly, but only indirectly, by the outward phenomena to which they give rise: and that in those cases with which economics is chiefly concerned the measure is found in the price which a person is willing to pay for the fulfillment or satisfaction of his desire."
There has been some controversy over the question whether the utility of a commodity can be measured or not. At one time, it was assumed that the consumer was able to say exactly how much utility he got from the commodity. The economists who made this assumption belonged to the 'cardinalist school' of economics. Today utility functions, expressing utility as a function of the amounts of the various goods consumed, are treated as either "cardinal" or "ordinal", depending on whether they are or are not interpreted as providing more information than simply the rank ordering of preferences over bundles of goods, such as information on the strength of preferences.
When cardinal utility is used, the magnitude of utility differences is treated as an ethically or behaviorally significant quantity. For example, suppose a cup of orange juice has utility of 120 utils, a cup of tea has a utility of 80 utils, and a cup of water has a utility of 40 utils. With cardinal utility, it can be concluded that the cup of orange juice is better than the cup of tea by exactly the same amount by which the cup of tea is better than the cup of water. Formally speaking, this means that if one has a cup of tea, she would be willing to take any bet with a probability, p, greater than .5 of getting a cup of juice, with a risk of getting a cup of water equal to 1-p. One cannot conclude, however, that the cup of tea is two thirds of the goodness of the cup of juice, because this conclusion would depend not only on magnitudes of utility differences, but also on the "zero" of utility. For example, if the "zero" of utility was located at -40, then a cup of orange juice would be 160 utils more than zero, a cup of tea 120 utils more than zero. Cardinal utility, to economics, can be seen as the assumption that utility can be measured through quantifiable characteristics, such as height, weight, temperature, etc.
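The indifference probability and the zero-point issue in this example can be worked out directly. This is only an arithmetic illustration of the cardinal interpretation, using the util numbers from the paragraph above:

```python
u_juice, u_tea, u_water = 120, 80, 40   # utils from the example above

# Under the cardinal (expected-utility) reading, a tea holder is
# indifferent to a juice-vs-water gamble at the probability p solving:
#   u_tea = p * u_juice + (1 - p) * u_water
p = (u_tea - u_water) / (u_juice - u_water)
print(p)  # 0.5 -> any gamble with p > 0.5 is strictly preferred to tea

# Shifting the "zero" of utility changes ratios but not differences,
# which is why "tea is two thirds as good as juice" is not a licensed
# conclusion.
shift = 40
ratio_before = u_tea / u_juice                      # 80/120  = 2/3
ratio_after = (u_tea + shift) / (u_juice + shift)   # 120/160 = 3/4
```

The difference u_juice − u_tea equals u_tea − u_water both before and after the shift, while the ratio moves; cardinal utility licenses statements about the former, not the latter.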
Neoclassical economics has largely retreated from using cardinal utility functions as the basis of economic behavior. A notable exception is in the context of analyzing choice under conditions of risk (see below).
Sometimes cardinal utility is used to aggregate utilities across persons, to create a social welfare function.
When ordinal utilities are used, differences in utils (values taken on by the utility function) are treated as ethically or behaviorally meaningless: the utility index encodes a full behavioral ordering between members of a choice set, but tells nothing about the related "strength of preferences". In the above example, it would only be possible to say that juice is preferred to tea to water, but no more. Thus, ordinal utility utilizes comparisons, such as "preferred to", "no more", "less than", etc.
Ordinal utility functions are unique up to increasing monotone (or monotonic) transformations. For example, if a function u(x) is taken as ordinal, it is equivalent to the function (u(x))^3, because taking the 3rd power is an increasing monotone transformation (or monotonic transformation). This means that the ordinal preference induced by these functions is the same (although they are two different functions). In contrast, cardinal utilities are unique only up to increasing linear transformations, so if u(x) is taken as cardinal, it is not equivalent to (u(x))^3.
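A small numerical sketch of this distinction, with invented bundle values: cubing a utility function preserves the ranking of sure outcomes, but it can reverse an expected-utility comparison, which is why it is not an admissible transformation of a cardinal utility:

```python
bundles = [1.0, 2.5, 0.3, 4.0]       # hypothetical quantities of a good
def u(x):
    return x                          # an ordinal utility function
def v(x):
    return u(x) ** 3                  # increasing monotone transform of u

# Ordinally equivalent: both functions rank the bundles identically.
assert sorted(bundles, key=u) == sorted(bundles, key=v)

# Not cardinally equivalent: utility *differences* are distorted, so an
# expected-utility comparison can flip under the cube.
p = 0.5
eu_lottery = p * u(0.3) + (1 - p) * u(4.0)   # 50-50 lottery on worst/best
eu_sure = u(2.5)                              # the sure middle bundle
assert eu_lottery < eu_sure                   # u prefers the sure thing

ev_lottery = p * v(0.3) + (1 - p) * v(4.0)
assert ev_lottery > v(2.5)                    # v prefers the lottery
```

An increasing linear transformation such as 2·u(x) + 7, by contrast, would leave both comparisons unchanged.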
Although preferences are the conventional foundation of microeconomics, it is often convenient to represent preferences with a utility function and analyze human behavior indirectly with utility functions. Let "X" be the consumption set, the set of all mutually-exclusive baskets the consumer could conceivably consume. The consumer's utility function u : X → R ranks each package in the consumption set. If the consumer strictly prefers "x" to "y" or is indifferent between them, then u(x) ≥ u(y).
For example, suppose a consumer's consumption set is "X" = {nothing, 1 apple, 1 orange, 1 apple and 1 orange, 2 apples, 2 oranges}, and its utility function is "u"(nothing) = 0, "u"(1 apple) = 1, "u"(1 orange) = 2, "u"(1 apple and 1 orange) = 4, "u"(2 apples) = 2 and "u"(2 oranges) = 3. Then this consumer prefers 1 orange to 1 apple, but prefers one of each to 2 oranges.
In micro-economic models, there is usually a finite set of L commodities, and a consumer may consume an arbitrary amount of each commodity. This gives a consumption set of R_+^L, and each package x ∈ R_+^L is a vector containing the amounts of each commodity. In the previous example, we might say there are two commodities: apples and oranges. If we say apples is the first commodity, and oranges the second, then the consumption set is X = R_+^2 and "u"(0, 0) = 0, "u"(1, 0) = 1, "u"(0, 1) = 2, "u"(1, 1) = 4, "u"(2, 0) = 2, "u"(0, 2) = 3, as before. Note that for "u" to be a utility function on "X", it must be defined for every package in "X".
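The apples-and-oranges table can be encoded directly. This sketch simply stores the utility index over (apples, oranges) bundles and recovers the stated preferences from it:

```python
# The two-commodity example: bundles are (apples, oranges) vectors,
# with the utility values given in the text.
u = {(0, 0): 0, (1, 0): 1, (0, 1): 2,
     (1, 1): 4, (2, 0): 2, (0, 2): 3}

def prefers(x, y):
    """Weak preference recovered from the utility index."""
    return u[x] >= u[y]

assert prefers((0, 1), (1, 0))      # 1 orange preferred to 1 apple
assert prefers((1, 1), (0, 2))      # one of each preferred to 2 oranges
# 2 apples and 1 orange receive the same index, so the consumer is
# indifferent between them:
assert prefers((2, 0), (0, 1)) and prefers((0, 1), (2, 0))
```

Note that u here is only defined on the six listed bundles; on the full consumption set R_+^2 it would have to be defined everywhere.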
A utility function u : X → R represents a preference relation ≽ on X iff for every x, y ∈ X, x ≽ y implies u(x) ≥ u(y). If u represents ≽, then this implies ≽ is complete and transitive, and hence rational.
In financial applications, e.g. portfolio optimization, an investor chooses a financial portfolio which maximizes his/her own utility function or, equivalently, minimizes his/her risk measure. For example, modern portfolio theory selects variance as a measure of risk; other popular theories are expected utility theory and prospect theory. To determine a specific utility function for any given investor, one could design a questionnaire procedure with questions in the form: How much would you pay for an "x%" chance of getting "y"? Revealed preference theory suggests a more direct approach: observe a portfolio "X*" which an investor currently holds, and then find a utility function/risk measure such that "X*" becomes an optimal portfolio.
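As a toy version of the variance-as-risk idea, the following sketch grid-searches a two-asset portfolio weight under a mean-variance utility. The return and variance figures, the risk-aversion coefficient, and the zero-covariance simplification are all assumptions for illustration, not data:

```python
# Two hypothetical assets: expected return and variance (covariance is
# assumed zero for simplicity). The investor maximizes the mean-variance
# utility U(w) = E[r_p] - risk_aversion * Var[r_p].
mu = [0.10, 0.04]        # assumed expected returns
var = [0.04, 0.0025]     # assumed variances
risk_aversion = 3.0

def mv_utility(w):
    """Utility of weight w in asset 0 and (1 - w) in asset 1."""
    mean = w * mu[0] + (1 - w) * mu[1]
    variance = w ** 2 * var[0] + (1 - w) ** 2 * var[1]  # zero covariance
    return mean - risk_aversion * variance

# Grid search over portfolio weights between 0 and 1.
best_w = max((i / 100 for i in range(101)), key=mv_utility)
```

With these numbers the optimum puts roughly a third of wealth in the riskier asset; raising risk_aversion pushes best_w toward the low-variance asset, mirroring how a steeper risk measure changes the optimal portfolio.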
In order to simplify calculations, various alternative assumptions have been made concerning details of human preferences, and these imply various alternative utility functions such as:
Most utility functions used in modeling or theory are well-behaved. They are usually monotonic and quasi-concave. However, it is possible for preferences not to be representable by a utility function. An example is lexicographic preferences which are not continuous and cannot be represented by a continuous utility function.
The expected utility theory deals with the analysis of choices among risky projects with multiple (possibly multidimensional) outcomes.
The St. Petersburg paradox was first proposed by Nicholas Bernoulli in 1713 and solved by Daniel Bernoulli in 1738. D. Bernoulli argued that the paradox could be resolved if decision-makers displayed risk aversion and argued for a logarithmic cardinal utility function. (Analyses of international survey data in the 21st century have shown that insofar as utility represents happiness, as in utilitarianism, it is indeed proportional to log income.)
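The paradox and Bernoulli's logarithmic resolution can be checked numerically. In the game, a fair coin is flipped until the first head; if that occurs on flip k, the payoff is 2^k. This sketch truncates the two infinite sums at a large number of rounds:

```python
import math

# St. Petersburg game: payoff 2**k with probability (1/2)**k.
def expected_value(rounds):
    # Each term contributes (1/2**k) * 2**k = 1, so the truncated
    # expected value grows without bound as rounds increases.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, rounds + 1))

def expected_log_utility(rounds):
    # Bernoulli's resolution: with u(x) = log(x), the series
    # sum_k (1/2**k) * k * log(2) converges to 2*log(2).
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, rounds + 1))

assert expected_value(1000) == 1000          # diverges linearly in rounds
assert abs(expected_log_utility(1000) - 2 * math.log(2)) < 1e-9
```

So a decision-maker with logarithmic utility values the gamble at a modest certainty equivalent even though its expected monetary value is infinite, which is the risk-aversion point Bernoulli argued for.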
The first important use of the expected utility theory was that of John von Neumann and Oskar Morgenstern, who used the assumption of expected utility maximization in their formulation of game theory.
Von Neumann and Morgenstern addressed situations in which the outcomes of choices are not known with certainty, but have probabilities attached to them.
A notation for a "lottery" is as follows: if options A and B have probability "p" and 1 − "p" in the lottery, we write it as a linear combination: L = p A + (1 − p) B.
More generally, for a lottery with many possible options: L = p_1 A_1 + p_2 A_2 + ... + p_n A_n, where the probabilities sum to one (Σ p_i = 1).
By making some reasonable assumptions about the way choices behave, von Neumann and Morgenstern showed that if an agent can choose between the lotteries, then this agent has a utility function such that the desirability of an arbitrary lottery can be calculated as a linear combination of the utilities of its parts, with the weights being their probabilities of occurring.
This is called the "expected utility theorem". The required assumptions are four axioms about the properties of the agent's preference relation over 'simple lotteries', which are lotteries with just two options. Writing A ≽ B to mean 'A is weakly preferred to B' ('A is preferred at least as much as B'), the four axioms are completeness, transitivity, continuity, and independence.
Axioms 3 and 4 enable us to decide about the relative utilities of two assets or lotteries.
In more formal language: a von Neumann–Morgenstern utility function is a function from choices to the real numbers, u : X → R, which assigns a real number to every outcome in a way that captures the agent's preferences over simple lotteries. Under the four assumptions mentioned above, the agent will prefer a lottery L to a lottery M if and only if, for the utility function characterizing that agent, the expected utility of L is greater than the expected utility of M: Eu(L) > Eu(M).
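Comparing lotteries by expected utility can be sketched in a few lines. The outcomes and utility numbers below are invented for illustration; a lottery is represented as a list of (probability, outcome) pairs:

```python
# A made-up von Neumann-Morgenstern utility over three outcomes.
util = {"car": 100, "bike": 40, "nothing": 0}

def expected_utility(lottery):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * util[outcome] for p, outcome in lottery)

L = [(0.5, "car"), (0.5, "nothing")]   # a 50-50 shot at a car
M = [(1.0, "bike")]                     # a bike for sure

# The agent prefers L to M iff the expected utility of L exceeds that of M.
assert expected_utility(L) > expected_utility(M)   # 50 > 40
```

Note this verdict is specific to the cardinal numbers chosen: shrinking util["car"] to 70 would make the sure bike the preferred option, which is why only linear transformations of u are admissible here.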
Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom.
Castagnoli and LiCalzi (1996) and Bordley and LiCalzi (2000) provided another interpretation for Von Neumann and Morgenstern's theory. Specifically for any utility function, there exists a hypothetical reference lottery with the expected utility of an arbitrary lottery being its probability of performing no worse than the reference lottery. Suppose success is defined as getting an outcome no worse than the outcome of the reference lottery. Then this mathematical equivalence means that maximizing expected utility is equivalent to maximizing the probability of success. In many contexts, this makes the concept of utility easier to justify and to apply. For example, a firm's utility might be the probability of meeting uncertain future customer expectations.
An indirect utility function gives the optimal attainable value of a given utility function, which depends on the prices of the goods and the income or wealth level that the individual possesses.
One use of the indirect utility concept is the notion of the utility of money. The (indirect) utility function for money is a nonlinear function that is bounded and asymmetric about the origin. The utility function is concave in the positive region, reflecting the phenomenon of diminishing marginal utility. The boundedness reflects the fact that beyond a certain point money ceases being useful at all, as the size of any economy at any point in time is itself bounded. The asymmetry about the origin reflects the fact that gaining and losing money can have radically different implications both for individuals and businesses. The non-linearity of the utility function for money has profound implications in decision making processes: in situations where outcomes of choices influence utility through gains or losses of money, which are the norm in most business settings, the optimal choice for a given decision depends on the possible outcomes of all other decisions in the same time-period.
Cambridge economist Joan Robinson famously criticized utility for being a circular concept: "Utility is the quality in commodities that makes individuals want to buy them, and the fact that individuals want to buy commodities shows that they have utility." Robinson also pointed out that, because the theory assumes preferences are fixed, utility is not a testable assumption. This is so because if we observe changes in people's behavior in relation to a change in prices or a change in the underlying budget constraint, we can never be sure to what extent the change in behavior was due to the change in price or budget constraint and how much was due to a change in preferences. This criticism is similar to that of the philosopher Hans Albert, who argued that the ceteris paribus conditions on which the marginalist theory of demand rested rendered the theory itself an empty tautology and completely closed to experimental testing. In essence, the demand and supply curves (theoretical lines representing the quantity of a product that would be offered or requested at a given price) are purely ontological and could never be demonstrated empirically.
Another criticism comes from the assertion that neither cardinal nor ordinal utility is empirically observable in the real world. In the case of cardinal utility, it is impossible to measure the level of satisfaction "quantitatively" when someone consumes or purchases an apple. In the case of ordinal utility, it is impossible to determine what choices were made when someone purchases, for example, an orange. Any act would involve preference over a vast set of choices (such as apples, orange juice, other fruit, vitamin C tablets, exercise, not purchasing, etc.).
Other questions of what arguments ought to enter into a utility function are difficult to answer, yet seem necessary to understanding utility. Whether people gain utility from coherence of wants, beliefs or a sense of duty is key to understanding their behavior in the utility organon. Likewise, choosing between alternatives is itself a process of determining what to consider as alternatives, a question of choice within uncertainty.
An evolutionary psychology perspective is that utility may be better viewed as due to preferences that maximized evolutionary fitness in the ancestral environment but not necessarily in the current one.
Soylent Green
Soylent Green is a 1973 American dystopian thriller film directed by Richard Fleischer and starring Charlton Heston and Leigh Taylor-Young. Edward G. Robinson appears in his final film. Loosely based on the 1966 science fiction novel "Make Room! Make Room!" by Harry Harrison, it combines both police procedural and science fiction genres: the investigation into the murder of a wealthy businessman; and a dystopian future of dying oceans and year-round humidity due to the greenhouse effect, resulting in suffering from pollution, poverty, overpopulation, euthanasia and depleted resources.
In 1973, it won the Nebula Award for Best Dramatic Presentation and the Saturn Award for Best Science Fiction Film.
In the year 2022, the cumulative effects of overpopulation, pollution and some apparent climate catastrophe have caused severe worldwide shortages of food, water and housing. There are 40 million people in New York City alone, where only the city's elite can afford spacious apartments, clean water and natural food, and even then at horrendously high prices. The homes of the elite usually include concubines who are referred to as "furniture" and serve the tenants as slaves.
Within the city lives New York City Police Department detective Frank Thorn with his aged friend Sol Roth, a highly intelligent analyst, referred to as a "Book". Roth remembers the world when it had animals and real food, and possesses a small library of reference materials to assist Thorn. Thorn is tasked with investigating the murder of the wealthy and influential William R. Simonson, and quickly learns that Simonson had been assassinated and was a board member of Soylent Industries.
Soylent Industries, which derives its name from a combination of "soy" and "lentil", controls the food supply of half of the world and sells the artificially produced wafers, including "Soylent Red" and "Soylent Yellow". Their latest product is the far more flavorful and nutritious "Soylent Green", advertised as being made from ocean plankton, but is in short supply. As a result of the weekly supply bottlenecks, the hungry masses regularly riot, and they are brutally removed from the streets by means of police vehicles that scoop the rioters with large shovels and dump them within the vehicle's container.
With the help of "furniture" Shirl, with whom Thorn begins a relationship, his investigation leads to a priest whom Simonson had visited and confessed to shortly before his death. The priest is only able to hint at a gruesome truth before he himself is murdered. By order of the governor, Thorn is instructed to end the investigation, but he presses on. During a riot, he is attacked by the same assassin who killed Simonson, but the killer is crushed by a police vehicle.
Roth brings two volumes of oceanographic reports Thorn had procured from Simonson's apartment to the team of Books at the Supreme Exchange. The books confirm that the oceans no longer produce plankton, and deduce that Soylent Green is produced from some inconceivable supply of protein. They also deduce that Simonson's murder was ordered by his fellow Soylent Industries board members, knowing he was increasingly troubled by the truth. Roth is so disgusted with his life in a degraded world that he decides to "return to the home of God" and seeks assisted suicide at a government clinic. Thorn finds a message left by Roth and rushes to stop him, but arrives too late. Roth and Thorn are mesmerized by the euthanasia process's visual and musical montage—long-gone forests, wild animals, rivers and ocean life. Before dying, Roth whispers what he has learned to Thorn, begging him to find proof, so that the Council of Nations can take action.
Thorn boards a truck transporting bodies from the euthanasia center to a recycling plant, where the secret is revealed: human corpses are being converted into Soylent Green. He is spotted and kills his attackers, but is himself wounded. As Thorn is tended to by paramedics, he urges his police chief to spread the truth he has discovered and initiate proceedings against the company. While being taken away, Thorn shouts out to the surrounding crowd, "Soylent Green is people!"
The screenplay was based on Harry Harrison's novel "Make Room! Make Room!" (1966), which is set in the year 1999 with the theme of overpopulation and overuse of resources leading to increasing poverty, food shortages, and social disorder. Harrison was contractually denied control over the screenplay and was not told during negotiations that Metro-Goldwyn-Mayer was buying the film rights. He discussed the adaptation in "Omni's Screen Flights/Screen Fantasies" (1984), noting, the "murder and chase sequences [and] the 'furniture' girls are not what the film is about — and are completely irrelevant", and answered his own question, "Am I pleased with the film? I would say fifty percent".
While the book refers to "soylent steaks", it makes no reference to "Soylent Green", the processed food rations depicted in the film. The book's title was not used for the movie on grounds that it might have confused audiences into thinking it a big-screen version of "Make Room for Daddy".
This was the 101st and last movie in which Edward G. Robinson appeared; he died of bladder cancer twelve days after the completion of filming, on January 26, 1973. Robinson had previously worked with Heston in "The Ten Commandments" (1956) and the make-up tests for "Planet of the Apes" (1968). In his book "The Actor's Life: Journal 1956-1976", Heston wrote "He knew while we were shooting, though we did not, that he was terminally ill. He never missed an hour of work, nor was late to a call. He never was less than the consummate professional he had been all his life. I'm still haunted, though, by the knowledge that the very last scene he played in the picture, which he knew was the last day's acting he would ever do, was his death scene. I know why I was so overwhelmingly moved playing it with him."
The film's opening sequence, depicting America becoming more crowded with a series of archive photographs set to music, was created by filmmaker Charles Braverman. The "going home" score in Roth's death scene was conducted by Gerald Fried and consists of the main themes from Symphony No. 6 ("Pathétique") by Tchaikovsky, Symphony No. 6 ("Pastoral") by Beethoven, and the "Peer Gynt Suite" ("Morning Mood" and "Åse's Death") by Edvard Grieg.
A custom cabinet unit of the early arcade game "Computer Space" was used in Soylent Green and is considered to be the first video game appearance in a movie.
The film was released April 19, 1973, and met with mixed reactions from critics. "Time" called it "intermittently interesting", noting that "Heston forsak[es] his granite stoicism for once", and asserting the film "will be most remembered for the last appearance of Edward G. Robinson... In a rueful irony, his death scene, in which he is hygienically dispatched with the help of piped-in light classical music and movies of rich fields flashed before him on a towering screen, is the best in the film." "New York Times" critic A. H. Weiler wrote ""Soylent Green" projects essentially simple, muscular melodrama a good deal more effectively than it does the potential of man's seemingly witless destruction of the Earth's resources"; Weiler concludes "Richard Fleischer's direction stresses action, not nuances of meaning or characterization. Mr. Robinson is pitiably natural as the realistic, sensitive oldster facing the futility of living in dying surroundings. But Mr. Heston is simply a rough cop chasing standard bad guys. Their 21st-century New York occasionally is frightening but it is rarely convincingly real."
Roger Ebert gave the film three stars out of four, calling it "a good, solid science-fiction movie, and a little more." Gene Siskel gave the film one-and-a-half stars out of four and called it "a silly detective yarn, full of juvenile Hollywood images. Wait 'til you see the giant snow shovel scoop the police use to round up rowdies. You may never stop laughing." Arthur D. Murphy of "Variety" wrote, "The somewhat plausible and proximate horrors in the story of 'Soylent Green' carry the Russell Thacher-Walter Seltzer production over its awkward spots to the status of a good futuristic exploitation film." Charles Champlin of the "Los Angeles Times" called it "a clever, rough, modestly budgeted but imaginative work." Penelope Gilliatt of "The New Yorker" was negative, writing, "This pompously prophetic thing of a film hasn't a brain in its beanbag. Where is democracy? Where is the popular vote? Where is women's lib? Where are the uprising poor, who would have suspected what was happening in a moment?"
On Rotten Tomatoes the film has an approval rating of 71%, based on 38 reviews, with an average rating of 7/10.
"Soylent Green" was released on Capacitance Electronic Disc by MGM/CBS Home Video and later on laserdisc by MGM/UA in 1992. In November 2007, Warner Home Video released the film on DVD concurrent with the DVD releases of two other science fiction films: "Logan's Run" (1976), a film that covers similar themes of dystopia and overpopulation, and "Outland" (1981). A Blu-ray Disc release followed on March 29, 2011.
I Am Legend (novel)
I Am Legend is a 1954 post-apocalyptic horror novel by American writer Richard Matheson that was influential in the modern development of zombie and vampire literature and in popularizing the concept of a worldwide apocalypse due to disease. The novel was a success and was adapted into the films "The Last Man on Earth" (1964), "The Omega Man" (1971), and "I Am Legend" (2007). It was also an inspiration behind "Night of the Living Dead" (1968).
Robert Neville appears to be the sole survivor of a pandemic that has killed most of the human population and turned the remainder into "vampires" that largely conform to their stereotypes in fiction and folklore: they are blood-sucking, pale-skinned, and nocturnal, though otherwise indistinguishable from normal humans. Implicitly set in Los Angeles, the novel details Neville's life in the months and eventually years after the outbreak as he attempts to comprehend, research, and possibly cure the disease. Swarms of vampires surround his house nightly and try to find ways to get inside, which includes the females exposing themselves and his vampire neighbor relentlessly shouting for him to come out. Neville survives by barricading himself inside his house every night; he is further protected by the traditional vampire repellents of garlic, mirrors, and crucifixes. Weekly dust storms ravage the city, and during the day, when the vampires are inactive, Neville drives around to search them out in order to kill them with wooden stakes (since they seem impervious to his guns' bullets) and to scavenge for supplies. Neville's past is occasionally revealed through flashbacks; the disease claimed his daughter, whose body the government forced him to burn, as well as his wife, whose body he secretly buried but then had to kill after she rose from the dead as a vampire.
After bouts of depression and alcoholism, Neville finally determines there must be some scientific reasons behind the vampires' origins, behaviors, and aversions, so he sets out to investigate. He obtains books and other research materials from a library and through gradual research discovers the root of the disease is probably a "Bacillus" strain of bacteria capable of infecting both deceased and living hosts. His experiments with microscopes also reveal that the bacteria are deadly sensitive to garlic and sunlight. One day, a stray, injured dog finds its way to his street, filling Neville with amazed joy. Desperate for company, Neville painstakingly earns the nervous dog's trust with food and brings it into the home. Despite his efforts, the sickly dog dies a week later, and Neville, robbed of all hope, resignedly returns to learning more about the vampires.
Neville's continued readings and experiments on incapacitated vampires help him create new theories. He believes vampires are affected by mirrors and crosses because of "hysterical blindness", the result of previous psychological conditioning of the infected. Driven insane by the disease, the infected now react as they believe they should when confronted with these items. Even then, their reaction is constrained to the beliefs of the particular person; for example, a Christian vampire would fear the cross, but a Jewish vampire would not. Neville additionally discovers more efficient means of killing the vampires, other than just driving a stake into their hearts. This includes exposing vampires to direct sunlight or inflicting wide, oxygen-exposing wounds anywhere on their bodies so that the bacteria switch from being anaerobic symbionts to aerobic parasites, rapidly consuming their hosts when exposed to air, which gives the appearance of the vampires instantly liquefying. However, the bacteria also produce resilient "body glue" that instantly seals blunt or narrow wounds, making the vampires bulletproof. With his new knowledge, Neville is killing such large numbers of vampires in his daily forays that his nightly visitors have diminished significantly. Neville further believes the pandemic was spread not so much by direct vampire bites as by bacteria-bearing mosquitos and dust storms in the cities following a recent war. The inconsistency of Neville's results in handling vampires also leads him to realize that there are in fact two differently-reacting types of vampires: those conscious and living with a worsening infection and those who have died but been reanimated by the bacteria (i.e. undead).
After three years, Neville sees a terrified woman in broad daylight. Neville is immediately suspicious after she recoils violently in the presence of garlic, but they slowly win each other's trust. Eventually, the two comfort each other romantically and he explains some of his findings, including his theory that he developed immunity against the infection after being bitten by an infected vampire bat years ago. He wants to know if the woman, named Ruth, is infected or immune, vowing to treat her if she is infected, and she reluctantly allows him to take a blood sample but suddenly knocks him unconscious as he views the results. When Neville wakes, he discovers a note from Ruth confessing that she is indeed a vampire sent to spy on him and that he was responsible for the death of her husband, another vampire. The note further suggests that only the undead vampires are pathologically violent but not those who were alive at the time of infection and who still survive due to chance mutations in their bacteria. These living-infected have slowly overcome their disease and are attempting to build a new society. They have developed medication that diminishes the worst of their symptoms. Ruth warns Neville that her feelings for him are true but that her people will attempt to capture him and that he should try to escape the city.
However, assuming he will be treated fairly by the new society, Neville stays at his house until infected members arrive and violently dispatch the undead vampires outside his house with fiendish glee. Realizing the infected attackers may intend to kill him after all, he fires on them and in turn is shot and captured. Fatally wounded, Neville is placed in a barred cell where he is visited by Ruth, who informs him that she is a senior member of the new society but, unlike the others, does not resent him. After discussing the effects of Neville's vampire-killing activities on the new society, she acknowledges the public need for Neville's execution but, out of mercy, gives him a packet of fast-acting suicide pills. Neville accepts his fate and asks Ruth not to let this society become too heartless. Ruth promises to try, kisses him, and leaves. Neville goes to his prison window and sees the infected staring back at him with the same hatred and fear that he once felt for them; he realizes that he, a remnant of old humanity, is now a legend to the new race born of the infection. He recognizes that their desire to kill him, after he has killed so many of their loved ones, is not something he can condemn. As the pills take effect, he is amused by the thought that he will become their new superstition and legend, just as vampires once were to humans.
As related in "In Search of Wonder" (1956), Damon Knight wrote:
"Galaxy" reviewer Groff Conklin described "Legend" as "a weird [and] rather slow-moving first novel… a horrid, violent, sometimes exciting but too often overdone tour de force." Anthony Boucher praised the novel, saying "Matheson has added a new variant on the Last Man theme… and has given striking vigor to his invention by a forceful style of storytelling which derives from the best hard-boiled crime novels".
Dan Schneider from "International Writers Magazine: Book Review" wrote in 2005:
In 2012, the Horror Writers Association gave "I Am Legend" the special Vampire Novel of the Century Award.
Although Matheson calls the assailants in his novel "vampires" and though their condition is transmitted through bacteria in the blood and garlic is a repellant to this strain of bacteria, there is little similarity between them and vampires as developed by John William Polidori and his successors, who come straight out of the gothic fiction tradition. In "I Am Legend", the "vampires" share more similarities with zombies, and the novel influenced the zombie genre and popularized the concept of a worldwide zombie apocalypse. Although the idea has now become commonplace, a scientific origin for vampirism or zombies was fairly original when written. According to Clasen,
Though referred to as "the first modern vampire novel", it is as a novel of social theme that "I Am Legend" made a lasting impression on the cinematic zombie genre, by way of director George A. Romero, who acknowledged its influence and that of its 1964 adaptation, "The Last Man on Earth", upon his seminal film "Night of the Living Dead" (1968). Discussing the creation of "Night of the Living Dead", Romero remarked, "I had written a short story, which I basically had ripped off from a Richard Matheson novel called "I Am Legend"." Moreover, film critics noted similarities between "Night of the Living Dead" (1968) and "The Last Man on Earth" (1964).
Stephen King said, "Books like "I Am Legend" were an inspiration to me". Film critics noted that the British film "28 Days Later" (2002) and its sequel "28 Weeks Later" both feature a rabies-type plague ravaging Great Britain, analogous to "I Am Legend".
Tim Cain, the producer, lead programmer and one of the main designers of the 1997 computer game "Fallout" said,
This book was how a [sic] individual would handle thinking that he was the last survivor on Earth. This is why in "Fallout 1" when you're voted to leave the Vault, we really wanted that sense of isolationism; that sense of: You are the only person out here on the Wasteland who is, quote, "a normal person", and we wanted you to feel, like, special in that way.
The book has also been adapted into a comic book miniseries titled "Richard Matheson's I Am Legend" by Steve Niles and Elman Brown. It was published in 1991 by Eclipse Comics and collected into a trade paperback by IDW Publishing.
An unrelated film tie-in was released in 2007 as a one-shot "I Am Legend: Awakening" published in a San Diego Comic-Con special by Vertigo.
A nine-part abridged reading of the novel performed by Angus MacInnes was originally broadcast on BBC Radio 7 in January 2006 and repeated in January 2018.
"I Am Legend" has been adapted into a feature-length film three times, as well as into a direct-to-video feature film called "I Am Omega". Differing from the book, each of them portrays the Neville character as an accomplished scientist. The three adaptations show him finding a remedy and passing it on. Adaptations differ from the novel by setting the events three years after the disaster, instead of happening “in the span of” three years. Also adaptations are set in the near future, a few years after the film's release, while the novel is set 20 years after its publication date.
It has also been adapted as the Spanish short student film "Soy leyenda".
In 1964, Vincent Price starred as Dr. Robert Morgan (rather than "Neville") in "The Last Man on Earth" (the original title of this Italian production was "L'ultimo uomo della Terra"). Matheson wrote the original screenplay for this adaptation, but due to later rewrites did not wish his name to appear in the credits; as a result, Matheson is credited under the pseudonym "Logan Swanson".
In 1971, a far different version was produced, titled "The Omega Man". It starred Charlton Heston (as Robert Neville) and Anthony Zerbe. Matheson had no influence on the screenplay for this film, and although the premise remains, it deviates from the novel in several ways, removing the infected people's vampiric characteristics, except their sensitivity to light. In this version, the infected are portrayed as nocturnal, black-robed, albino mutants, known as the Family. Though intelligent, they eschew modern technology, believing it (and those who use it, such as Neville) to be evil and the cause of humanity's downfall.
In 2007, a third adaptation of the novel was produced, this time titled "I Am Legend". Directed by Francis Lawrence and starring Will Smith as Robert Neville, this film uses both Matheson's novel and the 1971 "Omega Man" film as its sources. This adaptation also deviates significantly from the novel. In this version, the infection is caused by a virus originally intended to cure cancer. Some vampiric elements are retained, such as sensitivity to UV light and attraction to blood. The infected are portrayed as nocturnal, feral creatures of limited intelligence who hunt the uninfected with berserker-like rage. Other creatures, such as dogs, are also infected by the virus. The ending of the film was also altered to portray Neville as sacrificing his life to save humanity, rather than being executed for crimes against the surviving vampiric humans, although a deleted ending for the film was closer in spirit to the book. The film takes place in New York City in the years 2009 and 2012 rather than Los Angeles in 1975–1977.
Dennis Bergkamp
Dennis Nicolaas Maria Bergkamp (born 10 May 1969) is a Dutch professional football manager and former player. Originally a wide midfielder, Bergkamp was moved to main striker and then to second striker, where he remained throughout his playing career. Nicknamed the "Non-Flying Dutchman" by Arsenal supporters due to his fear of flying, Bergkamp is widely regarded as one of the greatest players of his generation.
The son of an electrician, Bergkamp was born in Amsterdam and played as an amateur in the lower leagues. He was spotted by Ajax at age 11 and made his professional debut in 1986. Prolific form led to an international call-up with the Netherlands a year later, attracting the attention of several European clubs. Bergkamp signed for Italian club Inter Milan in 1993, where he had two underwhelming seasons. After joining Arsenal in 1995, he rejuvenated his career, helping the club win three Premier League titles and four FA Cups, and reach the 2006 UEFA Champions League Final, which marked his last appearance as a player. Despite having said he did not want to go into coaching, Bergkamp served as an assistant at Ajax between 2011 and 2017.
With the Netherlands national team, Bergkamp was selected for Euro 1992, where he impressed, scoring three goals as his country reached the semi-finals. At the 1998 FIFA World Cup, he scored a memorable winning goal in the final minute of the quarter-final against Argentina, which has been regarded as one of the greatest FIFA World Cup goals. Bergkamp surpassed Faas Wilkes's record to become the country's top scorer of all time in 1998, a record later eclipsed by Patrick Kluivert, Klaas-Jan Huntelaar, and Robin van Persie.
Bergkamp has been described by Jan Mulder as having "the finest technique" of any Dutch international and a "dream for a striker" by teammate Thierry Henry. Bergkamp finished third twice in the FIFA World Player of the Year award and was selected by Pelé as one of the FIFA 100 greatest living players. In 2007, he was inducted into the English Football Hall of Fame, the first and only Dutch player ever to receive the honour. In 2017, Bergkamp's goal against Newcastle United in 2002 was voted as the best Premier League goal of all-time in the league's 25-year history.
Born in Amsterdam, Bergkamp was the last of Wim and Tonnie Bergkamp's four sons. He was brought up in a working-class suburb, in a family aspiring to reach middle-class status. His father, an electrician and amateur footballer in the lower leagues, named him in honour of Scottish striker Denis Law. To comply with Dutch given name customs, an extra "n" was inserted in Bergkamp's first name by his father after it was not accepted by the registrar. Bergkamp was raised as a Roman Catholic by his family and regularly attended church during his childhood. Although in later years he said visits to church did not appeal to him, Bergkamp still maintains his faith. According to Bergkamp, his childhood footballing heroes were Glenn Hoddle, whom he admired for his soft precise touch, and Johan Cruyff, who once coached him when he was twelve.
Bergkamp was brought up through Ajax's youth system, joining the club at age 11. Manager Johan Cruyff gave him his professional debut on 14 December 1986 against Roda JC; the match ended in a 2–0 victory for Ajax. Bergkamp scored his first senior goal for the club against HFC Haarlem on 22 February 1987 in a match Ajax won 6–0. He went on to make 23 appearances in the 1986–87 season, including a European debut against Malmö FF in the 1986–87 European Cup Winners' Cup, earning him praise. Ajax won the competition, beating Lokomotive Leipzig 1–0 as Bergkamp made an appearance as a substitute.
In later seasons, Bergkamp established himself as a first-team player for Ajax. This culminated in a period of success for the club, which won the Eredivisie title in the 1989–90 season for the first time in five years. Bergkamp scored 29 goals in 36 matches the following season and became the joint top scorer in the league, sharing the accolade with PSV striker Romário.
Ajax won the 1992 UEFA Cup Final, beating Torino through the away goals ruling. They then defeated Heerenveen 6–2 in the final of the KNVB Cup on 20 May 1993. Bergkamp was the top scorer in the Eredivisie from 1991 to 1993, and was voted Dutch Footballer of the Year in 1992 and 1993. In total, he scored 122 goals in 239 matches for his hometown club.
Bergkamp attracted the attention of several European clubs as a result of his performances for Ajax. Johan Cruyff advised him not to join Real Madrid, one of the teams said to have been interested in him. But Bergkamp was insistent on playing in Italy. He considered Serie A "the biggest league at the time" and preferred a move to either Juventus or Inter Milan. On 16 February 1993, Bergkamp agreed a £7.1 million move to the latter club in a deal which included his Ajax teammate Wim Jonk. Upon signing, Bergkamp said Inter "met all my demands. The most important thing for me was the stadium, the people at the club and their style of play."
Bergkamp made his debut against Reggiana on 29 August 1993 at the San Siro in a 2–1 victory. He scored his first goal for the club against Cremonese in September 1993 but had a difficult time against the highly organised and resolute Italian defences, scoring a further seven goals in the league. This was partly due to manager Osvaldo Bagnoli's inability to find a stable forward partnership; he preferred to field Bergkamp in a front three with Rubén Sosa and Salvatore Schillaci. Inter's poor league form culminated in the sacking of Bagnoli in February 1994 and his replacement by Gianpiero Marini, a member of Italy's 1982 FIFA World Cup-winning squad. The club finished 13th in Serie A, one point away from relegation, but enjoyed success in the UEFA Cup, beating Austria Salzburg in the final over two legs. Bergkamp was the competition's joint top scorer with eight goals and scored a hat-trick against Rapid București in the first round.
In Bergkamp's second season at Inter, the club changed managers again, appointing Ottavio Bianchi. Bergkamp endured a disappointing campaign, troubled by stress injuries and fatigue from the 1994 World Cup, and managed only five goals in 26 appearances. Off the field, Bergkamp's relationship with the Italian press and fans became uncomfortable. His shy persona and his propensity to go home straight after matches were interpreted as apathy. Because of his poor performances on the pitch, one Italian publication renamed its award for the worst performance of the week from "L'asino della settimana" (Donkey of the Week) to "Bergkamp della settimana". Inter ended the league season in sixth position and failed to retain the UEFA Cup, with the club eliminated in the second round. In February 1995, the club was purchased by Italian businessman and fan Massimo Moratti, who promised to invest heavily in the squad. Bergkamp's future in the first team was uncertain following the signing of Maurizio Ganz a month after the takeover.
As Moratti prepared to make wholesale changes at the club, Bergkamp left Inter and signed with Arsenal in June 1995 for a transfer fee estimated at £7.5 million. He became manager Bruce Rioch's first signing at Arsenal and broke the club's transfer fee record set at £2.5 million. Bergkamp's arrival at the club was significant not only because he was an established international footballer who looked to have his best years ahead of him, but also because he was a major contributor to Arsenal's return to success after much decline in the mid-1990s. On the opening day of the 1995–96 league season, Bergkamp made his full debut against Middlesbrough. He struggled to adapt to the English game and failed to score in the club's next six league matches, prompting ridicule by the national press. On 23 September 1995, Bergkamp scored his first and second goals for Arsenal against Southampton at Highbury. Bergkamp ended his first season with 33 appearances and 11 goals, helping Arsenal finish fifth and earn a place in the UEFA Cup by scoring the winner against Bolton Wanderers on the final day of the season.
The appointment of Arsène Wenger as Arsenal manager in September 1996 marked a turning point in Bergkamp's career. Wenger, who had moderate success coaching in France and Japan, recognised Bergkamp's talent and wanted to use him as a fulcrum of the team's forward play. Both were advocates of a continental style of attacking football, and Wenger's decision to impose a strict fitness and health regimen pleased Bergkamp. Despite making fewer appearances in the 1996–97 season, Bergkamp was more influential in the first team, creating 13 assists. Against Tottenham Hotspur in November 1996, he set up an 88th-minute winner for captain Tony Adams to volley in using his left foot. He then scored in injury time, controlling a high ball with his left foot and evading his marker Stephen Carr in a tight area to set up his shot. Bergkamp received his first red card against Sunderland in January 1997 for a high tackle on midfielder Paul Bracewell in the 26th minute. Arsenal went on to lose the match 1–0, but a run of 8 wins in their final 16 matches gave the club a third-place finish, missing out on a spot in the UEFA Champions League via goal difference.
Bergkamp was instrumental the following season in helping Arsenal complete a domestic league and cup double. He became the club's top scorer with 22 goals and recorded a strike rate of 0.57. Arsenal's achievement was all the more astonishing given the team, written off by many in December 1997, had made ground on reigning Premier League champions Manchester United. Early in the season away to Leicester City at Filbert Street on 23 August 1997, Bergkamp scored his first hat-trick for the club. The third goal, which he regarded as his favourite for Arsenal, required just one touch to control the ball in the penalty box, another to flick it past his marker Matt Elliott before juggling it with his feet and shooting past goalkeeper Kasey Keller. After the match, Leicester manager Martin O'Neill was gracious enough to admit Bergkamp's was "the best hat-trick I've ever seen". In an FA Cup quarter-final replay against West Ham United on 17 March 1998, Bergkamp was sent off for elbowing midfielder Steve Lomas and missed three matches due to suspension. He played no further part in Arsenal's season after overstretching his hamstring against Derby County on 29 April 1998, missing the 1998 FA Cup Final. Bergkamp was consoled with the PFA Players' Player of the Year award, becoming only the third non-British player to be recognised by his fellow professionals as the outstanding performer in English football.
After an effective 1998 World Cup campaign with the national team, Bergkamp had another productive season in 1998–99. Although Arsenal failed to retain the Premier League after losing the title on the final day of the season to Manchester United, Bergkamp was the club's second-top scorer in all competitions, with 16 goals, and finished the season as the top assist provider in the Premier League, alongside Jimmy Floyd Hasselbaink, with 13 assists. Arsenal were also defeated in an FA Cup semi-final replay against Manchester United in April 1999. With the score 1–1 heading into injury time, Arsenal were awarded a penalty after midfielder Ray Parlour was brought down by Phil Neville inside the 18-yard box. Bergkamp took the penalty, but it was saved by goalkeeper Peter Schmeichel. In the second half of extra time, Ryan Giggs scored the winner, a goal regarded by many as the greatest in the competition's history. After this miss, Bergkamp did not take another penalty for the remainder of his career.
The 1999–2000 season proved to be a frustrating one for both Arsenal and Bergkamp. The club finished second in the league, 18 points behind Manchester United, and lost in the 2000 UEFA Cup Final to Turkish opponents Galatasaray on penalties. The departure of compatriot Marc Overmars and French midfielder Emmanuel Petit in the close season led to speculation over Bergkamp's future. He ultimately agreed terms on a contract extension in December 2000. Despite an array of new signings made in the 2000–01 season, Arsenal were runners-up in the league for a third year in succession. The emergence of Thierry Henry and Sylvain Wiltord as the main strikers saw Bergkamp's first-team opportunities limited as a result. He was used as a late substitute in Liverpool's win over Arsenal in the 2001 FA Cup Final.
Success finally came in the 2001–02 season. Arsenal regained the league, beating Manchester United at Old Trafford in the penultimate game of the season to complete the club's second double under Wenger; Arsenal defeated Chelsea 2–0 to win the FA Cup four days prior. Bergkamp played in 33 league matches, setting up 15 goals, one of which was against Juventus in the second group stage of the Champions League. Holding off two markers, he twisted and turned before feeding the ball to Freddie Ljungberg in the penalty box to score. Bergkamp headed in the winner against Liverpool in an FA Cup fourth-round tie on 27 January 2002, but was shown a red card for a two-footed lunge on defender Jamie Carragher, who himself was sent off for throwing a coin into the crowd. He was subsequently banned for three matches (two league, one FA Cup round). Bergkamp appealed against the ban, but was unsuccessful. He made his return against Newcastle United on 3 March 2002. Early in the match, Arsenal midfielder Robert Pires played a low pass from the left flank to Bergkamp on the edge of the opposition's penalty area, with his back to goal. Under pressure from his marker Nikos Dabizas, Bergkamp controlled the ball with one flick and went around the other side before placing the ball precisely into the bottom right-hand corner to score. Wenger described the goal as "unbelievable", adding "It was not only a magnificent goal but a very important one – I enjoyed it a lot". Bergkamp featured in nine out of the last ten league games, forming a productive partnership with Ljungberg.
Bergkamp reached a personal landmark during the 2002–03 season, scoring his 100th goal for Arsenal against Oxford United in an FA Cup third-round tie on 4 January 2003. In the league, Arsenal failed to retain the championship despite having led by eight points in March 2003. However, they did win the FA Cup for a second successive year, beating Southampton in the 2003 FA Cup Final. On 20 July 2003, Bergkamp signed a one-year extension at the club. The 2003–04 season ended on a high point for Bergkamp as Arsenal reclaimed the league title, becoming the first English team in more than a century to go through the entire domestic league season unbeaten. Against Leicester City in the final league match of the campaign, with the score tied at 1–1, Bergkamp set up the winner with a pass to captain Patrick Vieira, who rounded the goalkeeper and scored. The team, dubbed "The Invincibles", did not achieve similar dominance in Europe; Arsenal were beaten by Chelsea in the quarter-finals of the Champions League over two legs. Bergkamp committed himself to Arsenal at the end of the season, signing a further extension to his contract.
Bergkamp started in 29 league matches in the 2004–05 season, but Arsenal's title defence ended unsuccessfully. The team finished second, 12 points behind Chelsea. At home against Middlesbrough on 22 August 2004, Bergkamp acted as captain for the injured Vieira in a match where Arsenal came back from 1–3 down to win 5–3 and equal Nottingham Forest's record of 42 league matches undefeated. Against Sheffield United in the FA Cup on 19 February 2005, Bergkamp was shown a straight red card by referee Neale Barry for shoving defender Danny Cullip. His appeal against the decision was rejected by The Football Association (FA), meaning he missed the club's next three domestic games. In Arsenal's final home match of the season, against Everton, Bergkamp produced a man-of-the-match performance, scoring once and assisting three goals in a 7–0 win. Bergkamp was moved by Arsenal supporters chanting "one more year", describing it as "quite special". "They obviously feel there is another year left in me, so that's great as it shows they're really behind me," he said. Following Arsenal's penalty shootout victory over Manchester United in the 2005 FA Cup Final, he signed a one-year contract extension.
The team finished fourth in the league in Bergkamp's final season at Arsenal. Bergkamp scored an injury-time winner against Thun on Matchday 1 of the Champions League, having come on as a substitute in the 72nd minute. After much campaigning from Arsenal supporters, the club designated one of its Highbury matchday themes, organised to commemorate the stadium's final season as home of Arsenal, to Dennis Bergkamp. "Bergkamp Day" took place on 15 April 2006 and saw Arsenal up against West Bromwich Albion. It celebrated the player's contribution to Arsenal; fans were given commemorative orange "DB10" T-shirts – the colour of his national team, his initials and his squad number. Bergkamp himself came on as a second-half substitute and set up the winning Robert Pires goal moments after Nigel Quashie had levelled the scoreline, before scoring himself in the 89th minute. Fittingly, that goal proved to be his last for Arsenal in competitive football. Bergkamp was an unused substitute in his final match for Arsenal against Barcelona in the Champions League final; Barcelona scored twice in the last 13 minutes to overturn Arsenal's early lead and win the competition.
Bergkamp was the focus of the first match at Arsenal's new ground, the Emirates Stadium. On 22 July 2006, a testimonial was played in his honour at the new stadium as Arsenal played his old club Ajax. Bergkamp kicked off the match with his father, Wim, and son, Mitchel, while all four of his children acted as the match's mascots. The first half was played by members of Arsenal and Ajax's current squads, while the second was played by famous ex-players from both sides, including Ian Wright, Patrick Vieira, Marc Overmars, Emmanuel Petit and David Seaman for Arsenal; and Johan Cruyff, Marco van Basten, Danny Blind, Frank and Ronald de Boer for Ajax. Arsenal won the match 2–1 with goals from Henry and Nwankwo Kanu. Klaas-Jan Huntelaar had earlier opened the scoring for Ajax, making him the first goalscorer at the Emirates Stadium.
Bergkamp made his international debut for the Netherlands national team against Italy on 26 September 1990 as a substitute for Frank de Boer. He scored his first goal for the team against Greece on 21 November 1990. Bergkamp was selected for Euro 1992, where his national team were the defending champions. Although Bergkamp impressed, scoring three goals in the tournament, the team lost on penalties to eventual champions Denmark.
In the qualification for the 1994 FIFA World Cup, Bergkamp scored five goals and was selected for the finals, staged in the United States. He featured in every game for the national team, getting goals against Morocco in the group stages and the Republic of Ireland in the round-of-16. Bergkamp scored the second goal for the Netherlands against Brazil, but the team lost 3–2, exiting in the quarter-finals. At Euro 1996, Bergkamp scored against Switzerland and set up striker Patrick Kluivert's consolation goal against England, who advanced into the quarter-finals.
Against Wales in the 1998 FIFA World Cup qualification on 9 November 1996, he scored his first hat-trick for the national team. The Netherlands finished first in their group and qualified for the 1998 FIFA World Cup, held in France. Bergkamp scored three times in the competition, including a memorable winning goal in the final minute of the quarter-final against Argentina.
He took one touch to control a long 60-yard aerial pass from Frank de Boer, brought the ball down through Argentine defender Roberto Ayala's legs, and finally finished by firing a volley with the outside of his right foot, past keeper Carlos Roa at a tight angle from the right.
The goal, cited by Bergkamp as his favourite in his career, was his 36th for the national team, overtaking Faas Wilkes as the record scorer. In the semi-finals, the Netherlands lost to Brazil on penalties after drawing 1–1 in normal time. Bergkamp made the All-Star team of the tournament, alongside Frank de Boer and Edgar Davids.
On 9 October 1999, Bergkamp scored his final goal for the Netherlands, against Brazil. As the Netherlands were co-hosts for Euro 2000, the team automatically qualified for the tournament and were considered favourites. In the semi-finals, the Netherlands lost 3–1 on penalties to Italy. Following the defeat, Bergkamp announced his retirement from international football, choosing to focus on his club career. His final goal tally of 37 goals in 79 appearances was overtaken by Patrick Kluivert in June 2003.
Bergkamp was schooled in Total Football, a playing style and philosophy which relied on versatility. This was primarily to maximise the footballer's potential; players tried out every outfield position before finding one that suited them best. Every age group at Ajax played in the same style and formation as the first team – 3–4–3 – to allow individuals to slot in without effort when moving up the pyramid. Bergkamp "played in every position apart from goalie" and believed he benefited from the experience of playing as a defender, as it helped him "know how they think and how to beat them". When he made his debut as a substitute against Roda JC, Bergkamp was positioned on the right wing, where he remained for three years.
During his time at Inter Milan, Bergkamp was switched to the position of a main striker, but failed to forge an effective partnership with Rubén Sosa, whom he later called "selfish". Furthermore, due to his introverted character, he was accused of lacking consistency and leadership skills by the Italian press, and struggled to replicate his previous form while at Inter. When Bergkamp joined Arsenal in 1995, he enjoyed a successful strike partnership with Wright, and in later seasons Anelka and Henry, playing in his preferred position as a creative second striker. The arrival of Overmars in the 1997–98 season enhanced Bergkamp's play, as he received more of the ball. Between August and October 1997, he scored seven goals in seven league matches. A similar rapport developed between him and Ljungberg during the 2001–02 season.
Although he was known for his composure and ability to score goals as a forward, Bergkamp was also capable of playing in a free role behind a lone striker, where he essentially functioned in the number 10 role as a playmaking attacking midfielder or deep-lying forward. His ball skills and creative ability enabled him to drop deep between the lines, link up play, and operate across all attacking areas of the pitch. A quick, elegant, intelligent, and gifted player, regarded as one of the most technically accomplished of all time, he possessed an excellent first touch, which – allied with his quick feet, dribbling ability, and change of pace – enabled him to beat defenders in one-on-one situations. His attacking movement, physique, balance, and close control allowed him to hold up the ball and create space for teammates, while his vision and passing range with both feet, despite being naturally right-footed, allowed him to provide assists for on-running strikers. Bergkamp often stated he preferred playing in this deeper role, as he derived more pleasure from assisting goals than from scoring them himself.
Throughout his playing career, Bergkamp was accused of diving, and was referred to as a "cheat" and "dirty player" for retaliating against players who had previously challenged him, something his former manager Wenger denied. In an interview with "The Times" in 2004, he said that while he was at Inter, he realised the importance of being mentally tough in order to survive: "A lot of people there try to hurt you, not just physically but mentally as well, and coming from the easygoing culture in Holland, I had to adopt a tougher approach. There, it was a case of two strikers up against four or five hard defenders who would stop at nothing." Bergkamp says his aggression often stems from frustration.
Bergkamp has received several accolades during his playing career. He twice finished in third place for the 1993 and 1996 FIFA World Player of the Year award and was named in FIFA 100, a list compiled by footballer Pelé of the 125 greatest living footballers. In his club career, Bergkamp won two successive Dutch Footballer of the Year awards in 1991 and 1992 and was the Eredivisie top scorer for three consecutive seasons (1990–91 to 1992–93). He was named the FWA Footballer of the Year and PFA Players' Player of the Year in April and May 1998 and made the PFA Team of the Year for the 1997–98 season. Bergkamp also achieved a unique feat in being voted first, second and third on Match of the Day's Goal of the Month competition in August 1997. For his national team, Bergkamp was the top scorer at Euro 1992 and was selected in the FIFA World Cup All-Star Team for the 1998 World Cup.
In April 2007, Bergkamp was inducted into the English Football Hall of Fame by viewers of BBC's "Football Focus". A year later, he was voted second by Arsenal fans behind Thierry Henry in a list of the "50 Gunners Greatest Players". In February 2014, Arsenal unveiled a statue of Bergkamp outside the Emirates Stadium to honour his time at the club. A statue of Bergkamp is also to be erected outside the KNVB headquarters in Zeist, as he was chosen as the best Dutch international player from 1990 to 2015. The statue will join those of "the eleven of the century", erected in 1999, alongside statues of Johan Cruyff, Ruud Gullit, Frank Rijkaard and Marco van Basten, amongst others.
Upon retiring, Bergkamp insisted he would not move into coaching. He turned down an offer to scout for Arsenal and instead concentrated on travelling and spending time with his family. However, in April 2008, he began a fast-track coaching diploma for former Dutch international footballers and undertook a trainee role at Ajax. Having completed the Coach Betaald Voetbal course run by the Royal Dutch Football Association (KNVB), Bergkamp was appointed assistant to Johan Neeskens for the newly formed Netherlands B team on 26 October 2008. For the 2008–09 season, Bergkamp returned to Ajax in a formal coaching position with responsibility for the D2 (U12) youth team. Following the promotion of Frank de Boer to manager of Ajax in December 2010, Bergkamp was appointed assistant manager to Fred Grim, dealing with Ajax's flagship A1 (U19) youth team.
In August 2011, Bergkamp was named De Boer's assistant at Ajax. However, after Peter Bosz arrived as the new head coach, Bergkamp's role changed slightly. He no longer sat on the bench during first-team matches, instead focusing more on field training and on helping youth players reach the first team. He and fellow assistant Hennie Spijkerman were sacked from their roles in December 2017.
Bergkamp has been married to Henrita Ruizendaal since 16 June 1993. The couple have four children: Estelle Deborah, Mitchel Dennis, Yasmin Naomi and Saffron Rita. His nephew, Roland Bergkamp, currently plays for RKC Waalwijk, having previously played for Brighton & Hove Albion. He speaks fluent Dutch (his mother tongue), English and Italian.
Bergkamp's nickname is the "Non-Flying Dutchman" due to his fear of flying. Contemporary sources believed that this stemmed from incidents with the Netherlands national team at the 1994 World Cup, where the engine of the plane cut out during a flight, and when a flight was delayed because a journalist made a joke about having a bomb in his bag. In his 2013 autobiography, Bergkamp stated that his phobia was in fact caused by his time at Inter Milan, when they regularly travelled to away games in small aeroplanes. Bergkamp decided he would never fly again but did consider seeking psychiatric help: "I've got this problem and I have to live with it. I can't do anything about it, it is a psychological thing and I can't explain it. I have not flown on a plane for two years. The Dutch FA has been sympathetic, so have Arsenal, so far. I am considering psychiatric help. I can't fly. I just freeze. I get panicky. It starts the day before, when I can't sleep."
The condition severely limited his ability to play in away matches in European competitions and to travel with the national team. In some cases, he would travel overland by car or train, but the logistics of some matches were such that he would not travel at all. In the build-up to Arsenal's Champions League match against Lyon in February 2001, Wenger spoke of his concerns for Bergkamp travelling by train and car, because of the exertions involved.
Bergkamp features in EA Sports' "FIFA" video game series; he was on the cover for the International edition of "FIFA 99", and was named in the Ultimate Team Legends in "FIFA 14".
†Includes cup competitions: the KNVB Cup, Coppa Italia, Football League Cup and FA Cup. Super Cups such as the FA Community Shield are not included.
Ajax
Inter Milan
Arsenal
Individual
Works cited
Coelacanth
The coelacanths constitute a now-rare order of fish that includes two extant species in the genus "Latimeria": the West Indian Ocean coelacanth ("Latimeria chalumnae"), primarily found near the Comoro Islands off the east coast of Africa, and the Indonesian coelacanth ("Latimeria menadoensis"). They belong to the oldest-known living lineage of Sarcopterygii (lobe-finned fish and tetrapods), which means they are more closely related to lungfish and tetrapods than to ray-finned fish. They are found along the coastline of Indonesia and in the Indian Ocean. The West Indian Ocean coelacanth is a critically endangered species.
Coelacanths belong to the subclass Actinistia, a group of lobe-finned fish related to lungfish and certain extinct Devonian fish such as osteolepiforms, porolepiforms, rhizodonts, and "Panderichthys". Coelacanths were thought to have become extinct in the Late Cretaceous, around 66 million years ago, but were rediscovered in 1938 off the coast of South Africa.
The coelacanth was long considered a "living fossil" because scientists thought it was the sole remaining member of a taxon otherwise known only from fossils, with no close relations alive, and that it evolved into roughly its current form approximately 400 million years ago. However, several recent studies have shown that coelacanth body shapes are much more diverse than previously thought.
The word "coelacanth" is an adaptation of the Modern Latin "Cœlacanthus" ("hollow spine"), from the Greek κοῖλος ("koilos", "hollow") and ἄκανθα ("akantha", "spine"). It is a common name for the oldest living line of Sarcopterygii, referring to the hollow caudal fin rays of the first fossil specimen described and named by Louis Agassiz in 1839. The genus name "Latimeria" commemorates Marjorie Courtenay-Latimer, who discovered the first specimen.
The coelacanth, which is related to lungfishes and tetrapods, was believed to have been extinct since the end of the Cretaceous period. More closely related to tetrapods than to the ray-finned fish, coelacanths were considered transitional species between fish and tetrapods. On 23 December 1938, the first "Latimeria" specimen was found off the east coast of South Africa, off the Chalumna River (now Tyolomnqa). Museum curator Marjorie Courtenay-Latimer discovered the fish among the catch of a local angler, Captain Hendrick Goosen. Latimer contacted a Rhodes University ichthyologist, J. L. B. Smith, sending him drawings of the fish, and he confirmed the fish's importance with a famous cable: "MOST IMPORTANT PRESERVE SKELETON AND GILLS = FISH DESCRIBED."
Its discovery 66 million years after it was believed to have become extinct makes the coelacanth the best-known example of a Lazarus taxon, an evolutionary line that seems to have disappeared from the fossil record only to reappear much later. Since 1938, West Indian Ocean coelacanths have been found in the Comoros, Kenya, Tanzania, Mozambique, Madagascar, in iSimangaliso Wetland Park, and off the south coast of KwaZulu-Natal in South Africa.
The Comoro Islands specimen was discovered in December 1952. Between 1938 and 1975, 84 specimens were caught and recorded.
The second extant species, the Indonesian coelacanth, was described from Manado, North Sulawesi, Indonesia in 1999 by Pouyaud et al. based on a specimen discovered by Mark V. Erdmann in 1998 and deposited at the Indonesian Institute of Sciences (LIPI). Erdmann and his wife Arnaz Mehta first encountered a specimen at a local market in September 1997, but took only a few photographs of the first specimen of this species before it was sold. After confirming that it was a unique discovery, Erdmann returned to Sulawesi in November 1997 to interview fishermen and look for further examples. A second specimen was caught by a fisherman in July 1998, which was then handed to Erdmann.
"Latimeria chalumnae" and "L. menadoensis" are the only two known living coelacanth species. Coelacanths are large, plump, lobe-finned fish that can grow to more than 2 meters (6 feet 6 inches) and weigh around 90 kilograms (200 pounds). They are estimated to live for 60 years or more. Modern coelacanths appear larger than those found as fossils.
They are nocturnal piscivorous drift-hunters. The body is covered in cosmoid scales that act as armor. Coelacanths have eight fins: two dorsal fins, two pectoral fins, two pelvic fins, one anal fin and one caudal fin. The tail is very nearly equally proportioned and is split by a terminal tuft of fin rays that makes up its caudal lobe. The eyes of the coelacanth are very large, while the mouth is very small. The eye is adapted to seeing in poor light by rods that absorb mostly short wavelengths; coelacanth vision has thus evolved a mainly blue-shifted color capacity. Pseudomaxillary folds surround the mouth and replace the maxilla, a structure absent in coelacanths. Two nostrils, along with four other external openings, appear between the premaxilla and lateral rostral bones. The nasal sacs resemble those of many other fish and do not contain an internal nostril. The coelacanth's rostral organ, contained within the ethmoid region of the braincase, has three unguarded openings into the environment and is used as part of the coelacanth's laterosensory system. The coelacanth's auditory reception is mediated by its inner ear, which contains a basilar papilla and is therefore very similar to that of tetrapods.
Coelacanths are a part of the clade Sarcopterygii, or the lobe-finned fishes. Externally, several characteristics distinguish the coelacanth from other lobe-finned fish. They possess a three-lobed caudal fin, also called a trilobate fin or a diphycercal tail. A secondary tail extending past the primary tail separates the upper and lower halves of the coelacanth. Cosmoid scales act as thick armor to protect the coelacanth's exterior. Several internal traits also aid in differentiating coelacanths from other lobe-finned fish. At the back of the skull, the coelacanth possesses a hinge, the intracranial joint, which allows it to open its mouth extremely wide. Coelacanths also retain an oil-filled notochord, a hollow, pressurized tube which is replaced by the vertebral column early in embryonic development in most other vertebrates. The coelacanth heart is shaped differently from that of most modern fish, with its chambers arranged in a straight tube. The coelacanth braincase is 98.5% filled with fat; only 1.5% of the braincase contains brain tissue. The cheeks of the coelacanth are unique because the opercular bone is very small and holds a large soft-tissue opercular flap. A spiracular chamber is present, but the spiracle is closed and never opens during development. Coelacanths also possess a unique rostral organ within the ethmoid region of the braincase. Also unique to extant coelacanths is the presence of a "fatty lung", a fat-filled single-lobed vestigial lung homologous to other fishes' swim bladder. The parallel development of a fatty organ for buoyancy control suggests a unique specialization for deep-water habitats. There are small, hard but flexible plates around the vestigial lung in adult specimens, though not around the fatty organ; the plates most likely serve to regulate the volume of the lung. Due to the size of the fatty organ, researchers assume that it is responsible for the kidney's unusual relocation.
The two kidneys, which are fused into one, are located ventrally within the abdominal cavity, posterior to the cloaca.
In 2013, a group led by Chris Amemiya and Neil Shubin published the genome sequence of the coelacanth in the journal "Nature". The African coelacanth genome was sequenced and assembled using DNA from a Comoros Islands "Latimeria chalumnae" specimen. It was sequenced by Illumina sequencing technology and assembled using the short read genome assembler ALLPATHS-LG.
Due to their lobed fins and other features, it was once hypothesized that the coelacanth might be the closest living relative of terrestrial vertebrates. But after sequencing the full genome of the coelacanth, it was discovered that the lungfish holds that position. Coelacanths had already diverged from the common ancestor of lungfish and tetrapods before the tetrapod lineage made the transition to land.
Another important discovery made from the genome sequencing is that the coelacanths are still evolving today (but at a relatively slow rate). One reason the coelacanths are evolving so slowly is the lack of evolutionary pressure on these organisms. They have few predators, and live deep in the ocean where conditions are very stable. Without much pressure for these organisms to adapt to survive, the rate at which they have evolved is much slower compared to other organisms.
The following is a classification of some of the known coelacanth genera and families:
According to genetic analysis of current species, the divergence of coelacanths, lungfish and tetrapods is thought to have occurred about 390 million years ago. Coelacanths were once thought to have become extinct 66 million years ago during the Cretaceous–Paleogene extinction event. The first recorded coelacanth fossil, found in Australia, was of a jaw that dated back 360 million years, named "Eoactinistia foreyi". The most recent genus of coelacanth in the fossil record is "Megalocoelacanthus", whose disarticulated remains are found in Campanian to possibly earliest Maastrichtian-aged marine strata of the Eastern and Central United States. A small bone fragment from the European Paleocene has been considered the only plausible post-Cretaceous record, but this identification is based on comparative bone histology methods of doubtful reliability.
The fossil record is unique because coelacanth fossils were found 100 years before the first live specimen was identified. In 1938, Courtenay-Latimer rediscovered the first live specimen, "L. chalumnae", caught off the coast of East London, South Africa. In 1997, a marine biologist on honeymoon discovered the second live species, "Latimeria menadoensis", in an Indonesian market.
In July 1998, the first live specimen of "Latimeria menadoensis" was caught in Indonesia. Approximately 80 species of coelacanth have been described, including the two extant species. Before the discovery of a live specimen, the coelacanth time range was thought to have spanned from the Middle Devonian to the Upper Cretaceous period. Although fossils found during that time were claimed to demonstrate a similar morphology, recent studies have expressed the view that coelacanth morphologic conservatism is a belief not based on data.
The following cladogram is based on multiple sources.
The current coelacanth range is primarily along the eastern African coast, although "Latimeria menadoensis" was discovered off Indonesia. Coelacanths have been found in the waters of Kenya, Tanzania, Mozambique, South Africa, Madagascar, Comoros and Indonesia. Most "Latimeria chalumnae" specimens that have been caught have been captured around the islands of Grande Comore and Anjouan in the Comoros Archipelago (Indian Ocean). Though there are cases of "L. chalumnae" caught elsewhere, amino acid sequencing has shown no significant difference between these exceptions and those found around Grande Comore and Anjouan. Even though these few may be considered strays, there are several reports of coelacanths being caught off the coast of Madagascar. This leads scientists to believe that the endemic range of "Latimeria chalumnae" coelacanths stretches along the eastern coast of Africa from the Comoros Islands, past the western coast of Madagascar, to the South African coastline. Mitochondrial DNA sequencing of coelacanths caught off the coast of southern Tanzania suggests a divergence of the two populations some 200,000 years ago. This could refute the theory that the Comoros population is the main population while others represent recent offshoots. A live specimen was seen and recorded on video in November 2019 at 69 m off the village of Umzumbe on the south coast of KwaZulu-Natal, about 325 km south of the iSimangaliso Wetland Park. This is the furthest south since the original discovery, and the second shallowest record after 54 m in the Diepgat Canyon. These sightings suggest that they may live shallower than previously thought, at least at the southern end of their range, where colder, better oxygenated water is available at shallower depths.
The geographical range of the Indonesia coelacanth, "Latimeria menadoensis", is believed to be off the coast of Manado Tua Island, Sulawesi, Indonesia in the Celebes Sea. Key components confining coelacanths to these areas are food and temperature restrictions, as well as ecological requirements such as caves and crevices that are well-suited for drift feeding. Teams of researchers using submersibles have recorded live sightings of the fish in the Sulawesi Sea as well as in the waters of Biak in Papua.
Anjouan Island and the Grande Comore provide ideal underwater cave habitats for coelacanths. The islands' underwater volcanic slopes, steeply eroded and covered in sand, house a system of caves and crevices which allow coelacanths resting places during the daylight hours. These islands support a large benthic fish population that help to sustain coelacanth populations.
During the daytime, coelacanths rest in caves anywhere from 100 to 500 meters deep. Others migrate to deeper waters. The cooler waters (below 120 meters) reduce the coelacanths' metabolic costs. Drifting toward reefs and night feeding saves vital energy. Resting in caves during the day also saves energy otherwise used to fight currents.
Coelacanth locomotion is unique. To move around they most commonly take advantage of up- or down-wellings of current and drift. Their paired fins stabilize movement through the water. While on the ocean floor, they do not use the paired fins for any kind of movement. Coelacanths create thrust with their caudal fins for quick starts. Due to the abundance of its fins, the coelacanth has high maneuverability and can orient its body in almost any direction in the water. They have been seen doing headstands as well as swimming belly up. It is thought that the rostral organ helps give the coelacanth electroperception, which aids in movement around obstacles.
Coelacanths are fairly peaceful when encountering others of their kind, remaining calm even in a crowded cave. They do avoid body contact, however, withdrawing immediately if contact occurs. When approached by foreign potential predators (e.g. a submersible), they show panic flight reactions, suggesting that coelacanths are most likely prey to large deepwater predators. Shark bite marks have been seen on coelacanths; sharks are common in areas inhabited by coelacanths. Electrophoresis testing of 14 coelacanth enzymes shows little genetic diversity between coelacanth populations. Among the fish that have been caught were about equal numbers of males and females. Population estimates range from 210 to 500 individuals per population. Because coelacanths have individual color markings, scientists think that they recognize other coelacanths via electric communication.
Coelacanths are nocturnal piscivores who feed mainly on benthic fish populations and various cephalopods. They are "passive drift feeders", slowly drifting along currents with only minimal self-propulsion, eating whatever prey they encounter.
Coelacanths are ovoviviparous, meaning that the female retains the fertilized eggs within her body while the embryos develop during a gestation period of over a year. Typically, females are larger than the males; their scales and the skin folds around the cloaca differ. The male coelacanth has no distinct copulatory organs, just a cloaca, which has a urogenital papilla surrounded by erectile caruncles. It is hypothesized that the cloaca everts to serve as a copulatory organ.
Coelacanth eggs are large with only a thin layer of membrane to protect them. Embryos hatch within the female and eventually are given live birth, which is a rarity in fish. This was only discovered when the American Museum of Natural History dissected its first coelacanth specimen in 1975 and found it pregnant with five embryos. Young coelacanths resemble the adult, the main differences being an external yolk sac, larger eyes relative to body size and a more pronounced downward slope of the body. The juvenile coelacanth's broad yolk sac hangs below the pelvic fins. The scales and fins of the juvenile are completely matured; however, it does lack odontodes, which it gains during maturation.
A study that assessed the paternity of the embryos inside two coelacanth females indicated that each clutch was sired by a single male. This could mean that females mate monandrously, i.e. with one male only. Polyandry, female mating with multiple males, is common in both plants and animals and can be advantageous (e.g. insurance against mating with an infertile or incompatible mate), but also confers costs (increased risk of infection, danger of falling prey to predators, increased energy input when searching for new males). Alternatively, the study's results could indicate that, despite female polyandry, one male fertilises all the eggs, potentially through female sperm choice or last-male sperm precedence.
Because little is known about the coelacanth, the conservation status is difficult to characterize. According to Fricke "et al." (1995), there should be some stress put on the importance of conserving this species. From 1988 to 1994, Fricke counted some 60 individuals of "L. chalumnae" on each dive. In 1995 that number dropped to 40. Even though this could be a result of natural population fluctuation, it also could be a result of overfishing. The IUCN currently classifies "L. chalumnae" as Critically Endangered, with a total population size of 500 or fewer individuals. "L. menadoensis" is considered Vulnerable, with a significantly larger population size (fewer than 10,000 individuals).
Currently, the major threat towards the coelacanth is the accidental capture by fishing operations, especially commercial deep-sea trawling. Coelacanths usually are caught when local fishermen are fishing for oilfish. Fishermen sometimes snag a coelacanth instead of an oilfish because they traditionally fish at night, when oilfish (and coelacanths) feed.
Before scientists became interested in coelacanths, they were thrown back into the water if caught. Now that there is an interest in them, fishermen trade them in to scientists or other officials once they have been caught. Before the 1980s, this was a problem for coelacanth populations. In the 1980s, international aid gave fiberglass boats to the local fishermen, which resulted in fishing beyond the coelacanth territories into more fish-productive waters. Since then, most of the motors on the boats have broken down so the local fishermen are now back in the coelacanth territory, putting the species at risk again.
Different methods to minimize the number of coelacanths caught include moving fishers away from the shore, using different laxatives and malarial salves to reduce the quantity of oilfish needed, using coelacanth models to simulate live specimens, and increasing awareness of the need to protect the species. In 1987 the Coelacanth Conservation Council advocated the conservation of coelacanths. The CCC has many branches of its agency located in Comoros, South Africa, Canada, the United Kingdom, the U.S., Japan and Germany. The agencies were established to help protect and encourage population growth of coelacanths.
A "Deep Release Kit" was developed in 2014 and distributed by private initiative, consisting of a weighted hook assembly that allows a fisherman to return an accidentally caught coelacanth to deep waters where the hook can be detached once it hits the sea floor. Conclusive reports about the effectiveness of this method are still pending.
In 2002, the South African Coelacanth Conservation and Genome Resource Programme was launched to help further the studies and conservation of the coelacanth. This program focuses on biodiversity conservation, evolutionary biology, capacity building, and public understanding. The South African government committed to spending R10 million on the program.
In 2011, a plan for a Tanga Coelacanth Marine Park was designed to conserve marine biodiversity for marine animals including the coelacanth. The park was designed to reduce habitat destruction and improve prey availability for endangered species.
Coelacanths are considered a poor source of food for humans and likely most other fish-eating animals. Coelacanth flesh has high amounts of oil, urea, wax esters, and other compounds that give the flesh a distinctly unpleasant flavor, make it difficult to digest and can cause diarrhea. Their scales themselves emit mucus, which combined with the excessive oil their bodies produce, make coelacanths a slimy food. Where the coelacanth is more common, local fishermen avoid it because of its potential to sicken consumers. As a result, the coelacanth has no real commercial value apart from being coveted by museums and private collectors.
Because of the surprising nature of the coelacanth's discovery, they have been a frequent source of inspiration in modern artwork, craftsmanship, and literature. At least 22 countries have depicted them on their postage stamps, particularly the Comoros, where they have issued twelve different sets of coelacanth stamps. The coelacanth is also depicted on the 1000 Comorian franc banknote, as well as the 55 CF coin.
Coelacanths have appeared in the video game series "Animal Crossing" as one of the rarest fish species the player is able to catch using a fishing rod. The coelacanth was also the inspiration for the fish Pokémon Relicanth, which shares similarities in its namesake and appearance.
UNCF
UNCF, the United Negro College Fund, also known as the United Fund, is an American philanthropic organization that funds scholarships for black students and general scholarship funds for 37 private historically black colleges and universities. UNCF was incorporated on April 25, 1944 by Frederick D. Patterson (then president of what is now Tuskegee University), Mary McLeod Bethune, and others. UNCF is headquartered at 1805 7th Street, NW in Washington, D.C. In 2005, UNCF supported approximately 65,000 students at over 900 colleges and universities with approximately $113 million in grants and scholarships. About 60% of these students are the first in their families to attend college, and 62% have annual family incomes of less than $25,000. UNCF also administers over 450 named scholarships.
UNCF's president and chief executive officer is Michael Lomax. Past presidents of the UNCF included William H. Gray and Vernon Jordan.
Though founded to address funding inequities in education resources for African Americans, UNCF-administered scholarships are open to all ethnicities; the great majority of recipients are still African-American. It provides scholarships to students attending its member colleges as well as to those going elsewhere.
Graduates of UNCF member institutions and recipients of its scholarships have included many prominent African Americans in the fields of business, politics, health care and the arts. Some prominent UNCF alumni include Dr. Martin Luther King Jr., a Nobel Peace Prize recipient and leader in the Civil Rights Movement; Alexis Herman, former U.S. Secretary of Labor; noted movie director Spike Lee; actor Samuel L. Jackson; General Chappie James, the U.S. Air Force's first black four-star general; and Dr. David Satcher, a former U.S. Surgeon General and director of the Centers for Disease Control.
In 1944 William J. Trent, a long-time activist for education for blacks, joined with Tuskegee Institute President Frederick D. Patterson and Mary McLeod Bethune to found the UNCF, a nonprofit that united college presidents to raise money collectively through an "appeal to the national conscience". As the first executive director from the organization's start in 1944 until 1964, Trent raised $78 million for historically black colleges so they could become "strong citadels of learning, carriers of the American dream, seedbeds of social evolution and revolution". In 2008, reflecting shifting attitudes towards the second word in its name, the UNCF shifted from using its full name to using its initials, releasing a new logo with the initials alone and featuring its slogan more prominently.
The UNCF has received charitable donations for its scholarship programs. One of the more high-profile donations was made by then-senator and future U.S. President John F. Kennedy, who donated the money from the Pulitzer Prize for his book "Profiles in Courage" to the Fund. The largest single donation up to that point was made in 1990 by Walter Annenberg, who donated $50 million to the fund.
Beginning in 1980, singer Lou Rawls began the "Lou Rawls Parade of Stars" telethon to benefit the UNCF. The annual event, now known as "An Evening of Stars", consists of stories of successful African-American students who have graduated or benefited from one of the many historically black colleges and universities and who received support from the UNCF. The telethon featured comedy and musical performances from various artists in support of the UNCF's and Rawls' efforts. The event has raised over $200 million in 27 shows for the fund through 2006.
In January 2004, Rawls was honored by the United Negro College Fund for his more than 25 years of charity work with the organization. Instead of hosting and performing, Rawls was given the seat of honor and celebrated by his performing colleagues, including Stevie Wonder, The O'Jays, Gerald Levert, Ashanti, and several others. Rawls' last performance before his death in January 2006 was a taping for the 2006 telethon honoring Wonder, recorded months before he entered the hospital after being diagnosed with cancer earlier that year.
In addition to the telethon, there are a number of other fundraising activities, including the "Walk for Education" held annually in Los Angeles, California, which includes a five kilometer walk/run. In Houston, Texas, the Cypresswood Golf Club hosts an annual golf tournament in April.
In 2014, Koch Industries Inc. and the Charles Koch Foundation made a $25 million grant to UNCF. In protest of the Kochs, the American Federation of State, County and Municipal Employees, a major labor union, ended its yearly $50,000–60,000 support for UNCF.
In June 2020, philanthropists Reed Hastings and his wife Patty Quillin donated $40 million to the UNCF to be used as scholarship funds for students enrolled at UNCF institutions. Their single donation is the largest in UNCF history.
In 1972, the UNCF adopted as its motto the maxim "A mind is a terrible thing to waste." This maxim has become one of the most widely recognized slogans in advertising history. The motto was notably mangled in a 1989 address to the organization by then–Vice President of the United States Dan Quayle, who stated: "And you take the U.N.C.F. model that what a waste it is to lose one's mind or not to have a mind is being very wasteful. How true that is."
The motto, which has been used in numerous award-winning UNCF ad campaigns, was created by Forest Long, of the advertising agency Young & Rubicam, in partnership with the Ad Council.
A lesser-known slogan the UNCF also uses, in reference to its intended beneficiaries, points out that they're "not asking for a handout, just a hand." | https://en.wikipedia.org/wiki?curid=45504 |
World peace
World peace, or peace on Earth, is the concept of an ideal state of happiness, freedom and peace within and among all people and nations on Planet Earth. This idea of world nonviolence is one motivation for people and nations to cooperate, either voluntarily or by virtue of a system of governance that has this objective. Different cultures, religions, philosophies, and organizations have varying concepts of how such a state would come about.
Various religious and secular organizations have the stated aim of achieving world peace through addressing human rights, technology, education, engineering, medicine, or diplomacy as a means of ending all forms of fighting. Since 1945, the United Nations and the five permanent members of its Security Council (China, France, Russia, the United Kingdom and the United States) have operated with the aim of resolving conflicts without war or declarations of war. Nonetheless, nations have entered numerous military conflicts since then.
Many theories as to how world peace could be achieved have been proposed. Several of these are listed below.
The term is traced back to the Roman Emperor Hadrian (reigned AD 117–138), but the concept is as old as recorded history. In 1943, at the peak of World War II, the founder of the Paneuropean Union, Richard von Coudenhove-Kalergi, argued that after the war the United States was bound to take "command of the skies" to ensure lasting world peace.
In fact, near the entrance to the headquarters of the SAC at Offutt Air Force Base stands a large sign with a SAC emblem and its motto: "Peace is our profession." The motto "was a staggering paradox that was also completely accurate". One SAC bomber, the Convair B-36, is called "Peacemaker", and one intercontinental missile, the LGM-118, "Peacekeeper".
In 2016, former US Secretary of Defense Ash Carter envisaged that the rebalance to the Asia-Pacific would make the region "peaceful" through "strength".
The introduction to the 2018 US National Security and Defense Strategies states that the US force posture, combined with that of its allies, will "preserve peace through strength". The document proceeds to detail what "achieving peace through strength requires".
Associated with "peace through strength" are concepts of "preponderance of power" (as opposed to balance of power), hegemonic stability theory, "unipolar stability", and imperial peace (such as Pax Romana, Pax Britannica, or Pax Americana).
According to the dialectical materialist theory of Karl Marx, humanity under capitalism is divided into just two classes: the proletariat, who do not possess the means of production, and the bourgeoisie, who do possess the means of production. Once the communist revolution occurs and consequently abolishes private property in the means of production, humanity will no longer be divided, and the tension between these two classes will cease. Through a period called socialism, the rule of the proletariat will dissolve the last vestiges of capitalism and help make the revolution worldwide. Once private property has been abolished worldwide, the state will no longer be needed to act as a monopoly on violence and will therefore disappear. Organizations of workers will take its place and manage production, but no organization will have any military power, police force, or prisons.
The main principle of Marx's theory is that material conditions limit spiritual conditions. Should their material conditions allow it, people around the world will not be violent but respectful, peaceful and altruistic. In a state of communism, they will no longer need to live for survival, but for their own spiritual fulfillment.
Leon Trotsky argued that a proletariat world revolution would lead to world peace.
Proponents of the democratic peace theory claim that strong empirical evidence exists that democracies never or rarely wage war against each other.
There are, however, several wars between democracies that have taken place, historically.
In her essay "The Roots of War", Ayn Rand held that the major wars of history were started by the more controlled economies of the time against the freer ones, and that capitalism gave mankind its longest period of peace (one during which there were no wars involving the entire civilized world), from the end of the Napoleonic Wars in 1815 to the outbreak of World War I in 1914. The exceptions were the Franco-Prussian War (1870), the Spanish–American War (1898), and the American Civil War (1861–1865), the last of which notably occurred in perhaps the most liberal economy in the world at the beginning of the industrial revolution.
Proponents of Cobdenism claim that by removing tariffs and creating international free trade, wars would become impossible, because free trade prevents a nation from becoming self-sufficient, which is a requirement for long wars.
However, free trade does not prevent a nation from establishing some sort of emergency plan to become temporarily self-sufficient in case of war, or from simply acquiring what it needs from a different nation. A good example of this is World War I, during which both Britain and Germany became partially self-sufficient. This is particularly important because Germany had no plan for creating a war economy.
More generally, free trade—while not making wars impossible—can make wars, and restrictions on trade caused by wars, very costly for international companies with production, research, and sales in many different nations. Thus, a powerful lobby—unless there are only national companies—will argue against wars.
Mutual assured destruction is a doctrine of military strategy in which a full-scale use of nuclear weapons by two opposing sides would effectively result in the destruction of both belligerents. Proponents of the policy of mutual assured destruction during the Cold War attributed this to the increase in the lethality of war to the point where it no longer offers the possibility of a net gain for either side, thereby making wars pointless.
After World War II, the United Nations was established by the United Nations Charter to "save succeeding generations from the scourge of war which twice in our lifetime has brought untold sorrow to mankind" (Preamble). The Preamble to the United Nations Charter also aims to further the adoption of fundamental human rights, to respect obligations arising from sources of international law, and to unite the strength of independent countries in order to maintain international peace and security. All treaties on international human rights law make reference to or consider "the principles proclaimed in the Charter of the United Nations", according to which "recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world".
Gordon B. Hinckley saw a trend in national politics by which city-states and nation-states have unified, and suggested that the international arena will eventually follow suit. Many countries, such as China, Italy, the United States, Australia, Germany, India and Britain, have unified into single nation-states, with supranational bodies like the European Union following a similar path, suggesting that further globalization will bring about a world state.
World peace has been depicted as a consequence of local, self-determined behaviors that inhibit the institutionalization of power and ensuing violence. The solution is not so much based on an agreed agenda, or an investment in higher authority whether divine or political, but rather a self-organized network of mutually supportive mechanisms, resulting in a viable politico-economic social fabric. The principal technique for inducing convergence is thought experiment, namely backcasting, enabling anyone to participate no matter what cultural background, religious doctrine, political affiliation or age demographic. Similar collaborative mechanisms are emerging from the Internet around open-source projects, including Wikipedia, and the evolution of other social media.
Economic norms theory links economic conditions with institutions of governance and conflict, distinguishing personal clientelist economies from impersonal market-oriented ones, identifying the latter with permanent peace within and between nations.
Through most of human history, societies have been based on personal relations: individuals in groups know each other and exchange favors. Today in most lower-income societies hierarchies of groups distribute wealth based on personal relationships among group leaders, a process often linked with clientelism and corruption. Michael Mousseau argues that in this kind of socio-economy conflict is always present, latent or overt, because individuals depend on their groups for physical and economic security and are thus loyal to their groups rather than their states, and because groups are in a constant state of conflict over access to state coffers. Through processes of bounded rationality, people are conditioned towards strong in-group identities and are easily swayed to fear outsiders, psychological predispositions that make possible sectarian violence, genocide, and terrorism.
Market-oriented socio-economies are integrated not with personal ties but the impersonal force of the market where most individuals are economically dependent on trusting strangers in contracts enforced by the state. This creates loyalty to a state that enforces the rule of law and contracts impartially and reliably and provides equal protection in the freedom to contract – that is, liberal democracy. Wars cannot happen within or between nations with market-integrated economies because war requires the harming of others, and in these kinds of economies everyone is always economically better off when others in the market are also better off, not worse off. Rather than fight, citizens in market-oriented socio-economies care deeply about everyone's rights and welfare, so they demand economic growth at home and economic cooperation and human rights abroad. In fact, nations with market-oriented socio-economies tend to agree on global issues and not a single fatality has occurred in any dispute between them.
Economic norms theory should not be confused with classical liberal theory. The latter assumes that markets are natural and that freer markets promote wealth. In contrast, Economic norms theory shows how market-contracting is a learned norm, and state spending, regulation, and redistribution are necessary to ensure that almost everyone can participate in the "social market" economy, which is in everyone's interests. One proposed mechanism for world peace involves consumer purchasing of renewable and equitable local food and power sources involving artificial photosynthesis ushering in a period of social and ecological harmony known as the Sustainocene.
Nonkilling, popularized in the 2002 book "Nonkilling Global Political Science" by Glenn D. Paige, builds on nonviolence theory and encompasses the concepts of peace (absence of war and conditions conducive to war), nonviolence (psychological, physical, and structural), and ahimsa (noninjury in thought, word and deed). Nonkilling provides a distinct approach characterized by the measurability of its goals and the open-ended nature of its realization: it can be quantified and related to specific causes, for example by following a public health perspective (prevention, intervention and post-traumatic transformation toward the progressive eradication of killing).
The International Day of Peace, sometimes called World Peace Day, is observed annually on 21 September. It is dedicated to peace, and specifically the absence of war and violence, and can be marked by a temporary ceasefire in a combat zone. The International Day of Peace was established in 1981 by the United Nations General Assembly. Two decades later, in 2001, the General Assembly unanimously voted to designate the day as a day of non-violence and cease-fire. The celebration of this day is recognized by many nations and peoples. In 2013, for the first time, the day was dedicated to peace education, the key preventive means of reducing war sustainably.
Many religions and religious leaders have expressed a desire for an end to violence.
The central aim of the Bahá'í Faith is the establishment of the unity of the peoples of the world. Bahá'u'lláh, the founder of the Bahá'í Faith, stated in no uncertain terms, "the fundamental purpose animating the Faith of God and His Religion is to safeguard the interests and promote the unity of the human race ..." In his writings, Bahá'u'lláh described two distinct stages of world peace – a lesser peace and a most great peace.
The lesser peace is essentially a collective security agreement between the nations of the world. In this arrangement, nations agree to protect one another by rising up against an aggressor nation, should it seek the usurpation of territory or the destruction of its neighbors. The lesser peace is limited in scope and is concerned with the establishment of basic order and the universal recognition of national borders and the sovereignty of nations. Bahá'ís believe that the lesser peace is taking place largely through the operation of the Divine Will, and that Bahá'í influence on the process is relatively minor.
The most great peace is the eventual end goal of the lesser peace and is envisioned as a time of spiritual and social unity – a time when the peoples of the world genuinely identify with and care for one another, rather than simply tolerating one another's existence. The Bahá'ís view this process as taking place largely as a result of the spread of Bahá'í teachings, principles and practices throughout the world. The larger world peace process and its foundational elements are addressed in the document "The Promise of World Peace", written by the Universal House of Justice.
Many Buddhists believe that world peace can only be achieved if we first establish peace within our minds. The idea is that anger and other negative states of mind are the cause of wars and fighting. Buddhists believe people can live in peace and harmony only if we abandon negative emotions such as anger in our minds and cultivate positive emotions such as love and compassion. As with all Dharmic religions (Hinduism, Jainism, Buddhism and Sikhism), ahimsa (avoidance of violence) is a central concept.
Peace pagodas are monuments that are built to symbolize and inspire world peace and have been central to the peace movement throughout the years. These are typically of Buddhist origin, being built by the Japanese Buddhist organisation Nipponzan Myohoji. They exist around the world in cities such as London, Vienna, New Delhi, Tokyo and Lumbini.
The basic Christian ideal specifies that peace can only come by the Word and love of God, which is perfectly demonstrated in the life of Christ.
As christologically interpreted from , whereupon the "Word of the Lord" is established on the earth, the material human-political result will be 'nation not taking up sword against nation; nor will they train for war anymore'. Christian world peace necessitates the living of a proactive life replete with all good works in direct light of the Word of God. The details of such a life can be observed in the Gospels, especially the historically renowned Sermon on the Mount, where forgiving those who do wrong things against oneself is advocated among other pious precepts.
However, not all Christians expect a lasting world peace on this earth.
Many Christians believe that world peace is expected to be manifest upon the "new earth" that is promised in Christian scripture such as .
The Roman Catholic religious conception of "Consecration of Russia", related to the Church's high-priority Fátima Marian apparitions, promises a temporary "world peace" as a result of this process being fulfilled, though before the coming of the Antichrist. This period of temporary peace is called "the triumph of the Immaculate Heart".
Traditionally, Hinduism has adopted an ancient Sanskrit phrase "Vasudhaiva kutumbakam", which translates as "The world is one family". The essence of this concept is the observation that only base minds see dichotomies and divisions. The more we seek wisdom, the more we become inclusive and free our internal spirit from worldly illusions or "Maya". World peace is hence only achieved through internal means—by liberating ourselves from artificial boundaries that separate us all. As with all Dharmic religions (Hinduism, Jainism, Buddhism and Sikhism), ahimsa (avoidance of violence) is a central concept.
According to Islamic eschatology, the whole world will be united under the leadership of the Imam Mahdi. At that time, love, justice and peace will be so abundant that the world will be in the likeness of paradise.
The concept of "Tikkun olam" (Repairing the World) is central to modern Rabbinic Judaism. "Tikkun olam" is accomplished through various means, such as ritualistically performing God's commandments, charity and social justice, as well as by setting an example that persuades the rest of the world to behave morally. According to some views, "Tikkun olam" would result in the beginning of the Messianic Age. It has been said that in every generation, a person is born with the potential to be the spiritual Messiah. If the time is right for the Messianic Age within that person's lifetime, then that person will be the mashiach. But if that person dies before he completes the mission of the Messiah, then that person is not the Messiah (Mashiach).
Specifically, in Jewish messianism it is considered that at some future time a "Messiah" (literally "a King appointed by God") will rise up to bring all Jews back to the Land of Israel, followed by everlasting global peace and prosperity. This idea originates from passages in the Old Testament and the Talmud.
Compassion for all life, human and non-human, is central to Jainism, whose followers have adopted the words of Lord Mahavira: "Jiyo aur jeene do" ("live and let live"). Human life is valued as a unique, rare opportunity to reach enlightenment; to kill any person, no matter what crime he may have committed, is considered unimaginably abhorrent. It is a religion that requires monks and laity, from all its sects and traditions, to be vegetarian. Some Indian regions, such as Gujarat, have been strongly influenced by Jains, and often the majority of the local Hindus of every denomination have also become vegetarian. A famous quote on world peace from the 19th-century Jain thinker Virchand Gandhi runs: "May peace rule the universe; may peace rule in kingdoms and empires; may peace rule in states and in the lands of the potentates; may peace rule in the house of friends and may peace also rule in the house of enemies." As with all Dharmic religions (Hinduism, Jainism, Buddhism and Sikhism), ahimsa (avoidance of violence) is a central concept.
In Sikhism, peace comes from God. Meditation, the means of communicating with God, is unfruitful without the noble character of a devotee; there can be no worship without performing good deeds. Guru Nanak stressed "kirat karō": that a Sikh should balance work, worship, and charity, and should defend the rights of all creatures, and in particular, fellow human beings. Sikhs are encouraged to have "chaṛdī kalā", an optimistic and resilient view of life. Sikh teachings also stress the concept of sharing, "vaṇḍ chakkō", through the distribution of free food at Sikh gurdwaras ("laṅgar"), giving charitable donations, and working for the good of the community and others ("sēvā"). Sikhs believe that no matter what race, sex, or religion one is, all are equal in God's eyes. Men and women are equal and share the same rights, and women can lead in prayers. As with all Dharmic religions (Hinduism, Jainism, Buddhism and Sikhism), ahimsa (avoidance of violence) is a central concept.
A report in June 2015 on the Global Peace Index highlighted that the impact of violence on the global economy reached US$14.3 trillion. The report also found that the economic cost of violence is 13.4% of world GDP, equal to the total economic output of Brazil, Canada, France, Germany, Spain and the UK combined. | https://en.wikipedia.org/wiki?curid=45506 |
Treeshrew
The treeshrews (or tree shrews or banxrings) are small mammals native to the tropical forests of Southeast Asia. They make up the entire order Scandentia, which is split into two families: the Tupaiidae (19 species, "ordinary" treeshrews), and the Ptilocercidae (one species, the pen-tailed treeshrew).
Though called 'treeshrews', and despite having previously been classified in Insectivora, they are not true shrews, and not all species live in trees. They are omnivores; among other things, treeshrews eat fruit.
Treeshrews have a higher brain to body mass ratio than any other mammal, including humans, but high ratios are not uncommon for animals weighing less than .
Among orders of mammals, treeshrews are closely related to primates, and have been used as an alternative to primates in experimental studies of myopia, psychosocial stress, and hepatitis.
The name "Tupaia" is derived from "tupai", the Indonesian word for squirrel, and was provided by Sir Stamford Raffles.
Treeshrews are slender animals with long tails and soft, greyish to reddish-brown fur. The terrestrial species tend to be larger than the arboreal forms, and to have larger claws, which they use for digging up insect prey. They have poorly developed canine teeth and unspecialised molars, with an overall dental formula of
Treeshrews have good vision, which is binocular in the case of the more arboreal species.
Female treeshrews have a gestation period of 45–50 days and give birth to up to three young in nests lined with dry leaves inside tree hollows. The young are born blind and hairless, but are able to leave the nest after about a month. During this period, the mother provides relatively little maternal care, visiting her young only for a few minutes every other day to suckle them.
Treeshrews reach sexual maturity after around four months, and breed for much of the year, with no clear breeding season in most species.
Treeshrews live in small family groups, which defend their territory from intruders. Most are diurnal, although the pen-tailed treeshrew is nocturnal.
They mark their territories using various scent glands or urine, depending on the particular species.
Treeshrews are omnivorous, feeding on insects, small vertebrates, fruit, and seeds. Among other things, treeshrews eat "Rafflesia" fruit.
The pen-tailed treeshrew in Malaysia is able to consume large amounts of naturally fermented nectar (with up to 3.8% alcohol content) the entire year without it having any effects on behaviour.
Treeshrews have also been observed intentionally eating foods high in capsaicin, a behavior unique among mammals other than humans. A single TRPV1 mutation reduces their pain response to capsaicinoids, which scientists believe is an evolutionary adaptation to be able to consume spicy foods in their natural habitats.
They make up the entire order Scandentia, split into the families Tupaiidae, the treeshrews, and Ptilocercidae, the pen-tailed treeshrew. The 20 species are placed in five genera.
Treeshrews were moved from the order Insectivora into the order Primates because of certain internal similarities to primates (for example, similarities in brain anatomy, highlighted by Sir Wilfrid Le Gros Clark), and were classified as "primitive prosimians"; however, they were soon split from the primates and moved into their own clade. The treeshrews' relations to primates and other closely related clades are still being refined.
Molecular phylogenetic studies have suggested that the treeshrews should be given the same rank (order) as the primates and, with the primates and the flying lemurs (colugos), belong to the grandorder Euarchonta. According to this classification, the Euarchonta are sister to the Glires (lagomorphs and rodents), and the two groups are combined into the superorder Euarchontoglires. However, the alternative placement of treeshrews as sister to both Glires and Primatomorpha cannot be ruled out. Recent studies place Scandentia as sister of the Glires, invalidating Euarchonta; it is this arrangement that is shown in the tree diagram below.
Several other arrangements of these orders have been proposed in the past, and the above tree is only a well-favored proposal. Although it is known that Scandentia is one of the most basal Euarchontoglire clades, the exact phylogenetic position is not yet considered resolved: It may be a sister of Glires, Primatomorpha, or Dermoptera, or separate from and sister to all other Euarchontoglires.
The 20 species are placed in five genera, which are divided into two families. The majority are in the "ordinary" treeshrew family, Tupaiidae, but one species, the pen-tailed treeshrew, is different enough to warrant placement in its own family, Ptilocercidae.
Genus "Anathana"
Genus "Dendrogale"
Genus "Tupaia"
Genus "Urogale"
Genus "Ptilocercus"
The fossil record of treeshrews is poor. The oldest putative treeshrew, "Eodendrogale parva", is from the Middle Eocene of Henan, China, but the identity of this animal is uncertain. Other fossils have come from the Miocene of Thailand, Pakistan, India, and Yunnan, China, as well as the Pliocene of India. Most belong to the family Tupaiidae, but some still-undescribed fossils from Yunnan are thought to be closer to the pen-tailed treeshrew.
Named fossil species include "Prodendrogale yunnanica", "Prodendrogale engesseri", and "Tupaia storchi" from Yunnan, "Tupaia miocenica" from Thailand, and "Palaeotupaia sivalicus" from India. | https://en.wikipedia.org/wiki?curid=45511 |
Free good
A free good is a good that is not scarce, and therefore is available without limit. A free good is available in as great a quantity as desired with zero opportunity cost to society.
A good that is made available at zero price is not necessarily a free good. For example, a shop might give away its stock as a promotion, but producing those goods would still have required the use of scarce resources.
Examples of free goods are ideas and works that are reproducible at zero cost, or almost zero cost. For example, if someone invents a new device, many people could copy this invention, with no danger of this "resource" running out. Other examples include computer programs and web pages.
Earlier schools of economic thought proposed a third type of free good: resources that are scarce but so abundant in nature that there is enough for everyone to have as much as they want. Examples in textbooks included seawater and air.
Intellectual property laws such as copyrights and patents have the effect of converting some intangible goods to scarce goods. Even though these works are free goods by definition and can be reproduced at minimal cost, the production of these works does require scarce resources, such as skilled labour. Thus these laws are used to give exclusive rights to the creators, in order to encourage resources to be appropriately allocated to these activities.
Many post-scarcity futurists theorize that advanced nanotechnology, with the ability to turn any kind of material automatically into any other combination of equal mass, will make all goods essentially free goods, since all raw materials and manufacturing time will become perfectly interchangeable. | https://en.wikipedia.org/wiki?curid=45515 |
Seven deadly sins
The seven deadly sins, also known as the capital vices, or cardinal sins, is a grouping and classification of vices within Christian teachings, although it does not appear explicitly in the Bible. Behaviours or habits are classified under this category if they directly give birth to other immoralities. According to the standard list, they are pride, greed, wrath, envy, lust, gluttony, and sloth, which are also contrary to the seven heavenly virtues. These sins are often thought to be abuses or excessive versions of one's natural faculties or passions (for example, gluttony abuses one's desire to eat, to consume).
This classification originated with the desert fathers, especially Evagrius Ponticus, who identified seven or eight evil thoughts or spirits that one needed to overcome. Evagrius' pupil John Cassian, with his book "The Institutes," brought the classification to Europe, where it became fundamental to Catholic confessional practices, as evident in penitential manuals, sermons like "The Parson's Tale" from Chaucer's "Canterbury Tales," and works like Dante's "Purgatorio" (where the penitents of Mount Purgatory are grouped and penanced according to the worst capital sin they committed). The Catholic Church used the concept of the deadly sins to help people curb their inclination towards evil before dire consequences and misdeeds could occur; its teachers especially focused on pride (thought to be the sin that severs the soul from grace and the very essence of all evil) and greed, both of which are seen as inherently sinful and as underlying all other sins. To keep the seven deadly sins before people's minds, the vices were discussed in treatises and depicted in paintings and sculptural decorations on Catholic churches, as well as in older textbooks.
The seven deadly sins, along with the sins against the Holy Ghost and the sins that cry to Heaven for vengeance, are considered especially serious in the Western Christian traditions.
While the seven deadly sins as we know them did not originate with the Greeks or Romans, there were ancient precedents for them. Aristotle's "Nicomachean Ethics" lists several positive, healthy human qualities, excellences, or virtues. Aristotle argues that for each positive quality two negative vices are found on each extreme of the virtue. Courage, for example, is human excellence or virtue in facing fear and risk. Excessive courage makes one rash, while a deficiency of courage makes one cowardly. This principle of virtue found in the middle or "mean" between excess and deficiency is Aristotle's notion of the golden mean. Aristotle lists virtues like courage, temperance or self-control, generosity, "greatness of soul," proper response to anger, friendliness, and wit or charm.
Roman writers like Horace extolled the value of virtue while listing and warning against vices. His first epistle says that "to flee vice is the beginning of virtue, and to have got rid of folly is the beginning of wisdom."
The modern concept of the seven deadly sins is linked to the works of the fourth-century monk Evagrius Ponticus, who listed eight "evil thoughts" in Greek as follows:
They were translated into the Latin of Western Christianity (largely due to the writings of John Cassian), thus becoming part of the Western tradition's spiritual pietas (or Catholic devotions), as follows:
These "evil thoughts" can be categorized into three types:
In AD 590 Pope Gregory I revised this list to form the more common list. Gregory combined "tristitia" with "acedia" and "vanagloria" with "superbia", and added envy ("invidia" in Latin). Gregory's list became the standard list of sins. Thomas Aquinas uses and defends Gregory's list in his "Summa Theologica", although he calls them the "capital sins" because they are the head and form of all the others. The Anglican Communion, Lutheran Church, and Methodist Church, among other Christian denominations, continue to retain this list. Moreover, modern-day evangelists, such as Billy Graham, have explicated the seven deadly sins.
Most of the capital sins, with the sole exception of sloth, are defined by Dante Alighieri as perverse or corrupt versions of love for one thing or another: lust, gluttony, and greed are excessive or disordered love of good things; sloth is a deficiency of love; wrath, envy, and pride are perverted love directed toward others' harm. In the seven capital sins are seven ways of eternal death. The capital sins from lust to envy are generally associated with pride, which has been labeled the father of all sins.
Lust, or lechery (Latin: "luxuria" (carnal)), is intense longing. It is usually thought of as intense or unbridled sexual desire, which leads to fornication, adultery, rape, bestiality, and other sinful sexual acts. However, lust could also mean simply desire in general; thus, lust for money, power, and other things is sinful. In the words of Henry Edward Manning, the impurity of lust transforms one into "a slave of the devil".
Dante defined lust as the disordered love for individuals. It is generally thought to be the least serious capital sin as it is an abuse of a faculty that humans share with animals, and sins of the flesh are less grievous than spiritual sins.
In Dante's "Purgatorio", the penitent walks within flames to purge himself of lustful thoughts and feelings. In Dante's "Inferno", unforgiven souls of the sin of lust are blown about in restless hurricane-like winds symbolic of their own lack of self-control to their lustful passions in earthly life, for all eternity and unto the ages of ages.
Gluttony (Latin: "gula") is the overindulgence and overconsumption of anything to the point of waste. The word derives from the Latin "gluttire", meaning to gulp down or swallow.
In Christianity, it is considered a sin if the excessive desire for food causes it to be withheld from the needy.
Because of such scriptural teachings, gluttony can be interpreted as selfishness: essentially placing concern with one's own impulses or interests above the well-being or interests of others.
During times of famine, war, and similar periods when food is scarce, it is possible for one to indirectly kill other people through starvation just by eating too much or even too soon.
Medieval church leaders (e.g., Thomas Aquinas) took a more expansive view of gluttony, arguing that it could also include an obsessive anticipation of meals and the constant eating of delicacies and excessively costly foods. Aquinas went so far as to prepare a list of five ways to commit gluttony: "praepropere" (eating too soon), "laute" (too expensively), "nimis" (too much), "ardenter" (too eagerly), and "studiose" (too daintily).
Of these, "ardenter" is often considered the most serious, since it is extreme attachment to the pleasure of mere eating, which can make the committer eat impulsively; absolutely and without qualification live merely to eat and drink; lose attachment to health-related, social, intellectual, and spiritual pleasures; and lose proper judgement: an example is Esau selling his birthright for ordinary food of bread and pottage of lentils. His punishment was that of the "profane person ... who, for a morsel of meat sold his birthright". It is later revealed that "he found no place for repentance, though he sought it carefully, with tears".
Greed (Latin: "avaritia"), also known as "avarice", "cupidity", or "covetousness", is, like lust and gluttony, a sin of desire. However, greed (as seen by the Church) is applied to an artificial, rapacious desire and pursuit of material possessions. Thomas Aquinas wrote, "Greed is a sin against God, just as all mortal sins, in as much as man condemns things eternal for the sake of temporal things." In Dante's Purgatory, the penitents are bound and laid face down on the ground for having concentrated excessively on earthly thoughts. Hoarding of materials or objects, theft and robbery, especially by means of violence, trickery, or manipulation of authority are all actions that may be inspired by greed. Such misdeeds can include simony, where one attempts to purchase or sell sacraments, including Holy Orders and, therefore, positions of authority in the Church hierarchy.
In the words of Henry Edward, avarice "plunges a man deep into the mire of this world, so that he makes it to be his god".
As defined outside Christian writings, greed is an inordinate desire to acquire or possess more than one needs, especially with respect to material wealth. Like pride, it can lead to not just some, but all evil.
Sloth (Latin: "tristitia" or "" ("without care")) refers to a peculiar jumble of notions, dating from antiquity and including mental, spiritual, pathological, and physical states. It may be defined as absence of interest or habitual disinclination to exertion.
In his "Summa Theologica", Saint Thomas Aquinas defined sloth as "sorrow about spiritual good".
The scope of sloth is wide. Spiritually, "acedia" first referred to an affliction attending religious persons, especially monks, wherein they became indifferent to their duties and obligations to God. Mentally, "acedia" has a number of distinctive components of which the most important is affectlessness, a lack of any feeling about self or other, a mind-state that gives rise to boredom, rancor, apathy, and a passive inert or sluggish mentation. Physically, "acedia" is fundamentally associated with a cessation of motion and an indifference to work; it finds expression in laziness, idleness, and indolence.
Sloth includes ceasing to utilize the seven gifts of grace given by the Holy Spirit (Wisdom, Understanding, Counsel, Knowledge, Piety, Fortitude, and Fear of the Lord); such disregard may lead to the slowing of one's spiritual progress towards eternal life, to the neglect of manifold duties of charity towards the neighbor, and to animosity towards those who love God.
Sloth has also been defined as a failure to do things that one should do. By this definition, evil exists when "good" people fail to act.
Edmund Burke (1729–1797) wrote in "Present Discontents" (II. 78) "No man, who is not inflamed by vain-glory into enthusiasm, can flatter himself that his single, unsupported, desultory, unsystematic endeavours are of power to defeat the subtle designs and united Cabals of ambitious citizens. When bad men combine, the good must associate; else they will fall, one by one, an unpitied sacrifice in a contemptible struggle."
Unlike the other capital sins, which are sins of committing immorality, sloth is a sin of omitting responsibilities. It may arise from any of the other capital vices; for example, a son may omit his duty to his father through anger. While the state and habit of sloth is a mortal sin, the habit of the soul tending towards the last mortal state of sloth is not mortal in and of itself except under certain circumstances.
Emotionally and cognitively, the evil of "acedia" finds expression in a lack of any feeling for the world, for the people in it, or for the self. "Acedia" takes form as an alienation of the sentient self first from the world and then from itself. Although the most profound versions of this condition are found in a withdrawal from all forms of participation in or care for others or oneself, a lesser but more noisome element was also noted by theologians. From "tristitia", asserted Gregory the Great, "there arise malice, rancour, cowardice, [and] despair". Chaucer, too, dealt with this attribute of "acedia", counting the characteristics of the sin to include despair, somnolence, idleness, tardiness, negligence, indolence, and "wrawnesse", the last variously translated as "anger" or better as "peevishness". For Chaucer, a person's sin consists of languishing and holding back, refusing to undertake works of goodness because, they tell themselves, the circumstances surrounding the establishment of good are too grievous and too difficult to suffer. "Acedia" in Chaucer's view is thus the enemy of every source and motive for work.
Sloth not only subverts the livelihood of the body, taking no care for its day-to-day provisions, but also slows down the mind, halting its attention to matters of great importance. Sloth hinders the man in his righteous undertakings and thus becomes a terrible source of human's undoing.
In his "Purgatorio" Dante portrayed the penance for acedia as running continuously at top speed. Dante describes acedia as the "failure to love God with all one's heart, all one's mind and all one's soul"; to him it was the "middle sin", the only one characterised by an absence or insufficiency of love. Some scholars have said that the ultimate form of acedia was despair which leads to suicide.
Wrath (Latin: "ira") can be defined as uncontrolled feelings of anger, rage, and even hatred. Wrath often reveals itself in the wish to seek vengeance. In its purest form, wrath presents with injury, violence, and hate that may provoke feuds that can go on for centuries. Wrath may persist long after the person who did another a grievous wrong is dead. Feelings of wrath can manifest in different ways, including impatience, hateful misanthropy, revenge, and self-destructive behavior, such as drug abuse or suicide.
According to the Catechism of the Catholic Church, the neutral act of anger becomes the sin of wrath when it is directed against an innocent person, when it is unduly strong or long-lasting, or when it desires excessive punishment. "If anger reaches the point of a deliberate desire to kill or seriously wound a neighbor, it is gravely against charity; it is a mortal sin." (CCC 2302) Hatred is the sin of desiring that someone else may suffer misfortune or evil, and is a mortal sin when one desires grave harm. (CCC 2302-03)
People feel angry when they sense that they or someone they care about has been offended, when they are certain about the nature and cause of the angering event, when they are certain someone else is responsible, and when they feel they can still influence the situation or cope with it.
In her introduction to Purgatory, Dorothy L. Sayers describes wrath as "love of justice perverted to revenge and spite".
According to Henry Edward, angry people are "slaves to themselves".
Envy (Latin: "invidia"), like greed and lust, is characterized by an insatiable desire. It can be described as a sad or resentful covetousness towards the traits or possessions of someone else. It arises from vainglory, and severs a man from his neighbor.
Malicious envy is similar to jealousy in that they both feel discontent towards someone's traits, status, abilities, or rewards. A difference is that the envious also desire the entity and covet it. Envy can be directly related to the Ten Commandments, specifically, "Neither shall you covet ... anything that belongs to your neighbour"—a statement that may also be related to greed. Dante defined envy as "a desire to deprive other men of theirs". In Dante's Purgatory, the punishment for the envious is to have their eyes sewn shut with wire because they gained sinful pleasure from seeing others brought low. According to St. Thomas Aquinas, the struggle aroused by envy has three stages: during the first stage, the envious person attempts to lower another's reputation; in the middle stage, the envious person receives either "joy at another's misfortune" (if he succeeds in defaming the other person) or "grief at another's prosperity" (if he fails); the third stage is hatred because "sorrow causes hatred".
Envy is said to be the motivation behind Cain murdering his brother, Abel, as Cain envied Abel because God favored Abel's sacrifice over Cain's.
Bertrand Russell said that envy was one of the most potent causes of unhappiness, bringing sorrow to committers of envy whilst giving them the urge to inflict pain upon others.
According to the most widely accepted views, only pride weighs down the soul more than envy among the capital sins. Just like pride, envy has been associated directly with the devil, for Wisdom 2:24 states: "the envy of the devil brought death to the world".
Pride (Latin: "superbia") is considered, on almost every list, the original and most serious of the seven deadly sins: the perversion of the faculties that make humans more like God—dignity and holiness. It is also thought to be the source of the other capital sins. Also known as "hubris" (from ancient Greek ὕβρις) or "futility", it is identified as dangerously corrupt selfishness, the putting of one's own desires, urges, wants, and whims before the welfare of other people.
In even more destructive cases, it is irrationally believing that one is essentially and necessarily better, superior, or more important than others, failing to acknowledge the accomplishments of others, and excessive admiration of the personal image or self (especially forgetting one's own lack of divinity, and refusing to acknowledge one's own limits, faults, or wrongs as a human being).
As pride has been labelled the father of all sins, it has been deemed the devil's most prominent trait. C.S. Lewis writes, in "Mere Christianity", that pride is the "anti-God" state, the position in which the ego and the self are directly opposed to God: "Unchastity, anger, greed, drunkenness, and all that, are mere fleabites in comparison: it was through Pride that the devil became the devil: Pride leads to every other vice: it is the complete anti-God state of mind." Pride is understood to sever the spirit from God, as well as His life-and-grace-giving Presence.
One can be prideful for different reasons. Author Ichabod Spencer states that "spiritual pride is the worst kind of pride, if not worst snare of the devil. The heart is particularly deceitful on this one thing." Jonathan Edwards said "remember that pride is the worst viper that is in the heart, the greatest disturber of the soul's peace and sweet communion with Christ; it was the first sin that ever was, and lies lowest in the foundation of Satan's whole building, and is the most difficultly rooted out, and is the most hidden, secret and deceitful of all lusts, and often creeps in, insensibly, into the midst of religion and sometimes under the disguise of humility."
In Ancient Athens, hubris was considered one of the greatest crimes and was used to refer to insolent contempt that can cause one to use violence to shame the victim. This sense of hubris could also characterize rape. Aristotle defined hubris as shaming the victim, not because of anything that happened to the committer or might happen to the committer, but merely for the committer's own gratification. The word's connotation changed somewhat over time, with some additional emphasis towards a gross over-estimation of one's abilities.
The term has been used to analyse and make sense of the actions of contemporary heads of government by Ian Kershaw (1998), Peter Beinart (2010) and, in a much more physiological manner, by David Owen (2012). In this context the term has been used to describe how certain leaders, when put in positions of immense power, seem to become irrationally self-confident in their own abilities, increasingly reluctant to listen to the advice of others and progressively more impulsive in their actions.
Dante's definition of pride was "love of self perverted to hatred and contempt for one's neighbour".
Pride is generally associated with an absence of humility.
According to the author of Sirach, the heart of a proud man is "like a partridge in its cage acting as a decoy; like a spy he watches for your weaknesses. He changes good things into evil, he lays his traps. Just as a spark sets coals on fire, the wicked man prepares his snares in order to draw blood. Beware of the wicked man for he is planning evil. He might dishonor you forever." In another chapter, he says that "the acquisitive man is not content with what he has, wicked injustice shrivels the heart."
Benjamin Franklin said "In reality there is, perhaps no one of our natural passions so hard to subdue as "pride". Disguise it, struggle with it, stifle it, mortify it as much as one pleases, it is still alive and will every now and then peep out and show itself; you will see it, perhaps, often in this history. For even if I could conceive that I had completely overcome it, I should probably be proud of my humility." Joseph Addison states that "There is no passion that steals into the heart more imperceptibly and covers itself under more disguises than pride."
The proverb "pride goeth (goes) before destruction, a haughty spirit before a fall" (from the biblical Book of Proverbs, 16:18)(or pride goeth before the fall) is thought to sum up the modern use of pride. Pride is also referred to as "pride that blinds," as it often causes a committer of pride to act in foolish ways that belie common sense. In other words, the modern definition may be thought of as, "that pride that goes just before the fall." In his two-volume biography of Adolf Hitler, historian Ian Kershaw uses both 'hubris' and 'nemesis' as titles. The first volume, "Hubris", describes Hitler's early life and rise to political power. The second, "Nemesis", gives details of Hitler's role in the Second World War, and concludes with his fall and suicide in 1945.
Much of the 10th and part of 11th chapter of the Book of Sirach discusses and advises about pride, hubris, and who is rationally worthy of honor. It goes:
In Jacob Bidermann's medieval miracle play "Cenodoxus", pride is the deadliest of all the sins and leads directly to the damnation of the titular famed Parisian doctor. In Dante's "Divine Comedy", the penitents are burdened with stone slabs on their necks to keep their heads bowed.
Acedia (Latin, "without care") (from Greek ἀκηδία) is the neglect to take care of something that one should do. It is translated to apathetic listlessness; depression without joy. It is related to melancholy: "acedia" describes the behaviour and "melancholy" suggests the emotion producing it. In early Christian thought, the lack of joy was regarded as a willful refusal to enjoy the goodness of God; by contrast, apathy was considered a refusal to help others in time of need.
Acēdia is the negative form of the Greek term κηδεία ("kēdeia"), which has a more restricted usage. "Kēdeia" refers specifically to spousal love and respect for the dead. The positive term "kēdeia" thus indicates love for one's family, even through death. It also indicates love for those outside one's immediate family, specifically forming a new family with one's "beloved". Seen in this way, "acēdia" indicates a rejection of familial love. Nonetheless, the meaning of "acēdia" is far broader, signifying indifference to everything one experiences.
Pope Gregory combined this with "tristitia" into sloth for his list. When Thomas Aquinas described "acedia" in his interpretation of the list, he described it as an "uneasiness of the mind", being a progenitor for lesser sins such as restlessness and instability. Dante refined this definition further, describing acedia as the "failure to love God with all one's heart, all one's mind and all one's soul"; to him it was the "middle sin", the only one characterised by an absence or insufficiency of love. Some scholars have said that the ultimate form of acedia was despair which leads to suicide.
Acedia is currently defined in the Catechism of the Catholic Church as spiritual sloth, which would be believing spiritual tasks to be too difficult. In the fourth century, Christian monks believed acedia was not primarily caused by laziness, but by a state of depression that caused spiritual detachment.
Vainglory (Latin: "vanagloria") is unjustified boasting. Pope Gregory viewed it as a form of pride, so he folded "vainglory" into pride for his listing of sins. According to Thomas Aquinas, it is the progenitor of envy.
The Latin term "gloria" roughly means "boasting", although its English cognate – "glory" – has come to have an exclusively positive meaning; historically, the term "vain" roughly meant "futile" (a meaning retained in the modern expression "in vain"), but by the 14th century had come to have the strong narcissistic undertones, that it still retains today. As a result of these semantic changes, "vainglory" has become a rarely used word in itself, and is now commonly interpreted as referring to "vanity" (in its modern narcissistic sense).
Within Christianity, historic denominations such as the Catholic Church and Protestant churches, including the Lutheran Church, recognize seven virtues, which correspond inversely to each of the seven deadly sins.
Confession is the act of admitting the commission of a sin to a priest, who in turn will forgive the person in the name (in the person) of Christ, give a penance to (partially) make up for the offense, and advise the person on what he or she should do afterwards.
According to a 2009 study by Fr. Roberto Busa, a Jesuit scholar, the most common deadly sin confessed by men is lust, and by women, pride. It was unclear whether these differences were due to the actual number of transgressions committed by each sex, or whether differing views on what "counts" or should be confessed caused the observed pattern.
The second book of Dante's epic poem "The Divine Comedy" is structured around the seven deadly sins. The most serious sins, found at the lowest levels, are the abuses of the most divine faculty. For Dante and other thinkers, a human's rational faculty makes humans more like God. Abusing that faculty with pride or envy weighs down the soul the most. Abusing one's passions with wrath, or lacking passion as with sloth, also weighs down the soul, but not as much as the abuse of the rational faculty. Finally, abusing one's desires to have one's physical wants met via greed, gluttony, or lust abuses a faculty that humans share with animals; this is still an abuse that weighs down the soul, but not as heavily as the other abuses. Thus, the upper levels of the Mountain of Purgatory hold the less serious sins of the flesh, while the lowest levels hold the more serious sins of wrath, envy, and pride.
The last tale of Chaucer's "Canterbury Tales", the "Parson's Tale", is not a tale but a sermon that the parson gives against the seven deadly sins. This sermon brings together many common ideas and images about the seven deadly sins. This tale and Dante's work both show how the seven deadly sins were used for confessional purposes or as a way to identify, repent of, and find forgiveness for one's sins.
The Dutch artist Pieter Bruegel the Elder created a series of prints showing each of the seven deadly sins. Each print features a central, labeled image that represents the sin. Around the figure are images that show the distortions, degenerations, and destructions caused by the sin. Many of these images come from contemporary Dutch aphorisms.
Spenser's "The Faerie Queene", which was meant to educate young people to embrace virtue and avoid vice, includes a colourful depiction of the House of Pride. Lucifera, the lady of the house, is accompanied by advisers who represent the other seven deadly sins.
The seven sins are personified and they give a confession to the personification of Repentance in William Langland's "Piers Plowman". Only pride is represented by a woman, the others all represented by male characters.
Kurt Weill and Bertolt Brecht's "The Seven Deadly Sins" satirized capitalism and its painful abuses as its central character, the victim of a split personality, travels to seven different cities in search of money for her family. In each city she encounters one of the seven deadly sins, but those sins ironically reverse one's expectations. When the character goes to Los Angeles, for example, she is outraged by injustice, but is told that wrath against capitalism is a sin that she must avoid.
Between 1945 and 1949, the American painter Paul Cadmus created a series of vivid, powerful, and gruesome paintings of each of the seven deadly sins.
Ferdinand Mount maintains that modern media, especially the tabloids, have surprisingly given value to the vices, causing society to regress into that of primitive pagans: "covetousness has been rebranded as retail therapy, sloth is downtime, lust is exploring your sexuality, anger is opening up your feelings, vanity is looking good because you're worth it and gluttony is the religion of foodies". | https://en.wikipedia.org/wiki?curid=45519 |
Pseudorandom number generator
A pseudorandom number generator (PRNG), also known as a deterministic random bit generator (DRBG), is an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers. The PRNG-generated sequence is not truly random, because it is completely determined by an initial value, called the PRNG's "seed" (which may include truly random values). Although sequences that are closer to truly random can be generated using hardware random number generators, "pseudorandom" number generators are important in practice for their speed in number generation and their reproducibility.
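The determinism described above is easy to demonstrate. A minimal Python sketch (using the standard library's `random` module) shows that two generators initialized with the same seed produce identical sequences:

```python
import random

# Two generators seeded identically produce identical "random" sequences:
# the output is completely determined by the seed.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 99) for _ in range(5)]
seq_b = [b.randint(0, 99) for _ in range(5)]
assert seq_a == seq_b  # reproducibility: same seed, same sequence
```

This reproducibility is exactly what makes PRNGs valuable for debugging simulations, and exactly what makes them unsuitable, on their own, for cryptography.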
PRNGs are central in applications such as simulations (e.g. for the Monte Carlo method), electronic games (e.g. for procedural generation), and cryptography. Cryptographic applications require the output not to be predictable from earlier outputs, and more elaborate algorithms, which do not inherit the linearity of simpler PRNGs, are needed.
Good statistical properties are a central requirement for the output of a PRNG. In general, careful mathematical analysis is required to have any confidence that a PRNG generates numbers that are sufficiently close to random to suit the intended use. John von Neumann cautioned about the misinterpretation of a PRNG as a truly random generator, and joked that "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."
In practice, the output from many common PRNGs exhibits artifacts that cause them to fail statistical pattern-detection tests. These include shorter-than-expected periods for some seed states, a lack of uniformity of distribution for large quantities of generated numbers, correlation of successive values, and poor dimensional distribution of the output sequence.
Defects exhibited by flawed PRNGs range from unnoticeable (and unknown) to very obvious. An example was the RANDU random number algorithm used for decades on mainframe computers. It was seriously flawed, but its inadequacy went undetected for a very long time.
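RANDU's flaw can be reproduced in a few lines. The sketch below implements RANDU's published recurrence x_{k+1} = 65539·x_k mod 2^31 and verifies the identity x_{k+2} = 6·x_{k+1} − 9·x_k (mod 2^31), which follows from 65539^2 ≡ 6·65539 − 9 (mod 2^31) and forces consecutive output triples onto just 15 planes in three-dimensional space:

```python
# RANDU: x_{k+1} = 65539 * x_k mod 2^31, traditionally seeded with an odd number.
def randu(seed, n):
    x, out = seed, []
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x)
    return out

xs = randu(1, 10)
# The flaw: every output triple satisfies
#   x_{k+2} = 6*x_{k+1} - 9*x_k  (mod 2^31),
# so points (x_k, x_{k+1}, x_{k+2}) lie on only 15 planes in 3D.
for k in range(len(xs) - 2):
    assert xs[k + 2] == (6 * xs[k + 1] - 9 * xs[k]) % 2**31
```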
In many fields, research work prior to the 21st century that relied on random selection or on Monte Carlo simulations, or in other ways relied on PRNGs, was much less reliable than ideal as a result of using poor-quality PRNGs. Even today, caution is sometimes required, as illustrated by the following warning in the "International Encyclopedia of Statistical Science" (2010).
As an illustration, consider the widely used programming language Java, which still relies on a linear congruential generator (LCG) for its default PRNG, a class of generators of low quality (see further below).
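As a sketch of what such a generator looks like, the following Python code implements the recurrence underlying `java.util.Random`, using the constants from the published Java documentation. Note that Java discards the low 16 bits of the 48-bit state when producing output, precisely because the low-order bits of an LCG are its weakest:

```python
# LCG used by java.util.Random (constants from the published Java spec):
#   state' = (state * 0x5DEECE66D + 0xB) mod 2^48
M = 25214903917        # multiplier 0x5DEECE66D
A = 11                 # addend 0xB
MASK = (1 << 48) - 1   # modulus 2^48

def lcg_next(state):
    return (state * M + A) & MASK

state = (12345 ^ M) & MASK  # Java XORs the user-supplied seed with the multiplier
for _ in range(3):
    state = lcg_next(state)
    top32 = state >> 16     # Java's next(32): high 32 bits of the 48-bit state
# The discarded low-order bits have very short periods, one symptom of the
# statistical weaknesses that make LCGs low quality.
```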
A well-known PRNG that avoided major problems and still ran fairly quickly was the Mersenne Twister (discussed below), which was published in 1998. Other higher-quality PRNGs, both in terms of computational and statistical performance, were developed before and after this date; these can be identified in the List of pseudorandom number generators.
In the second half of the 20th century, the standard class of algorithms used for PRNGs comprised linear congruential generators. The quality of LCGs was known to be inadequate, but better methods were unavailable. Press et al. (2007) described the result thusly: "If all scientific papers whose results are in doubt because of [LCGs and related] were to disappear from library shelves, there would be a gap on each shelf about as big as your fist."
A major advance in the construction of pseudorandom generators was the introduction of techniques based on linear recurrences on the two-element field; such generators are related to linear feedback shift registers.
The 1997 invention of the Mersenne Twister, in particular, avoided many of the problems with earlier generators. The Mersenne Twister has a period of 2^19937 − 1 iterations (≈ 4.3×10^6001), is proven to be equidistributed in (up to) 623 dimensions (for 32-bit values), and at the time of its introduction was running faster than other statistically reasonable generators.
In 2003, George Marsaglia introduced the family of xorshift generators, again based on a linear recurrence. Such generators are extremely fast and, combined with a nonlinear operation, they pass strong statistical tests.
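A minimal example of such a generator, following Marsaglia's 32-bit xorshift variant with the shift triple (13, 17, 5):

```python
def xorshift32(x):
    # One step of Marsaglia's 32-bit xorshift generator: three shift-XOR
    # operations, a linear recurrence over GF(2) with period 2^32 - 1.
    x ^= (x << 13) & 0xFFFFFFFF   # mask keeps the value within 32 bits
    x ^= x >> 17
    x ^= (x << 5) & 0xFFFFFFFF
    return x

state = 2463534242  # any nonzero 32-bit seed; zero is a fixed point
for _ in range(5):
    state = xorshift32(state)
```

Because the recurrence is linear, raw xorshift output fails some statistical tests; practical derivatives add a nonlinear scrambling step (e.g. a multiplication) to the output.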
In 2006 the WELL family of generators was developed. The WELL generators in some ways improve on the quality of the Mersenne Twister, which has an excessively large state space and a very slow recovery from state spaces with a large number of zeros.
A PRNG suitable for cryptographic applications is called a "cryptographically secure PRNG" (CSPRNG). A requirement for a CSPRNG is that an adversary not knowing the seed has only negligible advantage in distinguishing the generator's output sequence from a random sequence. In other words, while a PRNG is only required to pass certain statistical tests, a CSPRNG must pass all statistical tests that are restricted to polynomial time in the size of the seed. Though a proof of this property is beyond the current state of the art of computational complexity theory, strong evidence may be provided by reducing the CSPRNG to a problem that is assumed to be hard, such as integer factorization. In general, years of review may be required before an algorithm can be certified as a CSPRNG.
Some classes of CSPRNGs include the following:
It is considered likely that the NSA inserted an asymmetric backdoor into the NIST-certified pseudorandom number generator Dual_EC_DRBG.
Most PRNG algorithms produce sequences that are uniformly distributed by any of several tests. It is an open question, and one central to the theory and practice of cryptography, whether there is any way to distinguish the output of a high-quality PRNG from a truly random sequence. In this setting, the distinguisher knows that either the known PRNG algorithm was used (but not the state with which it was initialized) or a truly random algorithm was used, and has to distinguish between the two. The security of most cryptographic algorithms and protocols using PRNGs is based on the assumption that it is infeasible to distinguish use of a suitable PRNG from use of a truly random sequence. The simplest examples of this dependency are stream ciphers, which (most often) work by exclusive or-ing the plaintext of a message with the output of a PRNG, producing ciphertext. The design of cryptographically adequate PRNGs is extremely difficult because they must meet additional criteria. The size of its period is an important factor in the cryptographic suitability of a PRNG, but not the only one.
The German Federal Office for Information Security ("Bundesamt für Sicherheit in der Informationstechnik", BSI) has established four criteria for quality of deterministic random number generators. They are summarized here:
For cryptographic applications, only generators meeting the K3 or K4 standards are acceptable.
Given
We call a function formula_19 (where formula_20 is the set of positive integers) a pseudo-random number generator for formula_1 given formula_4 taking values in formula_11 if and only if
It can be shown that if formula_28 is a pseudo-random number generator for the uniform distribution on formula_29 and if formula_30 is the CDF of some given probability distribution formula_1, then formula_32 is a pseudo-random number generator for formula_1, where formula_34 is the percentile of formula_1, i.e. formula_36. Intuitively, an arbitrary distribution can be simulated from a simulation of the standard uniform distribution.
An early computer-based PRNG, suggested by John von Neumann in 1946, is known as the middle-square method. The algorithm is as follows: take any number, square it, remove the middle digits of the resulting number as the "random number", then use that number as the seed for the next iteration. For example, squaring the number "1111" yields "1234321", which can be written as "01234321", an 8-digit number being the square of a 4-digit number. This gives "2343" as the "random" number. Repeating this procedure gives "4896" as the next result, and so on. Von Neumann used 10 digit numbers, but the process was the same.
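The middle-square procedure described above can be sketched directly; the generator below reproduces the article's 4-digit example:

```python
def middle_square(seed, digits=4):
    """Von Neumann's middle-square method (sketch).

    Square the current value, zero-pad the square to 2*digits digits,
    and take the middle `digits` digits as the next value.
    """
    n = seed
    while True:
        sq = str(n * n).zfill(2 * digits)      # e.g. 1111**2 -> "01234321"
        start = (len(sq) - digits) // 2
        n = int(sq[start:start + digits])      # middle digits, e.g. 2343
        yield n
```

Starting from 1111 this yields 2343, then 4896, matching the example in the text.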
A problem with the "middle square" method is that all sequences eventually repeat themselves, some very quickly, such as "0000". Von Neumann was aware of this, but he found the approach sufficient for his purposes and was worried that mathematical "fixes" would simply hide errors rather than remove them.
Von Neumann judged hardware random number generators unsuitable, for, if they did not record the output generated, they could not later be tested for errors. If they did record their output, they would exhaust the limited computer memories then available, and so the computer's ability to read and write numbers. If the numbers were written to cards, they would take very much longer to write and read. On the ENIAC computer he was using, the "middle square" method generated numbers at a rate some hundred times faster than reading numbers in from punched cards.
The middle-square method has since been supplanted by more elaborate generators.
A recent innovation is to combine the middle square with a Weyl sequence. This method produces high-quality output through a long period (see Middle Square Weyl Sequence PRNG).
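A sketch of the middle-square Weyl-sequence idea, following Widynski's published msws construction (the 64-bit constant is the example value from that paper): the steadily incrementing Weyl sequence is added to the square before the halves are swapped, which prevents the short cycles of the plain middle-square method.

```python
MASK64 = (1 << 64) - 1

def msws(s=0xB5AD4ECEDA1CE2A9):
    """Middle Square Weyl Sequence PRNG (sketch after Widynski's msws).

    s must be odd; x is squared, the Weyl counter w is added, and the
    64-bit halves are swapped.  The low 32 bits are the output.
    """
    x = w = 0
    while True:
        x = (x * x) & MASK64
        w = (w + s) & MASK64                    # Weyl sequence step
        x = (x + w) & MASK64
        x = ((x >> 32) | (x << 32)) & MASK64    # swap upper/lower halves
        yield x & 0xFFFFFFFF                    # 32-bit result
```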
Numbers selected from a non-uniform probability distribution can be generated using a uniform distribution PRNG and a function that relates the two distributions.
First, one needs the cumulative distribution function formula_37 of the target distribution formula_38:
Note that formula_40. Using a random number "c" from a uniform distribution as the probability density to "pass by", we get
so that
is a number randomly selected from distribution formula_38.
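As a concrete instance of this inverse-transform technique, the exponential distribution has CDF F(x) = 1 − exp(−λx), whose inverse is F⁻¹(c) = −ln(1 − c)/λ; pushing a uniform variate through that inverse yields exponentially distributed samples:

```python
import math
import random

def sample_exponential(lam, u=None):
    """Inverse-transform sampling for the exponential distribution.

    F(x) = 1 - exp(-lam*x), so F^-1(c) = -ln(1 - c)/lam.  A uniform
    variate u on [0, 1) is mapped through the inverse CDF.
    """
    if u is None:
        u = random.random()          # uniform PRNG as the source
    return -math.log(1.0 - u) / lam
```

For example, u = 0.5 maps to the distribution's median, ln(2)/λ.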
For example, the inverse of cumulative Gaussian distribution formula_44 with an ideal uniform PRNG with range (0, 1) as input formula_45 would produce a sequence of (positive only) values with a Gaussian distribution; however
Similar considerations apply to generating other non-uniform distributions such as Rayleigh and Poisson. | https://en.wikipedia.org/wiki?curid=45524 |
Linear congruential generator
A linear congruential generator (LCG) is an algorithm that yields a sequence of pseudo-randomized numbers calculated with a discontinuous piecewise linear equation. The method represents one of the oldest and best-known pseudorandom number generator algorithms. The theory behind them is relatively easy to understand, and they are easily implemented and fast, especially on computer hardware which can provide modular arithmetic by storage-bit truncation.
The generator is defined by the recurrence relation:

"X""n"+1 = ("aX""n" + "c") mod "m"

where "X" is the sequence of pseudorandom values, and "m" (the modulus, 0 < "m"), "a" (the multiplier, 0 < "a" < "m"), "c" (the increment, 0 ≤ "c" < "m") and "X"0 (the seed or start value, 0 ≤ "X"0 < "m") are integer constants that specify the generator. If "c" = 0, the generator is often called a multiplicative congruential generator (MCG), or Lehmer RNG. If "c" ≠ 0, the method is called a mixed congruential generator.
When "c" ≠ 0, a mathematician would call the recurrence an affine transformation, not a linear one, but the misnomer is well-established in computer science.
A benefit of LCGs is that with appropriate choice of parameters, the period is known and long. Although not the only criterion, too short a period is a fatal flaw in a pseudorandom number generator.
While LCGs are capable of producing pseudorandom numbers which can pass formal tests for randomness, the quality of the output is extremely sensitive to the choice of the parameters "m" and "a". For example, "a" = 1 and "c" = 1 produces a simple modulo-"m" counter, which has a long period, but is obviously non-random.
Historically, poor choices for "a" have led to ineffective implementations of LCGs. A particularly illustrative example of this is RANDU, which was widely used in the early 1970s and led to many results which are currently being questioned because of the use of this poor LCG.
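RANDU's defect can be demonstrated directly. The sketch below uses RANDU's documented parameters ("a" = 65539, "c" = 0, "m" = 2^31) and verifies the linear relation that follows from 65539 = 2^16 + 3, which confines consecutive output triples to only 15 planes in three-dimensional space:

```python
def randu(seed):
    """RANDU: IBM's infamous multiplicative LCG (a=65539, c=0, m=2**31).

    The seed must be odd.
    """
    x = seed
    while True:
        x = (65539 * x) % 2**31
        yield x

# Since 65539 = 2**16 + 3, a**2 = 6*a - 9 (mod 2**31), so every output
# satisfies x[n+2] = 6*x[n+1] - 9*x[n] (mod 2**31) -- the source of the
# notorious 15-plane structure of RANDU triples.
g = randu(1)
xs = [next(g) for _ in range(100)]
for a, b, c in zip(xs, xs[1:], xs[2:]):
    assert c == (6 * b - 9 * a) % 2**31
```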
There are three common families of parameter choice:
This is the original Lehmer RNG construction. The period is "m"−1 if the multiplier "a" is chosen to be a primitive element of the integers modulo "m". The initial state must be chosen between 1 and "m"−1.
One disadvantage of a prime modulus is that the modular reduction requires a double-width product and an explicit reduction step. Often a prime just less than a power of 2 is used (the Mersenne primes 2^31−1 and 2^61−1 are popular), so that the reduction modulo "m" = 2^"e" − "d" can be computed as ("ax" mod 2^"e") + "d"⌊"ax"/2^"e"⌋. This must be followed by a conditional subtraction of "m" if the result is too large, but the number of subtractions is limited to "ad"/"m", which can be easily limited to one if "d" is small.
If a double-width product is unavailable, and the multiplier is chosen carefully, Schrage's method may be used. To do this, factor "m" = "qa" + "r", i.e. "q" = "m" div "a" and "r" = "m" mod "a". Then compute "ax" mod "m" = "a"("x" mod "q") − "r"⌊"x"/"q"⌋, adding "m" if the result is negative. Since "x" mod "q" < "q" ≤ "m"/"a", the first term is strictly less than "am"/"a" = "m". If "a" is chosen so that "r" ≤ "q" (and thus "r"/"q" ≤ 1), then the second term is also less than "m": "r"⌊"x"/"q"⌋ ≤ "rx"/"q" = "x"("r"/"q") ≤ "x" < "m". Thus, both products can be computed with a single-width product, and the difference between them lies in the range [1−"m", "m"−1], so can be reduced to [0, "m"−1] with a single conditional add.
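A sketch of Schrage's method as described above. Python integers never overflow, so the demonstration simply checks the identity; the test parameters are the classic minimal-standard values "m" = 2^31−1, "a" = 16807, for which "q" = 127773 and "r" = 2836 satisfy "r" ≤ "q":

```python
def schrage_mul(a, x, m):
    """Compute (a*x) % m without a double-width product (Schrage).

    Valid when r = m % a satisfies r <= q = m // a, for 0 <= x < m.
    """
    q, r = divmod(m, a)                  # m = q*a + r
    t = a * (x % q) - r * (x // q)       # both terms fit in single width
    return t if t >= 0 else t + m        # single conditional add
```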
A second disadvantage is that it is awkward to convert the value 1 ≤ "x" < "m" into uniform random bits. In contrast, selecting "m" to be a power of 2, most often "m" = 2^32 or "m" = 2^64, produces a particularly efficient LCG, because this allows the modulus operation to be computed by simply truncating the binary representation. In fact, the most significant bits are usually not computed at all. There are, however, disadvantages.
This form has maximal period "m"/4, achieved if "a" ≡ 3 or "a" ≡ 5 (mod 8). The initial state "X"0 must be odd, and the low three bits of "X" alternate between two states and are not useful. It can be shown that this form is equivalent to a generator with a modulus a quarter the size and "c" ≠ 0.
A more serious issue with the use of a power-of-two modulus is that the low bits have a shorter period than the high bits. The lowest-order bit of "X" never changes ("X" is always odd), and the next two bits alternate between two states. (If "a" ≡ 5 (mod 8), then bit 1 never changes and bit 2 alternates. If "a" ≡ 3 (mod 8), then bit 2 never changes and bit 1 alternates.) Bit 3 repeats with a period of 4, bit 4 has a period of 8, and so on. Only the most significant bit of "X" achieves the full period.
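These bit-level periods can be observed directly in a toy multiplicative generator with a power-of-2 modulus (hypothetical small parameters "a" = 5 ≡ 5 (mod 8), "m" = 2^16, odd seed, chosen only for illustration):

```python
def mcg(a, m, seed):
    """Multiplicative (c = 0) LCG with small illustrative parameters."""
    x = seed
    while True:
        x = (a * x) % m
        yield x

# a = 5 satisfies a = 5 (mod 8); the seed must be odd.
g = mcg(5, 2**16, seed=1)
xs = [next(g) for _ in range(32)]
assert all(x & 1 for x in xs)              # bit 0 never changes (always 1)
assert len({x & 0b110 for x in xs}) <= 2   # bits 1-2 alternate between two states
```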
When "c" ≠ 0, correctly chosen parameters allow a period equal to "m", for all seed values. This will occur if and only if:
These three requirements are referred to as the Hull–Dobell Theorem.
This form may be used with any "m", but only works well for "m" with many repeated prime factors, such as a power of 2; using a computer's word size is the most common choice. If "m" were a square-free integer, this would only allow "a" ≡ 1 (mod "m"), which makes a very poor PRNG; a selection of possible full-period multipliers is only available when "m" has repeated prime factors.
Although the Hull–Dobell theorem provides maximum period, it is not sufficient to guarantee a "good" generator. For example, it is desirable for "a" − 1 to not be any more divisible by prime factors of "m" than necessary. Thus, if "m" is a power of 2, then "a" − 1 should be divisible by 4 but not divisible by 8, i.e. "a" ≡ 5 (mod 8).
Indeed, most multipliers produce a sequence which fails one test for non-randomness or another, and finding a multiplier which is satisfactory to all applicable criteria is quite challenging. The spectral test is one of the most important tests.
Note that a power-of-2 modulus shares the problem described above for "c" = 0: the low "k" bits form a generator with modulus 2^"k" and thus repeat with a period of 2^"k"; only the most significant bit achieves the full period. If a pseudorandom number less than "r" is desired, ⌊"rX"/"m"⌋ is a much higher-quality result than "X" mod "r". Unfortunately, most programming languages make the latter much easier to write ("X" % "r"), so it is the more commonly used form.
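The contrast between the two range-reduction methods can be demonstrated with a 32-bit power-of-2 LCG (the multiplier and increment below are the well-known Numerical Recipes constants "a" = 1664525, "c" = 1013904223, "m" = 2^32):

```python
def lcg32(seed):
    """32-bit power-of-2 LCG with the Numerical Recipes constants."""
    x = seed
    while True:
        x = (1664525 * x + 1013904223) % 2**32
        yield x

# Reducing with X mod r exposes the weak low bits: with r = 2 the
# output merely alternates, because bit 0 has period 2.
g = lcg32(0)
lows = [next(g) % 2 for _ in range(8)]
assert lows == [1, 0, 1, 0, 1, 0, 1, 0]

# floor(r*X/m) draws on the strong high bits instead:
g = lcg32(0)
highs = [3 * next(g) // 2**32 for _ in range(8)]   # values in [0, 3)
```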
The generator is "not" sensitive to the choice of "c", as long as it is relatively prime to the modulus (e.g. if "m" is a power of 2, then "c" must be odd), so the value "c"=1 is commonly chosen.
The series produced by other choices of "c" can be written as a simple function of the series when "c" = 1. Specifically, if "Y" is the prototypical series defined by "Y"0 = 0 and "Y""n"+1 = ("aY""n" + 1) mod "m", then a general series "X""n"+1 = ("aX""n" + "c") mod "m" can be written as an affine function of "Y": "X""n" = ((("a" − 1)"X"0 + "c")"Y""n" + "X"0) mod "m".
More generally, any two series "X" and "Z" with the same multiplier and modulus are related by
The following table lists the parameters of LCGs in common use, including built-in "rand()" functions in runtime libraries of various compilers. This table is to show popularity, not examples to emulate; "many of these parameters are poor." Tables of good parameters are available.
As shown above, LCGs do not always use all of the bits in the values they produce. For example, the Java implementation operates with 48-bit values at each iteration but returns only their 32 most significant bits. This is because the higher-order bits have longer periods than the lower-order bits (see below). LCGs that use this truncation technique produce statistically better values than those that do not. This is especially noticeable in scripts that use the mod operation to reduce range; reducing the random number mod 2 would yield an alternating sequence of 0s and 1s if truncation were not used.
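The truncation scheme can be sketched in a few lines. This is not Java's actual implementation, but the parameters ("a" = 0x5DEECE66D, "c" = 0xB, "m" = 2^48), the seed scrambling, and the 16-bit truncation follow the published java.util.Random specification; sign handling and the bit-count argument are omitted:

```python
class JavaRandomSketch:
    """Sketch of java.util.Random's LCG core: 48 bits of state,
    of which only the top 32 bits are returned each step."""

    def __init__(self, seed):
        # Java scrambles the user seed with the multiplier.
        self.state = (seed ^ 0x5DEECE66D) & (2**48 - 1)

    def next_int32(self):
        self.state = (0x5DEECE66D * self.state + 0xB) % 2**48
        return self.state >> 16          # discard the weak low 16 bits
```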
LCGs are fast and require minimal memory (one modulo-"m" number, often 32 or 64 bits) to retain state. This makes them valuable for simulating multiple independent streams. LCGs are not intended, and must not be used, for cryptographic applications; use a cryptographically secure pseudorandom number generator for such applications.
Although LCGs have a few specific weaknesses, many of their flaws come from having too small a state. The fact that people have been lulled for so many years into using them with such small moduli can be seen as a testament to the strength of the technique. An LCG with large enough state can pass even stringent statistical tests; a power-of-2-modulus LCG which returns the high 32 bits passes TestU01's SmallCrush suite, and a 96-bit LCG passes the most stringent BigCrush suite.
For a specific example, an ideal random number generator with 32 bits of output is expected (by the birthday problem) to begin duplicating earlier outputs after ≈ 2^16 results. "Any" PRNG whose output is its full, untruncated state will not produce duplicates until its full period elapses, an easily detectable statistical flaw. For related reasons, any PRNG should have a period longer than the square of the number of outputs required. Given modern computer speeds, this means a period of 2^64 for all but the least demanding applications, and longer for demanding simulations.
One flaw specific to LCGs is that, if used to choose points in an n-dimensional space, the points will lie on, at most, ("n"!·"m")^(1/"n") hyperplanes (Marsaglia's theorem, developed by George Marsaglia). This is due to serial correlation between successive values of the sequence "X""n". Carelessly chosen multipliers will usually have far fewer, widely spaced planes, which can lead to problems. The spectral test, which is a simple test of an LCG's quality, measures this spacing and allows a good multiplier to be chosen.
The plane spacing depends both on the modulus and the multiplier. A large enough modulus can reduce this distance below the resolution of double precision numbers. The choice of the multiplier becomes less important when the modulus is large. It is still necessary to calculate the spectral index and make sure that the multiplier is not a bad one, but purely probabilistically it becomes extremely unlikely to encounter a bad multiplier when the modulus is larger than about 2^64.
Another flaw specific to LCGs is the short period of the low-order bits when "m" is chosen to be a power of 2. This can be mitigated by using a modulus larger than the required output, and using the most significant bits of the state.
Nevertheless, for some applications LCGs may be a good option. For instance, in an embedded system, the amount of memory available is often severely limited. Similarly, in an environment such as a video game console taking a small number of high-order bits of an LCG may well suffice. (The low-order bits of LCGs when m is a power of 2 should never be relied on for any degree of randomness whatsoever.) The low order bits go through very short cycles. In particular, any full-cycle LCG, when m is a power of 2, will produce alternately odd and even results.
LCGs should be evaluated very carefully for suitability in non-cryptographic applications where high-quality randomness is critical. For Monte Carlo simulations, an LCG must use a modulus greater and preferably much greater than the cube of the number of random samples which are required. This means, for example, that a (good) 32-bit LCG can be used to obtain about a thousand random numbers; a 64-bit LCG is good for about 2^21 random samples (a little over two million), etc. For this reason, in practice LCGs are not suitable for large-scale Monte Carlo simulations.
The following is an implementation of an LCG in Python:
def lcg(modulus, a, c, seed):
    """Linear congruential generator."""
    while True:
        seed = (a * seed + c) % modulus
        yield seed
Free Pascal uses a Mersenne Twister as its default pseudo random number generator whereas Delphi uses a LCG. Here is a Delphi compatible example in Free Pascal based on the information in the table above. Given the same RandSeed value it generates the same sequence of random numbers as Delphi.
unit lcg_random;

interface

function LCGRandom: extended; overload; inline;
function LCGRandom(const range: longint): longint; overload; inline;

implementation

function IM: cardinal; inline;
begin
  { Delphi's LCG parameters: a = 134775813, c = 1, m = 2^32 }
  RandSeed := RandSeed * 134775813 + 1;
  Result := cardinal(RandSeed);
end;

function LCGRandom: extended; overload; inline;
begin
  Result := IM * 2.32830643653870e-10;  { divide by 2^32 }
end;

function LCGRandom(const range: longint): longint; overload; inline;
begin
  Result := trunc(LCGRandom * range);
end;

end.
Like all pseudorandom number generators, an LCG needs to store state and alter it each time it generates a new number. Multiple threads may access this state simultaneously, causing a race condition. Implementations should give each thread its own state, each uniquely initialized, to avoid identical sequences of random numbers on simultaneously executing threads.
There are several generators which are linear congruential generators in a different form, and thus the techniques used to analyze LCGs can be applied to them.
One method of producing a longer period is to sum the outputs of several LCGs of different periods having a large least common multiple; the Wichmann–Hill generator is an example of this form. (We would prefer them to be completely coprime, but a prime modulus implies an even period, so there must be a common factor of 2, at least.) This can be shown to be equivalent to a single LCG with a modulus equal to the product of the component LCG moduli.
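A sketch of this summation technique, using the published Wichmann–Hill constants (algorithm AS 183): three small prime-modulus LCGs are stepped independently and their scaled outputs summed modulo 1.

```python
def wichmann_hill(s1, s2, s3):
    """Wichmann-Hill combined generator (AS 183 constants).

    Three prime-modulus LCGs; the scaled sum mod 1 is uniform on [0, 1).
    Seeds should be in the range 1..30000.
    """
    while True:
        s1 = (171 * s1) % 30269
        s2 = (172 * s2) % 30307
        s3 = (170 * s3) % 30323
        yield (s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0
```

The combined period is the least common multiple of the three component periods, far longer than any one LCG alone.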
Marsaglia's add-with-carry and subtract-with-borrow PRNGs with a word size of "b" = 2^"w" and lags "r" and "s" ("r" > "s") are equivalent to LCGs with a modulus of "b"^"r" ± "b"^"s" ± 1.
Multiply-with-carry PRNGs with a multiplier of "a" are equivalent to LCGs with a large prime modulus of "ab"^"r" − 1 and a power-of-2 multiplier "b".
A permuted congruential generator begins with a power-of-2-modulus LCG and applies an output transformation to eliminate the short period problem in the low-order bits.
The other widely used primitive for obtaining long-period pseudorandom sequences is the linear feedback shift register construction, which is based on arithmetic in GF(2)["x"], the polynomial ring over GF(2). Rather than integer addition and multiplication, the basic operations are exclusive-or and carry-less multiplication, which is usually implemented as a sequence of logical shifts. These have the advantage that all of their bits are full-period; they do not suffer from the weakness in the low-order bits that plagues arithmetic modulo 2"k".
Examples of this family include xorshift generators and the Mersenne twister. The latter provides a very long period (2^19937−1) and variate uniformity, but it fails some statistical tests. Lagged Fibonacci generators also fall into this category; although they use arithmetic addition, their period is ensured by an LFSR among the least-significant bits.
It is easy to detect the structure of a linear feedback shift register with appropriate tests such as the linear complexity test implemented in the TestU01 suite; a boolean circulant matrix initialized from consecutive bits of an LFSR will never have rank greater than the degree of the polynomial. Adding a non-linear output mixing function (as in the xoshiro256** and permuted congruential generator constructions) can greatly improve the performance on statistical tests.
Another structure for a PRNG is a very simple recurrence function combined with a powerful output mixing function. This includes counter mode block ciphers and non-cryptographic generators such as SplitMix64.
A structure similar to LCGs, but "not" equivalent, is the multiple-recursive generator: "X""n" = ("a"1"X""n"−1 + "a"2"X""n"−2 + ··· + "a""k""X""n"−"k") mod "m" for "k" ≥ 2. With a prime modulus, this can generate periods up to "m"^"k" − 1, so is a useful extension of the LCG structure to larger periods.
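The recurrence can be sketched for order "k" = 2; the parameters in the example below are tiny hypothetical values chosen only to make the arithmetic easy to follow, not recommended constants:

```python
def mrg2(a1, a2, m, x0, x1):
    """Multiple-recursive generator of order k = 2 (sketch).

    X[n] = (a1*X[n-1] + a2*X[n-2]) mod m.  With a prime modulus and
    suitable multipliers the period can approach m**2 - 1.
    """
    while True:
        x0, x1 = x1, (a1 * x1 + a2 * x0) % m
        yield x1
```

For example, with (a1, a2, m) = (2, 3, 13) and seeds (1, 1), the first outputs are 5, 0, 2.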
A powerful technique for generating high-quality pseudorandom numbers is to combine two or more PRNGs of different structure; the sum of an LFSR and an LCG (as in the KISS or xorwow constructions) can do very well at some cost in speed. | https://en.wikipedia.org/wiki?curid=45527 |
Opportunity cost
When an option is chosen from alternatives, the opportunity cost is the "cost" incurred by "not" enjoying the benefit associated with the best alternative choice. The "New Oxford American Dictionary" defines it as "the loss of potential gain from other alternatives when one alternative is chosen."
In simple terms, opportunity cost is the benefit not received as a result of not selecting the next best option.
Opportunity cost is a key concept in economics, and has been described as expressing "the basic relationship between scarcity and choice".
The notion of opportunity cost plays a crucial part in attempts to ensure that scarce resources are used efficiently. Opportunity costs are not restricted to monetary or financial costs: the real cost of output forgone, lost time, pleasure or any other benefit that provides utility should also be considered an opportunity cost.
The opportunity cost of a product or service is the revenue that could be earned by its alternative use. In other words, opportunity cost is the cost of the next best alternative of a product or service. The meaning of the concept of opportunity cost can be explained with the help of the following examples:
Thus opportunity cost requires sacrifices. If there is no sacrifice involved in a decision, there will be no opportunity cost. In this regard the opportunity costs not involving cash flows are not recorded in the books of accounts, but they are important considerations in business decisions.
The term was first used in 1894 by David L. Green in an article in the Quarterly Journal of Economics entitled "Pain Cost and Opportunity-Cost". The idea had been anticipated by previous writers including Benjamin Franklin and Frédéric Bastiat. Franklin coined the phrase "Time is Money", and spelled out the associated opportunity cost reasoning in his "Advice to a Young Tradesman" (1748): "Remember that Time is Money. He that can earn Ten Shillings a Day by his Labour, and goes abroad, or sits idle one half of that Day, tho' he spends but Sixpence during his Diversion or Idleness, ought not to reckon That the only Expence; he has really spent or rather thrown away Five Shillings besides."
Bastiat's 1848 essay "What Is Seen and What Is Not Seen" used opportunity cost reasoning in his critique of the broken window fallacy, and of what he saw as spurious arguments for public expenditure.
Explicit costs are opportunity costs that involve direct monetary payment by producers. The explicit opportunity cost of the factors of production not already owned by a producer is the price that the producer has to pay for them. For instance, if a firm spends $100 on electrical power consumed, its explicit opportunity cost is $100. This cash expenditure represents a lost opportunity to purchase something else with the $100.
Implicit costs (also called implied, imputed or notional costs) are the opportunity costs that are not reflected in cash outflow but are implied by the choice of the firm not to allocate its existing (owned) resources, or factors of production, to the best alternative use. For example: a manufacturer has previously purchased 1000 tons of steel and the machinery to produce a widget. The implicit part of the opportunity cost of producing the widget is the revenue lost by not selling the steel and not renting out the machinery instead of using it for production.
One example of opportunity cost is in the evaluation of "foreign" (to the US) buyers and their allocation of cash assets in real estate or other types of investment vehicles. During the Chinese stock market downturn around June and July 2015, more and more Chinese investors from Hong Kong and Taiwan turned to the United States as an alternative vehicle for their investment dollars; the opportunity cost of leaving their money in the Chinese stock market or Chinese real estate market is the yield available in the US real estate market.
Opportunity cost is not the "sum" of the available alternatives when those alternatives are, in turn, mutually exclusive to each other. It is the highest value option forgone. The opportunity cost of a city's decision to build the hospital on its vacant land is the loss of net income from using the land for a sporting center, or the loss of net income from using the land for a parking lot, or the money the city could have made by selling the land, whichever is greatest. Use for any one of those purposes precludes all the others.
If someone loses the opportunity to earn money, that is part of the opportunity cost. If someone chooses to spend money, that money could be used to purchase other goods and services so the spent money is part of the opportunity cost as well. Add the value of the next best alternative and you have the total opportunity cost. If you miss work to go to a concert, your opportunity cost is the money you would have earned if you had gone to work plus the cost of the concert. | https://en.wikipedia.org/wiki?curid=45528 |
Aldo Rossi
Aldo Rossi (3 May 1931 – 4 September 1997) was an Italian architect and designer who achieved international recognition in four distinct areas: architectural theory, drawing, architectural design, and product design. He was one of the leading exponents of the postmodern movement.
He was the first Italian to receive the Pritzker Prize for architecture.
He was born in Milan, Italy. After early education by the Somascan Religious Order and then in Lecco, in 1949 he went to the school of architecture at the Polytechnic University of Milan. His thesis advisor was Piero Portaluppi and he graduated in 1959.
In 1955 he had started writing for, and from 1959 was one of the editors of, the architectural magazine Casabella-Continuità, with editor in chief Ernesto Nathan Rogers. Rossi left in 1964, when the chief editorship went to Gian Antonio Bernasconi. He went on to write for Il contemporaneo, making him one of the most active participants in the fervent cultural debate of the time.
His early articles cover architects such as Alessandro Antonelli, , Auguste Perret and Emil Kaufmann and much of this material became part of his second book, "Scritti scelti sull'architettura e la città 1956-1972" ("Selected writings on architecture and the city from 1956 to 1972"). He married the Swiss actress Sonia Gessner, who introduced him to the world of film and theater. Culture and his family became central to his life. His son Fausto was active in movie-making both in front of and behind the camera and his daughter Vera was involved with theatre.
He began his professional career at the studio of Ignazio Gardella in 1956, moving on to the studio of Marco Zanuso. In 1963 also he began teaching, firstly as an assistant to (1963) at the school of urban planning in Arezzo, then to Carlo Aymonino at the Institute of Architecture in Venice. In 1965 he was appointed lecturer at the Polytechnic University of Milan and the following year he published "The architecture of the city" which soon became a classic of architectural literature.
His professional career, initially dedicated to architectural theory and small building work took a huge leap forward when Aymonino allowed Rossi to design part of the Monte Amiata complex in the Gallaratese quarter of Milan. In 1971 he won the design competition for the extension of the San Cataldo Cemetery in Modena, which made him internationally famous.
After suspension from teaching in Italy in those politically troubled times, he moved to ETH Zurich, occupying the chair in architectural design from 1971 to 1975.
In 1973 he was director of the International Architecture Section at the XV Triennale di Milano, where he presented, among others, his student Arduino Cantafora. Rossi's design ideas for the exhibition are explained in the International Architecture Catalogue and in a 16mm documentary "Ornament and crime" directed by Luigi Durissi and produced along with Gianni Braghieri and Franco Raggi. In 1975, Rossi returned to the teaching profession in Italy, teaching architectural composition in Venice.
In 1979 he was made a member of the prestigious Academy of Saint Luke. Meanwhile, there was international interest in his skills. He taught at several universities in the United States, including Cooper Union in New York City and Cornell University in Ithaca (New York State). At Cornell he participated in the "Institute for Architecture and Urban Studies" joint venture with New York's Museum of Modern Art, travelling to China and Hong Kong and attending conferences in South America.
In 1981 he published his autobiography, "A scientific autobiography". In this work the author, "in discrete disorder", brings back memories, objects, places, forms, literature notes, quotes, and insights and tries to "... go over things or impressions, describe, or look for ways to describe." In the same year he won first prize at the international competition for the design of an apartment block on the corner of Kochstraße and Wilhelmstraße in central Berlin.
In 1984, together with Ignazio Gardella and Fabio Reinhart, he won the competition for the renovation of the Teatro Carlo Felice in Genoa, which was not fully completed until 1991. In 1985 and 1986 Rossi directed the 3rd and 4th International Architecture Exhibitions at the Venice Biennale, including more distant display spaces such as Villa Farsetti in Santa Maria di Sala.
In 1987 he won two international competitions: one for a site at the Parc de la Villette in Paris, the other for the Deutsches Historisches Museum in Berlin, which was never brought to fruition. In 1989 he continued product design work for Unifor (now part of ) and Alessi. His espresso maker "La Cupola", designed for Alessi came out in 1988.
In 1990 he was awarded the Pritzker Prize. The city of Fukuoka in Japan honoured him for his work on the hotel complex "The Palace" and he won the 1991 Thomas Jefferson Medal in Public Architecture from the American Institute of Architects. These prestigious awards were followed by exhibitions at the Centre Georges Pompidou in Paris, the Beurs van Berlage in Amsterdam, the Berlinische Galerie in Berlin and the Museum of Contemporary Art in Ghent, Belgium.
In 1996 he became an honorary member of the American Academy of Arts and Letters and the following year he received their special cultural award in architecture and design. He died in Milan on 4 September 1997, following a car accident. Posthumously he received the "Torre Guinigi" prize for his contribution to urban studies and the "Seaside Prize" of the Seaside Institute, Florida, where he had built a detached family home in 1995.
On appeal his proposals won the 1999 competition for the restoration of the Teatro La Fenice, Venice and it reopened in 2004. In 1999 the Faculty of Architecture of the University of Bologna, based in Cesena, was named after him.
His earliest works of the 1960s were mostly theoretical and displayed a simultaneous influence of 1920s Italian modernism ("see Giuseppe Terragni"), classicist influences of Viennese architect Adolf Loos, and the reflections of the painter Giorgio de Chirico. A trip to the Soviet Union to study Stalinist architecture also left a marked impression.
In his writings Rossi criticized the lack of understanding of the city in current architectural practice. He argued that a city must be studied and valued as something constructed over time; of particular interest are urban artifacts that withstand the passage of time. Rossi held that the city remembers its past (our "collective memory"), and that we use that memory through monuments; that is, monuments give structure to the city. Inspired by the persistence of Europe's ancient cities, Rossi strove to create similar structures immune to obsolescence.
He became extremely influential in the late 1970s and 1980s as his body of built work expanded and for his theories promoted in his books "The Architecture of the City" ("L'architettura della città", 1966) and "A Scientific Autobiography" ("Autobiografia scientifica", 1981). The largest of Rossi's projects in terms of scale was the San Cataldo Cemetery, in Modena, Italy, which began in 1971 but is yet to be completed. Rossi referred to it as a "city of the dead".
The distinctive independence of his buildings is reflected in the micro-architectures of the products designed by Rossi. In the 1980s Rossi designed stainless steel cafetières and other products for Alessi, Pirelli, and others.
For the Venice Biennale in 1979 Rossi designed a floating "Teatro del Mondo" that seated 250 people. For the Venice Biennale in 1984, he designed a triumphal arch at the entrance to the exhibition site. In 2006 two pylons based on an original 1989 design by Aldo Rossi were erected in front of the Bonnefanten Museum in Maastricht by the Delft architectural firm Ufo Architecten.
Aldo Rossi won the prestigious Pritzker Prize for architecture in 1990. Ada Louise Huxtable, architectural critic and Pritzker juror, has described Rossi as "a poet who happens to be an architect."
In addition to architecture, Rossi created numerous product designs. | https://en.wikipedia.org/wiki?curid=45534 |
Alessi (Italian company)
Alessi is a housewares and kitchen utensil company in Italy, manufacturing and marketing everyday items authored by a wide range of designers, architects, and industrial designers — including Achille Castiglioni, Richard Sapper, Alessandro Mendini, Ettore Sottsass, Wiel Arets, Zaha Hadid, Toyo Ito, Tom Kovac, Greg Lynn, MVRDV, Jean Nouvel, UN Studio, Michael Graves and Philippe Starck.
Alessi was founded in 1921 by Giovanni Alessi, who was born in Italy and raised in Switzerland. A few years after World War I, Alessi began by producing a wide range of tableware items in nickel, chromium and silver-plated brass. A new phase began when Carlo Alessi (born 1916), the son of Giovanni, was named chief designer. Between 1935 and 1945 he developed most of the products Alessi released.
In 1969 the company was under the leadership of Carlo Alessi; it was his brother Luigi who had introduced collaboration with external designers in 1955. Working with architects, Luigi developed a number of items created for the hotel trade, and his intervention produced many individual objects that became best-sellers, such as the historic series of "wire baskets", designed from 1957 by Luigi Massaroni and Carlo Mazzeri. These were designed as a series, together with an "Ice bucket" and "Ice tongs", as part of Program 4 for the 11th Triennale in Milan. This was the first time that Alessi products were shown alongside manufactured goods. The 1950s were a difficult time to sell designer objects: only a few years after World War II, many people could not afford such goods.
In 1970, Alberto Alessi took responsibility for the third transformation of the company, and Alessi came to be considered one of the "Italian Design Factories". In this decade, under the leadership of Alberto Alessi, the company collaborated with design maestros such as Achille Castiglioni, Richard Sapper, Alessandro Mendini, and Ettore Sottsass. In the 1970s, Alessi produced the "Condiment set" (salt, pepper and toothpicks) by Ettore Sottsass and the "Espresso maker" by Sapper.
The 1980s marked a period in which Italian design factories had to compete with mass production. The two had different views on design: for the Italian design factories, the design and therefore the designer was the most important part of the process, while for mass production the design had to be functional and easy to reproduce. Also in the 1980s, Alessi changed its marketing image from factory to industrial research lab, a place for research and production. For Alessi, the 1980s were marked by designs such as the "Two tone kettle" by Sapper and the company's first cutlery set, "Dry", by Castiglioni. Alessi also collaborated with new designers such as Aldo Rossi, Michael Graves, and Philippe Starck, who were responsible for some of Alessi's all-time bestsellers, such as the kettle with a bird whistle by Graves.
Alessi faced increasing competition from other international manufacturers, especially in lower-cost products mass-produced for retailers such as Target Corporation and J. C. Penney.
In the 1990s Alessi started to work more with plastics, at the request of designers who found it an easier material to work with than metal, offering more design freedom and innovative possibilities. The 1990s were marked by the theme "Family Follows Fiction", with playful and imaginative objects. Artists designing for this theme included Stefano Giovannoni and Alessandro Mendini, who designed "Fruit Mama" and the bestseller "Anna G". Metal still remained a popular material, for example the "Girotondo" family by King Kong.
During the 2000s, Alessi collaborated with several architects for its "coffee and tea towers," with a new generation of architects such as Wiel Arets, Zaha Hadid, Toyo Ito, Tom Kovac, Greg Lynn, MVRDV, Jean Nouvel, and UN Studio. These sets had a limited production of 99 copies. Another remarkable design in the 2000s is the "Blow Up" series by Fratelli Campana. The brothers played with form and shape to create baskets and other objects that look like they would fall apart when touched.
In 2006, the company reclassified its products under three lines: "A di Alessi", "Alessi", and "Officina Alessi". A di Alessi is more "democratic" and more "pop", the lower price range of Alessi. Officina Alessi is more exclusive, innovative, and experimental, marked by small batch production series and limited series.
Alessi products are on display in museums worldwide, such as the Museum of Modern Art (New York), Metropolitan Museum of Art, Victoria and Albert Museum, Pompidou Centre, and Stedelijk Museum. A collaboration with the National Palace Museum of Taiwan produced a collection of various kitchenware products with Asian themes.
From 1945 until today, Alessi has collaborated with designers and even other brands and companies on its products. | https://en.wikipedia.org/wiki?curid=45535 |
Ustad Isa
Ustad Isa Shirazi (translation: "Master Isa") was a Persian architect, often described as the assistant architect of the Taj Mahal in Agra, India.
The lack of complete and reliable information as to whom the credit for the design belongs has led to innumerable speculations. Scholars suggest the story of Ustad Isa was born of the eagerness of the British in the 19th century to believe that such a beautiful building must be credited to a European architect. Local informants were reported to have fed British curiosity about the origins of the Taj by supplying them with fictitious lists of workmen and materials from all over Asia. Typically, he is described as a Persian architect.
Recent research suggests the Persian architect, Ustad Ahmad Lahauri was the most likely candidate as the chief architect of the Taj Mahal, an assertion based on a claim made in writings by Lahauri's son Lutfullah Muhandis.
Ustad Isa Shirazi was the assistant of Ustad Ahmad Lahauri. | https://en.wikipedia.org/wiki?curid=45537 |
Mersenne Twister
The Mersenne Twister is a pseudorandom number generator (PRNG). It is by far the most widely used general-purpose PRNG. Its name derives from the fact that its period length is chosen to be a Mersenne prime.
The Mersenne Twister was developed in 1997 by Makoto Matsumoto and Takuji Nishimura. It was designed specifically to rectify most of the flaws found in older PRNGs.
The most commonly used version of the Mersenne Twister algorithm is based on the Mersenne prime 2^19937 − 1. The standard implementation of that, MT19937, uses a 32-bit word length. There is another implementation (with five variants) that uses a 64-bit word length, MT19937-64; it generates a different sequence.
The Mersenne Twister is the default PRNG for the following software systems: Dyalog APL, Microsoft Excel, GAUSS, GLib, GNU Multiple Precision Arithmetic Library, GNU Octave, GNU Scientific Library, gretl, IDL, Julia, CMU Common Lisp, Embeddable Common Lisp, Steel Bank Common Lisp, Maple, MATLAB, Free Pascal, PHP, Python, R, Ruby, SageMath, Scilab, Stata.
It is also available in Apache Commons, in standard C++ (since C++11), and in Mathematica. Add-on implementations are provided in many program libraries, including the Boost C++ Libraries, the CUDA Library, and the NAG Numerical Library.
The Mersenne Twister is one of two PRNGs in SPSS: the other generator is kept only for compatibility with older programs, and the Mersenne Twister is stated to be "more reliable".
The Mersenne Twister is similarly one of the PRNGs in SAS: the other generators are older and deprecated.
The Mersenne Twister is the default PRNG in Stata; the other one, KISS, is kept for compatibility with older versions of Stata.
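Since CPython's random module is built on MT19937, the determinism this implies is easy to see: seeding the generator makes its stream fully reproducible. A small illustrative sketch (the seed value is arbitrary):

```python
import random

# Two independently constructed generators with the same seed
# produce the identical MT19937 output stream.
r1 = random.Random(12345)
r2 = random.Random(12345)

seq1 = [r1.getrandbits(32) for _ in range(5)]
seq2 = [r2.getrandbits(32) for _ in range(5)]

assert seq1 == seq2                      # same seed, same stream
assert all(v < 2**32 for v in seq1)      # raw outputs are 32-bit words
```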
An alternative generator, WELL ("Well Equidistributed Long-period Linear"), offers quicker recovery from a zero-excess state, equal randomness, and nearly equal speed.
Marsaglia's xorshift generators and variants are the fastest in this class.
64-bit MELGs ("64-bit Maximally Equidistributed F2-Linear Generators with Mersenne Prime Period") are completely optimized in terms of the k-distribution properties.
The ACORN family (published 1989) is another k-distributed PRNG, which shows similar computational speed to MT, and better statistical properties as it satisfies all the current (2019) TestU01 criteria; when used with appropriate choices of parameters, ACORN can have arbitrarily long period and precision.
A pseudorandom sequence x_i of w-bit integers of period P is said to be k-distributed to v-bit accuracy if the following holds: letting trunc_v(x) denote the number formed by the leading v bits of x, each of the 2^(kv) possible combinations of bits in the kv-bit vectors (trunc_v(x_i), trunc_v(x_{i+1}), …, trunc_v(x_{i+k−1})), 0 ≤ i < P, occurs the same number of times in a period, except for the all-zero combination, which occurs once less often.
For a w-bit word length, the Mersenne Twister generates integers in the range [0, 2^w − 1].
The Mersenne Twister algorithm is based on a matrix linear recurrence over a finite binary field F2. The algorithm is a twisted generalised feedback shift register (twisted GFSR, or TGFSR) of rational normal form (TGFSR(R)), with state bit reflection and tempering. The basic idea is to define a series x_i through a simple recurrence relation, and then output numbers of the form x_i T, where T is an invertible F2 matrix called a tempering matrix.
The general algorithm is characterized by the following quantities (some of these explanations make sense only after reading the rest of the algorithm):
with the restriction that 2^(nw − r) − 1 is a Mersenne prime. This choice simplifies the primitivity test and k-distribution test that are needed in the parameter search.
The series x is defined as a series of w-bit quantities with the recurrence relation:
    x_{k+n} := x_{k+m} ⊕ ((x_k^u ∣ x_{k+1}^l) A),   k = 0, 1, 2, …
where ∣ denotes concatenation of bit vectors (with upper bits on the left), ⊕ the bitwise exclusive or (XOR), x_k^u means the upper w − r bits of x_k, and x_{k+1}^l means the lower r bits of x_{k+1}. The twist transformation A is defined in rational normal form as:
    A = | 0        I_{w−1}           |
        | a_{w−1}  (a_{w−2}, …, a_0) |
with "I""n" − 1 as the ("n" − 1) × ("n" − 1) identity matrix. The rational normal form has the benefit that multiplication by "A" can be efficiently expressed as: (remember that here matrix multiplication is being done in "F"2, and therefore bitwise XOR takes the place of addition)
    xA = x >> 1,           if x_0 = 0
    xA = (x >> 1) ⊕ a,     if x_0 = 1
where x_0 is the lowest-order bit of x.
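That conditional form of the multiplication by A is a single shift plus an optional XOR. A minimal Python sketch, assuming the 32-bit variant's coefficient a = 9908B0DF16:

```python
A_COEFF = 0x9908B0DF  # twist coefficient a of the 32-bit MT19937 variant

def mult_A(x: int) -> int:
    """Multiply the w-bit row vector x by A over F2:
    shift right by one, then XOR in a iff the lowest-order bit of x is 1."""
    return (x >> 1) ^ (A_COEFF if x & 1 else 0)

assert mult_A(4) == 2                  # even x: a pure right shift
assert mult_A(3) == 1 ^ 0x9908B0DF     # odd x: shift, then XOR with a
```

Because the branch depends only on one bit, implementations often replace it with a table lookup `mag01[x & 1]`, trading the conditional for an index.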
As with TGFSR(R), the Mersenne Twister is cascaded with a tempering transform to compensate for the reduced dimensionality of equidistribution (because of the choice of A being in the rational normal form). Note that this is equivalent to using the matrix A′ = T^(−1)AT for T an invertible matrix, and therefore the analysis of the characteristic polynomial mentioned below still holds.
As with A, we choose a tempering transform to be easily computable, and so do not actually construct T itself. The tempering is defined in the case of the Mersenne Twister as
    y := x ⊕ ((x >> u) & d)
    y := y ⊕ ((y << s) & b)
    y := y ⊕ ((y << t) & c)
    z := y ⊕ (y >> l)
where x is the next value from the series, y a temporary intermediate value, z the value returned from the algorithm, with <<, >> as the bitwise left and right shifts, and & as the bitwise and. The first and last transforms are added in order to improve lower-bit equidistribution. From the property of TGFSR, s + t ≥ ⌊w/2⌋ − 1 is required to reach the upper bound of equidistribution for the upper bits.
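The tempering stage is four shift-and-mask lines of code. A sketch for the 32-bit variant, assuming MT19937's standard tempering constants (u, d) = (11, FFFFFFFF16), (s, b) = (7, 9D2C568016), (t, c) = (15, EFC6000016), l = 18:

```python
# MT19937 tempering parameters (assumed standard 32-bit values)
U, D = 11, 0xFFFFFFFF
S, B = 7, 0x9D2C5680
T, C = 15, 0xEFC60000
L = 18

def temper(x: int) -> int:
    """Apply the MT19937 tempering transform to one 32-bit state word."""
    y = x ^ ((x >> U) & D)
    y ^= (y << S) & B
    y ^= (y << T) & C
    return (y ^ (y >> L)) & 0xFFFFFFFF

assert temper(0) == 0   # tempering is linear over F2, so 0 maps to 0
```

Since T is invertible, each step can be undone; this is why observing 624 consecutive outputs suffices to reconstruct the full internal state.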
The coefficients for MT19937 are: (w, n, m, r) = (32, 624, 397, 31), a = 9908B0DF16, (u, d) = (11, FFFFFFFF16), (s, b) = (7, 9D2C568016), (t, c) = (15, EFC6000016), and l = 18.
Note that 32-bit implementations of the Mersenne Twister generally have "d" = FFFFFFFF16. As a result, the "d" is occasionally omitted from the algorithm description, since the bitwise and with "d" in that case has no effect.
The coefficients for MT19937-64 are: (w, n, m, r) = (64, 312, 156, 31), a = B5026F5AA96619E916, (u, d) = (29, 555555555555555516), (s, b) = (17, 71D67FFFEDA6000016), (t, c) = (37, FFF7EEE00000000016), and l = 43.
The state needed for a Mersenne Twister implementation is an array of n values of w bits each. To initialize the array, a w-bit seed value is used to supply x_0 through x_{n−1} by setting x_0 to the seed value and thereafter setting
    x_i := lowest w bits of (f × (x_{i−1} ⊕ (x_{i−1} >> (w − 2))) + i)
for "i" from 1 to "n"−1. The first value the algorithm then generates is based on "x""n", not on "x""0". The constant "f" forms another parameter to the generator, though not part of the algorithm proper. The value for "f" for MT19937 is 1812433253 and for MT19937-64 is 6364136223846793005.
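The seed expansion above can be sketched in a few lines of Python, assuming MT19937's parameters n = 624, w = 32, f = 1812433253:

```python
N, W, F = 624, 32, 1812433253  # MT19937 state size, word size, seeding multiplier

def seed_mt(seed: int) -> list:
    """Expand a single w-bit seed into the full n-word MT19937 state array."""
    state = [seed & 0xFFFFFFFF]
    for i in range(1, N):
        prev = state[-1]
        state.append((F * (prev ^ (prev >> (W - 2))) + i) & 0xFFFFFFFF)
    return state

state = seed_mt(5489)   # 5489 is the customary default seed
assert state[0] == 5489 and len(state) == 624
```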
In order to achieve the 2^(nw − r) − 1 theoretical upper limit of the period in a TGFSR, φ_B(t) must be a primitive polynomial, φ_B(t) being the characteristic polynomial of
formula_20
formula_21
The twist transformation improves the classical GFSR with the following key properties: the period reaches the theoretical upper limit 2^(nw − r) − 1 (except if initialized with 0), and equidistribution in n dimensions (e.g., linear congruential generators can at best manage reasonable distribution in five dimensions).
The following pseudocode implements the general Mersenne Twister algorithm. The constants w, n, m, r, a, u, d, s, b, t, c, l, and f are as in the algorithm description above. It is assumed that int represents a type sufficient to hold values with w bits:
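Putting the seeding, twist, and tempering together, the whole generator fits in a short class. This is a Python sketch under the standard 32-bit MT19937 parameters, not the reference implementation:

```python
class MT19937:
    """Sketch of the 32-bit Mersenne Twister (standard parameters assumed)."""
    W, N, M, R = 32, 624, 397, 31
    A = 0x9908B0DF
    U, D = 11, 0xFFFFFFFF
    S, B = 7, 0x9D2C5680
    T, C = 15, 0xEFC60000
    L = 18
    F = 1812433253
    MASK = 0xFFFFFFFF                    # keep every word to w = 32 bits

    def __init__(self, seed=5489):
        # Seed expansion: x[0] = seed, then the recurrence on the previous word.
        self.x = [seed & self.MASK]
        for i in range(1, self.N):
            prev = self.x[-1]
            self.x.append((self.F * (prev ^ (prev >> (self.W - 2))) + i) & self.MASK)
        self.i = self.N                  # force a twist before the first output

    def _twist(self):
        lower = (1 << self.R) - 1        # mask for the lowest r bits
        upper = self.MASK & ~lower       # mask for the highest w - r bits
        for k in range(self.N):
            y = (self.x[k] & upper) | (self.x[(k + 1) % self.N] & lower)
            xA = y >> 1
            if y & 1:                    # lowest-order bit set: XOR in a
                xA ^= self.A
            self.x[k] = self.x[(k + self.M) % self.N] ^ xA
        self.i = 0

    def next_u32(self):
        if self.i >= self.N:
            self._twist()
        y = self.x[self.i]
        self.i += 1
        y ^= (y >> self.U) & self.D      # tempering
        y ^= (y << self.S) & self.B
        y ^= (y << self.T) & self.C
        y ^= y >> self.L
        return y & self.MASK
```

Default-seeded with 5489, the 10,000th output of this sketch is 4123659995, the check value the C++ standard mandates for a default-constructed std::mt19937, which makes it easy to verify an implementation against the standard parameters.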
CryptMT is a stream cipher and cryptographically secure pseudorandom number generator which uses the Mersenne Twister internally. It was developed by Matsumoto and Nishimura alongside Mariko Hagita and Mutsuo Saito. It has been submitted to the eSTREAM project of the ECRYPT network. Unlike the Mersenne Twister or its other derivatives, CryptMT is patented.
MTGP is a variant of Mersenne Twister optimised for graphics processing units published by Mutsuo Saito and Makoto Matsumoto. The basic linear recurrence operations are extended from MT and parameters are chosen to allow many threads to compute the recursion in parallel, while sharing their state space to reduce memory load. The paper claims improved equidistribution over MT and performance on a very old GPU (Nvidia GTX260 with 192 cores) of 4.7 ms for 5×10^7 random 32-bit integers.
The SFMT (SIMD-oriented Fast Mersenne Twister) is a variant of Mersenne Twister, introduced in 2006, designed to be fast when it runs on 128-bit SIMD.
Intel SSE2 and PowerPC AltiVec are supported by SFMT. It is also used for games with the Cell BE in the PlayStation 3.
TinyMT is a variant of Mersenne Twister, proposed by Saito and Matsumoto in 2011. TinyMT uses just 127 bits of state space, a significant decrease compared to the original's 2.5 KiB of state. However, it has a period of 2^127 − 1, far shorter than the original, so it is only recommended by the authors in cases where memory is at a premium. | https://en.wikipedia.org/wiki?curid=45538 |
Social Darwinism
Social Darwinism is any of various theories of society which emerged in the United Kingdom, North America, and Western Europe in the 1870s, claiming to apply biological concepts of natural selection and survival of the fittest to sociology and politics. Social Darwinists argue that the strong should see their wealth and power increase while the weak should see their wealth and power decrease. Different social-Darwinist groups have differing views about which groups of people are considered to be "the strong" and which groups of people are considered to be "the weak", and they also hold different opinions about the precise mechanisms that should be used to reward strength and punish weakness. Many such views stress competition between individuals in "laissez-faire" capitalism, while others were used in support of authoritarianism, eugenics, racism, imperialism, fascism, Nazism, and struggle between national or racial groups.
Social Darwinism broadly declined in popularity as a purportedly scientific concept following World War I and was largely discredited by the end of World War II, partially due to its association with Nazism and partially due to a growing scientific consensus that it was scientifically groundless. Later theories that were categorised as social Darwinism were generally described as such as a critique by their opponents; their proponents did not identify themselves by such a label. Creationists have often maintained that social Darwinism—leading to policies designed to reward the most competitive—is a logical consequence of "Darwinism" (the theory of natural selection in biology). Biologists and historians have stated that this is a fallacy of appeal to nature, since the theory of natural selection is merely intended as a description of a biological phenomenon and should not be taken to imply that this phenomenon is "good" or that it ought to be used as a moral guide in human society. While most scholars recognize some historical links between the popularisation of Darwin's theory and forms of social Darwinism, they also maintain that social Darwinism is not a necessary consequence of the principles of biological evolution.
Scholars debate the extent to which the various social Darwinist ideologies reflect Charles Darwin's own views on human social and economic issues. His writings have passages that can be interpreted as opposing aggressive individualism, while other passages appear to promote it. Darwin's early evolutionary views and his opposition to slavery ran counter to many of the claims that social Darwinists would eventually make about the mental capabilities of the poor and colonial indigenes. After the publication of "On the Origin of Species" in 1859, one strand of Darwin's followers, led by Sir John Lubbock, argued that natural selection ceased to have any noticeable effect on humans once organised societies had been formed. However, some scholars argue that Darwin's view gradually changed and came to incorporate views from other theorists such as Herbert Spencer. Spencer published his Lamarckian evolutionary ideas about society before Darwin first published his hypothesis in 1859, and both Spencer and Darwin promoted their own conceptions of moral values. Spencer supported "laissez-faire" capitalism on the basis of his Lamarckian belief that struggle for survival spurred self-improvement which could be inherited. An important proponent in Germany was Ernst Haeckel, who popularized Darwin's thought and his personal interpretation of it, and used it as well to contribute to a new creed, the monist movement.
The term Darwinism was coined by Thomas Henry Huxley in his March 1861 review of "On the Origin of Species", and by the 1870s it was used to describe a range of concepts of evolution or development, without any specific commitment to Charles Darwin's theory of natural selection.
The first use of the phrase "social Darwinism" was in Joseph Fisher's 1877 article on "The History of Landholding in Ireland" which was published in the "Transactions of the Royal Historical Society". Fisher was commenting on how a system for borrowing livestock which had been called "tenure" had led to the false impression that the early Irish had already evolved or developed land tenure;
Despite the fact that Social Darwinism bears Charles Darwin's name, it is also linked today with others, notably Herbert Spencer, Thomas Malthus, and Francis Galton, the founder of eugenics. In fact, Spencer was not described as a social Darwinist until the 1930s, long after his death. The social Darwinism term first appeared in Europe in 1880, and the journalist Émile Gautier had coined the term with reference to a health conference in Berlin in 1877. Around 1900 it was used by sociologists, some being opposed to the concept. The term was popularized in the United States in 1944 by the American historian Richard Hofstadter, who used it in the ideological war effort against fascism to denote a reactionary creed which promoted competitive strife, racism and chauvinism. Hofstadter later also recognized (what he saw as) the influence of Darwinist and other evolutionary ideas upon those with collectivist views, enough to devise a term for the phenomenon, "Darwinist collectivism". Before Hofstadter's work the use of the term "social Darwinism" in English academic journals was quite rare.
Social Darwinism has many definitions, and some of them are incompatible with each other. As such, social Darwinism has been criticized for being an inconsistent philosophy, which does not lead to any clear political conclusions. For example, "The Concise Oxford Dictionary of Politics" states: Part of the difficulty in establishing sensible and consistent usage is that commitment to the biology of natural selection and to 'survival of the fittest' entailed nothing uniform either for sociological method or for political doctrine. A 'social Darwinist' could just as well be a defender of laissez-faire as a defender of state socialism, just as much an imperialist as a domestic eugenist.
The term "Social Darwinism" has rarely been used by advocates of the supposed ideologies or ideas; instead it has almost always been used pejoratively by its opponents. The term draws upon the common meaning of "Darwinism", which includes a range of evolutionary views, but in the late 19th century was applied more specifically to natural selection as first advanced by Charles Darwin to explain speciation in populations of organisms. The process includes competition between individuals for limited resources, popularly but inaccurately described by the phrase "survival of the fittest", a term coined by sociologist Herbert Spencer.
While the term has been applied to the claim that Darwin's theory of evolution by natural selection can be used to understand the social endurance of a nation or country, Social Darwinism commonly refers to ideas that predate Darwin's publication of "On the Origin of Species". Others whose ideas are given the label include the 18th century clergyman Thomas Malthus, and Darwin's cousin Francis Galton who founded eugenics towards the end of the 19th century.
The expansion of the British Empire fitted in with the broader notion of social Darwinism used from the 1870s onwards to account for the remarkable and universal phenomenon of "the Anglo-Saxon overflowing his boundaries", as phrased by the late-Victorian sociologist Benjamin Kidd in "Social Evolution", published in 1894. The concept also proved useful to justify what was seen by some as the inevitable extermination of "the weaker races who disappear before the stronger" not so much "through the effects of … our vices upon them" as "what may be called the virtues of our civilisation." Winston Churchill, a political proponent of eugenics, maintained that if fewer ‘feebleminded’ individuals were born, less crime would take place.
Herbert Spencer's ideas, like those of evolutionary progressivism, stemmed from his reading of Thomas Malthus, and his later theories were influenced by those of Darwin. However, Spencer's major work, "Progress: Its Law and Cause" (1857), was released two years before the publication of Darwin's "On the Origin of Species", and "First Principles" was printed in 1860.
In "The Social Organism" (1860), Spencer compares society to a living organism and argues that, just as biological organisms evolve through natural selection, society evolves and increases in complexity through analogous processes.
In many ways, Spencer's theory of cosmic evolution has much more in common with the works of Lamarck and Auguste Comte's positivism than with Darwin's.
Jeff Riggenbach argues that Spencer's view was that culture and education made a sort of Lamarckism possible and notes that Herbert Spencer was a proponent of private charity. However, the legacy of his social Darwinism was less than charitable.
Spencer's work also served to renew interest in the work of Malthus. While Malthus's work does not itself qualify as social Darwinism, his 1798 work "An Essay on the Principle of Population", was incredibly popular and widely read by social Darwinists. In that book, for example, the author argued that as an increasing population would normally outgrow its food supply, this would result in the starvation of the weakest and a Malthusian catastrophe.
According to Michael Ruse, Darwin read Malthus' famous "Essay on a Principle of Population" in 1838, four years after Malthus' death. Malthus himself anticipated the social Darwinists in suggesting that charity could exacerbate social problems.
Another of these social interpretations of Darwin's biological views, later known as eugenics, was put forth by Darwin's cousin, Francis Galton, in 1865 and 1869. Galton argued that just as physical traits were clearly inherited among generations of people, the same could be said for mental qualities (genius and talent). Galton argued that social morals needed to change so that heredity was a conscious decision in order to avoid both the over-breeding by less fit members of society and the under-breeding of the more fit ones.
In Galton's view, social institutions such as welfare and insane asylums were allowing inferior humans to survive and reproduce at levels faster than the more "superior" humans in respectable society, and if corrections were not soon taken, society would be awash with "inferiors". Darwin read his cousin's work with interest, and devoted sections of "Descent of Man" to discussion of Galton's theories. Neither Galton nor Darwin, though, advocated any eugenic policies restricting reproduction, due to their Whiggish distrust of government.
Friedrich Nietzsche's philosophy addressed the question of artificial selection, yet Nietzsche's principles did not concur with Darwinian theories of natural selection. Nietzsche's point of view on sickness and health, in particular, opposed him to the concept of biological adaptation as forged by Spencer's "fitness". Nietzsche criticized Haeckel, Spencer, and Darwin, sometimes under the same banner by maintaining that in specific cases, sickness was necessary and even helpful. Thus, he wrote:
Wherever progress is to ensue, deviating natures are of greatest importance. Every progress of the whole must be preceded by a partial weakening. The strongest natures retain the type, the weaker ones help to advance it.
Something similar also happens in the individual. There is rarely a degeneration, a truncation, or even a vice or any physical or moral loss without an advantage somewhere else. In a warlike and restless clan, for example, the sicklier man may have occasion to be alone, and may therefore become quieter and wiser; the one-eyed man will have one eye the stronger; the blind man will see deeper inwardly, and certainly hear better. To this extent, the famous theory of the survival of the fittest does not seem to me to be the only viewpoint from which to explain the progress of strengthening of a man or of a race.
Ernst Haeckel's recapitulation theory was not Darwinism, but rather attempted to combine the ideas of Goethe, Lamarck and Darwin. It was adopted by emerging social sciences to support the concept that non-European societies were "primitive", in an early stage of development towards the European ideal, but since then it has been heavily refuted on many fronts. Haeckel's works led to the formation of the Monist League in 1904 with many prominent citizens among its members, including the Nobel Prize winner Wilhelm Ostwald.
The simpler aspects of social Darwinism followed the earlier Malthusian ideas that humans, especially males, require competition in their lives in order to survive in the future. Further, the poor should have to provide for themselves and not be given any aid. However, amidst this climate, most social Darwinists of the early twentieth century actually supported better working conditions and salaries. Such measures would grant the poor a better chance to provide for themselves yet still distinguish those who are capable of succeeding from those who are poor out of laziness, weakness, or inferiority.
"Social Darwinism" was first described by Eduard Oscar Schmidt of the University of Strasbourg, reporting at a scientific and medical conference held in Munich in 1877. He "noted" how socialists, although opponents of Darwin's theory, used it to add force to their political arguments. Schmidt's essay first appeared in English in "Popular Science" in March 1879. There followed an anarchist tract published in Paris in 1880 entitled "Le darwinisme social" by Émile Gautier. However, the use of the term was very rare—at least in the English-speaking world (Hodgson, 2004)—until the American historian Richard Hofstadter published his influential "Social Darwinism in American Thought" (1944) during World War II.
Hypotheses of social evolution and cultural evolution were common in Europe. The Enlightenment thinkers who preceded Darwin, such as Hegel, often argued that societies progressed through stages of increasing development. Earlier thinkers also emphasized conflict as an inherent feature of social life. Thomas Hobbes's 17th century portrayal of the state of nature seems analogous to the competition for natural resources described by Darwin. Social Darwinism is distinct from other theories of social change because of the way it draws Darwin's distinctive ideas from the field of biology into social studies.
Darwin, unlike Hobbes, believed that this struggle for natural resources allowed individuals with certain physical and mental traits to succeed more frequently than others, and that these traits accumulated in the population over time, which under certain conditions could lead to the descendants being so different that they would be defined as a new species.
However, Darwin felt that "social instincts" such as "sympathy" and "moral sentiments" also evolved through natural selection, and that these resulted in the strengthening of societies in which they occurred, so much so that he wrote about it in "Descent of Man":
The following proposition seems to me in a high degree probable—namely, that any animal whatever, endowed with well-marked social instincts, the parental and filial affections being here included, would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well, or nearly as well developed, as in man. For, firstly, the social instincts lead an animal to take pleasure in the society of its fellows, to feel a certain amount of sympathy with them, and to perform various services for them.
Nazi Germany's justification for its aggression was regularly promoted in Nazi propaganda films, which depicted scenes such as beetles fighting in a laboratory setting to demonstrate the principles of "survival of the fittest", as in "Alles Leben ist Kampf" (English translation: "All Life is Struggle"). Hitler often refused to intervene in the promotion of officers and staff members, preferring instead to have them fight amongst themselves to force the "stronger" person to prevail—"strength" referring to those social forces void of virtue or principle. A key proponent was Alfred Rosenberg, who was later hanged at Nuremberg. Such ideas also helped to advance euthanasia in Germany, especially Action T4, which led to the murder of mentally ill and disabled people.
The argument that Nazi ideology was strongly influenced by social Darwinist ideas is often found in historical and social science literature. For example, the philosopher and historian Hannah Arendt analysed the historical development from a politically indifferent scientific Darwinism via social Darwinist ethics to racist ideology.
By 1985, creationists were taking up the argument that Nazi ideology was directly influenced by Darwinian evolutionary theory.
Such claims have been presented by creationists such as Jonathan Sarfati. Intelligent design creationism supporters have promoted this position as well. For example, it is a theme in the work of Richard Weikart, who is a historian at California State University, Stanislaus, and a senior fellow for the Center for Science and Culture of the Discovery Institute.
It is also a main argument in the 2008 intelligent-design/creationist movie "". These claims are widely criticized. The Anti-Defamation League has rejected such attempts to link Darwin's ideas with Nazi atrocities, stating that "Using the Holocaust in order to tarnish those who promote the theory of evolution is outrageous and trivializes the complex factors that led to the mass extermination of European Jewry." Robert J. Richards describes the link as a myth that ignores far more obvious causes of Nazism, including the "pervasive anti-Semitic miasma created by Christian apologists", and dismisses efforts to tie Darwin to Nazism as a "crude lever" used by religious fundamentalists to try to reduce public support for Darwin's theories.
Similar criticisms are sometimes applied (or misapplied) to other political or scientific theories that resemble social Darwinism, for example criticisms leveled at evolutionary psychology. For example, a critical reviewer of Weikart's book writes that "(h)is historicization of the moral framework of evolutionary theory poses key issues for those in sociobiology and evolutionary psychology, not to mention bioethicists, who have recycled many of the suppositions that Weikart has traced."
Another example is recent scholarship that portrays Ernst Haeckel's Monist League as a mystical progenitor of the Völkisch movement and, ultimately, of the Nazi Party of Adolf Hitler. Scholars opposed to this interpretation, however, have pointed out that the Monists were freethinkers who opposed all forms of mysticism, and that their organizations were immediately banned following the Nazi takeover in 1933 because of their association with a wide variety of causes including feminism, pacifism, human rights, and early gay rights movements.
It was during the Gilded Age that social Darwinism took deepest root in American society, chiefly through the rationale of late 19th-century industrial titans such as John D. Rockefeller and Andrew Carnegie. Nationwide monopolists of this type applied Darwin's theory, specifically the concept of natural selection, to explain corporate dominance in their respective fields and thus justify their exorbitant accumulations of wealth. Rockefeller, for example, proclaimed: "The growth of a large business is merely a survival of the fittest...the working out of a law of nature and a law of God." Robert Bork backed this notion of inherent characteristics as the sole determinant of survival in the business context when he said: "In America, 'the rich' are overwhelmingly people – entrepreneurs, small-business men, corporate executives, doctors, lawyers, etc. – who have gained their higher incomes through intelligence, imagination, and hard work." Moreover, William Graham Sumner lauded this cohort of industrial millionaires and further extended the theory into a 'corporate Darwinism'. He argued that societal progress supposedly depended on the "fittest families" passing on their wealth and inherited traits to their offspring, creating a lineage of superior citizens. Contemporary social scientists, however, repudiate such claims, arguing that economic status is not a direct function of one's inborn traits and moral worth.
In 1883, Sumner published a highly influential pamphlet entitled "What Social Classes Owe to Each Other", in which he insisted that the social classes owe each other nothing, synthesizing Darwin's findings with free-enterprise capitalism for his justification. According to Sumner, providing assistance to those unequipped or under-equipped to compete for resources would lead to a country in which the weak and inferior are encouraged to breed more like themselves, eventually dragging the country down. Sumner also believed that the American businessman was best equipped to win the struggle for existence, and concluded that taxes and regulations endanger his survival. The pamphlet makes no mention of Darwinism, and refers to Darwin only in a statement on the meaning of liberty: "There never has been any man, from the primitive barbarian up to a Humboldt or a Darwin, who could do as he had a mind to."
Sumner never fully embraced Darwinian ideas, and some contemporary historians do not believe that Sumner ever actually believed in social Darwinism. The great majority of American businessmen rejected the anti-philanthropic implications of the theory. Instead they gave millions to build schools, colleges, hospitals, art institutes, parks and many other institutions. Andrew Carnegie, who admired Spencer, was the leading philanthropist in the world (1890–1920), and a major leader against imperialism and warfare.
H. G. Wells was heavily influenced by Darwinist thoughts, and novelist Jack London wrote stories of survival that incorporated his views on social Darwinism. Film director Stanley Kubrick has been described as having held social Darwinist opinions.
Social Darwinism has influenced political, public-health and social movements in Japan since the late 19th and early 20th centuries. It was originally brought to Japan through the works of Francis Galton and Ernst Haeckel, as well as through American, British and French Lamarckian eugenic studies of the late 19th and early 20th centuries. Eugenics as a science was hotly debated at the beginning of the 20th century in "Jinsei-Der Mensch", the first eugenics journal in the empire. As Japan sought to close ranks with the West, the practice was adopted wholesale, along with colonialism and its justifications.
Social Darwinism was formally introduced to China through the translation by Yan Fu of Huxley's "Evolution and Ethics", in the course of an extensive series of translations of influential Western thought. Yan's translation strongly impacted Chinese scholars because he added national elements not found in the original. Yan Fu criticized Huxley from the perspective of Spencerian social Darwinism in his own annotations to the translation. He understood Spencer's sociology as "not merely analytical and descriptive, but prescriptive as well", and saw Spencer building on Darwin, whom Yan summarized thus:
By the 1920s, social Darwinism found expression in the promotion of eugenics by the Chinese sociologist Pan Guangdan.
When Chiang Kai-shek started the New Life movement in 1934, he
Social evolution theories in Germany gained large popularity in the 1860s and at first carried a strong antiestablishment connotation. Social Darwinism allowed people to counter the alliance of "Thron und Altar" (throne and altar), the intertwined establishment of clergy and nobility, and likewise provided the idea of progressive change and evolution of society as a whole. Ernst Haeckel propagated Darwinism both as a part of natural history and as a suitable basis for a modern Weltanschauung, a world view grounded in scientific reasoning, through his Monist League. Friedrich von Hellwald played a strong role in popularizing it in Austria. Darwin's work served as a catalyst to popularize evolutionary thinking.
An aristocratic turn of sorts, the use of the struggle for life as the basis of social Darwinism "sensu stricto", emerged around 1900, following Alexander Tille's 1895 work "Entwicklungsethik" (Ethics of Evolution), which called for a move "from Darwin till Nietzsche". Later interpretations moved toward ideologies propagating a racist and hierarchical society and provided ground for the subsequent radical versions of social Darwinism.
Social Darwinism came to play a major role in the ideology of Nazism, where it was combined with a similarly pseudo-scientific theory of racial hierarchy in order to identify the Germans as a part of what the Nazis regarded as an Aryan or Nordic master race. Nazi social Darwinist beliefs led them to retain business competition and private property as economic engines. Nazism likewise opposed social welfare, based on a social Darwinist belief that the weak and feeble should perish. This association with Nazism, coupled with increasing recognition that it was scientifically unfounded, contributed to the broader rejection of Social Darwinism after the end of World War II.
Social Darwinism has many definitions, and some of them are incompatible with each other. As such, social Darwinism has been criticized for being an inconsistent philosophy, which does not lead to any clear political conclusions. For example, "The Concise Oxford Dictionary of Politics" states:
Part of the difficulty in establishing sensible and consistent usage is that commitment to the biology of natural selection and to 'survival of the fittest' entailed nothing uniform either for sociological method or for political doctrine. A 'social Darwinist' could just as well be a defender of laissez-faire as a defender of state socialism, just as much an imperialist as a domestic eugenist.
Social Darwinism was predominantly found in laissez-faire societies where the prevailing view was that of an individualist order to society. As such, social Darwinism supposed that human progress would generally favor the most individualistic races, which were those perceived as stronger. A different form of social Darwinism was part of the ideological foundations of Nazism and other fascist movements. This form did not envision survival of the fittest within an individualist order of society, but rather advocated a type of racial and national struggle where the state directed human breeding through eugenics. Names such as "Darwinian collectivism" or "Reform Darwinism" have been suggested to describe these views, in order to differentiate them from the individualist type of social Darwinism.
As mentioned above, social Darwinism has often been linked to nationalism and imperialism. During the age of New Imperialism, the concepts of evolution justified the exploitation of "lesser breeds without the law" by "superior races". To elitists, strong nations were composed of white people who were successful at expanding their empires, and as such, these strong nations would survive in the struggle for dominance. With this attitude, Europeans, except for Christian missionaries, seldom adopted the customs and languages of local people under their empires.
Peter Kropotkin argued in his 1902 book "" that Darwin did not define the fittest as the strongest, or most clever, but recognized that the fittest could be those who cooperated with each other. In many animal societies, "struggle is replaced by co-operation".
It may be that at the outset Darwin himself was not fully aware of the generality of the factor which he first invoked for explaining one series only of facts relative to the accumulation of individual variations in incipient species. But he foresaw that the term [evolution] which he was introducing into science would lose its philosophical and its only true meaning if it were to be used in its narrow sense only—that of a struggle between separate individuals for the sheer means of existence. And at the very beginning of his memorable work he insisted upon the term being taken in its "large and metaphorical sense including dependence of one being on another, and including (which is more important) not only the life of the individual, but success in leaving progeny." [Quoting "Origin of Species," chap. iii, p. 62 of first edition.]
While he himself was chiefly using the term in its narrow sense for his own special purpose, he warned his followers against committing the error (which he seems once to have committed himself) of overrating its narrow meaning. In "The Descent of Man" he gave some powerful pages to illustrate its proper, wide sense. He pointed out how, in numberless animal societies, the struggle between separate individuals for the means of existence disappears, how struggle is replaced by co-operation, and how that substitution results in the development of intellectual and moral faculties which secure to the species the best conditions for survival. He intimated that in such cases the fittest are not the physically strongest, nor the cunningest, but those who learn to combine so as mutually to support each other, strong and weak alike, for the welfare of the community. "Those communities", he wrote, "which included the greatest number of the most sympathetic members would flourish best, and rear the greatest number of offspring" (2nd edit., p. 163). The term, which originated from the narrow Malthusian conception of competition between each and all, thus lost its narrowness in the mind of one who knew Nature.
Noam Chomsky briefly discussed Kropotkin's views in an 8 July 2011 YouTube video from Renegade Economist, in which he said Kropotkin argued
... the exact opposite [of Social Darwinism]. He argued that on Darwinian grounds, you would expect cooperation and mutual aid to develop leading towards community, workers' control and so on. Well, you know, he didn't prove his point. It's at least as well argued as Herbert Spencer is ...
Refugee
A refugee, generally speaking, is a displaced person who has been forced to cross national boundaries and who cannot return home safely (see Definitions for more details). Such a person may be called an asylum seeker until granted refugee status by the contracting state or the United Nations High Commissioner for Refugees (UNHCR) if they formally make a claim for asylum.
The lead international agency coordinating refugee protection is the United Nations Office of the UNHCR. The United Nations has a second Office for refugees, the United Nations Relief and Works Agency (UNRWA), which is solely responsible for supporting the large majority of Palestinian refugees.
Similar terms in other languages have described an event marking migration of a specific population from a place of origin, such as the biblical account of Israelites fleeing from Assyrian conquest ("circa" 740 BCE), or the asylum found by the prophet Muhammad and his emigrant companions with helpers in Yathrib (later Medina) after they fled from persecution in Mecca. In English, the term "refugee" derives from the root word "refuge", from Old French "refuge", meaning "hiding place". It refers to "shelter or protection from danger or distress", from Latin "fugere", "to flee", and "refugium", "a taking [of] refuge, place to flee back to". In Western history, the term was first applied to French Protestant Huguenots looking for a safe place against Catholic persecution after the first Edict of Fontainebleau in 1540. The word appeared in the English language when French Huguenots fled to Britain in large numbers after the 1685 Edict of Fontainebleau (the revocation of the 1598 Edict of Nantes) in France and the 1687 Declaration of Indulgence in England and Scotland. The word meant "one seeking asylum", until around 1914, when it evolved to mean "one fleeing home", applied in this instance to civilians in Flanders heading west to escape fighting in World War I.
The first modern definition of international refugee status came about under the League of Nations in 1921 from the Commission for Refugees. Following World War II, and in response to the large numbers of people fleeing Eastern Europe, the UN 1951 Refugee Convention defined "refugee" (in Article 1.A.2) as any person who:
"owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence as a result of such events, is unable or, owing to such fear, is unwilling to return to it."
In 1967, this definition was essentially confirmed by the UN Protocol Relating to the Status of Refugees.
The Convention Governing the Specific Aspects of Refugee Problems in Africa expanded the 1951 definition, which the Organization of African Unity adopted in 1969:"Every person who, owing to external aggression, occupation, foreign domination or events seriously disturbing public order in either part or the whole of his country of origin or nationality, is compelled to leave his place of habitual residence in order to seek refuge in another place outside his country of origin or nationality."
The 1984 regional, non-binding Latin-American Cartagena Declaration on Refugees includes:
"persons who have fled their country because their lives, safety or freedom have been threatened by generalized violence, foreign aggression, internal conflicts, massive violation of human rights or other circumstances which have seriously disturbed public order."
As of 2011, the UNHCR itself, in addition to the 1951 definition, recognizes persons as refugees:
"who are outside their country of nationality or habitual residence and unable to return there owing to serious and indiscriminate threats to life, physical integrity or freedom resulting from generalized violence or events seriously disturbing public order."
The European Union's minimum-standards definition of refugee, set out in Art. 2 (c) of Directive No. 2004/83/EC, essentially reproduces the narrow definition of refugee offered by the UN 1951 Convention; nevertheless, by virtue of Articles 2 (e) and 15 of the same Directive, persons who have fled war-caused generalized violence are, under certain conditions, eligible for a complementary form of protection, called subsidiary protection. The same form of protection is provided for displaced people who, without being refugees, would nevertheless be exposed, if returned to their countries of origin, to the death penalty, torture or other inhuman or degrading treatment.
The idea that a person who sought sanctuary in a holy place could not be harmed without inviting divine retribution was familiar to the ancient Greeks and ancient Egyptians. However, the right to seek asylum in a church or other holy place was first codified in law by King Æthelberht of Kent in about AD 600. Similar laws were implemented throughout Europe in the Middle Ages. The related concept of political exile also has a long history: Ovid was sent to Tomis; Voltaire was sent to England. By the 1648 Peace of Westphalia, nations recognized each other's sovereignty. However, it was not until the advent of romantic nationalism in late 18th-century Europe that nationalism gained sufficient prevalence for the phrase "country of nationality" to become practically meaningful, and for border crossing to require that people provide identification.
The term "refugee" sometimes applies to people who might fit the definition outlined by the 1951 Convention, were it applied retroactively. There are many candidates. For example, after the Edict of Fontainebleau in 1685 outlawed Protestantism in France, hundreds of thousands of Huguenots fled to England, the Netherlands, Switzerland, South Africa, Germany and Prussia. The repeated waves of pogroms that swept Eastern Europe in the 19th and early 20th centuries prompted mass Jewish emigration (more than 2 million Russian Jews emigrated in the period 1881–1920). Beginning in the 19th century, Muslim people emigrated to Turkey from Europe. The Balkan Wars of 1912–1913 caused 800,000 people to leave their homes. Various groups of people were officially designated refugees beginning in World War I.
The first international co-ordination of refugee affairs came with the creation by the League of Nations in 1921 of the High Commission for Refugees and the appointment of Fridtjof Nansen as its head. Nansen and the Commission were charged with assisting the approximately 1,500,000 people who fled the Russian Revolution of 1917 and the subsequent civil war (1917–1921), most of them aristocrats fleeing the Communist government. It is estimated that about 800,000 Russian refugees became stateless when Lenin revoked citizenship for all Russian expatriates in 1921.
In 1923, the mandate of the Commission was expanded to include the more than one million Armenians who left Turkish Asia Minor in 1915 and 1923 due to a series of events now known as the Armenian Genocide. Over the next several years, the mandate was expanded further to cover Assyrians and Turkish refugees. In all of these cases, a refugee was defined as a person in a group for which the League of Nations had approved a mandate, as opposed to a person to whom a general definition applied.
The 1923 population exchange between Greece and Turkey involved about two million people (around 1.5 million Anatolian Greeks and 500,000 Muslims in Greece) most of whom were forcibly repatriated and denaturalized from homelands of centuries or millennia (and guaranteed the nationality of the destination country) by a treaty promoted and overseen by the international community as part of the Treaty of Lausanne (1923).
The U.S. Congress passed the Emergency Quota Act in 1921, followed by the Immigration Act of 1924. The Immigration Act of 1924 was aimed at further restricting the Southern and Eastern Europeans, especially Jews, Italians and Slavs, who had begun to enter the country in large numbers beginning in the 1890s. Most European refugees (principally Jews and Slavs) fleeing the Nazis and the Soviet Union were barred from going to the United States until after World War II.
In 1930, the Nansen International Office for Refugees (Nansen Office) was established as a successor agency to the Commission. Its most notable achievement was the Nansen passport, a refugee travel document, for which it was awarded the 1938 Nobel Peace Prize. The Nansen Office was plagued by problems of financing, an increase in refugee numbers, and a lack of co-operation from some member states, which led to mixed success overall.
However, the Nansen Office managed to lead fourteen nations to ratify the 1933 Refugee Convention, an early, and relatively modest, attempt at a human rights charter, and in general assisted around one million refugees worldwide.
The rise of Nazism led to such a very large increase in the number of refugees from Germany that in 1933 the League created a high commission for refugees coming from Germany. Besides other measures by the Nazis which created fear and flight, Jews were stripped of German citizenship by the "Reich Citizenship Law" of 1935. On 4 July 1936 an agreement was signed under League auspices that defined a refugee coming from Germany as "any person who was settled in that country, who does not possess any nationality other than German nationality, and in respect of whom it is established that in law or in fact he or she does not enjoy the protection of the Government of the Reich" (article 1).
The mandate of the High Commission was subsequently expanded to include persons from Austria and Sudetenland, which Germany annexed after 1 October 1938 in accordance with the Munich Agreement. According to the Institute for Refugee Assistance, the actual count of refugees from Czechoslovakia on 1 March 1939 stood at almost 150,000. Between 1933 and 1939, about 200,000 Jews fleeing Nazism were able to find refuge in France, while at least 55,000 Jews were able to find refuge in Palestine before the British authorities closed that destination in 1939.
On 31 December 1938, both the Nansen Office and High Commission were dissolved and replaced by the Office of the High Commissioner for Refugees under the Protection of the League. This coincided with the flight of several hundred thousand Spanish Republicans to France after their defeat by the Nationalists in 1939 in the Spanish Civil War.
The conflict and political instability during World War II led to massive numbers of refugees (see World War II evacuation and expulsion). In 1943, the Allies created the United Nations Relief and Rehabilitation Administration (UNRRA) to provide aid to areas liberated from Axis powers, including parts of Europe and China. By the end of the war, Europe had more than 40 million refugees. UNRRA was involved in returning over seven million refugees, then commonly referred to as displaced persons or DPs, to their country of origin and setting up displaced persons camps for one million refugees who refused to be repatriated. Even two years after the end of the war, some 850,000 people still lived in DP camps across Western Europe (Mark Wyman, "DPs: Europe's Displaced Persons, 1945–1951"). After the establishment of Israel in 1948, Israel accepted more than 650,000 refugees by 1950. By 1953, over 250,000 refugees were still in Europe, most of them old, infirm, crippled, or otherwise disabled.
After the Soviet armed forces captured eastern Poland from the Germans in 1944, the Soviets unilaterally declared a new frontier between the Soviet Union and Poland approximately at the Curzon Line, despite the protestations of the Polish government-in-exile in London and of the western Allies at the Teheran Conference and the Yalta Conference of February 1945. After the German surrender on 7 May 1945, the Allies occupied the remainder of Germany, and the Berlin Declaration of 5 June 1945 confirmed the four-power division of Allied-occupied Germany agreed at the Yalta Conference, which stipulated the continued existence of the German Reich as a whole, including its eastern territories as of 31 December 1937. This did not affect Poland's eastern border, and Stalin refused to withdraw from these eastern Polish territories.
In the last months of World War II, about five million German civilians from the German provinces of East Prussia, Pomerania and Silesia fled the advance of the Red Army from the east and became refugees in Mecklenburg, Brandenburg and Saxony. Since the spring of 1945, the Poles had been forcefully expelling the remaining German population of these provinces. When the Allies met at the Potsdam Conference on 17 July 1945, a chaotic refugee situation faced the occupying powers. The Potsdam Agreement, signed on 2 August 1945, defined the Polish western border as that of 1937 (Article VIII), placing one fourth of Germany's territory under provisional Polish administration. Article XII ordered that the remaining German populations in Poland, Czechoslovakia and Hungary be transferred west in an "orderly and humane" manner. (See Flight and expulsion of Germans (1944–50).)
Although not approved by the Allies at Potsdam, hundreds of thousands of ethnic Germans living in Yugoslavia and Romania were deported to slave labour in the Soviet Union, to Allied-occupied Germany, and subsequently to the German Democratic Republic (East Germany), Austria and the Federal Republic of Germany (West Germany). This entailed the largest population transfer in history: in all, 15 million Germans were affected, and more than two million perished during the expulsions of the German population. Between the end of the war and the erection of the Berlin Wall in 1961, more than 563,700 refugees from East Germany traveled to West Germany for asylum from the Soviet occupation.
During the same period, millions of former Russian citizens were forcefully repatriated against their will into the USSR. On 11 February 1945, at the conclusion of the Yalta Conference, the United States and United Kingdom signed a Repatriation Agreement with the USSR. The interpretation of this Agreement resulted in the forcible repatriation of all Soviets regardless of their wishes. When the war ended in May 1945, British and United States civilian authorities ordered their military forces in Europe to deport to the Soviet Union millions of former residents of the USSR, including many persons who had left Russia and established different citizenship decades before. The forced repatriation operations took place from 1945 to 1947.
At the end of World War II, there were more than 5 million "displaced persons" from the Soviet Union in Western Europe. About 3 million had been forced laborers (Ostarbeiters) in Germany and occupied territories. The Soviet POWs and the Vlasov men were put under the jurisdiction of SMERSH (Death to Spies). Of the 5.7 million Soviet prisoners of war captured by the Germans, 3.5 million had died while in German captivity by the end of the war. The survivors on their return to the USSR were treated as traitors (see Order No. 270). Over 1.5 million surviving Red Army soldiers imprisoned by the Nazis were sent to the Gulag.
Poland and Soviet Ukraine conducted population exchanges following the imposition of a new Poland-Soviet border at the Curzon Line in 1944. About 2,100,000 Poles were expelled west of the new border (see Repatriation of Poles), while about 450,000 Ukrainians were expelled to the east of the new border. The population transfer to Soviet Ukraine occurred from September 1944 to May 1946 (see Repatriation of Ukrainians). A further 200,000 Ukrainians left southeast Poland more or less voluntarily between 1944 and 1945.
According to the report of the U.S. Committee for Refugees (1995), 10 to 15 percent of Azerbaijan's population of 7.5 million were refugees or displaced people. Among them were 228,840 refugees who had fled Armenia in 1988 as a result of Armenia's deportation policy against ethnic Azerbaijanis.
The International Refugee Organization (IRO) was founded on 20 April 1946, and took over the functions of the United Nations Relief and Rehabilitation Administration, which was shut down in 1947. While the handover was originally planned to take place at the beginning of 1947, it did not occur until July 1947. The International Refugee Organization was a temporary organization of the United Nations (UN), which itself had been founded in 1945, with a mandate to largely finish the UNRRA's work of repatriating or resettling European refugees. It was dissolved in 1952 after resettling about one million refugees. The definition of a refugee at this time was an individual with either a Nansen passport or a "Certificate of identity" issued by the International Refugee Organization.
The Constitution of the International Refugee Organization, adopted by the United Nations General Assembly on 15 December 1946, specified the agency's field of operations. Controversially, this defined "persons of German ethnic origin" who had been expelled, or were to be expelled from their countries of birth into the postwar Germany, as individuals who would "not be the concern of the Organization." This excluded from its purview a group that exceeded in number all the other European displaced persons put together. Also, because of disagreements between the Western allies and the Soviet Union, the IRO only worked in areas controlled by Western armies of occupation.
With the occurrence of major instances of diaspora and forced migration, the study of their causes and implications emerged as a legitimate interdisciplinary area of research, growing steadily from the mid-to-late 20th century, after World War II. Although significant contributions had been made before, the latter half of the 20th century saw the establishment of institutions dedicated to the study of refugees, such as the Association for the Study of the World Refugee Problem, which was closely followed by the founding of the United Nations High Commissioner for Refugees. In particular, the 1981 volume of the "International Migration Review" defined refugee studies as "a comprehensive, historical, interdisciplinary and comparative perspective which focuses on the consistencies and patterns in the refugee experience." Following its publication, the field saw a rapid increase in academic interest and scholarly inquiry, which has continued to the present. Most notably, in 1988 the "Journal of Refugee Studies" was established as the field's first major interdisciplinary journal.
The emergence of refugee studies as a distinct field of study has been criticized by scholars due to terminological difficulty. Since no universally accepted definition for the term "refugee" exists, the academic respectability of the policy-based definition, as outlined in the 1951 Refugee Convention, is disputed. Additionally, academics have critiqued the lack of a theoretical basis of refugee studies and dominance of policy-oriented research. In response, scholars have attempted to steer the field toward establishing a theoretical groundwork of refugee studies through "situating studies of particular refugee (and other forced migrant) groups in the theories of cognate areas (and major disciplines), [providing] an opportunity to use the particular circumstances of refugee situations to illuminate these more general theories and thus participate in the development of social science, rather than leading refugee studies into an intellectual cul-de-sac." Thus, the term "refugee" in the context of refugee studies can be referred to as "legal or descriptive rubric", encompassing socioeconomic backgrounds, personal histories, psychological analyses, and spiritualities.
Headquartered in Geneva, Switzerland, the Office of the United Nations High Commissioner for Refugees (UNHCR) was established on 14 December 1950. It protects and supports refugees at the request of a government or the United Nations and assists in providing durable solutions, such as return or resettlement. All refugees in the world are under UNHCR mandate except Palestinian refugees, who fled the current state of Israel between 1947 and 1949, as a result of the 1948 Palestine War. These refugees are assisted by the United Nations Relief and Works Agency (UNRWA). However, Palestinian Arabs who fled the West Bank and Gaza after 1949 (for example, during the 1967 Six-Day War) are under the jurisdiction of the UNHCR. Moreover, the UNHCR also provides protection and assistance to other categories of displaced persons: asylum seekers, refugees who returned home voluntarily but still need help rebuilding their lives, local civilian communities directly affected by large refugee movements, stateless people and so-called internally displaced people (IDPs), as well as people in refugee-like and IDP-like situations.
The agency is mandated to lead and co-ordinate international action to protect refugees and to resolve refugee problems worldwide. Its primary purpose is to safeguard the rights and well-being of refugees. It strives to ensure that everyone can exercise the right to seek asylum and find safe refuge in another state or territory and to offer "durable solutions" to refugees and refugee hosting countries.
A refugee camp is a place built by governments or NGOs (such as the Red Cross) to receive refugees, internally displaced persons or sometimes also other migrants. It is usually designed to offer acute and temporary accommodation and services, and more permanent facilities and structures are often banned. People may stay in these camps for many years, receiving emergency food, education and medical aid until it is safe enough to return to their country of origin.
There, refugees are at risk of disease, child soldier and terrorist recruitment, and physical and sexual violence. There are estimated to be 700 refugee camp locations worldwide.
Not all refugees who are supported by the UNHCR live in refugee camps. More than half, in fact, live in urban settings, such as the ~60,000 Iraqi refugees in Damascus (Syria) and the ~30,000 Sudanese refugees in Cairo (Egypt).
The residency status in the host country whilst under temporary UNHCR protection is very uncertain, as refugees are only granted temporary visas that have to be regularly renewed. Rather than only safeguarding the rights and basic well-being of refugees in camps or in urban settings on a temporary basis, the UNHCR's ultimate goal is to find one of three durable solutions for refugees: integration, repatriation or resettlement.
Local integration aims to provide the refugee with a permanent right to stay in the country of asylum, including, in some situations, as a naturalized citizen. It follows the formal granting of refugee status by the country of asylum. It is difficult to quantify the number of refugees who settled and integrated in their first country of asylum, and only the number of naturalisations can give an indication. In 2014 Tanzania granted citizenship to 162,000 refugees from Burundi, and in 1982 to 32,000 Rwandan refugees. Mexico naturalised 6,200 Guatemalan refugees in 2001.
Voluntary return of refugees to their country of origin, in safety and dignity, is based on their free will and their informed decision. In recent years, parts of, or even entire, refugee populations have been able to return to their home countries: e.g. 120,000 Congolese refugees returned from the Republic of Congo to the DRC, 30,000 Angolans returned home from the DRC and Botswana, Ivorian refugees returned from Liberia, Afghans from Pakistan, and Iraqis from Syria. In 2013, the governments of Kenya and Somalia also signed a tripartite agreement facilitating the repatriation of refugees from Somalia. The UNHCR and the IOM offer assistance to refugees who want to return voluntarily to their home countries. Many developed countries also have Assisted Voluntary Return (AVR) programmes for asylum seekers who want to go back or were refused asylum.
Third country resettlement involves the assisted transfer of refugees from the country in which they have sought asylum to a safe third country that has agreed to admit them as refugees. This can be for permanent settlement or limited to a certain number of years. It is the third durable solution and it can only be considered once the two other solutions have proved impossible. The UNHCR has traditionally seen resettlement as the least preferable of the "durable solutions" to refugee situations. However, in April 2000 the then UN High Commissioner for Refugees, Sadako Ogata, stated "Resettlement can no longer be seen as the least-preferred durable solution; in many cases it is the "only" solution for refugees."
UNHCR's mandate has gradually been expanded to include protecting and providing humanitarian assistance to internally displaced persons (IDPs) and people in IDP-like situations. These are civilians who have been forced to flee their homes, but who have not reached a neighboring country. IDPs do not fit the legal definition of a refugee under the 1951 Refugee Convention, 1967 Protocol and the 1969 Organization for African Unity Convention, because they have not left their country. As the nature of war has changed in the last few decades, with more and more internal conflicts replacing interstate wars, the number of IDPs has increased significantly.
The term refugee is often used in different contexts: in everyday usage it refers to a forcibly displaced person who has fled their country of origin; in a more specific context it refers to such a person who was, on top of that, granted refugee status in the country the person fled to. Even more exclusive is the Convention refugee status which is given only to persons who fall within the refugee definition of the 1951 Convention and the 1967 Protocol.
To receive refugee status, a person must have applied for asylum, making them—while waiting for a decision—an asylum seeker. However, a displaced person otherwise legally entitled to refugee status may never apply for asylum, or may not be allowed to apply in the country they fled to and thus may not have official asylum seeker status.
Once a displaced person is granted refugee status they enjoy certain rights as agreed in the 1951 Refugee convention. Not all countries have signed and ratified this convention and some countries do not have a legal procedure for dealing with asylum seekers.
An asylum seeker is a displaced person or immigrant who has formally sought the protection of the state they fled to, as well as the right to remain in this country, and who is waiting for a decision on this formal application. An asylum seeker may have applied for Convention refugee status or for complementary forms of protection. Asylum is thus a category that includes different forms of protection. Which form of protection is offered depends on the legal definition that best describes the asylum seeker's reasons to flee. Once the decision is made, the asylum seeker either receives Convention refugee status or a complementary form of protection and can stay in the country, or is refused asylum and then often has to leave. Only after the state, territory or the UNHCR—wherever the application was made—recognises the protection needs does the asylum seeker "officially" receive refugee status. This carries certain rights and obligations, according to the legislation of the receiving country.
Quota refugees do not need to apply for asylum on arrival in the third countries as they already went through the UNHCR refugee status determination process whilst being in the first country of asylum and this is usually accepted by the third countries.
To receive refugee status, a displaced person must go through a Refugee Status Determination (RSD) process, which is conducted by the government of the country of asylum or the UNHCR, and is based on international, regional or national law. RSD can be done on a case by case basis as well as for whole groups of people. Which of the two processes is used often depends on the size of the influx of displaced persons.
There is no specific method mandated for RSD (apart from the commitment to the 1951 Refugee Convention) and it is subject to the overall efficacy of the country's internal administrative and judicial system as well as the characteristics of the refugee flow to which the country responds. This lack of a procedural direction could create a situation where political and strategic interests override humanitarian considerations in the RSD process. There are also no fixed interpretations of the elements in the 1951 Refugee Convention and countries may interpret them differently (see also refugee roulette).
However, in 2013 the UNHCR conducted RSD in more than 50 countries and co-conducted it in parallel with or jointly with governments in another 20 countries, making it the second largest RSD body in the world. The UNHCR follows a set of guidelines described in the "Handbook and Guidelines on Procedures and Criteria for Determining Refugee Status" to determine which individuals are eligible for refugee status.
Refugee rights encompass customary law, peremptory norms and international legal instruments. If the entity granting refugee status is a state that has signed the 1951 Refugee Convention, then the refugee has the right to employment. The following further rights and obligations apply to refugees:
Even in a supposedly "post-conflict" environment, it is not a simple process for refugees to return home. The UN Pinheiro Principles are guided by the idea that people not only have the right to return home, but also the right to the same property. It seeks to return to the pre-conflict status quo and ensure that no one profits from violence. Yet this is a very complex issue and every situation is different; conflict is a highly transformative force and the pre-war status-quo can never be reestablished completely, even if that were desirable (it may have caused the conflict in the first place). Therefore, the following are of particular importance to the right to return:
Refugees who were resettled to a third country will likely lose the indefinite leave to remain in this country if they return to their country of origin or the country of first asylum.
Non-refoulement is the right not to be returned to a place of persecution and is the foundation for international refugee law, as outlined in the 1951 Convention Relating to the Status of Refugees. The right to non-refoulement is distinct from the right to asylum. To respect the right to asylum, states must not deport genuine refugees. In contrast, the right to non-refoulement allows states to transfer genuine refugees to third party countries with respectable human rights records. The portable procedural model, proposed by political philosopher Andy Lamey, emphasizes the right to non-refoulement by guaranteeing refugees three procedural rights (to a verbal hearing, to legal counsel, and to judicial review of detention decisions) and ensuring those rights in the constitution. This proposal attempts to strike a balance between the interest of national governments and the interests of refugees.
Family reunification (which can also be a form of resettlement) is a recognized reason for immigration in many countries. Divided families have the right to be reunited if a family member with permanent right of residency applies for the reunification and can prove the people on the application were a family unit before arrival and wish to live as a family unit since separation. If application is successful this enables the rest of the family to immigrate to that country as well.
Those states that signed the Convention Relating to the Status of Refugees are obliged to issue travel documents (i.e. "Convention Travel Document") to refugees lawfully residing in their territory. It is a valid travel document in place of a passport, however, it cannot be used to travel to the country of origin, i.e. from where the refugee fled.
Once refugees or asylum seekers have found a safe place and the protection of a state or territory outside their territory of origin, they are discouraged from leaving again and seeking protection in another country. If they do move onward into a second country of asylum, this movement is called "irregular movement" by the UNHCR (see also asylum shopping). UNHCR support in the second country may be less than in the first country, and they can even be returned to the first country.
World Refugee Day has occurred annually on 20 June since 2000 by a special United Nations General Assembly Resolution. 20 June had previously been commemorated as "African Refugee Day" in a number of African countries.
In the United Kingdom World Refugee Day is celebrated as part of Refugee Week. Refugee Week is a nationwide festival designed to promote understanding and to celebrate the cultural contributions of refugees, and features many events such as music, dance and theatre.
In the Roman Catholic Church, the World Day of Migrants and Refugees has been celebrated in January each year since it was instituted in 1914 by Pope Pius X.
Displacement is a long lasting reality for most refugees. Two-thirds of all refugees around the world have been displaced for over three years, which is known as being in 'protracted displacement'. 50% of refugees – around 10 million people – have been displaced for over ten years.
The Overseas Development Institute has found that aid programmes need to move from short-term models of assistance (such as food or cash handouts) to more sustainable long-term programmes that help refugees become more self-reliant. This can involve tackling difficult legal and economic environments, by improving social services, job opportunities and laws.
Refugees typically report poorer levels of health, compared to other immigrants and the non-immigrant population.
Apart from physical wounds or starvation, a large percentage of refugees develop symptoms of post-traumatic stress disorder (PTSD) or depression. These long-term mental problems can severely impede the functionality of the person in everyday situations; it makes matters even worse for displaced persons who are confronted with a new environment and challenging situations. They are also at high risk for suicide.
Among other symptoms, post-traumatic stress disorder involves anxiety, over-alertness, sleeplessness, chronic fatigue syndrome, motor difficulties, failing short term memory, amnesia, nightmares and sleep-paralysis. Flashbacks are characteristic to the disorder: the patient experiences the traumatic event, or pieces of it, again and again. Depression is also characteristic for PTSD-patients and may also occur without accompanying PTSD.
PTSD was diagnosed in 34.1% of Palestinian children, most of whom were refugees, males, and working. The participants were 1,000 children aged 12 to 16 years from governmental, private, and United Nations Relief and Works Agency (UNRWA) schools in East Jerusalem and various governorates in the West Bank.
Another study showed that 28.3% of Bosnian refugee women had symptoms of PTSD three or four years after their arrival in Sweden. These women also had significantly higher risks of symptoms of depression, anxiety, and psychological distress than Swedish-born women. For depression the odds ratio was 9.50 among Bosnian women.
A study by the Department of Pediatrics and Emergency Medicine at the Boston University School of Medicine demonstrated that twenty percent of Sudanese refugee minors living in the United States had a diagnosis of post-traumatic stress disorder. They were also more likely to have worse scores on all the Child Health Questionnaire subscales.
In a study for the United Kingdom, refugees were found to be 4 percentage points more likely to report a mental health problem compared to the non-immigrant population. This contrasts with the results for other immigrant groups, which were less likely to report a mental health problem compared to the non-immigrant population.
Many more studies illustrate the problem. One meta-study was conducted by the psychiatry department of Oxford University at Warneford Hospital in the United Kingdom. Twenty surveys were analyzed, providing results for 6,743 adult refugees from seven countries. In the larger studies, 9% were diagnosed with post-traumatic stress disorder and 5% with major depression, with evidence of much psychiatric co-morbidity. Five surveys of 260 refugee children from three countries yielded a prevalence of 11% for post-traumatic stress disorder. According to this study, refugees resettled in Western countries could be about ten times more likely to have PTSD than age-matched general populations in those countries. Worldwide, tens of thousands of refugees and former refugees resettled in Western countries probably have post-traumatic stress disorder.
Refugees are often more susceptible to illness for several reasons, including a lack of immunity to local strains of malaria and other diseases. Displacement of a people can create favorable conditions for disease transmission. Refugee camps are typically heavily populated with poor sanitary conditions. The removal of vegetation for space, building materials or firewood also deprives mosquitoes of their natural habitats, leading them to interact more closely with humans. In the 1970s, Afghan refugees who were relocated to Pakistan moved from a country with an effective malaria control strategy to a country with a less effective system.
Refugee camps built near rivers or irrigation sites had higher malaria prevalence than refugee camps built on dry land.
The camps' locations provided better breeding grounds for mosquitoes, and thus a higher likelihood of malaria transmission. Children aged 1–15 were the most susceptible to malaria infection, which is a significant cause of mortality in children younger than 5. Malaria was the cause of 16% of the deaths in refugee children younger than 5 years of age. Malaria is one of the most commonly reported causes of death in refugees and displaced persons. Since 2014, reports of malaria cases in Germany have doubled compared to previous years, with the majority of cases found in refugees from Eritrea.
The World Health Organization recommends that all people in areas that are endemic for malaria use long-lasting insecticide nets. A cohort study found that within refugee camps in Pakistan, insecticide treated bed nets were very useful in reducing malaria cases. A single treatment of the nets with the insecticide permethrin remained protective throughout the 6 month transmission season.
Access to services depends on many factors, including whether a refugee has received official status, is situated within a refugee camp, or is in the process of third country resettlement. The UNHCR recommends integrating access to primary care and emergency health services with the host country in as equitable a manner as possible. Prioritized services include areas of maternal and child health, immunizations, tuberculosis screening and treatment, and HIV/AIDS-related services. Despite inclusive stated policies for refugee access to health care on the international level, potential barriers to that access include language, cultural preferences, high financial costs, administrative hurdles, and physical distance. Specific barriers and policies related to health service access also emerge based on the host country context. For example, primaquine, an often recommended malaria treatment, is not currently licensed for use in Germany and must be ordered from outside the country.
In Canada, barriers to healthcare access include the lack of adequately trained physicians, complex medical conditions of some refugees and the bureaucracy of medical coverage. There are also individual barriers to access such as language and transportation barriers, institutional barriers such as bureaucratic burdens and lack of entitlement knowledge, and systems level barriers such as conflicting policies, racism and physician workforce shortage.
In a study in Dearborn, Michigan, all officially designated Iraqi refugees in the US had health insurance coverage, compared to a little more than half of non-Iraqi immigrants. However, refugees faced greater barriers than other immigrants around transportation, language and successful stress-coping mechanisms, and also reported more medical conditions. The study also found that refugees had a higher healthcare utilization rate (92.1%) compared to the overall US population (84.8%) and the immigrants (58.6%) in the study population.
Within Australia, officially designated refugees who qualify for temporary protection and offshore humanitarian refugees are eligible for health assessments, interventions, access to health insurance schemes and trauma-related counseling services. Despite being eligible to access services, barriers include economic constraints around the perceived and actual costs borne by refugees. In addition, refugees must cope with a healthcare workforce unaware of the unique health needs of refugee populations. Perceived legal barriers, such as the fear that disclosing medical conditions could prevent family reunification, and current policies that reduce assistance programs may also limit access to health care services.
Providing access to healthcare for refugees through integration into the current health systems of host countries may also be difficult when operating in a resource limited setting. In this context, barriers to healthcare access may include political aversion in the host country and already strained capacity of the existing health system. Political aversion to refugee access into the existing health system may stem from the wider issue of refugee resettlement. One approach to limiting such barriers is to move from a parallel administrative system in which UNHCR refugees may receive better healthcare than host nationals but is unsustainable financially and politically to that of an integrated care where refugee and host nationals receive equal and more improved care all around. In the 1980s, Pakistan attempted to address Afghan refugee healthcare access through the creation of Basic Health Units inside the camps. Funding cuts closed many of these programs, forcing refugees to seek healthcare from the local government. In response to a protracted refugee situation in the West Nile district, Ugandan officials with UNHCR created an integrative healthcare model for the mostly Sudanese refugee population and Ugandan citizens. Local nationals now access health care in facilities initially created for refugees.
One potential argument for limiting refugee access to healthcare is cost, as states desire to decrease their health expenditure burdens. However, Germany found that restricting refugee access led to higher actual expenditures than for refugees who had full access to healthcare services. The legal restrictions on access to health care and the administrative barriers in Germany have been criticized since the 1990s for leading to delayed care, for increasing the direct and administrative costs of health care, and for shifting the responsibility for care from the less expensive primary care sector to costly treatments for acute conditions in the secondary and tertiary sectors.
Refugee populations consist of people who are terrified and are away from familiar surroundings. There can be instances of exploitation at the hands of enforcement officials, citizens of the host country, and even United Nations peacekeepers. Instances of human rights violations, child labor, mental and physical trauma/torture, violence-related trauma, and sexual exploitation, especially of children, have been documented. In many refugee camps in three war-torn West African countries, Sierra Leone, Guinea, and Liberia, young girls were found to be exchanging sex for money, a handful of fruit, or even a bar of soap. Most of these girls were between 13 and 18 years of age. In most cases, if the girls had been forced to stay, they would have been forced into marriage. They became pregnant around the age of 15 on average. This happened as recently as in 2001. Parents tended to turn a blind eye because sexual exploitation had become a "mechanism of survival" in these camps.
Large groups of displaced persons can be abused as "weapons" to threaten political enemies or neighbouring countries.
Very rarely, refugees have been used and recruited as refugee militants or terrorists, and the humanitarian aid directed at refugee relief has very rarely been utilized to fund the acquisition of arms. Support from a refugee-receiving state has rarely been used to enable refugees to mobilize militarily, enabling conflict to spread across borders.
Historically, refugee populations have often been portrayed as a security threat. In the U.S and Europe, there has been much focus on the narrative that terrorists maintain networks amongst transnational, refugee, and migrant populations. This fear has been exaggerated into a modern-day Islamist terrorism Trojan Horse in which terrorists hide among refugees and penetrate host countries. 'Muslim-refugee-as-an-enemy-within' rhetoric is relatively new, but the underlying scapegoating of out-groups for domestic societal problems, fears and ethno-nationalist sentiment is not new. In the 1890s, the influx of Eastern European Jewish refugees to London coupled with the rise of anarchism in the city led to a confluence of threat-perception and fear of the refugee out-group. Populist rhetoric then too propelled debate over migration control and protecting national security.
Cross-national empirical verification, or rejection, of populist suspicion and fear of refugees' threat to national security and terror-related activities is relatively scarce. Case studies suggest that the threat of an Islamist refugee Trojan Horse is highly exaggerated. Of the 800,000 refugees vetted through the resettlement program in the United States between 2001 and 2016, only five were subsequently arrested on terrorism charges; and 17 of the 600,000 Iraqis and Syrians who arrived in Germany in 2015 were investigated for terrorism. One study found that European jihadists tend to be 'homegrown': over 90% were residents of a European country and 60% had European citizenship.
While the statistics do not support the rhetoric, a PEW Research Center survey of ten European countries (Hungary, Poland, Netherlands, Germany, Italy, Sweden, Greece, UK, France, and Spain) released on 11 July 2016, finds that the majority (ranges from 52% to 76%) of respondents in eight countries (Hungary, Poland, Netherlands, Germany, Italy, Sweden, Greece, and UK) think refugees increase the likelihood of terrorism in their country. Since 1975, in the U.S., the risk of dying in a terror attack by a refugee is 1 in 3.6 billion per year; whereas, the odds of dying in a motor vehicle crash are 1 in 113, by state sanctioned execution 1 in 111,439, or by dog attack 1 in 114,622.
In Europe, fear of immigration, Islamification and job and welfare benefits competition has fueled an increase in violence. Immigrants are perceived as a threat to ethno-nationalist identity and increase concerns over criminality and insecurity.
In the PEW survey previously referenced, 50% of respondents believe that refugees are a burden due to job and social benefit competition. When Sweden received over 160,000 asylum seekers in 2015, it was accompanied by 50 attacks against asylum-seekers, which was more than four times the number of attacks that occurred in the previous four years. At the incident level, the 2011 Utøya Norway terror attack by Breivik demonstrates the impact of this threat perception on a country's risk from domestic terrorism, in particular ethno-nationalist extremism. Breivik portrayed himself as a protector of Norwegian ethnic identity and national security fighting against immigrant criminality, competition and welfare abuse and an Islamic takeover.
According to a 2018 study in the "Journal of Peace Research", states often resort to anti-refugee violence in response to terrorist attacks or security crises. The study notes that there is evidence to suggest that "the repression of refugees is more consistent with a scapegoating mechanism than the actual ties and involvement of refugees in terrorism."
The category of “refugee” tends to have a universalizing effect on those classified as such. It draws upon the common humanity of a mass of people in order to inspire public empathy, but doing so can have the unintended consequence of silencing refugee stories and erasing the political and historical factors that led to their present state. Humanitarian groups and media outlets often rely on images of refugees that evoke emotional responses and are said to speak for themselves. The refugees in these images, however, are not asked to elaborate on their experiences, and thus, their narratives are all but erased. From the perspective of the international community, “refugee” is a performative status equated with injury, ill health, and poverty. When people no longer display these traits, they are no longer seen as ideal refugees, even if they still fit the legal definition. For this reason, there is a need to improve current humanitarian efforts by acknowledging the “narrative authority, historical agency, and political memory” of refugees alongside their shared humanity. Dehistoricizing and depoliticizing refugees can have dire consequences. Rwandan refugees in Tanzanian camps, for example, were pressured to return to their home country before they believed it was truly safe to do so. Despite the fact that refugees, drawing on their political history and experiences, claimed that Tutsi forces still posed a threat to them in Rwanda, their narrative was overshadowed by the U.N. assurances of safety. When the refugees did return home, reports of reprisals against them, land seizures, disappearances, and incarceration abounded, as they had feared.
Integrating refugees into the workforce is one of the most important steps toward the overall integration of this migrant group. Many refugees are unemployed, under-employed, or under-paid, work in the informal economy, or depend on public assistance. Refugees encounter many barriers in receiving countries to finding and sustaining employment commensurate with their experience and expertise. A systemic barrier that operates across multiple levels (i.e. institutional, organizational and individual) has been termed the "canvas ceiling".
Refugee children come from many different backgrounds, and their reasons for resettlement are even more diverse. The number of refugee children has continued to increase as conflicts disrupt communities on a global scale. In 2014 alone, there were approximately 32 armed conflicts in 26 countries around the world, and this period saw the highest number of refugees ever recorded. Refugee children experience traumatic events in their lives that can affect their learning capabilities, even after they have resettled in first or second settlement countries. Educators such as teachers, counselors, and school staff, along with the school environment, are key in facilitating the socialization and acculturation of recently arrived refugee and immigrant children in their new schools.
The experiences children go through during times of armed conflict can impede their ability to learn in an educational setting. Refugee and immigrant students drop out of school for an array of reasons, such as rejection by peers, low self-esteem, antisocial behavior, negative perceptions of their academic ability, and lack of support from school staff and parents. Because refugees come from various regions globally, each with their own cultural, religious, linguistic, and home practices, the new school culture can conflict with the home culture, causing tension between the student and their family.
Aside from students, teachers and school staff also face their own obstacles in working with refugee students. They have concerns about their ability to meet the mental, physical, emotional, and educational needs of these students. One study of newly arrived Bantu students from Somalia in a Chicago school questioned whether schools were equipped to provide them with a quality education that met their needs. The students did not know how to use pencils, which caused them to break the tips, requiring frequent sharpening. Teachers may even see refugee students as different from other immigrant groups, as was the case with the Bantu pupils. Teachers may also feel that their work is made harder by the pressure to meet state testing requirements. With refugee children falling behind or struggling to catch up, teachers and administrators can become overwhelmed and frustrated.
Not all students adjust the same way to their new setting. One student may take only three months, while others may take four years. One study found that even in their fourth year of schooling, Lao and Vietnamese refugee students in the US were still in a transitional status. Refugee students continue to encounter difficulties throughout their years in school that can hinder their ability to learn. Furthermore, to provide proper support, educators must consider the experiences of students before they settled in the US.
In their first settlement countries, refugee students may encounter negative experiences with education that they can carry with them post settlement. For example:
Statistics show that in places such as Uganda and Kenya, there were gaps in refugee students' school attendance: 80% of refugees in Uganda were attending school, whereas only 46% were doing so in Kenya. At the secondary level, the numbers were much lower; in Malaysia, only 1.4% of refugee students were attending school. This trend is evident across several first settlement countries and carries negative impacts for students once they arrive in their permanent settlement homes, such as the US, and have to navigate a new education system. Unfortunately, some refugees do not have a chance to attend school in their first settlement countries at all because they are considered undocumented immigrants, as is the case for Rohingya refugees in Malaysia. In other cases, such as Burundians in Tanzania, refugees can get more access to education while in displacement than in their home countries.
All students need some form of support to help them overcome obstacles and challenges they may face in their lives, especially refugee children who may experience frequent disruptions. There are a few ways in which schools can help refugee students overcome obstacles to attain success in their new homes.
One school in NYC has found a method that works for them to help refugee students succeed. This school creates support for language and literacies, which promotes students using English and their native languages to complete projects. Furthermore, they have a learning centered pedagogy, which promotes the idea that there are multiple entry points to engage the students in learning. Both strategies have helped refugee students succeed during their transition into US schools.
Various websites contain resources that can help school staff better learn to work with refugee students, such as Bridging Refugee Youth and Children's Services. With the support of educators and the school community, education can help rebuild the academic, social, and emotional well-being of refugee students who have suffered from past and present trauma, marginalization, and social alienation.
It is important to understand the cultural differences between newly arrived refugees and the school culture, such as that of the U.S. These differences can be problematic because of the frequent disruptions they can create in a classroom setting.
In addition, because of the differences in language and culture, students are often placed in lower classes due to their lack of English proficiency. Students can also be made to repeat classes for the same reason, even if they have mastered the content of the class. When schools have the resources to provide separate classes for refugee students to develop their English skills, it can take the average refugee student only three months to catch up with their peers. This was the case with Somali refugees at some primary schools in Nairobi.
The histories of refugee students are often hidden from educators, resulting in cultural misunderstandings. However, when teachers, school staff, and peers help refugee students develop a positive cultural identity, it can help buffer the negative effects refugees' experiences have on them, such as poor academic performance, isolation, and discrimination.
Refugee crisis can refer to movements of large groups of displaced persons, who could be either internally displaced persons, refugees or other migrants. It can also refer to incidents in the country of origin or departure, to large problems whilst on the move or even after arrival in a safe country that involve large groups of displaced persons.
In 2018, the United Nations estimated the number of forcibly displaced people to be 68.5 million worldwide. Of those, 25.4 million are refugees, 40 million are internally displaced within a nation state, and 3.1 million are classified as asylum seekers. 85% of refugees are hosted in developing countries, with 57% coming from Syria, Afghanistan and South Sudan. Turkey is the top hosting country of refugees, with 3.5 million displaced people within its borders.
In 2006, there were 8.4 million UNHCR registered refugees worldwide, the lowest number since 1980. At the end of 2015, there were 16.1 million refugees worldwide. When adding the 5.2 million Palestinian refugees who are under UNRWA's mandate there were 21.3 million refugees worldwide. The overall forced displacement worldwide has reached a total of 65.3 million displaced persons at the end of 2015, while it was 59.5 million 12 months earlier. One in every 113 people globally is an asylum seeker or a refugee. In 2015, the total number of displaced people worldwide, including refugees, asylum seekers and internally displaced persons, was at its highest level on record.
Among them, Syrian refugees were the largest group in 2015 at 4.9 million. In 2014, Syrians had overtaken Afghan refugees (2.7 million), who had been the largest refugee group for three decades. Somalis were the third largest group with one million. The countries hosting the largest number of refugees according to UNHCR were Turkey (2.5 million), Pakistan (1.6 million), Lebanon (1.1 million) and Iran (1 million). The countries that had the largest numbers of internally displaced people were Colombia at 6.9 million, Syria at 6.6 million and Iraq at 4.4 million.
Children were 51% of refugees in 2015 and most of them were separated from their parents or travelling alone. In 2015, 86 per cent of the refugees under UNHCR's mandate were in low and middle-income countries that themselves are close to situations of conflict. Refugees have historically tended to flee to nearby countries with ethnic kin populations and a history of accepting other co-ethnic refugees. The religious, sectarian and denominational affiliation has been an important feature of debate in refugee-hosting nations. | https://en.wikipedia.org/wiki?curid=45547 |
Porto Torres
Porto Torres is a "comune" and a city of the Province of Sassari in the north-west of Sardinia, Italy. It is situated on the coast east of "Capo del Falcone", in the center of the Gulf of Asinara. The port of Porto Torres is the second biggest seaport of the island, after the port of Olbia. The town is very close to the main city of Sassari, seat of the local university.
Founded during the 1st century BC as "Colonia Iulia Turris Libisonis", it was the first Roman colony of the entire island.
Porto Torres' territory is situated on the north-west part of Sardinian Coast.
The municipality covers almost 10,200 hectares and is subdivided into two parts of roughly equal size.
One part includes the city, the industrial area and the Roman ruins; the other consists of the island of Asinara together with the smaller Isola Piana. This part of the territory has been a national park since 1997.
The morphology of the mainland part is flat: Porto Torres, like the rest of north-west Sardinia, lies on the Nurra plain, broken only by a few hill formations. Part of these hills falls within Porto Torres' territory; the highest point is Monte Alvaro, at 342 m.
The communal territory is crossed by two rivers, the Rio Mannu and the Fiume Santo. The first marks the western edge of Porto Torres' territory, while the latter flows near the city and was used as a waterway as early as the Roman age.
In ancient times, Turris Libisonis was one of the most considerable cities in Sardinia. It was probably of purely Roman origin, founded apparently by Julius Caesar, as it bore the title "Colonia Julia". Pliny described it as a colony, the only one on the island in his time, suggesting that there was previously no town on the spot, but merely a fort or "castellum". It is also mentioned by Ptolemy and in the Itineraries, but without any indication that it was a place of any importance.
The ancient remains still existing prove that it must have been a considerable town under the Roman Empire. According to inscriptions on ancient milestones, the principal road through the island ran directly from Caralis (Cagliari) to Turris, a sufficient proof that the latter was a place much frequented. Indeed, two roads, which diverged at Othoca (modern Santa Giusta) connected Caralis to Turris, the more important keeping inland and the other following the west coast. It was also an episcopal see during the early part of the Middle Ages.
The existing port at Porto Torres, which is almost wholly artificial, is based in great part on Roman foundations; there also exist the remains of a temple (which, as we learn from an inscription, was dedicated to Fortune and restored in the reign of Philip), of "thermae", of a basilica and an aqueduct, as well as a bridge over the adjoining small river, still called the "Fiume Turritano". The ancient city continued to be inhabited until the 11th century, when the greater part of the population migrated inland to Sassari, situated on a hill. It was partly under Genoese hands before, in the early 15th century, it was conquered by the Aragonese. After the Spanish rule it was part of the Kingdom of Sardinia.
Torres was separated from the comune of Sassari in 1842. At the time the area which had been built around the basilica of San Gavino joined the fishermen's community near the port to form the new "Porto Torres".
On 10 May 1942 Benito Mussolini visited the town.
On 18 April 1943 the city was bombed by the Allies.
The town of Porto Torres has several free-access beaches and cliffs suitable for bathing. The main beach is Balai beach ("Spiaggia di Balai"); there are several others.
The harbor of Porto Torres is the biggest in north-west Sardinia. The city has connections with the rest of Italy and with Spain and France. Not far from the harbor is the Maritime Terminal ("Stazione marittima"). In the same area, a new passenger terminal ("Terminal passeggeri") is under construction.
From the seaport there is also a connection to the island of Asinara.
Porto Torres belongs to the metropolitan network of northern Sardinia ("Rete metropolitana del nord Sardegna"). As a result, the city is well connected with all nearby towns via intercity buses operated by ARST. Local routes are managed by the local public transport agency ("A.t.p. Sassari").
Highway SS131/E25 connects the town with the main city of Sassari and with the island's capital, Cagliari. Road SS200 leads to Santa Teresa Gallura.
SP81 leads to Platamona, Sorso and "Eden beach".
SP42 (also known as the "Strada dei due mari") connects the town with Alghero and its airport.
A railway operated by Trenitalia connects the town with Sassari and the rest of the island. The town has two train stations: one built at the end of the 20th century (considered the main station) and a smaller, more historic one built during the 19th century (referred to as "Porto Torres marittima").
The town has many state high-schools and several state primary schools in its territory.
In the urban area there is also a music school named in memory of the Italian songwriter Fabrizio De André.
Thanks to the proximity of the city of Sassari and to the intercity lines managed by ARST, it is very easy for citizens to reach the nearby University of Sassari.
The public library "Antonio Pigliaru" ("Biblioteca Comunale Antonio Pigliaru") is the only library in the town.
There are many boxing clubs and martial arts schools. Sports such as Shotokan karate, MMA, boxing, jujitsu, Krav Maga and self-defense are widely practiced among the citizens.
Just below the ancient Roman bridge over the Riu Mannu, Porto Torres has a riding hall where the local "A.S.D. Centro Ippico Equitazione Porto Torres" practices horse riding.
A 67,000 m² area which offers facilities for many sports.
It is a multi-purpose stadium composed of several facilities.
A sports facility with a capacity of 1,600 people, mainly used as an indoor basketball court.
A 1,800 m² skateboard park with a 20° bank ramp, a square rail, pyramid ledges and a quarter pipe.
A 100 × 60 m football pitch situated not far from the town hall.
Called "Pineta la Farrizza", "Pineta Abbacurrente" or "Pineta Balai lontano", it is composed mainly of stone pines.
Starting from "Piazza eroi dell'onda" and finishing in the plaza of "Balai lontano", it offers a panoramic view of the sea.
Chemical industries support the modern economy of Porto Torres. Fiume Santo, a 1,040 MW power station owned by E.ON, lies west of the city, in the municipality of Sassari.
Plans related to industrial conversion are in progress in Porto Torres, where seven research centers are developing the transformation from traditional fossil fuel related industry to an integrated production chain from vegetable oil using oleaginous seeds to bioplastics.
Since 2008, tourism has become a very important activity for the economy of the city. The town has several attractions, both natural and man-made. The main attraction is the Asinara national park. The Aragonese seaport tower is considered the symbol of the city and is therefore one of the main tourist attractions; others are the Roman bridge over the Riu Mannu and the Basilica of Saint Gavinus. Due to the decline of the industrial sector, tourism has started to become the leading sector of the local economy (although the local industrial zone remains important to the city).
Fishing and farming are also practiced in the surrounding area.
| https://en.wikipedia.org/wiki?curid=45550
Alghero
Alghero, also known in the local Algherese dialect as L'Alguer, is a town of about 45,000 inhabitants in the Italian insular province of Sassari in northwestern Sardinia, next to the Mediterranean Sea. Part of its population descends from Catalan conquerors of the end of the Middle Ages, when Sardinia was part of the Crown of Aragon. Hence, the Catalan language is co-official (a unique situation in Italy) in the form known as the Alguerès dialect. The name Alghero comes from "Aleguerium", a medieval Latin word meaning "stagnation of algae" ("Posidonia oceanica").
Alghero is the third university center on the island, after Cagliari and Sassari. It hosts the headquarters of the Architecture and Design department of the Università degli Studi di Sassari. In 2012 it was the 10th most visited city by tourists in Italy.
The area of today's Alghero has been settled since pre-historic times. The Ozieri culture was present here in the 4th millennium BC (Necropolis of Anghelu Ruju), while the Nuraghe civilization settled in the area around 1,500 BC.
The Phoenicians arrived by the 8th century BC, and the metalworking town of Sant'Imbenia, in the area of the later Alghero, with a mixed Phoenician and Nuragic population, engaged in trade with the Etruscans on the Italian mainland.
Due to its strategic position on the Mediterranean Sea, Alghero had been developed into a fortified port town by 1102, built by the Genoese Doria family. The Dorias ruled Alghero for centuries, apart from a brief period under the rule of Pisa between 1283 and 1284. Alghero's population later grew because of the arrival of Catalan colonists. In the early 16th century Alghero received papal recognition as a bishopric and the status of King's City ("ciutat de l'Alguer") and developed economically.
Historically, the city was founded in the early twelfth century between 1102 and 1112, when the noble Doria family of Genoa was allowed to build the first historical nucleus into an empty section of the coast of the parish of Nulauro in Judicature of Torres (Sassari). For two centuries it remained in the orbit of the Maritime Republics, first and foremost the Genoese, apart from 1283–1284 when the Pisans were able to control it for a year. It is plausible that at this time the town shared, given its commercial and multi-ethnic nature, a language similar to the nascent Sassarese.
The village was conquered by force by the Crown of Aragon, at the behest of King Pere IV of Aragon (r. 1336–1387), who later actively promoted colonisation of the town and the surrounding area, sending numerous families from different counties and provinces of the then Crown of Aragon, including Valencia, Majorca, Catalonia and Aragon. These were granted enticing privileges and, in fact, replaced the original population, some of whom were sent to the Iberian Peninsula and Majorca as slaves. The dialects these families spoke in Alghero were all very similar and derived from the same linguistic family. Over time the speech settled into its current form of Catalan, despite the subsequent decline of the Crown of Aragon.
The Aragonese were followed by the Spanish Habsburgs, who ruled until 1702 and continued expanding the town.
In 1720 Alghero, along with the rest of Sardinia, was handed over to the Piedmont-based House of Savoy. In 1821 a famine led to a revolt by the population, which was bloodily suppressed. At the end of the same century, Alghero was de-militarised.
During the Fascist era, part of the surrounding marshes were reclaimed and the suburbs of Fertilia and S.M. La Palma were founded. During World War II (1943), Alghero was bombed, and its historical centre suffered heavy damage. The presence of malaria in the countryside was finally overcome in the 1950s.
Since then, Alghero has become a popular tourist resort.
Alghero is located on the northwestern coast of Sardinia, along the bay named after the city. To the north of the urban area lies the Nurra plain; to the northwest, the karstic systems of Capo Caccia, Punta Giglio and Monte Doglia. The south consists mainly of mountains and of the plateaus of Villanova Monteleone and Bosa.
The climate at Alghero is mild due to the presence of the sea, which moderates temperatures, especially during the summer. Summers are warm, as in most parts of the Mediterranean. Winters are also mild, with temperatures dropping below zero Celsius only a few days per year.
A dialect of Catalan is spoken in Alghero, introduced when Catalans settled in the town. Catalan was replaced as the official language of the Island by Spanish in the 17th century, then by Italian. The most recent linguistic research showed that 24.1% of the people have Algherese Catalan as a mother tongue, which is habitually spoken by 18.5% and taught to the children by 8% of the population, whereas 88.2% have some understanding of the language. Since 1997, Catalan has had official recognition and national and regional laws grant its right to be used in the city. Currently, there has been a revival of the arts in Algherese Catalan, with singers such as Franca Masu performing original compositions in the language.
Following a rural exodus from the surrounding villages towards the city, much of the population speaks or has some proficiency in Sardinian, in addition to Italian and Catalan. Historically, the spread of Catalan was limited to the city and part of the coast, as the surrounding countryside has always been populated by Sardinian-speaking people.
Moreover, the ancient part of Alghero shows many characteristics of Catalan medieval architecture. The ‘algueresos’ (Alghero inhabitants) usually refer to their city as ‘Barceloneta’ – 'little Barcelona' – because of their ancestry and fraternity with the Catalan capital. The cuisine, too, is a blend of Catalan and Sardinian cuisine.
The many historical dominions that ruled Alghero have left a rich variety of monuments, buildings and sights, from the Neolithic period, of which many settlements remain, up to the present day. In recent decades Alghero has become a major tourist destination, not only for its coast and natural beauty but also for its fairly well-preserved heritage.
Several archaeological sites lie outside the urban area: the Anghelu Ruju necropolis, the Santu Pedru hill, the Villa Romana of Santa Imbenia and the Purissima. Many nuraghi at other sites, such as Palmavera, are also well preserved and open to visitors.
The first rampart system dates back to the 13th century and followed the Genoese model. In 1354 the city was occupied by the Catalans, who restored and expanded the defensive system, by then in bad condition. Some features of the old walls were retained, but the majority were built in the 16th century under Ferdinand the Catholic, who wanted to give the city greater protection. Along the walls, seven towers and three forts are found.
Alghero's coral is known as among the finest in the Mediterranean and the world for its quantity, quality, density and ruby-red color. Coral harvesting has long been one of the most important economic activities of the territory, which is also called the Riviera del Corallo and bears in its coat of arms a branch of the precious red coral on a foundation of rock.
Another of Alghero's features is its landscape. It has several beaches, bays and natural parks on the shoreline. The Capo Caccia promontory and its lighthouse are landmarks.
Alghero is well connected. Roads lead to Sassari, the province's capital. The main passenger port is 30 kilometers away, and Alghero–Fertilia airport has national and international flights.
Alghero has a train station in the Pietraia neighborhood, Sant’Agostino, with daily trains to Sassari.
There is a pleasure and fishing port in the heart of the city. Passenger traffic is handled by Porto Torres, some 30 kilometers north. There are ferry services from there to Genoa, Barcelona and Civitavecchia.
The Alghero-Fertilia "Riviera del Corallo" Airport is 10 kilometers from the centre, near Fertilia. It is the principal connection with the rest of Italy and Europe. There is an hourly bus service to Fertilia and the centre of Alghero.
In the 1930s the Swedish writer Amelie Posse Brazdova wrote a book entitled "Sardinia Side Show", where she told the complete story of two years she spent "interned" in Alghero old town during World War I. | https://en.wikipedia.org/wiki?curid=45551 |
Portuguese Timor
Portuguese Timor refers to East Timor during the period when it was a Portuguese colony, between 1702 and 1975. For most of this period, Portugal shared the island of Timor with the Dutch East Indies.
The first Europeans to arrive in the region were the Portuguese in 1515. Dominican friars established a presence on the island in 1556, and the territory was declared a Portuguese colony in 1702. Following the beginning of a Lisbon-instigated decolonisation process in 1975, East Timor was invaded by Indonesia. However, the invasion was not recognized as legitimate by the United Nations (UN), which continued to regard Portugal as the legal Administering Power of East Timor. The independence of East Timor was finally achieved in 2002 following a UN-administered transition period.
Prior to the arrival of European colonial powers, the island of Timor was part of the trading networks that stretched between India and China, incorporating Maritime Southeast Asia. The island's large stands of fragrant sandalwood were its main commodity. The first European powers to arrive in the area were the Portuguese in the early sixteenth century, followed by the Dutch in the late sixteenth century. Both came in search of the fabled Spice Islands of Maluku. In 1515, the Portuguese first landed near modern Pante Macassar. Portuguese merchants exported sandalwood from the island until the tree nearly became extinct. In 1556 a group of Dominican friars established the village of Lifau.
In 1613, the Dutch took control of the western part of the island. Over the following three centuries, the Dutch would come to dominate the Indonesian archipelago with the exception of the eastern half of Timor, which would become Portuguese Timor. The Portuguese introduced maize as a food crop and coffee as an export crop. Timorese systems of tax and labour control were preserved, through which taxes were paid in labour and in a portion of the coffee and sandalwood crop. The Portuguese introduced mercenaries into Timorese communities, and Timorese chiefs hired Portuguese soldiers for wars against neighbouring tribes. With the use of the Portuguese musket, Timorese men became deer hunters and suppliers of deer horn and hide for export.
The Portuguese introduced Catholicism to Portuguese Timor, as well as the Latin writing system, the printing press, and formal schooling. Two groups of people were introduced to East Timor: Portuguese men, and Topasses. The Portuguese language was introduced into church and state business, and Portuguese Asians used Malay in addition to Portuguese. Under colonial policy, Portuguese citizenship was available to men who assimilated the Portuguese language, literacy, and religion; by 1970, 1,200 East Timorese, largely drawn from the aristocracy, Dili residents, or larger towns, had obtained Portuguese citizenship. By the end of the colonial administration in 1974, 30 percent of Timorese were practising Catholics while the majority continued to worship spirits of the land and sky.
In 1702, Lisbon sent its first governor, António Coelho Guerreiro, to Lifau, which became the capital of all Portuguese dependencies in the Lesser Sunda Islands. Former capitals were Solor and Larantuka. Portuguese control over the territory was tenuous, particularly in the mountainous interior. Dominican friars, the occasional Dutch raid, and the Timorese themselves, competed with Portuguese merchants. The control of colonial administrators was largely restricted to the Dili area, and they had to rely on traditional tribal chieftains for control and influence.
The capital was moved to Dili in 1769, due to attacks from the Topasses, who became rulers of several local kingdoms (Liurai). At the same time, the Dutch were colonising the west of the island and the surrounding archipelago that is now Indonesia. The border between Portuguese Timor and the Dutch East Indies was formally decided in 1859 with the Treaty of Lisbon. In 1913, the Portuguese and Dutch formally agreed to split the island between them. The definitive border was drawn by the Permanent Court of Arbitration in 1916, and it remains the international boundary between East Timor and Indonesia.
For the Portuguese, their colony of Portuguese Timor remained little more than a neglected trading post until the late nineteenth century. Investment in infrastructure, health, and education was minimal. Sandalwood remained the main export crop with coffee exports becoming significant in the mid-nineteenth century. In places where Portuguese rule was asserted, it tended to be brutal and exploitative.
At the beginning of the twentieth century, a faltering home economy prompted the Portuguese to extract greater wealth from its colonies, resulting in increased resistance to Portuguese rule in Portuguese Timor. In 1911–12, a Timorese rebellion was quashed after Portugal brought in troops from the Portuguese colonies of Mozambique and Macau, resulting in the deaths of 3,000 East Timorese.
In the 1930s, the Japanese semi-governmental "Nan’yō Kōhatsu" development company, with the secret sponsorship of the Imperial Japanese Navy, invested heavily in a joint-venture with the primary plantation company of Portuguese Timor, SAPT. The joint-venture effectively controlled imports and exports into the island by the mid-1930s and the extension of Japanese interests greatly concerned the British, Dutch and Australian authorities.
Although Portugal was neutral during the Second World War, in December 1941 Portuguese Timor was occupied by a small British, Australian and Dutch force to preempt a Japanese invasion. The Japanese nevertheless did invade in the Battle of Timor in February 1942. Under Japanese occupation, the border between the Dutch and Portuguese halves was disregarded, and the island of Timor was made a single Imperial Japanese Army (IJA) administration zone. 400 Australian and Dutch commandos trapped on the island by the Japanese invasion waged a guerrilla campaign, which tied up Japanese troops and inflicted over 1,000 casualties. Timorese and Portuguese helped the guerrillas, but following the Allies' eventual evacuation, retribution by Japanese soldiers and by Timorese militia raised in West Timor was severe. By the end of the war, an estimated 40,000 to 60,000 Timorese had died, the economy was in ruins, and famine was widespread.
Following the Second World War, the Portuguese promptly returned to reclaim their colony, while West Timor became part of Indonesia, which secured its independence in 1949.
To rebuild the economy, colonial administrators forced local chiefs to supply labourers, which further damaged the agricultural sector. The role of the Catholic Church in Portuguese Timor grew after the Portuguese government handed over the education of the Timorese to the Church in 1941. In post-war Portuguese Timor, primary and secondary school enrolment increased significantly, albeit from a very low base.
Although illiteracy in 1973 was estimated at 93 percent of the population, the small educated elite of Portuguese Timorese produced by the Church in the 1960s and 1970s became the independence leaders during the Indonesian occupation.
Following a 1974 coup (the "Carnation Revolution"), the new Government of Portugal favoured a gradual decolonisation process for Portuguese territories in Asia and Africa. When Portuguese Timorese political parties were first legalised in April 1974, three major players emerged. The Timorese Democratic Union (UDT) was initially dedicated to preserving Portuguese Timor as a protectorate of Portugal, but in September announced its support for independence. Fretilin endorsed "the universal doctrines of socialism", as well as "the right to independence", and later declared itself "the only legitimate representative of the people". A third party, APODETI, emerged advocating Portuguese Timor's integration with Indonesia, expressing concerns that an independent East Timor would be economically weak and vulnerable.
On 14 November 1974, Mário Lemos Pires, an Army officer, was appointed by the new Portuguese Government as Governor and Commander-in-Chief of Portuguese Timor.
Meanwhile, the political dispute between the Timorese parties soon gave rise to an armed conflict, which included the participation of members of the Colonial Police and Timorese soldiers of the Portuguese Army. Unable to control the conflict with the few Portuguese troops at his disposal, Lemos Pires decided to leave Dili with his staff and transfer the seat of the administration to Atauro Island (located 25 km off Dili) in late August 1975. At the same time, he requested military reinforcements from Lisbon, which responded by sending a warship, the NRP "Afonso Cerqueira", which arrived in Timorese waters in early October.
On 28 November 1975, Fretilin unilaterally declared the territory's independence, as the Democratic Republic of East Timor ("República Democrática de Timor-Leste").
On 7 December 1975, the Indonesian Armed Forces launched an invasion of East Timor. At 3:00 a.m., the two Portuguese corvettes, the NRP "João Roby" and NRP "Afonso Cerqueira", anchored near Atauro, detected on radar a large number of unidentified air and naval targets approaching. They soon identified the targets as Indonesian military aircraft and warships, which initiated an assault against Dili. Lemos Pires and his staff then left Atauro, embarked on the Portuguese warships, and headed to Darwin in the Northern Territory of Australia.
The "João Roby" and "Afonso Cerqueira" were ordered to continue patrolling the waters around the former Portuguese Timor, in preparation for possible military action in response to the Indonesian invasion, constituting the naval task force UO 20.1.2 (later renamed FORNAVTIMOR). Portugal sent a third warship to the region, the NRP "Oliveira e Carmo", which arrived on 31 January 1976 and replaced the NRP "Afonso Cerqueira". The Portuguese warships remained in the region until May 1976, when the NRP "Oliveira e Carmo" returned to Lisbon, by which time military action to expel the Indonesian forces was clearly seen as unviable.
On 17 July 1976, Indonesia formally annexed East Timor, declaring it as its 27th province and renaming it Timor Timur. The United Nations, however, did not recognise the annexation, continuing to consider Portugal as the legitimate administering power of East Timor.
Following the end of Indonesian occupation in 1999, and a United Nations administered transition period, East Timor became formally independent in 2002.
The first Timorese currency was the Portuguese Timorese pataca, introduced in 1894.
From 1959, the Portuguese Timorese escudo, linked to the Portuguese escudo, was used.
In 1975, the currency ceased to exist as East Timor was annexed by Indonesia and began using the Indonesian rupiah.
United States Secretary of Energy
The United States secretary of energy is the head of the United States Department of Energy, a member of the Cabinet of the United States, and fifteenth in the presidential line of succession. The position was created on October 1, 1977, with the establishment of the Department of Energy, when President Jimmy Carter signed the Department of Energy Organization Act. Originally the post focused on energy production and regulation. The emphasis soon shifted to developing technology for better and more efficient energy sources as well as energy education. After the end of the Cold War, the department's attention also turned toward radioactive waste disposal and maintenance of environmental quality. The current secretary of energy is Dan Brouillette.
Former secretary of defense James Schlesinger, a Republican nominated by Democratic president Jimmy Carter, was the first secretary of energy; this remains the only time a president has appointed someone of another party to the post. Schlesinger is also the only secretary to have been dismissed from the post. Hazel O'Leary, Bill Clinton's first secretary of energy, was the first woman and the first African American to hold the office. The first Hispanic to serve as energy secretary was Clinton's second, Federico Peña. Spencer Abraham became the first Arab American to hold the position on January 20, 2001, serving under the administration of George W. Bush. Steven Chu became the first Asian American to hold the position on January 20, 2009, serving under the administration of Barack Obama. He is also the longest-serving secretary of energy and the first person to join the Cabinet having received a Nobel Prize.
As of , there are ten living former secretaries of energy, the oldest being Charles Duncan Jr. (served 1979–1981, born 1926). The most recent secretary of energy to die was Samuel Bodman (served 2005–2009, born 1938) on September 7, 2018.
Cartagena Protocol on Biosafety
The Cartagena Protocol on Biosafety to the Convention on Biological Diversity is an international agreement on biosafety as a supplement to the Convention on Biological Diversity effective since 2003. The Biosafety Protocol seeks to protect biological diversity from the potential risks posed by genetically modified organisms resulting from modern biotechnology.
The Biosafety Protocol makes clear that decisions about products of new technologies must be based on the precautionary principle, allowing developing nations to balance public health against economic benefits. For example, it lets countries ban imports of genetically modified organisms if they feel there is not enough scientific evidence that the product is safe, and it requires exporters to label shipments containing genetically altered commodities such as corn or cotton.
The required number of 50 instruments of ratification/accession/approval/acceptance by countries was reached in May 2003. In accordance with the provisions of its Article 37, the Protocol entered into force on 11 September 2003. As of December 2019, the Protocol had 172 parties, including 168 United Nations member states, the State of Palestine, Niue, the European Union, and Uzbekistan, which joined on October 25, 2019.
In accordance with the precautionary approach, contained in Principle 15 of the Rio Declaration on Environment and Development, the objective of the Protocol is to contribute to ensuring an adequate level of protection in the field of the safe transfer, handling and use of 'living modified organisms resulting from modern biotechnology' that may have adverse effects on the conservation and sustainable use of biological diversity, taking also into account risks to human health, and specifically focusing on transboundary movements (Article 1 of the Protocol, SCBD 2000).
The protocol defines a 'living modified organism' as any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology, and 'living organism' means any biological entity capable of transferring or replicating genetic material, including sterile organisms, viruses and viroids. 'Modern biotechnology' is defined in the Protocol to mean the application of in vitro nucleic acid techniques, or fusion of cells beyond the taxonomic family, that overcome natural physiological reproductive or recombination barriers and are not techniques used in traditional breeding and selection. 'Living modified organism (LMO) products' are defined as processed materials of living modified organism origin that contain detectable novel combinations of replicable genetic material obtained through the use of modern biotechnology. Common LMOs include agricultural crops that have been genetically modified for greater productivity or for resistance to pests or diseases. Examples of modified crops include tomatoes, cassava, corn, cotton and soybeans. 'Living modified organisms intended for direct use as food or feed, or for processing (LMOs-FFP)' are agricultural commodities from GM crops. Overall, the term 'living modified organism' is equivalent to genetically modified organism – the Protocol did not make any distinction between these terms and did not use the term 'genetically modified organism.'
One of the outcomes of the United Nations Conference on Environment and Development (also known as the Earth Summit) held in Rio de Janeiro, Brazil, in June 1992, was the adoption of the Rio Declaration on Environment and Development, which contains 27 principles to underpin sustainable development. Commonly known as the precautionary principle, Principle 15 states that "In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation."
Elements of the precautionary approach are reflected in a number of the provisions of the Protocol.
The Protocol applies to the transboundary movement, transit, handling and use of all living modified organisms that may have adverse effects on the conservation and sustainable use of biological diversity, taking also into account risks to human health (Article 4 of the Protocol, SCBD 2000).
The governing body of the Protocol is called the Conference of the Parties to the Convention serving as the meeting of the Parties to the Protocol (also the COP-MOP). The main function of this body is to review the implementation of the Protocol and make decisions necessary to promote its effective operation. Decisions under the Protocol can only be taken by Parties to the Protocol. Parties to the Convention that are not Parties to the Protocol may only participate as observers in the proceedings of meetings of the COP-MOP.
The Protocol addresses the obligations of Parties in relation to the transboundary movements of LMOs to and from non-Parties to the Protocol. The transboundary movements between Parties and non-Parties must be carried out in a manner that is consistent with the objective of the Protocol. Parties are required to encourage non-Parties to adhere to the Protocol and to contribute information to the Biosafety Clearing-House.
A number of agreements under the World Trade Organization (WTO), such as the Agreement on the Application of Sanitary and Phytosanitary Measures (SPS Agreement), the Agreement on Technical Barriers to Trade (TBT Agreement), and the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), contain provisions that are relevant to the Protocol. The Protocol's preamble addresses its relationship with such agreements.
The Protocol promotes biosafety by establishing rules and procedures for the safe transfer, handling, and use of LMOs, with specific focus on transboundary movements of LMOs. It features a set of procedures including one for LMOs that are to be intentionally introduced into the environment called the advance informed agreement procedure, and one for LMOs that are intended to be used directly as food or feed or for processing. Parties to the Protocol must ensure that LMOs are handled, packaged and transported under conditions of safety. Furthermore, the shipment of LMOs subject to transboundary movement must be accompanied by appropriate documentation specifying, among other things, identity of LMOs and contact point for further information. These procedures and requirements are designed to provide importing Parties with the necessary information needed for making informed decisions about whether or not to accept LMO imports and for handling them in a safe manner.
The Party of import makes its decisions in accordance with scientifically sound risk assessments. The Protocol sets out principles and methodologies on how to conduct a risk assessment. In case of insufficient relevant scientific information and knowledge, the Party of import may use precaution in making its decisions on import. Parties may also take into account, consistent with their international obligations, socio-economic considerations in reaching decisions on import of LMOs.
Parties must also adopt measures for managing any risks identified by the risk assessment, and they must take necessary steps in the event of accidental release of LMOs.
To facilitate its implementation, the Protocol establishes a Biosafety Clearing-House for Parties to exchange information, and contains a number of important provisions, including capacity-building, a financial mechanism, compliance procedures, and requirements for public awareness and participation.
The "Advance Informed Agreement" (AIA) procedure applies to the first intentional transboundary movement of LMOs for intentional introduction into the environment of the Party of import. It includes four components: notification by the Party of export or the exporter, acknowledgment of receipt of notification by the Party of import, the decision procedure, and opportunity for review of decisions. The purpose of this procedure is to ensure that importing countries have both the opportunity and the capacity to assess risks that may be associated with the LMO before agreeing to its import. The Party of import must indicate the reasons on which its decisions are based (unless consent is unconditional). A Party of import may, at any time, in light of new scientific information, review and change a decision. A Party of export or a notifier may also request the Party of import to review its decisions.
However, the Protocol's AIA procedure does not apply to certain categories of LMOs.
While the Protocol's AIA procedure does not apply to certain categories of LMOs, Parties have the right to regulate the importation on the basis of domestic legislation. There are also allowances in the Protocol to declare certain LMOs exempt from application of the AIA procedure.
LMOs intended for direct use as food or feed, or processing (LMOs-FFP) represent a large category of agricultural commodities. The Protocol, instead of using the AIA procedure, establishes a more simplified procedure for the transboundary movement of LMOs-FFP. Under this procedure, a Party must inform other Parties through the Biosafety Clearing-House, within 15 days, of its decision regarding domestic use of LMOs that may be subject to transboundary movement.
Decisions by the Party of import on whether or not to accept the import of LMOs-FFP are taken under its domestic regulatory framework that is consistent with the objective of the Protocol. A developing country Party or a Party with an economy in transition may, in the absence of a domestic regulatory framework, declare through the Biosafety Clearing-House that its decisions on the first import of LMOs-FFP will be taken in accordance with risk assessment as set out in the Protocol and time frame for decision-making.
The Protocol provides for practical requirements that are deemed to contribute to the safe movement of LMOs. Parties are required to take measures for the safe handling, packaging and transportation of LMOs that are subject to transboundary movement. The Protocol specifies requirements on identification by setting out what information must be provided in documentation that should accompany transboundary shipments of LMOs. It also leaves room for possible future development of standards for handling, packaging, transport and identification of LMOs by the meeting of the Parties to the Protocol.
Each Party is required to take measures ensuring that LMOs subject to intentional transboundary movement are accompanied by documentation identifying the LMOs and providing contact details of persons responsible for such movement. The details of these requirements vary according to the intended use of the LMOs, and, in the case of LMOs for food, feed or for processing, they should be further addressed by the governing body of the Protocol. (Article 18 of the Protocol, SCBD 2000).
The first meeting of the Parties adopted decisions outlining identification requirements for different categories of LMOs (Decision BS-I/6, SCBD 2004). However, the second meeting of the Parties failed to reach agreement on the detailed requirements to identify LMOs intended for direct use as food, feed or for processing and will need to reconsider this issue at its third meeting in March 2006.
The Protocol established a Biosafety Clearing-House (BCH), in order to facilitate the exchange of scientific, technical, environmental and legal information on, and experience with, living modified organisms; and to assist Parties to implement the Protocol (Article 20 of the Protocol, SCBD 2000). It was established in a phased manner, and the first meeting of the Parties approved the transition from the pilot phase to the fully operational phase, and adopted modalities for its operations (Decision BS-I/3, SCBD 2004).
Portuguese Mozambique
Portuguese Mozambique () or Portuguese East Africa ("África Oriental Portuguesa") were the common terms by which Mozambique was designated during the historic period when it was a Portuguese colony. Portuguese Mozambique originally constituted a string of Portuguese possessions along the south-east African coast, and later became a unified colony, which now forms the Republic of Mozambique.
Portuguese trading settlements and, later, colonies, were formed along the coast and into the Zambezi basin from 1498 when Vasco da Gama first reached the Mozambican coast. Lourenço Marques explored the area that is now Maputo Bay in 1544. The Portuguese increased efforts for occupying the interior of the colony after the Scramble for Africa, and secured political control over most of its territory in 1918, facing the resistance of Africans during the process.
Some territories in Mozambique were handed over in the late 19th century for rule by chartered companies like the Mozambique Company ("Companhia de Moçambique"), which had the concession of the lands corresponding to the present-day provinces of Manica and Sofala, and the Niassa Company ("Companhia do Niassa"), which controlled the lands of the modern provinces of Cabo Delgado and Niassa. The Mozambique Company relinquished its territories back to Portuguese control in 1942, unifying Mozambique under the control of the Portuguese government.
The region as a whole was long officially termed Portuguese East Africa, and was subdivided into a series of colonies extending from Lourenço Marques in the south to Niassa in the north. Cabo Delgado was initially merely a strip of territory along the Rovuma River, including Cape Delgado itself, which Portugal acquired out of German East Africa in 1919, but it was enlarged southward to the Lurio River to form what is now Cabo Delgado Province. In the Zambezi basin were the colonies of Quelimane (now Zambezia Province) and Tete (in the panhandle between Northern Rhodesia [now Zambia] and Southern Rhodesia [now Zimbabwe]), which were for a time merged as Zambezia. The colony of Moçambique (now Nampula Province) had the Island of Mozambique as its capital. The island was also the seat of the Governor-General of Portuguese East Africa until the late 1890s, when that official was officially moved to the city of Lourenço Marques. Also in the south was the colony of Inhambane, which lay north-east of Lourenço Marques.
Once these colonies were merged, the region as a whole became known as "Moçambique".
Mozambique, according to the official policy of the Salazar regime, was an integral part of the "pluricontinental and multiracial nation" of Portugal. Portugal sought, as it did in all its colonies, to Europeanise the local population and assimilate it into Portuguese culture. However, this stated policy was largely unsuccessful, and African opposition to colonisation led to a ten-year independence war that culminated in independence from Portugal in June 1975.
During its history as a Portuguese colony, the present-day territory of Mozambique passed through a series of formal designations.
Until the 20th century, the land and peoples of Mozambique were barely affected by the Europeans who came to its shores and entered its major rivers. As the Muslim traders, mostly Swahili, were displaced from their coastal centres and routes to the interior by the Portuguese, migrations of Bantu peoples continued and tribal federations formed and reformed as the relative power of local chiefs changed. For four centuries the Portuguese presence was meagre. Coastal and river trading posts were built, abandoned, and built again. Governors sought personal profits to take back to Portugal, and colonists were not attracted to the distant area with its relatively unattractive climate; those who stayed were traders who married local women and successfully maintained relations with local chiefs.
In Portugal, however, Mozambique was considered to be a vital part of a world empire. Periodic recognition of the relative insignificance of the revenues it could produce was tempered by the mystique which developed regarding the mission of the Portuguese to bring their civilization to the African territory. It was believed that through missionary activity and other direct contact between Africans and Europeans, the Africans could be taught to appreciate and participate in Portuguese culture.
In the last decade of the 19th century and the first part of the 20th century, integration of Mozambique into the structure of the Portuguese nation was begun. After all of the area of the present province had been recognized by other European powers as belonging to Portugal, administrators waged wars against African polities to assert control over the territory. Civil administration was established throughout the area, the building of an infrastructure was begun, and agreements regarding the transit trade of Mozambique's land-locked neighbours to the west were made.
Colonial legislation discriminated against Africans on cultural grounds, subjecting them to forced labour, pass laws, and segregated schooling. Because the Portuguese perceived most Africans as engaging in "uncivilized behaviour", Europeans as a group held a low opinion of Africans. Uneducated Portuguese immigrant peasants in urban areas were frequently in direct competition with Africans for jobs and displayed jealousy and racial prejudice.
Between the urban and rural sectors of society lay a steadily increasing group of Africans who were loosening their ties with rural villages and starting to participate in the urban economy, to settle in suburbs, and to adopt European customs. Members of this group would later become active participants in the independence movement.
When Portuguese explorers reached East Africa in 1498, Swahili commercial settlements had existed along the Swahili Coast and outlying islands for several centuries. From about 1500, Portuguese trading posts and forts became regular ports of call on the new route to the east.
The voyage of Vasco da Gama around the Cape of Good Hope into the Indian Ocean in 1498 marked the Portuguese entry into trade, politics, and society in the Indian Ocean world. The Portuguese gained control of the Island of Mozambique and the port city of Sofala in the early 16th century. Having visited Mombasa in 1498, Vasco da Gama then succeeded in reaching India, permitting the Portuguese to trade with the Far East directly by sea and thus challenging older trading networks of mixed land and sea routes, such as the spice trade routes that used the Persian Gulf, Red Sea and caravans to reach the eastern Mediterranean.
The Republic of Venice had gained control over much of the trade routes between Europe and Asia. After traditional land routes to India had been closed by the Ottoman Turks, Portugal hoped to use the sea route pioneered by da Gama to break the Venetian trading monopoly. Initially, Portuguese rule in East Africa focused mainly on a coastal strip centred in Mombasa. With voyages led by Vasco da Gama, Francisco de Almeida and Afonso de Albuquerque, the Portuguese dominated much of southeast Africa's coast, including Sofala and Kilwa, by 1515. Their main goal was to dominate trade with India. As the Portuguese settled along the coast, they made their way into the hinterland as "sertanejos" (backwoodsmen). These "sertanejos" lived alongside Swahili traders and even took up service among Shona kings as interpreters and political advisors. One such "sertanejo" managed to travel through almost all the Shona kingdoms, including the Mutapa Empire's (Mwenemutapa) metropolitan district, between 1512 and 1516.
By the 1530s, small groups of Portuguese traders and prospectors penetrated the interior regions seeking gold, where they set up garrisons and trading posts at Sena and Tete on the Zambezi River and tried to gain exclusive control over the gold trade. The Portuguese finally entered into direct relations with the Mwenemutapa in the 1560s.
They recorded a wealth of information about the Mutapa kingdom as well as its predecessor, Great Zimbabwe. According to Swahili traders whose accounts were recorded by the Portuguese historian João de Barros, Great Zimbabwe was an ancient capital city built of stones of marvellous size without the use of mortar. And while the site was not within Mutapa's borders, the Mwenemutapa kept noblemen and some of his wives there.
The Portuguese attempted to legitimate and consolidate their trade and settlement positions through the creation of "prazos" (land grants) tied to Portuguese settlement and administration. While "prazos" were originally developed to be held by Portuguese, through intermarriage they became African Portuguese or African Indian centres defended by large African slave armies known as "Chikunda". Historically, there was slavery within Mozambique: human beings were bought and sold by African tribal chiefs, Arab traders, and the Portuguese. Many Mozambican slaves were supplied by tribal chiefs who raided warring tribes and sold their captives to the "prazeiros".
Although Portuguese influence gradually expanded, its power was limited and exercised through individual settlers and officials who were granted extensive autonomy. The Portuguese were able to wrest much of the coastal trade from Arabs between 1500 and 1700, but, with the Arab seizure of Portugal's key foothold at Fort Jesus on Mombasa Island (now in Kenya) in 1698, the pendulum began to swing in the other direction. As a result, investment lagged while Lisbon devoted itself to the more lucrative trade with India and the Far East and to the colonisation of Brazil. During the 18th and 19th centuries, the Mazrui and Omani Arabs reclaimed much of the Indian Ocean trade, forcing the Portuguese to retreat south. Many "prazos" had declined by the mid-19th century, but several of them survived. During the 19th century, other European powers, particularly the British and the French, became increasingly involved in the trade and politics of the region. In the Island of Mozambique, the hospital, a majestic neo-classical building constructed in 1877 by the Portuguese, with a garden decorated with ponds and fountains, was for many years the biggest hospital south of the Sahara. By the early 20th century the Portuguese had shifted the administration of much of Mozambique to large private companies, like the Mozambique Company, the Zambezia Company and the Niassa Company, controlled and financed mostly by the British, which established, with the Portuguese, railroad lines to neighbouring countries. The companies, granted a charter by the Portuguese government to foster economic development and maintain Portuguese control in the territory's provinces, would lose their purpose when the territory was transferred to the control of the Portuguese colonial government between 1929 and 1942.
Although slavery had been legally abolished in Mozambique by the Portuguese authorities, at the end of the 19th century the chartered companies enacted a forced labour policy and supplied cheap – often forced – African labour to the mines and plantations of the nearby British colonies and South Africa. The Zambezia Company, the most profitable chartered company, took over a number of smaller holdings and requested Portuguese military outposts to protect its property. The chartered companies and the Portuguese administration built roads and ports to bring their goods to market, including a railway linking Southern Rhodesia with the Mozambican port of Beira. However, administration of these developments gradually began to pass directly from the trading companies to the Portuguese government itself.
Because of their unsatisfactory performance, and because of the shift, under the regime of Oliveira Salazar, towards stronger Portuguese control of the Portuguese Empire's economy, the companies' concessions were not renewed when they expired. This was what happened in 1942 with the Mozambique Company, which nevertheless continued to operate in the agricultural and commercial sectors as a corporation, and had already happened in 1929 with the termination of the Niassa Company's concession.
In the 1950s, the Portuguese overseas colony was rebranded as an overseas province of Portugal, and by the early 1970s it was officially upgraded to the status of a Portuguese non-sovereign state, by which it would remain a Portuguese territory but with wider administrative autonomy. The Front for the Liberation of Mozambique (FRELIMO) initiated a guerrilla campaign against Portuguese rule in September 1964. This conflict, along with the two others already under way in the Portuguese colonies of Angola and Guinea, became part of the so-called Portuguese Colonial War (1961–74). From a military standpoint, the Portuguese regular army held the upper hand against the independentist guerrilla forces throughout the conflict, which created favourable conditions for social development and economic growth until the end of the fighting in 1974.
After ten years of sporadic warfare, and after Portugal's return to democracy through a leftist military coup in Lisbon that replaced the regime with a military junta (the Carnation Revolution of April 1974), FRELIMO took control of the territory. Talks then began that led to an agreement on Mozambique's independence, signed in Lusaka. Within a year, almost the entire ethnic Portuguese population had left: many fled in fear (in mainland Portugal they were known as "retornados"), while others were expelled by the ruling power of the newly independent territory. Mozambique became independent from Portugal on 25 June 1975.
At least since the early 19th century, Mozambique was legally considered as much a part of Portugal as Lisbon, but as an overseas province it enjoyed special derogations to account for its distance from Europe.
From 1837, the highest government official in the province of Mozambique was the governor-general, who reported directly to the government in Lisbon, usually through the Minister of the Overseas. During some periods in the late 19th and early 20th centuries, the governors-general of Mozambique received the status of royal commissioners or of high commissioners, which gave them extended executive and legislative powers, equivalent to those of a government minister.
In the 20th century, the province was also subject to the authoritarian regime that ruled Portugal from 1933 until the 1974 military coup in Lisbon known as the Carnation Revolution. Most members of the government of Mozambique were from Portugal, but a few were Africans. Nearly all members of the bureaucracy were from Portugal, as most Africans did not have the necessary qualifications to obtain positions.
The government of Mozambique, as in Portugal itself, was highly centralized. Power was concentrated in the executive branch, and all elections, where they occurred, were carried out using indirect methods. From the Prime Minister's office in Lisbon, authority extended down to the most remote posts of Mozambique through a rigid chain of command. The authority of the government of Mozambique was residual, primarily limited to implementing policies already decided in Europe. In 1967, Mozambique also sent seven delegates to the National Assembly in Lisbon.
The highest official in the province was the governor-general, appointed by the Portuguese cabinet on recommendation of the Overseas Minister. The governor-general had both executive and legislative authority. A Government Council advised the governor-general in the running of the province. The functional cabinet consisted of five secretaries appointed by the Overseas Minister on the advice of the governor. A Legislative Council had limited powers and its main activity was approving the provincial budget. Finally, an Economic and Social Council had to be consulted on all draft legislation, and the governor-general had to justify his decision to Lisbon if he ignored its advice.
Mozambique was divided into nine districts, which were further subdivided into 61 municipalities () and 33 circumscriptions (). Each subdivision was then made up of three or four individual posts, 166 in all with an average of 40,000 Africans in each. Each district, except Lourenço Marques which was run by the governor-general, was overseen by a governor. Most Africans only had contact with the Portuguese through the post administrator, who was required to visit each village in his domain at least once a year.
The lowest level of administration was the , settlements inhabited by Africans living according to customary law. Each was run by a , an African or Portuguese official chosen on the recommendation of local residents. Under the , each village had its own African headman.
Each level of government could also have an advisory board or council. These were established in municipalities with more than 500 electors, in smaller municipalities or circumscriptions with more than 300 electors, and in posts with more than 20 electors. Each district had its own board as well.
Two legal systems were in force — Portuguese civil law and African customary law. Until 1961, Africans were considered to be Natives (), rather than citizens. After 1961, the previous native laws were repealed and Africans gained "de facto" Portuguese citizenship.
Portuguese East Africa was located in south-eastern Africa. It was a long coastal strip dotted with Portuguese strongholds, stretching from present-day Tanzania and Kenya to the south of present-day Mozambique.
In 1900, the part of modern Mozambique northwest of the Zambezi and Shire Rivers was called ; the rest of it was . Various districts existed, and even issued stamps, during the first part of the century, including Inhambane, , Mozambique Colony, Mozambique Company, Nyassa Company, Quelimane, Tete, and . The Nyassa Company territory is now and .
In the early and mid-20th century, a number of changes occurred. First, on 28 June 1919, the Treaty of Versailles transferred the Kionga Triangle, a territory south of the Rovuma River, from German East Africa to Mozambique.
During World War II, the Charter of the Mozambique Company expired, on 19 July 1942; its territory, known as Manica and Sofala, became a district of Mozambique. Mozambique was constituted as four districts on 1 January 1943 — Manica and Sofala, , (South of the Save River), and .
On 20 October 1954, administrative reorganization caused and Mozambique districts to be split from . At the same time, the district was divided into Gaza, Inhambane and , while the district was split from Manica and Sofala.
By the early 1970s, Mozambique bordered the Mozambique Channel and the countries of Malawi, Rhodesia, South Africa, Swaziland, Tanzania, and Zambia. It had a tropical to subtropical climate, and the Zambezi flowed through the north-central and most fertile part of the country. Its highest point was Monte Binga. The Gorongosa National Park, founded in 1920, was the main natural park in the territory.
The districts, with their respective capitals, were:
Other important urban centres included Sofala, Nacala, António Enes, Island of Mozambique and Vila Junqueiro.
By 1970, the Portuguese overseas province of Mozambique had about 8,168,933 inhabitants. Nearly 300,000 were white ethnic Portuguese, and a number of mulattoes, of mixed European and African ancestry, lived across the territory. Other minorities included British, Greeks, Chinese and Indians. Most inhabitants, however, were black indigenous Africans with a diversity of ethnic and cultural backgrounds, ranging from the Makua–Lomwe, Shangaan and Makonde to the Yao and Shona peoples. The Makua were the largest ethnic group in the north; the Sena and Shona (mostly Ndau) were prominent in the Zambezi valley; and the Shangaan (Tsonga) dominated in the south. Several other minority groups also lived a tribal lifestyle across the territory.
Mozambique had around 250,000 Europeans in 1974, making up around 3% of the population. The territory was cosmopolitan, with Indian, Chinese, Greek and Anglophone communities (over 25,000 Indians and 5,000 Chinese by the early 1970s). The white population was strongly influenced by neighbouring South Africa. The capital of Portuguese Mozambique, Lourenço Marques (now Maputo), had a population of 355,000 in 1970, of whom around 100,000 were Europeans; Beira had around 115,000 inhabitants at the time, including around 30,000 Europeans. In most other cities Europeans made up 10 to 15% of the population, whereas cities in Portuguese Angola had European majorities ranging from 50% to 60%.
Starting in 1926, Portugal's colonial authorities abandoned conceptions of an innate inferiority of Africans and set as their goal the development of a multiethnic society in the African colonies. The establishment of a dual, racialized civil society was formally recognized in the Statute of Indigenous Populations adopted in 1929, which was based on the subjective concept of civilization versus tribalism. In the administration's view, the goal of the civilizing mission would only be achieved after a period of Europeanization or enculturation of African communities.
The statute established a distinction between the colonial citizens, subject to Portuguese law and entitled to all the citizenship rights and duties effective in the metropole, and the "indígenas" (natives), subject to colonial legislation and customary African law. Between the two groups there was a third, small group, the "assimilados", comprising native blacks, mulattoes, Asians, and mixed-race people who had at least some formal education, were not subjected to paid forced labour, were entitled to some citizenship rights, and held a special identification card used to control the movements of forced labour. The "indígenas" were subject to the traditional authorities, who were gradually integrated into the colonial administration and charged with solving disputes, managing access to land, and guaranteeing the flows of workforce and the payment of taxes. As several authors have pointed out, the "indigenato" regime was the political system that subordinated the immense majority of Africans to local authorities entrusted with governing, in collaboration with the lowest echelon of the colonial administration, the native communities described as tribes and assumed to have a common ancestry, language, and culture. The colonial use of traditional law and structures of power was thus an integral part of the process of colonial domination.
In the 1940s, the integration of traditional authorities into the colonial administration was deepened. The Portuguese colony was divided into (municipalities), in urban areas, governed by colonial and metropolitan legislation, and (localities), in rural areas. The were led by a colonial administrator and divided into (subdivisions of circunscrições), headed by (tribal chieftains), the embodiment of traditional authorities. Provincial Portuguese Decree No. 5.639, of July 29, 1944, attributed to and their assistants, the , the status of (administrative assistants). Gradually, these traditional titles lost some of their content, and the and came to be viewed as an effective part of the colonial state, remunerated for their participation in the collection of taxes, recruitment of the labor force, and agricultural production in the area under their control. Within the areas of their jurisdiction, the and the also controlled the distribution of land and settled conflicts according to customary norms. To exercise their power, the and had their own police force.
The "indigenato" regime was abolished in 1960. From then on, all Africans were considered Portuguese citizens, and racial discrimination became a sociological rather than a legal feature of colonial society. In fact, the rule of traditional authorities became even more integrated than before in the colonial administration. Legally speaking, by the 1960s and 1970s segregation in Mozambique was minimal compared to that in neighbouring South Africa.
The largest coastal cities, the first founded or settled by the Portuguese in the 16th century, like the capital Lourenço Marques, as well as Beira, Quelimane, Nacala and Inhambane, were modern cosmopolitan ports and a melting pot of several cultures, with a strong South African influence. Southeast African and Portuguese cultures were dominant, but the influence of Arab, Indian, and Chinese cultures was also felt. The cuisine was diverse, owing especially to Portuguese and Muslim heritage, and seafood was also quite abundant.
Lourenço Marques had always been a point of interest for artistic and architectural development since the first days of its urban expansion, and this strong artistic spirit was responsible for attracting some of the world's most forward-thinking architects at the turn of the 20th century. The city was home to masterpieces of building work by Pancho Guedes, Herbert Baker and Thomas Honney, among others. The earliest architectural efforts around the city focused on classical European designs, such as the Central Train Station (CFM), designed by architects Alfredo Augusto Lisboa de Lima, Mario Veiga and Ferreira da Costa and built between 1913 and 1916 (sometimes mistaken for the work of Gustave Eiffel), and the Hotel Polana, designed by Herbert Baker.
As the 1960s and 1970s approached, Lourenço Marques was yet again at the center of a new wave of architectural influences, made most popular by Pancho Guedes. The designs of the 1960s and 1970s were characterized by modernist movements of clean, straight and functional structures. However, prominent architects such as Pancho Guedes fused this with local art schemes, giving the city's buildings a unique Mozambican theme. As a result, most of the properties erected during the second construction boom take on these styling cues.
Since the 15th century, Portugal had founded settlements, trading posts, forts and ports along the coast of Sub-Saharan Africa. Cities, towns and villages were founded all over the East African territory by the Portuguese, especially from the 19th century, like Lourenço Marques, Beira, Vila Pery, Vila Junqueiro, Vila Cabral and Porto Amélia. Others, like Quelimane, Nampula and Sofala, were expanded and developed greatly under Portuguese rule. By this time, Mozambique had become a Portuguese colony, but administration was left to trading companies (like the Mozambique Company and the Niassa Company) which had received long-term leases from Lisbon. By the mid-1920s, the Portuguese had succeeded in creating a highly exploitative and coercive settler economy, in which African natives were forced to work on the fertile lands taken over by Portuguese settlers. Indigenous African peasants mainly produced cash crops designated for sale in the markets of the colonial metropole, i.e. Portugal. Major cash crops included cotton, cashews, tea and rice. This arrangement ended in 1932 after the takeover in Portugal by the new government of António de Oliveira Salazar, the Estado Novo. Thereafter, Mozambique, along with the other Portuguese colonies, was put under the direct control of Lisbon. In 1951, it became an overseas province. The economy expanded rapidly during the 1950s and 1960s, attracting thousands of Portuguese settlers to the country. It was around this time that the first nationalist guerrilla groups began to form in Tanzania and other African countries. The strong industrial and agricultural development that occurred throughout the 1950s, 1960s and early 1970s was based on Portuguese development plans, and also included British and South African investment.
In 1959–60, Mozambique's major exports included cotton, cashew nuts, tea, sugar, copra and sisal. Other major agricultural products included rice and coconut. The expanding economy of the Portuguese overseas province was fuelled by foreign direct investment and by public investment, which included ambitious state-managed development plans. British capital owned two of the large sugar concessions (the third was Portuguese), including the famous Sena estates. The Matola Oil Refinery, Procon, was controlled by Britain and the United States. In 1948 the petroleum concession was given to the Mozambique Gulf Oil Company. At Moatize, coal was mined; the industry was chiefly financed by Belgian capital. 60% of the capital of the was held by the , 30% by the Mozambique Company, and the remaining 10% by the government of the territory. Three banks were in operation: the , Portuguese; Barclays Bank, D.C.O., British; and the (a partnership between the Standard Bank of South Africa and mainland's ). Nine of the twenty-three insurance companies were Portuguese, and 80% of life assurance was in the hands of foreign companies, which testifies to the openness of the economy.
The Portuguese overseas province of Mozambique was the first territory of Portugal, including the European mainland, to distribute Coca-Cola. Later, the Oil Refinery was established by the (SONAREP), a Franco-Portuguese syndicate. Swiss capital was invested in the sisal plantations, and a combination of Portuguese, Swiss and French capital was invested in copra concerns. The large availability of capital of both Portuguese and international origin, allied to the wide range of natural resources and the growing urban population, led to impressive growth and development of the economy.
The late stages of this notable period of high growth and development effort, begun in the 1950s, saw the construction by the Portuguese of the Cahora Bassa Dam, whose reservoir started to fill in December 1974 after construction had commenced in 1969. In 1971, construction of the Massingir Dam began. At independence, Mozambique's industrial base was well developed by Sub-Saharan African standards, thanks to a boom in investment in the 1960s and early 1970s; indeed, in 1973, its value added in manufacturing was the sixth highest in Sub-Saharan Africa.
Economically, Mozambique was a source of agricultural raw materials and an earner of foreign exchange. It also provided a market for Portuguese manufactures, which were protected from local competition. Transportation facilities had been developed to exploit the transit trade of South Africa, Swaziland, Rhodesia, Malawi, and Zambia; agricultural production for export had been encouraged; and profitable arrangements for the export of labour had been made with neighbouring countries. Industrial production had been relatively insignificant but began to increase in the 1960s. The economic structure generally favoured the taking of profits to Portugal rather than their reinvestment in Mozambique. The Portuguese interests which dominated banking, industry, and agriculture exerted a powerful influence on policy.
Mozambique's rural black populations were largely illiterate. However, some thousands of Africans were educated in religion, Portuguese language, and Portuguese history by Catholic and Protestant missionary schools established in cities and in the countryside.
In 1930, primary schooling became racially segregated. Africans who did not hold assimilated status had to enroll in "rudimentary schools," whereas whites and the few thousand assimilated Africans had access to "primary schools" of better quality.
Starting in the early 1940s, access to education was expanded at all levels. Nevertheless, "rudimentary schools" retained their poor quality. In 1956, there were 292,199 African students enrolled in first grade; of these, only 9,486 had successfully passed third grade by 1959. By 1970, only 7.7% of Mozambique's population was literate.
A comprehensive network of secondary schools (the "liceus") and technical or vocational schools was implemented across the cities and main towns of the territory. However, access to these institutions was largely limited to whites. In 1960, only 30 out of 1,000 students of the "Liceu Salazar" were Africans, despite whites making up only 2% of the Mozambican population.
In 1962, the first Mozambican university was founded by the Portuguese authorities: the .
The Portuguese-ruled territory was introduced to several popular European and North American sports from the early urban and economic booms of the 1920s and 1940s. This was a period of city and town expansion and modernization that included the construction of several sports facilities for football, rink hockey, basketball, volleyball, handball, athletics, gymnastics and swimming. Several sports clubs were founded across the entire territory, among them some of the largest and oldest sports organizations of Mozambique, like , established in 1920. Other major sports clubs were founded in the following years: (1921), (1924), (1928), (1943), (1943), and (1955). Several sportsmen, especially football players, who achieved wide notability in Portuguese sports were from Mozambique; Eusébio and Mário Coluna were examples of this, and excelled in the Portugal national football team. From the 1960s, with the latest developments in commercial aviation, the highest-ranked football teams of Mozambique and the other African overseas provinces of Portugal started to compete in the (the Portuguese Cup). There were also several facilities and organizations for golf, tennis and game hunting.
Nautical sports were also well developed and popular, especially in , home to the . The largest stadium was the , located near . Opened in 1968, it was at the time the most advanced stadium in Mozambique, conforming to standards set by both FIFA and the Union Cycliste Internationale (UCI). The cycling track could be adjusted to allow for 20,000 more seats.
Beginning in the 1950s, motorsport was introduced to Mozambique. At first, race cars competed in areas around the city and Polana, but as funding and interest increased, a dedicated race track was built in the Costa do Sol area, with the ocean to the east. The initial surface of the new track did not provide enough grip, and an accident in the late 1960s killed 8 people and injured many more. Therefore, in 1970, the track was renovated, lengthened, and resurfaced to meet the international safety requirements needed for large events with many spectators. The city became host to several international and local events, beginning with the track's inauguration on 26 November 1970.
As communist and anti-colonial ideologies spread out across Africa, many clandestine political movements were established in support of Mozambique's independence. These movements claimed that policies and development plans were primarily designed by the ruling authorities for the benefit of the ethnic Portuguese population, affecting a majority of the indigenous population who suffered both state-sponsored discrimination and enormous social pressure. Many felt they had received too little opportunity or resources to upgrade their skills and improve their economic and social situation to a degree comparable to that of the Europeans. Statistically, Portuguese Mozambique's whites were indeed wealthier and more skilled than the black indigenous majority, in spite of decreasing legal discrimination of Africans starting in the 1960s.
The Front for the Liberation of Mozambique (FRELIMO), headquartered in Tanzania, initiated a guerrilla campaign against Portuguese rule in September 1964. This conflict, along with the two others already initiated in the Portuguese overseas territories of Angola and Portuguese Guinea, became part of the Portuguese Colonial War (1961–74). Several African territories under European rule had achieved independence in the preceding decades; Oliveira Salazar attempted to resist this tide and maintain the integrity of the Portuguese empire. By 1970, the anti-guerrilla war in Africa was consuming an important part of the Portuguese budget, with no sign of a final solution in sight. That year was marked by a large-scale military operation in northern Mozambique, the Gordian Knot Operation, which displaced FRELIMO's bases and destroyed much of the guerrillas' military capacity. By 1973, part of Guinea-Bissau was de facto independent, though the capital and major towns remained under Portuguese control, while in Angola and Mozambique the independence movements were active only in a few remote countryside areas from which the Portuguese Army had retreated. Even so, their persistent presence dominated public anxiety. Throughout the war period, Portugal faced increasing dissent, arms embargoes and other punitive sanctions imposed by most of the international community. For Portuguese society, the war was becoming ever more unpopular due to its length and financial costs, the worsening of diplomatic relations with other United Nations members, and the role it had always played as a factor of perpetuation of the regime. It was this escalation that would lead directly to the mutiny of members of the Portuguese Armed Forces (FAP) in the Carnation Revolution of 1974, an event that led to the independence of the former Portuguese colonies in Africa.
A leftist military coup in Lisbon on 24 April 1974 by the Armed Forces Movement (MFA) overthrew the Estado Novo regime headed by Prime Minister Marcelo Caetano.
As one of the objectives of the MFA, all the Portuguese overseas territories in Africa were offered independence. FRELIMO took complete control of the Mozambican territory after a transition period, as agreed in the Lusaka Accord which recognized Mozambique's right to independence and the terms of the transfer of power.
Within a year of the Portuguese military coup in Lisbon, almost all of the Portuguese population had left the African territory as refugees (in mainland Portugal they were known as "retornados") – some expelled by the new ruling power of Mozambique, some fleeing in fear. A parade and a state banquet completed the independence festivities in the capital, which was expected to be renamed Can Phumo, or "Place of Phumo", after a Tsonga chief who lived in the area before the Portuguese navigator Lourenço Marques founded the city in 1545 and gave his name to it. Most city streets, named for Portuguese heroes or important dates in Portuguese history, had their names changed.
The Abyss
The Abyss is a 1989 American science fiction film written and directed by James Cameron and starring Ed Harris, Mary Elizabeth Mastrantonio, and Michael Biehn. When an American submarine sinks in the Caribbean, the U.S. search and recovery team works with an oil platform crew, racing against Soviet vessels to recover the boat. Deep in the ocean, they encounter something unexpected. The film was released on August 9, 1989, to generally positive reviews, and grossed $89.8 million. It won the Academy Award for Best Visual Effects and was nominated for three more Academy Awards.
In January 1994, the American submarine USS "Montana" has an encounter with an unidentified submerged object and sinks near the Cayman Trough. With Soviet ships moving in to try to salvage the sub and a hurricane moving over the area, the U.S. government sends a SEAL team to "Deep Core", a privately owned experimental underwater drilling platform near the Cayman Trough, to use as a base of operations. The platform's designer, Dr. Lindsey Brigman, insists on going along with the SEAL team, despite her estranged husband, Virgil "Bud" Brigman, being the current foreman.
During the initial investigation of the "Montana", a power cut in the team's submersibles leads to Lindsey seeing a strange light circling the sub, which she later calls a "non-terrestrial intelligence" or "NTI". Lt. Hiram Coffey, the SEAL team leader, is ordered to accelerate the mission and takes one of the mini-subs without "Deep Core"'s permission to recover a Trident missile warhead from the "Montana" just as the storm hits above, leaving the crew unable to disconnect from their surface support ship in time. The cable crane is torn from the ship and falls into the trench, dragging "Deep Core" to the edge before it stops. The rig is partially flooded, killing several crew members and damaging its power systems.
The crew wait out the storm so they can restore communications and be rescued. As they struggle against the cold, they find the NTIs have formed an animated column of water that is exploring the rig. Though the crew treat it with curiosity, an agitated Coffey cuts it in half by closing a pressure bulkhead on it, causing it to retreat. Realizing that Coffey is suffering paranoia from high-pressure nervous syndrome, the crew spy on him through a remotely operated vehicle, finding him and another SEAL arming the warhead to attack the NTIs. Bud fights Coffey to stop him, but Coffey escapes in a mini-sub with the primed warhead; Bud and Lindsey give chase in the other sub, damaging both. Coffey manages to launch the warhead into the trench, but his sub drifts over the edge, crushing him when it implodes. Bud's mini-sub is inoperable and taking on water; with only one functional diving suit, Lindsey opts to enter deep hypothermia as the ocean's cold water engulfs her. Bud swims back to the platform with her body; there, he and the crew administer CPR and revive her.
One SEAL, Ensign Monk, helps Bud use an experimental diving suit equipped with a liquid-breathing apparatus to survive at that depth, though Bud will only be able to communicate through a keypad on the suit. Bud begins his dive, assisted by Lindsey's voice to keep him coherent against the effects of the mounting pressure, and reaches the warhead. Monk guides him in successfully disarming it. With little oxygen left in the system, Bud explains that he knew it was a one-way trip and tells Lindsey he loves her. As he waits for death, an NTI approaches Bud, takes his hand, and guides him to an alien ship deep in the trench. Inside the ship, the NTIs create an atmospheric pocket for Bud, allowing him to breathe normally. The NTIs then play back Bud's message to his wife, and they look at each other with understanding.
On "Deep Core" the crew is waiting for rescue when they see a message from Bud that he met some friends and warns them to hold on. The base shakes and lights from the trench bring the arrival of the alien ship. It rises to the ocean's surface, with "Deep Core" and several of the surface ships run aground on its hull. The crew of "Deep Core" exit the platform, surprised they are not suffering from decompression sickness. They see Bud walking out of the alien ship and Lindsey races to hug him.
In the extended version, the events in the film are played against a backdrop of conflict between the United States and the Soviet Union, with the potential for all-out war; the sinking of the "Montana" additionally fuels the aggression. There is more conflict between Bud and Lindsey in regard to their former relationship. The primary addition is the ending: when Bud is taken to the alien ship, they start by showing him images of war and aggression from news sources around the globe. The aliens then create massive megatsunamis that threaten the world's coasts, but stop them short before they hit. Bud asks why they spared the humans and they show Bud his message to Lindsey.
H. G. Wells was the first to introduce the notion of a sea alien, in his 1897 short story "In the Abyss". The idea for "The Abyss" came to James Cameron when, at age 17 and in high school, he attended a science lecture about deep-sea diving by Francis J. Falejczyk, the first human to breathe fluid through his lungs in experiments conducted by Johannes A. Kylstra. Cameron subsequently wrote a short story that focused on a group of scientists in a laboratory at the bottom of the ocean. The basic idea did not change, but many of the details were modified over the years. Once Cameron arrived in Hollywood, he quickly realized that a group of scientists was not that commercial and changed it to a group of blue-collar workers. While making "Aliens", Cameron saw a "National Geographic" film about remotely operated vehicles working deep in the North Atlantic Ocean. These images reminded him of his short story, and he and producer Gale Anne Hurd decided that "The Abyss" would be their next film. Cameron wrote a treatment combined with elements of a shooting script, which generated a lot of interest in Hollywood. He then wrote the script, basing the character of Lindsey on Hurd, and finished it by the end of 1987. Cameron and Hurd were married before "The Abyss", separated during pre-production, and divorced in February 1989, two months after principal photography ended.
The cast and crew trained for underwater diving for one week in the Cayman Islands. This was necessary because 40% of all live-action principal photography took place underwater. Furthermore, Cameron's production company had to design and build experimental equipment and develop a state-of-the-art communications system that allowed the director to talk underwater to the actors and dialogue to be recorded directly onto tape for the first time.
Cameron had originally planned to shoot on location in the Bahamas, where the story was set, but quickly realized that he needed a completely controlled environment because of the stunts and special visual effects involved. He considered shooting the film in Malta, which had the largest unfiltered water tank, but it was not adequate for his needs. Underwater sequences were ultimately shot at a unit of the Gaffney Studios, situated south of Cherokee Falls outside Gaffney, South Carolina: the site of the unfinished Cherokee Nuclear Power Plant, abandoned by Duke Power officials after $700 million had been spent on its construction.
Two specially constructed tanks were used. The first, built in the abandoned plant's primary reactor containment vessel, was 55 feet (17 m) deep and 209 feet (64 m) across; at the time, it was the largest filtered fresh-water tank in the world. Additional scenes were shot in a second tank, an unused turbine pit. As the production crew rushed to finish painting the main tank, millions of gallons of water poured in; filling it took five days. The Deepcore rig was anchored to a 90-ton concrete column at the bottom of the large tank. It consisted of six partial and complete modules that took over half a year to plan and build from scratch.
Can-Dive Services Ltd., a Canadian commercial diving company that specialized in saturation diving systems and underwater technology, specially manufactured the two working craft (Flatbed and Cab One) for the film. Two million dollars was spent on set construction.
Filming was also done at the largest underground lake in the world—a mine in Bonne Terre, Missouri, which was the background for several underwater shots.
The main tank was not ready in time for the first day of principal photography. Cameron delayed filming for a week and pushed the smaller tank's schedule forward, demanding that it be ready weeks ahead of schedule. Filming eventually began on August 15, 1988, but there were still problems. On the first day of shooting in the main water tank, it sprang a leak and water rushed out at an alarming rate. The studio brought in dam-repair experts to seal it. In addition, enormous pipes with elbow fittings had been improperly installed. There was so much water pressure in them that the elbows blew off.
Cameron's cinematographer, Mikael Salomon, used three cameras in watertight housings that were specially designed. Another special housing was designed for scenes that went from above-water dialogue to below-water dialogue. The filmmakers had to figure out how to keep the water clear enough to shoot and dark enough to look realistic at 2,000 feet (700 m), which was achieved by floating a thick layer of plastic beads in the water and covering the top of the tank with an enormous tarpaulin. Cameron wanted to see the actors' faces and hear their dialogue, and thus hired Western Space and Marine to engineer helmets which would remain optically clear underwater and installed state-of-the-art aircraft quality microphones into each helmet. Safety conditions were also a major factor with the installation of a decompression chamber on site, along with a diving bell and a safety diver for each actor.
The breathing fluid used in the film actually exists, but has only been thoroughly investigated in animals. Over the previous 20 years it had been tested on several animal species, which survived. The rat shown in the film was actually breathing fluid and survived unharmed.
Ed Harris did not actually breathe the fluid. He held his breath inside a helmet full of liquid while being towed 30 feet (9 m) below the surface of the large tank. He recalled that the worst moments were being towed with fluid rushing up his nose and his eyes swelling up. Actors played their scenes at 33 feet (10 m), too shallow a depth for them to need decompression, and rarely stayed down for more than an hour at a time. Cameron and the 26-person underwater diving crew sank to 50 feet (15 m) and stayed down for five hours at a time. To avoid decompression sickness, they would have to hang from hoses halfway up the tank for as long as two hours, breathing pure oxygen.
The cast and crew endured over six months of grueling six-day, 70-hour weeks on an isolated set. At one point, Mary Elizabeth Mastrantonio had a physical and emotional breakdown on the set and on another occasion, Ed Harris burst into spontaneous sobbing while driving home. Cameron himself admitted, "I knew this was going to be a hard shoot, but even I had no idea just how hard. I don't "ever" want to go through this again". For example, for the scene where portions of the rig are flooded with water, he realized that he initially did not know how to minimize the sequence's inherent danger. It took him more than four hours to set up the shot safely. Actor Leo Burmester said, "Shooting "The Abyss" has been the hardest thing I've ever done. Jim Cameron is the type of director who pushes you to the edge, but he doesn't make you do anything he wouldn't do himself." A lightning storm caused a 200-foot (61 m) tear in the black tarpaulin covering the main tank. Repairing it would have taken too much time, so the production began shooting at night. In addition, blooming algae often reduced visibility to 20 feet (6 m) within hours. Over-chlorination led to divers' skin burning and exposed hair being stripped off or turning white.
As production went on, the slow pace and daily mental and physical strain of filming began to wear on the cast and crew. Mary Elizabeth Mastrantonio remembered, "We never started and finished any one scene in any one day". At one point, Cameron told the actors to relieve themselves in their wetsuits to save time between takes. While filming one of many takes of Mastrantonio's character's resuscitation scene—in which she was soaking wet, topless and repeatedly being slapped and pounded on the chest—the camera ran out of film, prompting Mastrantonio to storm off the set yelling, "We are not animals!" For some shots in the scene that focus on Ed Harris, he was yelling at thin air because Mastrantonio refused to film the scene again. Michael Biehn also grew frustrated by the waiting. He claimed that he was in South Carolina for five months and only acted for three to four weeks. He remembered one day being ten meters underwater and "suddenly the lights went out. It was so black I couldn't see my hand. I couldn't surface. I realized I might not get out of there." Harris recalled: "One day we were all in our dressing rooms and people began throwing couches out the windows and smashing the walls. We just had to get our frustrations out." Cameron responded to these complaints, saying, "For every hour they spent trying to figure out what magazine to read, we spent an hour at the bottom of the tank breathing compressed air." After 140 days and $4 million over budget, filming finally wrapped on December 8, 1988. Before the film's release, there were reports from South Carolina that Ed Harris was so upset by the physical demands of the film and Cameron's dictatorial directing style that he said he would refuse to help promote the motion picture. Harris later denied this rumor and helped promote the film. However, after its release and initial promotion, Harris publicly disowned the film, saying "I'm never talking about it and never will." 
Mary Elizabeth Mastrantonio also disowned the film, saying, ""The Abyss" was a lot of things. Fun to make is not one of them."
To create the alien water tentacle, Cameron initially considered cel animation or a tentacle sculpted in clay and then animated via stop-motion techniques with water reflections projected onto it. Phil Tippett suggested Cameron contact Industrial Light & Magic. The special visual effects work was divided up among seven FX divisions, with motion control work by Dream Quest Images and computer graphics and opticals by ILM. ILM designed a program to produce surface waves of differing sizes and kinetic properties for the pseudopod. For the moment where it mimics Bud and Lindsey's faces, eight of Ed Harris's facial expressions and twelve of Mastrantonio's were scanned via software used to create computer-generated sculptures. The set was photographed from every angle and digitally recreated so that the pseudopod could be accurately composited into the live-action footage. The company spent six months creating the 75 seconds of computer graphics needed for the creature. The film was to have opened on July 4, 1989, but its release was delayed for more than a month by production and special effects problems. The animated sequences were supervised by ILM animation director Wes Takahashi. The CGI was produced on Silicon Graphics (SGI) workstations and Apple computers.
Studio executives were nervous about the film's commercial prospects when preview audiences laughed at scenes of serious intent. Industry insiders said that the release delay was because nervous executives ordered the film's ending completely re-shot. There was also the question of the size of the film's budget: 20th Century Fox stated that the budget was $43 million, a figure Cameron himself has reiterated. However, estimates put the figure higher with "The New York Times" estimating the cost at $45 million and one executive claiming it cost $47 million, while box office revenue tracker website "The Numbers" lists the production budget at $70 million.
"The Abyss" was released on August 9, 1989, in 1,533 theaters, where it grossed $9.3 million on its opening weekend and ranked #2 at the box office. It went on to make $54.2 million in North America and $35.5 million throughout the rest of the world for a worldwide total of $89.8 million.
On Rotten Tomatoes, a review aggregator, "The Abyss" has an 89% approval rating based on 45 reviews and an average rating of 7.19/10. The critical consensus states: "The utterly gorgeous special effects frequently overshadow the fact that "The Abyss" is also a totally gripping, claustrophobic thriller, complete with an interesting crew of characters." On Metacritic, the film has an average score of 62 out of 100, based on 14 critics, indicating "generally favorable reviews". The reviews tallied therein are for both the theatrical release and the Special Edition.
David Ansen of "Newsweek", summarizing the theatrical release, wrote, "The payoff to "The Abyss" is pretty damn silly — a portentous "deus ex machina" that leaves too many questions unanswered and evokes too many other films." In her review for "The New York Times", Caryn James claimed that the film had "at least four endings," and "by the time the last ending of this two-and-a-quarter-hour film comes along, the effect is like getting off a demon roller coaster that has kept racing several laps after you were ready to get off." Chris Dafoe, in his review for "The Globe and Mail", wrote, "At its best, "The Abyss" offers a harrowing, thrilling journey through inky waters and high tension. In the end, however, this torpedo turns out to be a dud—it swerves at the last minute, missing its target and exploding ineffectually in a flash of fantasy and fairy-tale schtick."
While praising the film's first two hours as "compelling", the "Toronto Star" remarked, "But when Cameron takes the adventure to the next step, deep into the heart of fantasy, it all becomes one great big deja boo. If we are to believe what Cameron finds way down there, E.T. didn't really phone home, he went surfing and fell off his board." "USA Today" gave the film three out of four stars and wrote, "Most of this underwater blockbuster is 'good,' and at least two action set pieces are great. But the dopey wrap-up sinks the rest 20,000 leagues." In her review for "The Washington Post", Rita Kempley wrote that the film "asks us to believe that the drowned return to life, that the comatose come to the rescue, that driven women become doting wives, that Neptune cares about landlubbers. I'd sooner believe that Moby Dick could swim up the drainpipe." "Halliwell's Film Guide" claimed the film was, "despite some clever special effects, a tedious, overlong fantasy that is more excited by machinery than people." Conversely, Peter Travers of "Rolling Stone" enthused, "["The Abyss" is] the greatest underwater adventure ever filmed, the most consistently enthralling of the summer blockbusters…one of the best pictures of the year."
The release of the Special Edition in 1993 garnered much praise. Each giving it thumbs up, Siskel remarked, ""The Abyss" has been improved," and Ebert added, "It makes the film seem more well rounded." In the book "Reel Views 2", James Berardinelli comments, "James Cameron's "The Abyss" may be the most extreme example of an available movie that demonstrates how the vision of a director, once fully realized on screen, can transform a good motion picture into a great one."
"The Abyss" won the 1990 Oscar for Best Visual Effects (John Bruno, Dennis Muren, Hoyt Yeatman, and Dennis Skotak). It was also nominated for Best Art Direction, Best Cinematography, and Best Sound.
The studio unsuccessfully lobbied hard to get Michael Biehn nominated for the Academy Award for Best Supporting Actor.
Many other film organizations, such as the Academy of Science Fiction, Fantasy & Horror Films, and the American Society of Cinematographers, also nominated "The Abyss". The film ended up winning a total of three other awards from these organizations.
The soundtrack to "The Abyss" was released on August 22, 1989.
Varèse Sarabande, which released the original album, issued a limited-edition (3,000 copies), two-disc album in 2014 featuring the complete score minus the end credits medley, which is absent from both releases.
Even as the film was in the first weeks of its 1989 theatrical release, rumors were circulating of a wave sequence missing from the film's end. As chronicled in the 1993 LaserDisc Special Edition release and later in the 2000 DVD, the pressure to cut the film's running time stemmed from both distribution concerns and Industrial Light & Magic's then-inability to complete the required sequences. From the distributor's perspective, the looming three-hour length limited the number of times the film could be shown each day, assuming that audiences would be willing to sit through the entire film, though 1990's "Dances with Wolves" would shatter both industry-held notions. Further, test audience screenings revealed a surprisingly mixed reaction to the sequences as they appeared in their unfinished form; in post-screening surveys, they dominated both the "Scenes I liked most" and "Scenes I liked least" fields. Contrary to speculation, studio meddling was not the cause of the shortened length; Cameron held final cut as long as the film met a running time of roughly two hours and 15 minutes. He later noted, "Ironically, the studio brass were horrified when I said I was cutting the wave."
What emerges in the winnowing process is only the best stuff. And I think the overall caliber of the film is improved by that. I cut only two minutes of "Terminator". On "Aliens", we took out much more. I even reconstituted some of that in a special (TV) release version.
The sense of something being missing on "Aliens" was greater for me than on "The Abyss", where the film just got consistently better as the cut got along. The film must function as a dramatic, organic whole. When I cut the film together, things that read well on paper, on a conceptual level, didn't necessarily translate to the screen as well. I felt I was losing something by breaking my focus. Breaking the story's focus and coming off the main characters was a far greater detriment to the film than what was gained. The film keeps the same message intact at a thematic level, not at a really overt level, by working in a symbolic way.
Cameron elected to remove the sequences along with other, shorter scenes elsewhere in the film, reducing the running time from roughly two hours and 50 minutes to two hours and 20 minutes and diminishing his signature themes of nuclear peril and disarmament. Subsequent test audience screenings drew substantially better reactions.
Star Mary Elizabeth Mastrantonio publicly expressed regret about some of the scenes selected for removal from the film's theatrical cut: "There were some beautiful scenes that were taken out. I just wish we hadn't shot so much that isn't in the film."
Shortly after the film's premiere, Cameron and video editor Ed Marsh created a longer video cut of "The Abyss" for their own use that incorporated dailies. With the tremendous success of Cameron's "Terminator 2: Judgment Day" in 1991, Lightstorm Entertainment secured a five-year, $500 million financing deal with 20th Century Fox for films produced, directed or written by Cameron. The contract allocated roughly $500,000 of the amount to complete "The Abyss". ILM was commissioned to finish the work they had started three years earlier, with many of the same people who had worked on it originally.
The CGI tools developed for "Terminator 2: Judgment Day" allowed ILM to complete the rumored tidal wave sequence, as well as correcting flaws in rendering for all their other work done for the film.
The tidal wave sequence had originally been designed by ILM as a physical effect, using a plastic wave, but Cameron was dissatisfied with the end result, and the sequence was scrapped. By the time Cameron was ready to revisit "The Abyss", ILM's CGI prowess had finally progressed to an appropriate level, and the wave was rendered as a CGI effect. "Terminator 2: Judgment Day" screenwriter and frequent Cameron collaborator William Wisher had a cameo in the scene as a reporter in Santa Monica who catches the first tidal wave on camera.
When it was discovered that original production sound recordings had been lost, new dialogue and foley were recorded, but since Kidd Brewer had died of a self-inflicted gunshot before he could return to re-loop his dialogue, producers and editors had to lift his original dialogue tracks from the remaining optical-sound prints of the dailies. The Special Edition was dedicated to his memory.
As Alan Silvestri was not available to compose new music for the restored scenes, Robert Garrett, who had composed temporary music for the film's initial cutting in 1989, was chosen to create new music. The Special Edition was completed in December 1992, with 28 minutes added to the film, and saw a limited theatrical release in New York City and Los Angeles on February 26, 1993, and expanded to key cities nationwide in the following weeks. Both versions of the film continue to receive public exhibitions, including a screening of an original 35mm print of the theatrical cut on August 20, 2019, in New York City.
The first THX-certified LaserDisc title of the Special Edition Box Set was released in May 1993, in both Widescreen and Full Screen formats, and was a best-seller for the rest of the year. The Special Edition was released on VHS in 1996 as a part of Fox Video's Widescreen Series with a seven-minute behind-the-scenes featurette with footage that did not appear in the "Under Pressure: The Making of The Abyss" documentary that was included on the LaserDisc and DVD releases. The Special Edition's first DVD release in 2000 was on two discs and featured animated menus, both the theatrical and Special Edition versions of the film via seamless branching along with the LaserDisc's extensive text, artwork and photographic documentation of the film's production, a ten-minute featurette and the sixty-minute documentary "Under Pressure: The Making of The Abyss". The Special Edition is also available in a bare-bones Full Screen version on DVD. All available DVDs are non-anamorphic with the exception of the Chinese DVD produced for Region 6 by Excel Media.
In 2014 the pay cable channels Cinemax and HBO began broadcasting both versions of the film in 1080p. Netflix's UK service began offering the theatrical version in 1080p in 2017. At an October 2014 event, James Cameron and Gale Anne Hurd were asked about a future Blu-ray release for the film. Cameron gestured to the head of Fox Home Entertainment, implying the decision lay with the studio. Five months later another article suggested a spat between Cameron and 20th Century Fox Home Entertainment was responsible for the delay. While promoting the upcoming 30th-anniversary Blu-ray release of "Aliens" at Comic-Con in San Diego in July 2016, James Cameron confirmed that he was working on a remastered 4K transfer of "The Abyss" and that it would be released on Blu-ray for the first time in early 2017. Cameron added, "We've done a wet-gate 4K scan of the original negative, and it's going to look insanely good. We're going to do an authoring pass in the DI for Blu-ray and HDR at the same time."
In March 2018 digital intermediate colourist Skip Kimball posted a photo to his Instagram suggesting that he was working on the film. In November 2018 Cameron told "Empire" magazine that a Blu-ray transfer was "complete for my review" and he hoped it would be ready before 2019.
Science-fiction author Orson Scott Card was hired to write a novelization of the film based on the screenplay and discussions with Cameron. He wrote back-stories for Bud, Lindsey, and Coffey as a means not only of helping the actors define their roles, but also to justify some of their behavior and mannerisms in the film. Card also wrote the aliens as a colonizing species which preferentially sought high-pressure deep-water worlds to build their ships as they traveled further into the galaxy (their mothership was in orbit on the far side of the moon). The NTIs' knowledge of neuroanatomy and nanoscale manipulation of biochemistry was responsible for many of the "deus ex machina" aspects of the film.
A licensed interactive fiction video game based on the script was being developed for Infocom by Bob Bates, but was cancelled when Infocom was shut down by its then-parent company Activision. Sound Source Interactive later created an action video game entitled "The Abyss: Incident at Europa". The game takes place a few years after the film, where the player must find a cure for a deadly virus.
Dedekind cut
In mathematics, Dedekind cuts, named after German mathematician Richard Dedekind but previously considered by Joseph Bertrand, are a method of construction of the real numbers from the rational numbers. A Dedekind cut is a partition of the rational numbers into two non-empty sets "A" and "B", such that all elements of "A" are less than all elements of "B", and "A" contains no greatest element. The set "B" may or may not have a smallest element among the rationals. If "B" has a smallest element among the rationals, the cut corresponds to that rational. Otherwise, the cut defines a unique irrational number which, loosely speaking, fills the "gap" between "A" and "B". In other words, "A" contains every rational number less than the cut, and "B" contains every rational number greater than or equal to the cut. An irrational cut is equated to an irrational number which is in neither set. Every real number, rational or not, is equated to one and only one cut of rationals.
Dedekind cuts can be generalized from the rational numbers to any totally ordered set by defining a Dedekind cut as a partition of a totally ordered set into two non-empty parts "A" and "B", such that "A" is closed downwards (meaning that for all "a" in "A", "x" ≤ "a" implies that "x" is in "A" as well) and "B" is closed upwards, and "A" contains no greatest element. See also completeness (order theory).
It is straightforward to show that a Dedekind cut among the real numbers is uniquely defined by the corresponding cut among the rational numbers. Similarly, every cut of reals is identical to the cut produced by a specific real number (which can be identified as the smallest element of the "B" set). In other words, the number line where every real number is defined as a Dedekind cut of rationals is a complete continuum without any further gaps.
A Dedekind cut is a partition of the rationals ℚ into two subsets "A" and "B" such that:
1. "A" is nonempty.
2. "A" ≠ ℚ (equivalently, "B" is nonempty).
3. If "x", "y" are rationals with "x" < "y" and "y" in "A", then "x" is in "A" ("A" is "closed downwards").
4. If "x" is in "A", then there is some "y" in "A" with "y" > "x" ("A" contains no greatest element).
By relaxing the first two requirements, we formally obtain the extended real number line.
It is more symmetrical to use the ("A","B") notation for Dedekind cuts, but each of "A" and "B" does determine the other. It can be a simplification, in terms of notation if nothing more, to concentrate on one "half" — say, the lower one — and call any downward closed set "A" without greatest element a "Dedekind cut".
If the ordered set "S" is complete, then, for every Dedekind cut ("A", "B") of "S", the set "B" must have a minimal element "b",
hence we must have that "A" is the interval (−∞, "b"), and "B" the interval ["b", +∞).
In this case, we say that "b" "is represented by" the cut ("A","B").
The important purpose of the Dedekind cut is to work with number sets that are "not" complete. The cut itself can represent a number not in the original collection of numbers (most often rational numbers). The cut can represent a number "b", even though the numbers contained in the two sets "A" and "B" do not actually include the number "b" that their cut represents.
For example, if "A" and "B" only contain rational numbers, they can still be cut at √2 by putting every negative rational number in "A", along with every non-negative rational number whose square is less than 2; similarly "B" would contain every positive rational number whose square is greater than or equal to 2. Even though there is no rational value for √2, if the rational numbers are partitioned into "A" and "B" this way, the partition itself represents an irrational number.
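As an illustrative sketch (not part of Dedekind's own construction), the lower set of this cut can be expressed as a computable membership test on exact rationals; the function name `in_A` is ours:

```python
from fractions import Fraction

def in_A(q):
    """Membership test for the lower set A of the cut representing sqrt(2):
    A = {q in Q : q < 0 or q*q < 2}. The complement of A is B."""
    return q < 0 or q * q < 2

# Every rational falls on exactly one side of the partition.
for q in [Fraction(-3), Fraction(7, 5), Fraction(3, 2), Fraction(2)]:
    print(q, "is in", "A" if in_A(q) else "B")
# 7/5 lies in A (49/25 < 2) while 3/2 lies in B (9/4 >= 2).
```

Using `Fraction` keeps the arithmetic exact, so the test never misclassifies a rational near √2 the way floating point could.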
Regard one Dedekind cut ("A", "B") as "less than" another Dedekind cut ("C", "D") (of the same superset) if "A" is a proper subset of "C". Equivalently, if "D" is a proper subset of "B", the cut ("A", "B") is again "less than" ("C", "D"). In this way, set inclusion can be used to represent the ordering of numbers, and all other relations ("greater than", "less than or equal to", "equal to", and so on) can be similarly created from set relations.
The set of all Dedekind cuts is itself a linearly ordered set (of sets). Moreover, the set of Dedekind cuts has the least-upper-bound property, i.e., every nonempty subset of it that has any upper bound has a "least" upper bound. Thus, constructing the set of Dedekind cuts serves the purpose of embedding the original ordered set "S", which might not have had the least-upper-bound property, within a (usually larger) linearly ordered set that does have this useful property.
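A minimal sketch of why the least-upper-bound property holds: identifying each cut with its lower half "A", the supremum of a bounded family of cuts is simply the union of their lower sets. The helper functions below are our own illustration, restricted to finite families for computability:

```python
from fractions import Fraction

def cut_of_rational(r):
    """The cut representing a rational r, encoded as a membership
    predicate for its lower set A = {q in Q : q < r}."""
    return lambda q: q < r

def supremum(cuts):
    """Least upper bound of a (finite) family of cuts: the union of their
    lower sets, which is again downward closed with no greatest element."""
    return lambda q: any(cut(q) for cut in cuts)

# sup {1/2, 2/3, 3/4, ..., 199/200}; the full infinite family n/(n+1)
# would have supremum 1.
family = [cut_of_rational(Fraction(n, n + 1)) for n in range(1, 200)]
sup = supremum(family)
print(sup(Fraction(99, 100)))  # True: 99/100 < 100/101, so it is in the union
print(sup(Fraction(1)))        # False: 1 belongs to no lower set in the family
```

The union is itself a valid lower set, and any cut that bounds the whole family from above must contain that union, which is exactly the least-upper-bound property.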
A typical Dedekind cut of the rational numbers ℚ is given by the partition ("A", "B") with
"A" = {"a" ∈ ℚ : "a"² < 2 or "a" < 0} and "B" = {"b" ∈ ℚ : "b"² ≥ 2 and "b" ≥ 0}.
This cut represents the irrational number √2 in Dedekind's construction. The essential idea is that we use the set "A", consisting of the negative rationals together with the non-negative rationals whose squares are less than 2, to "represent" √2, and further, by defining arithmetic operators properly over these sets (addition, subtraction, multiplication, and division), these sets (together with these arithmetic operations) form the familiar real numbers.
To establish this, one must show that "A" really is a cut (according to the definition) and that the square of "A", that is "A" × "A" (please refer to the link above for the precise definition of how the multiplication of cuts is defined), is 2 (note that rigorously speaking, 2 here means the cut {"x" ∈ ℚ : "x" < 2}). To show the first part, we show that for any positive rational "x" with "x"² < 2, there is a rational "y" with "x" < "y" and "y"² < 2. The choice "y" = (2"x" + 2)/("x" + 2) works, thus "A" is indeed a cut. Now armed with the multiplication between cuts, it is easy to check that "A" × "A" ≤ 2 (essentially, this is because "xy" ≤ 2 for all "x", "y" in "A" with "x", "y" ≥ 0). Therefore, to show that "A" × "A" = 2, we show that "A" × "A" ≥ 2, and it suffices to show that for any rational "r" < 2, there exists "x" in "A" with "x"² > "r". For this, notice that if 2 − "x"² = ε > 0, then 2 − "y"² ≤ ε/2 for the "y" constructed above; this means that we have a sequence in "A" whose square becomes arbitrarily close to 2, which finishes the proof.
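The map "y" = (2"x" + 2)/("x" + 2) from the proof can be iterated to watch the squares creep up toward 2. A short sketch with exact rational arithmetic:

```python
from fractions import Fraction

def step(x):
    """The y = (2x + 2)/(x + 2) from the proof: if 0 < x and x*x < 2,
    then x < y, y*y < 2, and 2 - y*y is at most half of 2 - x*x."""
    return (2 * x + 2) / (x + 2)

x = Fraction(1)
for _ in range(5):
    x = step(x)
    print(x, "square =", float(x * x))
# Produces 4/3, 7/5, 24/17, 41/29, ... with squares 1.777..., 1.96,
# 1.9930..., 1.9988..., approaching 2 strictly from below.
```

Because each step at least halves the gap 2 − "x"², the squares of the iterates exceed any rational "r" < 2 after finitely many steps, which is exactly the "A" × "A" ≥ 2 half of the argument.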
Note that the equality "y"² = 2 cannot hold for any rational "y", since √2 is not rational.
A construction similar to Dedekind cuts is used for the construction of surreal numbers.
More generally, if "S" is a partially ordered set, a "completion" of "S" means a complete lattice "L" with an order-embedding of "S" into "L". The notion of "complete lattice" generalizes the least-upper-bound property of the reals.
One completion of "S" is the set of its "downwardly closed" subsets, ordered by inclusion. A related completion that preserves all existing sups and infs of "S" is obtained by the following construction: For each subset "A" of "S", let "A"u denote the set of upper bounds of "A", and let "A"l denote the set of lower bounds of "A". (These operators form a Galois connection.) Then the Dedekind–MacNeille completion of "S" consists of all subsets "A" for which ("A"u)l = "A"; it is ordered by inclusion. The Dedekind–MacNeille completion is the smallest complete lattice with "S" embedded in it.
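A brute-force sketch of the Dedekind–MacNeille construction for a small finite poset (it enumerates all subsets, so it is exponential in |"S"| and purely illustrative):

```python
from itertools import chain, combinations

def upper_bounds(A, S, le):
    """A^u: the elements of S that are >= every element of A."""
    return {u for u in S if all(le(a, u) for a in A)}

def lower_bounds(A, S, le):
    """A^l: the elements of S that are <= every element of A."""
    return {l for l in S if all(le(l, a) for a in A)}

def dedekind_macneille(S, le):
    """All subsets A of S satisfying (A^u)^l = A, ordered by inclusion."""
    subsets = chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))
    return {frozenset(A) for A in subsets
            if lower_bounds(upper_bounds(set(A), S, le), S, le) == set(A)}

# Example: a two-element antichain {a, b}, where only x <= x holds.
S = {"a", "b"}
le = lambda x, y: x == y
completion = dedekind_macneille(S, le)
print(sorted(sorted(c) for c in completion))
# [[], ['a'], ['b'], ['a', 'b']]: the completion adds a bottom and a top.
```

For the antichain, the empty set (whose upper bounds are all of "S", whose lower bounds are empty) becomes the new bottom and "S" itself the new top, giving the smallest complete lattice containing the two incomparable elements.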
DNA vaccination
DNA vaccination is a technique for protecting against disease by injection with a genetically engineered plasmid containing the DNA sequence encoding the antigen(s) against which an immune response is sought; the recipient's cells then produce the antigen directly, causing a protective immunological response. DNA vaccines have potential advantages over conventional vaccines, including the ability to induce a wider range of immune response types.
Several DNA vaccines are available for veterinary use. Currently no DNA vaccines have been approved for human use. Research is investigating the approach for viral, bacterial and parasitic diseases in humans, as well as for several cancers.
DNA vaccines are third generation vaccines. They contain DNA that codes for specific proteins (antigens) from a pathogen. The DNA is injected into the body and taken up by cells, whose normal metabolic processes synthesize proteins based on the genetic code in the plasmid that they have taken up. Because these proteins contain regions of amino acid sequences that are characteristic of bacteria or viruses, they are recognized as foreign and when they are processed by the host cells and displayed on their surface, the immune system is alerted, which then triggers immune responses.
Alternatively, the DNA may be encapsulated in protein to facilitate cell entry. If this capsid protein is included in the DNA, the resulting vaccine can approach the potency of a live vaccine without its reversion risks. In 1983, Enzo Paoletti and Dennis Panicali at the New York Department of Health devised a strategy to produce recombinant DNA vaccines by using genetic engineering to transform ordinary smallpox vaccine into vaccines that may be able to prevent other diseases. They altered the DNA of cowpox virus by inserting a gene from other viruses (namely Herpes simplex virus, hepatitis B and influenza).
In 2016 a DNA vaccine for the Zika virus began testing at the National Institutes of Health. The study was planned to involve up to 120 subjects between 18 and 35. Separately, Inovio Pharmaceuticals and GeneOne Life Science began tests of a different DNA vaccine against Zika in Miami. The NIH vaccine is injected into the upper arm under high pressure. Manufacturing the vaccines in volume remains unsolved.
Clinical trials for DNA vaccines to prevent HIV are underway.
No DNA vaccines have been approved for human use in the United States. Few experimental trials have evoked a response strong enough to protect against disease and the technique's usefulness remains to be proven in humans. A veterinary DNA vaccine to protect horses from West Nile virus has been approved.
DNA vaccines elicit the best immune response when highly active expression vectors are used. These are plasmids that usually consist of a strong viral promoter to drive the in vivo transcription and translation of the gene (or complementary DNA) of interest. Intron A may sometimes be included to improve mRNA stability and hence increase protein expression. Plasmids also include a strong polyadenylation/transcriptional termination signal, such as bovine growth hormone or rabbit beta-globin polyadenylation sequences. Polycistronic vectors (ones encoding multiple gene products from a single construct) are sometimes constructed to express more than one immunogen, or to express an immunogen and an immunostimulatory protein.
Because the plasmid is the “vehicle” from which the immunogen is expressed, optimising vector design for maximal protein expression is essential. One way of enhancing protein expression is by optimising the codon usage of pathogenic mRNAs for eukaryotic cells. Pathogens often have different AT-contents than the target species, so altering the gene sequence of the immunogen to reflect the codons more commonly used in the target species may improve its expression.
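As a toy illustration of the codon-usage idea, a coding sequence can be recoded codon by codon to the target species' preferred synonyms. The tables below are invented for the example, not real codon-usage data:

```python
# Hypothetical, simplified tables: a few synonymous codons per amino acid
# and an invented "preferred codon" per amino acid for the target species.
CODON_TO_AA = {"GCT": "A", "GCC": "A", "AAA": "K", "AAG": "K",
               "TTA": "L", "CTG": "L"}
PREFERRED = {"A": "GCC", "K": "AAG", "L": "CTG"}

def optimise(cds):
    """Replace each codon with the target species' preferred synonym,
    leaving the encoded protein sequence unchanged."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(PREFERRED[CODON_TO_AA[c]] for c in codons)

print(optimise("GCTAAATTA"))  # GCCAAGCTG: same protein (A-K-L), new codons
```

Real codon optimisation uses species-wide usage frequencies (and avoids introducing unwanted motifs), but the principle is the same: the protein is unchanged while expression in the target cells improves.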
Another consideration is the choice of promoter. The SV40 promoter was conventionally used until research showed that vectors driven by the Rous sarcoma virus (RSV) promoter had much higher expression rates. More recently, expression rates have been further increased by the use of the cytomegalovirus (CMV) immediate early promoter. Inclusion of the Mason-Pfizer monkey virus (MPV)-CTE, with or without rev, increased envelope expression. Furthermore, the CTE+rev construct was significantly more immunogenic than the CTE-alone vector. Additional modifications to improve expression rates include the insertion of enhancer sequences, synthetic introns, adenovirus tripartite leader (TPL) sequences and modifications to the polyadenylation and transcriptional termination sequences. An example of a DNA vaccine plasmid is pVAC, which uses the SV40 promoter.
Structural instability is of particular concern in plasmid manufacture, DNA vaccination and gene therapy. Accessory regions of the plasmid backbone may engage in a wide range of structural instability phenomena. Well-known catalysts of genetic instability include direct, inverted and tandem repeats, which are conspicuous in many commercially available cloning and expression vectors. Reducing or completely eliminating extraneous noncoding backbone sequences would therefore markedly reduce the propensity for such events, and consequently the plasmid's overall recombinogenic potential.
Once the plasmid reaches the nucleus of the transfected cell, it directs the synthesis of a peptide string from the foreign antigen. The cell displays the foreign antigen on its surface on both major histocompatibility complex (MHC) class I and class II molecules. The antigen-presenting cell then travels to the lymph nodes, where it presents the antigen peptide together with a costimulatory molecule to T-cells, initiating the immune response.
Immunogens can be targeted to various cellular compartments to improve antibody or cytotoxic T-cell responses. Secreted or plasma membrane-bound antigens are more effective at inducing antibody responses than cytosolic antigens, while cytotoxic T-cell responses can be improved by targeting antigens for cytoplasmic degradation and subsequent entry into the major histocompatibility complex (MHC) class I pathway. This is usually accomplished by the addition of N-terminal ubiquitin signals.
The conformation of the protein can also affect antibody responses. “Ordered” structures (such as viral particles) are more effective than unordered structures. Strings of minigenes (or MHC class I epitopes) from different pathogens raise cytotoxic T-cell responses to some pathogens, especially if a TH epitope is also included.
DNA vaccines have been introduced into animal tissues by multiple methods.
The two most popular approaches are injection of DNA in saline using a standard hypodermic needle, and gene gun delivery. Injection in saline is normally conducted intramuscularly (IM) into skeletal muscle, or intradermally (ID), delivering DNA to the extracellular spaces. This can be assisted by electroporation; by temporarily damaging muscle fibres with myotoxins such as bupivacaine; or by using hypertonic solutions of saline or sucrose. Immune responses to this method can be affected by factors including needle type, needle alignment, speed of injection, volume of injection, muscle type, and the age, sex and physiological condition of the recipient.
Gene gun delivery ballistically accelerates plasmid DNA (pDNA) that has been adsorbed onto gold or tungsten microparticles into the target cells, using compressed helium as the accelerant.
The delivery method determines the dose required to raise an effective immune response. Saline injections require variable amounts of DNA, from 10 μg to 1 mg, whereas gene gun deliveries require 100 to 1000 times less. Generally, 0.2 μg to 20 μg are required, although quantities as low as 16 ng have been reported. These quantities vary by species: mice, for example, require approximately 10 times less DNA than primates. Saline injections require more DNA because the DNA is delivered to the extracellular spaces of the target tissue (normally muscle), where it has to overcome physical barriers (such as the basal lamina and large amounts of connective tissue) before it is taken up by the cells, while gene gun deliveries bombard DNA directly into the cells, resulting in less “wastage”.
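The dose relationship just described is simple arithmetic; as an illustration (the 100- and 1000-fold factors are the ranges quoted above, not values from any specific study):

```python
# Gene gun delivery is quoted as needing 100 to 1000 times less DNA
# than saline injection; convert a saline dose to that range.
def gene_gun_dose_range_ug(saline_dose_ug):
    """Return (upper, lower) equivalent gene-gun doses in micrograms."""
    return saline_dose_ug / 100, saline_dose_ug / 1000

upper, lower = gene_gun_dose_range_ug(1000)  # a 1 mg saline dose
print(upper, lower)  # 10.0 1.0
```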
Alternatives include aerosol instillation of naked DNA on mucosal surfaces, such as the nasal and lung mucosa, and topical administration of pDNA to the eye and vaginal mucosa. Mucosal surface delivery has also been achieved using cationic liposome-DNA preparations, biodegradable microspheres, attenuated "Salmonella", "Shigella" or "Listeria" vectors for oral administration to the intestinal mucosa, and recombinant adenovirus vectors. Another alternative vector is a hybrid vehicle composed of a bacterial cell and synthetic polymers. An "E. coli" inner core and a poly(beta-amino ester) outer coat function synergistically to increase efficiency by addressing barriers to antigen-presenting cell gene delivery, including cellular uptake and internalization, phagosomal escape and intracellular cargo concentration. Tested in mice, the hybrid vector was found to induce an immune response.
Another approach to DNA vaccination is expression library immunization (ELI). Using this technique, potentially all the genes from a pathogen can be delivered at one time, which may be useful for pathogens that are difficult to attenuate or culture. ELI can be used to identify which genes induce a protective response. This has been tested with "Mycoplasma pulmonis", a murine lung pathogen with a relatively small genome. Even partial expression libraries can induce protection from subsequent challenge.
DNA immunization can raise multiple TH responses, including lymphoproliferation and the generation of a variety of cytokine profiles. A major advantage of DNA vaccines is the ease with which they can be manipulated to bias the type of T-cell help towards a TH1 or TH2 response. Each type has distinctive patterns of lymphokine and chemokine expression, specific types of immunoglobulins, patterns of lymphocyte trafficking and types of innate immune responses.
The type of T-cell help raised is influenced by the delivery method and the type of immunogen expressed, as well as the targeting of different lymphoid compartments. Generally, saline needle injections (either IM or ID) tend to induce TH1 responses, while gene gun delivery raises TH2 responses. This is true for intracellular and plasma membrane-bound antigens, but not for secreted antigens, which seem to generate TH2 responses, regardless of the method of delivery.
Generally the type of T-cell help raised is stable over time, and does not change when challenged or after subsequent immunizations that would normally have raised the opposite type of response in a naïve specimen. However, Mor "et al." (1995) immunized and boosted mice with pDNA encoding the circumsporozoite protein of the mouse malarial parasite "Plasmodium yoelii" (PyCSP) and found that the initial TH2 response changed, after boosting, to a TH1 response.
How these different delivery methods and forms of expressed antigen give rise to different profiles of T-cell help is not understood. It was thought that the relatively large amounts of DNA used in IM injection were responsible for the induction of TH1 responses. However, evidence shows no dose-related differences in TH type. The type of T-cell help raised is instead determined by the differentiated state of the antigen-presenting cells. Dendritic cells can differentiate to secrete IL-12 (which supports TH1 cell development) or IL-4 (which supports TH2 responses). pDNA injected by needle is endocytosed into the dendritic cell, which is then stimulated to differentiate for TH1 cytokine production, while the gene gun bombards the DNA directly into the cell, thus bypassing TH1 stimulation.
Polarisation in T-cell help is useful in influencing allergic responses and autoimmune diseases. In autoimmune diseases, the goal is to shift the self-destructive TH1 response (with its associated cytotoxic T cell activity) to a non-destructive TH2 response. This has been successfully applied in predisease priming for the desired type of response in preclinical models and is somewhat successful in shifting the response for an established disease.
One of the advantages of DNA vaccines is that they are able to induce cytotoxic T lymphocytes (CTL) without the inherent risk associated with live vaccines. CTL responses can be raised against immunodominant and immunorecessive CTL epitopes, as well as subdominant CTL epitopes, in a manner that appears to mimic natural infection. This may prove to be a useful tool in assessing CTL epitopes and their role in providing immunity.
Cytotoxic T-cells recognise small peptides (8-10 amino acids) complexed to MHC class I molecules. These peptides are derived from endogenous cytosolic proteins that are degraded and delivered to the nascent MHC class I molecule within the endoplasmic reticulum (ER). Targeting gene products directly to the ER (by the addition of an amino-terminal insertion sequence) should thus enhance CTL responses. This was successfully demonstrated using recombinant vaccinia viruses expressing influenza proteins, but the principle should also be applicable to DNA vaccines. Targeting antigens for intracellular degradation (and thus entry into the MHC class I pathway) by the addition of ubiquitin signal sequences, or mutation of other signal sequences, was shown to be effective at increasing CTL responses.
CTL responses can be enhanced by co-inoculation with co-stimulatory molecules such as B7-1 or B7-2 for DNA vaccines against influenza nucleoprotein, or GM-CSF for DNA vaccines against the murine malaria model "P. yoelii". Co-inoculation with plasmids encoding IL-12 and TCA3 was also shown to increase CTL activity against HIV-1 and influenza nucleoprotein antigens.
Antibody responses elicited by DNA vaccination are influenced by multiple variables, including antigen type; antigen location (i.e. intracellular vs. secreted); the number, frequency and dose of immunizations; and the site and method of antigen delivery.
Humoral responses after a single DNA injection can be much longer-lived than after a single injection with a recombinant protein. Antibody responses against hepatitis B virus (HBV) envelope protein (HBsAg) have been sustained for up to 74 weeks without boost, while lifelong maintenance of protective response to influenza haemagglutinin was demonstrated in mice after gene gun delivery. Antibody-secreting cells migrate to the bone marrow and spleen for long-term antibody production, and generally localise there after one year.
Comparisons of antibody responses generated by natural (viral) infection, immunization with recombinant protein and immunization with pDNA are summarised in Table 4. DNA-raised antibody responses rise much more slowly than those following natural infection or recombinant protein immunization. As many as 12 weeks may be required to reach peak titres in mice, although boosting can decrease the interval. This response is probably due to the low levels of antigen expressed over several weeks, which support both primary and secondary phases of the antibody response. A DNA vaccine expressing the HBV small and middle envelope proteins was injected into adults with chronic hepatitis. The vaccine induced specific interferon gamma-producing cells, and specific T-cells against middle envelope protein antigens also developed, but the immune response of the patients was not robust enough to control HBV infection.
Additionally, the titres of specific antibodies raised by DNA vaccination are lower than those obtained after vaccination with a recombinant protein. However, DNA immunization-induced antibodies show greater affinity to native epitopes than recombinant protein-induced antibodies. In other words, DNA immunization induces a qualitatively superior response. Antibodies can be induced after one vaccination with DNA, whereas recombinant protein vaccinations generally require a boost. DNA immunization can be used to bias the TH profile of the immune response and thus the antibody isotype, which is not possible with either natural infection or recombinant protein immunization. Antibody responses generated by DNA are useful as a preparative tool. For example, polyclonal and monoclonal antibodies can be generated for use as reagents.
When DNA uptake and subsequent expression was first demonstrated "in vivo" in muscle cells, these cells were thought to be unique because of their extensive network of T-tubules. Using electron microscopy, it was proposed that DNA uptake was facilitated by caveolae (non-clathrin-coated pits). However, subsequent research revealed that other cells (such as keratinocytes, fibroblasts and epithelial Langerhans cells) could also internalize DNA. The mechanism of DNA uptake is not known.
Two theories dominate – that "in vivo" uptake of DNA occurs non-specifically, in a method similar to phago- or pinocytosis, or through specific receptors. These might include a 30kDa surface receptor, or macrophage scavenger receptors. The 30kDa surface receptor binds specifically to 4500-bp DNA fragments (which are then internalised) and is found on professional APCs and T-cells. Macrophage scavenger receptors bind to a variety of macromolecules, including polyribonucleotides and are thus candidates for DNA uptake. Receptor-mediated DNA uptake could be facilitated by the presence of polyguanylate sequences. Gene gun delivery systems, cationic liposome packaging, and other delivery methods bypass this entry method, but understanding it may be useful in reducing costs (e.g. by reducing the requirement for cytofectins), which could be important in animal husbandry.
Studies using chimeric mice have shown that antigen is presented by bone-marrow derived cells, which include dendritic cells, macrophages and specialised B-cells called professional antigen presenting cells (APC). After gene gun inoculation to the skin, transfected Langerhans cells migrate to the draining lymph node to present antigens. After IM and ID injections, dendritic cells present antigen in the draining lymph node and transfected macrophages have been found in the peripheral blood.
Besides direct transfection of dendritic cells or macrophages, cross-priming occurs following IM, ID and gene gun DNA deliveries. Cross-priming occurs when a bone marrow-derived cell presents peptides from proteins synthesised in another cell in the context of MHC class I. This can prime cytotoxic T-cell responses and seems to be important for a full primary immune response.
IM and ID DNA delivery initiate immune responses differently. In the skin, keratinocytes, fibroblasts and Langerhans cells take up and express antigens and are responsible for inducing a primary antibody response. Transfected Langerhans cells migrate out of the skin (within 12 hours) to the draining lymph node, where they prime secondary B- and T-cell responses. In skeletal muscle, striated muscle cells are most frequently transfected, but seem to be unimportant in the immune response. Instead, IM-inoculated DNA “washes” into the draining lymph node within minutes, where distal dendritic cells are transfected and then initiate an immune response. Transfected myocytes seem to act as a “reservoir” of antigen for trafficking professional APCs.
DNA vaccination generates an effective immune memory via the display of antigen-antibody complexes on follicular dendritic cells (FDC), which are potent B-cell stimulators. T-cells can be stimulated by similar, germinal centre dendritic cells. FDC are able to generate an immune memory because antibody production “overlaps” long-term expression of antigen, allowing antigen-antibody immunocomplexes to form and be displayed by FDC.
Both helper and cytotoxic T-cells can control viral infections by secreting interferons. Cytotoxic T cells usually kill virally infected cells. However, they can also be stimulated to secrete antiviral cytokines such as IFN-γ and TNF-α, which do not kill the cell, but limit viral infection by down-regulating the expression of viral components. DNA vaccinations can be used to curb viral infections by non-destructive IFN-mediated control. This was demonstrated for hepatitis B. IFN-γ is critically important in controlling malaria infections and is a consideration for anti-malarial DNA vaccines.
An effective vaccine must induce an appropriate immune response for a given pathogen. DNA vaccines can polarise T-cell help towards TH1 or TH2 profiles and generate CTL and/or antibody when required. This can be accomplished by modifications to the form of antigen expressed (i.e. intracellular vs. secreted), the method and route of delivery, or the dose. It can also be accomplished by the co-administration of plasmid DNA encoding immune regulatory molecules, i.e. cytokines, lymphokines or co-stimulatory molecules. These “genetic adjuvants” can be administered in a number of forms.
In general, co-administration of pro-inflammatory agents (such as various interleukins, tumor necrosis factor, and GM-CSF) plus TH2-inducing cytokines increase antibody responses, whereas pro-inflammatory agents and TH1-inducing cytokines decrease humoral responses and increase cytotoxic responses (more important in viral protection). Co-stimulatory molecules such as B7-1, B7-2 and CD40L are sometimes used.
This concept was applied in topical administration of pDNA encoding IL-10. Plasmid encoding B7-1 (a ligand on APCs) successfully enhanced the immune response in tumour models. Mixing plasmids encoding GM-CSF and the circumsporozoite protein of "P. yoelii" (PyCSP) enhanced protection against subsequent challenge (whereas plasmid-encoded PyCSP alone did not). It was proposed that GM-CSF caused dendritic cells to present antigen more efficiently and enhance IL-2 production and TH cell activation, thus driving the increased immune response. This can be further enhanced by first priming with a pPyCSP and pGM-CSF mixture, followed by boosting with a recombinant poxvirus expressing PyCSP. However, co-injection of plasmids encoding GM-CSF (or IFN-γ, or IL-2) and a fusion protein of "P. chabaudi" merozoite surface protein 1 (C-terminus)-hepatitis B virus surface protein (PcMSP1-HBs) abolished protection against challenge, compared to protection acquired by delivery of pPcMSP1-HBs alone.
The advantages of genetic adjuvants are their low cost and simple administration, as well as avoidance of unstable recombinant cytokines and potentially toxic “conventional” adjuvants (such as alum, calcium phosphate, monophosphoryl lipid A, cholera toxin, cationic and mannan-coated liposomes, QS21, carboxymethyl cellulose and ubenimex). However, the potential toxicity of prolonged cytokine expression has not been established. In many commercially important animal species, cytokine genes have not been identified and isolated. In addition, various plasmid-encoded cytokines modulate the immune system differently according to the delivery time. For example, some cytokine plasmid DNAs are best delivered after the immunogen pDNA, because pre- or co-delivery can decrease specific responses and increase non-specific responses.
Plasmid DNA itself appears to have an adjuvant effect on the immune system. Bacterially derived DNA can trigger innate immune defence mechanisms, the activation of dendritic cells and the production of TH1 cytokines. This is due to recognition of certain CpG dinucleotide sequences that are immunostimulatory. CpG stimulatory (CpG-S) sequences occur twenty times more frequently in bacterially derived DNA than in eukaryotic DNA, because eukaryotes exhibit “CpG suppression”, i.e. CpG dinucleotide pairs occur much less frequently than expected. Additionally, CpG-S sequences are hypomethylated: CpG motifs in bacterial DNA are typically unmethylated, while those occurring in eukaryotes are methylated at the cytosine nucleotide. In contrast, nucleotide sequences that inhibit the activation of an immune response (termed CpG neutralising, or CpG-N) are over-represented in eukaryotic genomes. The optimal immunostimulatory sequence is an unmethylated CpG dinucleotide flanked by two 5’ purines and two 3’ pyrimidines. Additionally, flanking regions outside this immunostimulatory hexamer must be guanine-rich to ensure binding and uptake into target cells.
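The optimal hexamer just described (purine, purine, CG, pyrimidine, pyrimidine) maps directly onto a regular expression, and "CpG suppression" can be quantified as an observed/expected CpG ratio. A minimal sketch, using a made-up example sequence:

```python
import re

# [AG][AG]CG[CT][CT]: two 5' purines, the CpG, two 3' pyrimidines.
# The lookahead lets overlapping motifs be counted as well.
CPG_S = re.compile(r"(?=([AG][AG]CG[CT][CT]))")

def cpg_s_motifs(dna):
    """Return every CpG-S hexamer found in the sequence."""
    return [m.group(1) for m in CPG_S.finditer(dna)]

def cpg_obs_exp(dna):
    """Observed/expected CpG ratio; values well below 1 suggest the
    'CpG suppression' typical of eukaryotic genomes."""
    expected = dna.count("C") * dna.count("G") / len(dna)
    return dna.count("CG") / expected if expected else 0.0

seq = "TTAACGTTAAGACGTC"  # made-up example sequence
print(cpg_s_motifs(seq))  # ['AACGTT', 'GACGTC']
```

Note that this sketch only finds the sequence motif; whether a given CpG is actually unmethylated, as immunostimulation requires, cannot be read from the sequence alone.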
The innate system works with the adaptive immune system to mount a response against the DNA encoded protein. CpG-S sequences induce polyclonal B-cell activation and the upregulation of cytokine expression and secretion. Stimulated macrophages secrete IL-12, IL-18, TNF-α, IFN-α, IFN-β and IFN-γ, while stimulated B-cells secrete IL-6 and some IL-12.
Manipulation of CpG-S and CpG-N sequences in the plasmid backbone of DNA vaccines can ensure the success of the immune response to the encoded antigen and drive the immune response toward a TH1 phenotype. This is useful if a pathogen requires a TH1 response for protection. CpG-S sequences have also been used as external adjuvants for both DNA and recombinant protein vaccination, with variable success rates. Other organisms with hypomethylated CpG motifs have demonstrated the stimulation of polyclonal B-cell expansion. However, the mechanism behind this may be more complicated than simple methylation: hypomethylated murine DNA has not been found to elicit an immune response.
Most of the evidence for immunostimulatory CpG sequences comes from murine studies. Extrapolation of this data to other species requires caution – individual species may require different flanking sequences, as binding specificities of scavenger receptors vary across species. Additionally, species such as ruminants may be insensitive to immunostimulatory sequences due to their large gastrointestinal load.
DNA-primed immune responses can be boosted by the administration of recombinant protein or recombinant poxviruses. "Prime-boost" strategies with recombinant protein have successfully increased both neutralising antibody titre, and antibody avidity and persistence, for weak immunogens, such as HIV-1 envelope protein. Recombinant virus boosts have been shown to be very efficient at boosting DNA-primed CTL responses. Priming with DNA focuses the immune response on the required immunogen, while boosting with the recombinant virus provides a larger amount of expressed antigen, leading to a large increase in specific CTL responses.
Prime-boost strategies have been successful in inducing protection against malarial challenge in a number of studies. Mice primed with plasmid DNA encoding the "Plasmodium yoelii" circumsporozoite surface protein (PyCSP) and then boosted with a recombinant vaccinia virus expressing the same protein had significantly higher levels of antibody, CTL activity and IFN-γ, and hence higher levels of protection, than mice immunized and boosted with plasmid DNA alone. This can be further enhanced by priming with a mixture of plasmids encoding PyCSP and murine GM-CSF, before boosting with recombinant vaccinia virus. An effective prime-boost strategy for the simian malarial model "P. knowlesi" has also been demonstrated. Rhesus monkeys were primed with a multicomponent, multistage DNA vaccine encoding two liver-stage antigens – the circumsporozoite surface protein (PkCSP) and sporozoite surface protein 2 (PkSSP2) – and two blood-stage antigens – the apical merozoite surface protein 1 (PkAMA1) and merozoite surface protein 1 (PkMSP1p42). They were then boosted with a recombinant canarypox virus encoding all four antigens (ALVAC-4). Immunized monkeys developed antibodies against sporozoites and infected erythrocytes, and IFN-γ-secreting T-cell responses against peptides from PkCSP. Partial protection against sporozoite challenge was achieved, and mean parasitemia was significantly reduced compared to control monkeys. These models, while not ideal for extrapolation to "P. falciparum" in humans, will be important in pre-clinical trials.
The efficiency of DNA immunization can be improved by stabilising DNA against degradation, and increasing the efficiency of delivery of DNA into antigen-presenting cells. This has been demonstrated by coating biodegradable cationic microparticles (such as poly(lactide-co-glycolide) formulated with cetyltrimethylammonium bromide) with DNA. Such DNA-coated microparticles can be as effective at raising CTL as recombinant viruses, especially when mixed with alum. Particles 300 nm in diameter appear to be most efficient for uptake by antigen presenting cells.
Recombinant alphavirus-based vectors have been used to improve DNA vaccination efficiency. The gene encoding the antigen of interest is inserted into the alphavirus replicon, replacing structural genes but leaving the non-structural replicase genes intact. The Sindbis virus and Semliki Forest virus have been used to build recombinant alphavirus replicons. Unlike conventional DNA vaccinations, alphavirus vectors kill transfected cells and are only transiently expressed. Alphavirus replicase genes are expressed in addition to the vaccine insert. It is not clear how alphavirus replicons raise an immune response, but it may be due to the high levels of protein expressed by this vector, replicon-induced cytokine responses, or replicon-induced apoptosis leading to enhanced antigen uptake by dendritic cells.
County Donegal
County Donegal is a county of Ireland in the province of Ulster. It is named after the town of Donegal (meaning 'fort of the foreigners') in the south of the county. It has also been known as County Tyrconnell (meaning 'Land of Conall'), after the historic territory of the same name, on which it was based. Donegal County Council is the local council and Lifford the county town.
The population was 159,192 at the 2016 census.
In terms of size and area, it is the largest county in Ulster and the fourth-largest county in all of Ireland. Uniquely, County Donegal shares a small border with only one other county in the Republic of Ireland – County Leitrim. The greater part of its land border is shared with three counties of Northern Ireland: County Londonderry, County Tyrone and County Fermanagh. This geographic isolation from the rest of the Republic has led to Donegal people maintaining a distinct cultural identity and has been used to market the county with the slogan "Up here it's different". While Lifford is the county town, Letterkenny is by far the largest town in the county with a population of 19,588. Letterkenny and the nearby city of Derry form the main economic axis of the northwest of Ireland. Indeed, what became the City of Derry was officially part of County Donegal up until 1610.
There are eight historic baronies in the county.
The county may be informally divided into a number of traditional districts. There are two Gaeltacht districts in the west: The Rosses, centred on the town of Dungloe, and Gweedore. Another Gaeltacht district is located in the north-west: Cloughaneely, centred on the town of Falcarragh. The most northerly part of the island of Ireland is the location of three peninsulas: Inishowen, Fanad and Rosguill. The main population centre of Inishowen, Ireland's largest peninsula, is Buncrana. In the east of the county lies the Finn Valley (centred on Ballybofey). The Laggan district (not to be confused with the Lagan Valley in the south of County Antrim) is centred on the town of Raphoe.
According to the 1841 Census, County Donegal had a population of 296,000 people. As a result of famine and emigration, the population had fallen by 41,000 by 1851 and by a further 18,000 by 1861. By the time of the 1951 Census the population was only 44% of what it had been in 1841. As of the 2016 census, the county's population was 159,192.
The county is the most mountainous in Ulster, consisting chiefly of two ranges of low mountains: the Derryveagh Mountains in the north and the Blue Stack Mountains in the south, with Errigal as the highest peak. It has a deeply indented coastline forming natural sea loughs, of which Lough Swilly and Lough Foyle are the most notable. The Slieve League cliffs are the sixth-highest sea cliffs in Europe, while Malin Head is the most northerly point on the island of Ireland.
The climate is temperate and dominated by the Gulf Stream, with warm, damp summers and mild wet winters. Two permanently inhabited islands, Arranmore and Tory Island, lie off the coast, along with a large number of islands with only transient inhabitants. Ireland's second longest river, the Erne, enters Donegal Bay near the town of Ballyshannon. The River Erne, along with other Donegal waterways, has been dammed to produce hydroelectric power. The River Foyle separates part of County Donegal from parts of both counties Londonderry and Tyrone.
A survey of the macroscopic marine algae of County Donegal was published in 2003. The survey was compiled using the algal records held in the herbaria of the following institutions: the Ulster Museum, Belfast; Trinity College, Dublin; NUI Galway, and the Natural History Museum, London.
Records of flowering plants include "Dactylorhiza purpurella" (Stephenson and Stephenson) Soó.
Animals found in the county include the European badger ("Meles meles" L.).
There are habitats for the rare corn crake ("Crex crex") in the county.
At various times in its history, it has been known as County Tirconaill, County Tirconnell or County Tyrconnell. County Tirconaill was used as its official name during 1922–1927. These names refer to both the old "túath" of Tír Chonaill and the earldom that succeeded it.
County Donegal was the home of the once mighty Clann Dálaigh, whose best known branch were the Clann Ó Domhnaill, better known in English as the O'Donnell dynasty. Until around 1600, the O'Donnells were one of Ireland's richest and most powerful native Irish ruling families. Within Ulster, only the Uí Néill (known in English as the O'Neill Clan) of modern County Tyrone were more powerful. The O'Donnells were Ulster's second most powerful "clan" or ruling-family from the early 13th century through to the start of the 17th century. For several centuries the O'Donnells ruled Tír Chonaill, a Gaelic kingdom in West Ulster that covered almost all of modern County Donegal. The head of the O'Donnell family had the titles "An Ó Domhnaill" (meaning "The O'Donnell" in English) and "Rí Thír Chonaill" (meaning "King of Tír Chonaill" in English). Based at Donegal Castle in "Dún na nGall" (modern Donegal), the O'Donnell "Kings of Tír Chonaill" were traditionally inaugurated at Doon Rock near Kilmacrennan. O'Donnell royal or chiefly power was finally ended in what was then the newly created County Donegal in September 1607, following the Flight of the Earls from near Rathmullan. The modern "County Arms of Donegal" (dating from the early 1970s) was influenced by the design of the old O'Donnell royal arms. The "County Arms" is the official coat of arms of both County Donegal and Donegal County Council.
The modern County Donegal was shired by order of the English Crown in 1585. The English authorities at Dublin Castle formed the new county by amalgamating the old Kingdom of Tír Chonaill with the old Lordship of Inishowen. However, although detachments of the Royal Irish Army were stationed there, the Dublin authorities were unable to establish control over Tír Chonaill and Inishowen until after the Battle of Kinsale in 1602. Full control over the new County Donegal was only achieved after the Flight of the Earls in September 1607. The county was the centre of O'Doherty's Rebellion of 1608, with the key Battle of Kilmacrennan taking place there. The county was one of those 'planted' during the Plantation of Ulster from around 1610 onwards. What became the City of Derry was officially part of County Donegal up until 1610.
County Donegal was one of the worst affected parts of Ulster during the Great Famine of the late 1840s in Ireland. Vast swathes of the county were devastated by this catastrophe, many areas becoming permanently depopulated. Vast numbers of County Donegal's people emigrated at this time, chiefly through Foyle Port.
The Partition of Ireland in the early 1920s had a massive direct impact on County Donegal. Partition cut the county off, economically and administratively, from Derry, which had acted for centuries as the county's main port, transport hub and financial centre. Derry, together with west Tyrone, was henceforth in a new, different jurisdiction officially called Northern Ireland. Partition also meant that County Donegal was now almost entirely cut off from the rest of the jurisdiction in which it found itself, the new dominion called the Irish Free State, which in April 1949 became the Republic of Ireland. Only a few miles of the county are physically connected by land to the rest of the Republic. The existence of a border cutting Donegal off from its natural hinterland in Derry City and west Tyrone greatly exacerbated the economic difficulties of the county after partition. The county's economy, like that of Derry City, remains particularly susceptible to fluctuations of the euro against sterling.
Added to all this, in the late 20th century County Donegal was adversely affected by The Troubles in Northern Ireland. The county suffered several bombings and assassinations. In June 1987, Constable Samuel McClean, a Donegal man who was a serving member of the Royal Ulster Constabulary, was shot dead by the Provisional Irish Republican Army at his family home near Drumkeen. In May 1991, the prominent Sinn Féin politician Councillor Eddie Fullerton was assassinated by the Ulster Defence Association at his home in Buncrana. This added further to the economic and social difficulties of the county. However, the greater economic and administrative integration following the Good Friday Agreement of April 1998 has been of benefit to the county.
It has been labelled the 'forgotten county' by its own politicians, owing to the perception that it is ignored by the Government of Ireland, even in times of crisis.
The Donegal Gaeltacht (Irish-speaking area) is the second-largest in Ireland. The version of the Irish language spoken in County Donegal is Ulster Irish.
Of the Gaeltacht population of 24,744 (16% of the county's total population), 17,132 say they can speak Irish. There are three Irish-speaking parishes: Gweedore, The Rosses and Cloughaneely. Other Irish-speaking areas include Gaeltacht an Láir: Glencolmcille, Fintown, Fanad and Rosguill, the islands of Arranmore, Tory Island and Inishbofin. Gweedore is the largest Irish-speaking parish, with over 5,000 inhabitants. All schools in the region use Irish as the language of instruction. One of the constituent colleges of NUI Galway, Acadamh na hOllscolaíochta Gaeilge, is based in Gweedore.
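As a rough arithmetic check on the census figures quoted above (all inputs are the rounded values from the text, so the implied county-wide total is only an estimate):

```python
# Sanity check of the Gaeltacht figures quoted above. All inputs are
# the rounded values from the text; the implied county-wide population
# is therefore only an estimate.
gaeltacht_pop = 24_744   # Gaeltacht population
county_share = 0.16      # stated share of the county's total population
irish_speakers = 17_132  # Gaeltacht residents who say they can speak Irish

implied_county_total = gaeltacht_pop / county_share
speaker_share = irish_speakers / gaeltacht_pop

print(f"implied county population: ~{implied_county_total:,.0f}")
print(f"Irish speakers within the Gaeltacht: {speaker_share:.0%}")
```

The figures imply a county population of roughly 155,000, with about seven in ten Gaeltacht residents reporting an ability to speak Irish.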
Donegal County Council (in existence officially since 1899) has responsibility for local administration and is headquartered at the County House in Lifford. Until 2014, there were also town councils in Letterkenny, Bundoran, Ballyshannon and Buncrana. The town councils were abolished in June 2014, when the Local Government Reform Act 2014 was implemented, and their functions were taken over by Donegal County Council. Elections to the County Council take place every five years. Thirty-seven councillors are elected using proportional representation with the single transferable vote (PR-STV). For the purpose of elections, the county is divided into five municipal districts comprising the following local electoral areas (seats in brackets): Donegal (6), Glenties (6), Inishowen (9), Letterkenny (10) and Stranorlar (6).
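The seat allocation listed above can be tallied to confirm the stated size of the council, using only the figures from the text:

```python
# Seats per local electoral area, as listed in the text.
lea_seats = {
    "Donegal": 6,
    "Glenties": 6,
    "Inishowen": 9,
    "Letterkenny": 10,
    "Stranorlar": 6,
}

total = sum(lea_seats.values())
print(f"{total} councillors across {len(lea_seats)} local electoral areas")
```

Summing the five local electoral areas gives the thirty-seven council seats quoted above.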
For general elections, the county-wide constituency elects five representatives to Dáil Éireann. For elections to the European Parliament, the county is part of the Midlands–North-West constituency.
Voters have a reputation nationally for being "conservative and contrarian", the county having achieved prominence for rejecting the Fiscal Treaty in 2012 and both Treaty of Lisbon referendums. In 2018, Donegal was the only county in Ireland to vote against repealing the Eighth Amendment of the Constitution, which had acknowledged the right to life of the unborn.
The Freedom of Donegal is an award that is given to people who have been recognised for outstanding achievements on behalf of the people and County Donegal. Such people include Daniel O'Donnell, Phil Coulter, Shay Given, Packie Bonner, Pat Crerand, Seamus Coleman and the Brennan family.
In 2009 the members of the 28th Infantry Battalion of the Irish Defence Forces were also awarded the Freedom of the County from Donegal County Council "in recognition of their longstanding service to the County of Donegal".
An extensive rail network once existed throughout the county, operated mainly by the County Donegal Railways Joint Committee and the Londonderry and Lough Swilly Railway Company (known as the L. & L.S.R. or, for short, the Lough Swilly Company). Unfortunately, all of these lines were laid to a 3-foot narrow gauge, whereas the connecting lines were laid to the Irish standard gauge of 5 ft 3 in (1,600 mm), meaning that all goods had to be transhipped at Derry and Strabane. Like all narrow-gauge railways, this became a major handicap after World War I, when road transport began to seriously erode the railways' goods traffic.
By 1953 the Lough Swilly had closed its entire railway system and become a bus and road haulage concern. The County Donegal lasted until 1960, having largely dieselised its passenger trains by 1951. By the late 1950s major work was required to upgrade the track, and the Irish Government was unwilling to supply the necessary funds, so 'the Wee Donegal', as it was affectionately known, closed in 1960. The Great Northern Railway (the G.N.R.) also ran a line from Strabane through The Laggan, a district in the east of the county, along the River Foyle into Derry. The railway network within County Donegal was thus completely closed by 1960. Today, the closest railway station to the county is Waterside Station in the City of Derry, operated by Northern Ireland Railways (N.I.R.). Train services along the Belfast–Derry railway line run, via Coleraine railway station, to Belfast Central and Belfast Great Victoria Street railway stations.
County Donegal is served by both Donegal Airport, located at Carrickfinn in The Rosses in the west of the county, and by City of Derry Airport, located at Eglinton to the east. The nearest main international airport to the county is Belfast International Airport (popularly known as Aldergrove Airport), located at Aldergrove, near Antrim Town, in County Antrim, to the east of both Derry City and Letterkenny.
The variant of the Irish language spoken in Donegal shares many traits with Scottish Gaelic. The Irish spoken in the Donegal "Gaeltacht" (Irish-speaking area) is of the Ulster dialect, while Inishowen (parts of which only became English-speaking in the early 20th century) used the East Ulster dialect. Ulster Scots is often spoken in both the Finn Valley and The Laggan district of East Donegal. Donegal Irish has a strong influence on learnt Irish across Ulster.
Like other areas on the western seaboard of Ireland, Donegal has a distinctive fiddle tradition of world renown. Donegal is also well known for its songs, which, like the instrumental music, have a distinctive sound. Donegal musical artists such as the bands Clannad, The Pattersons and Altan, and the solo artist Enya, have had international success with traditional or traditionally flavoured music. Donegal music has also influenced people not originally from the county, including the folk and pop singers Paul Brady and Phil Coulter. Singer Daniel O'Donnell has become a popular ambassador for the county. Popular music is also common, the county's most acclaimed rock artist being the Ballyshannon-born Rory Gallagher. Other acts to come out of Donegal include the folk-rock band Goats Don't Shave, Eurovision contestant Mickey Joe Harte and the indie rock group The Revs. In more recent years, bands such as In Their Thousands and Mojo Gogo have featured on the front page of "Hot Press" magazine.
Donegal has a long literary tradition in both Irish and English. The Irish navvy-turned-novelist Patrick MacGill, author of many books about the experiences of Irish migrant itinerant labourers in Britain around the start of the 20th century, such as "The Rat Pit" and the autobiographical "Children of the Dead End", was from the Glenties area. The MacGill Summer School in Glenties is named in his honour and attracts national interest as a forum for the analysis of current affairs. The novelist and socialist politician Peadar O'Donnell hailed from The Rosses in west Donegal. The poet William Allingham was from Ballyshannon. Modern exponents include the Inishowen playwright and poet Frank McGuinness and the playwright Brian Friel. Many of Friel's plays are set in the fictional Donegal town of Ballybeg.
Authors in Donegal have been creating works, like the "Annals of the Four Masters", in Irish and Latin since the Early Middle Ages. The Irish philosopher John Toland was born in Inishowen in 1670. George Berkeley regarded him as the original freethinker. Toland was also instrumental in the spread of freemasonry throughout Continental Europe. In modern Irish, Donegal has produced a number of authors, such as the brothers Séamus Ó Grianna and Seosamh Mac Grianna from The Rosses, and the contemporary (and sometimes controversial) Irish-language poet Cathal Ó Searcaigh from Gortahork in Cloughaneely, where he is known to locals as "Gúrú na gCnoc" ("Guru of the Hills").
Donegal is known for its textiles, whose unique woolen blends are made of short threads with tiny bits of colour blended in for a heathered effect. Sometimes they are woven in a rustic herringbone format and other times in more of a box weave of varied colours. These weaves are known as donegal tweeds (with a small 'd') and are world renowned.
There is a sizeable minority of Ulster Protestants in Donegal, and many Donegal Protestants trace their ancestry to settlers who arrived during the Plantation of Ulster in the early 17th century. The Church of Ireland is the largest Protestant denomination, with Presbyterianism second. The areas of Donegal with the highest percentage of Protestants are The Laggan area of East Donegal around Raphoe, the Finn Valley and the areas around Ramelton, Milford and Dunfanaghy, where their proportion reaches up to 30–45 per cent. There is also a large Protestant population between Donegal Town and Ballyshannon in the south of the county. In absolute terms, Letterkenny has the largest number of Protestants (over 1,000) and is the most Presbyterian town (among settlements with more than 3,000 people) in the Republic of Ireland.
The Earagail Arts Festival is held within the county each July.
People from Donegal have also contributed to culture elsewhere. Francis Alison was one of the founders of the College of Philadelphia, which would later become the University of Pennsylvania. Francis Makemie (originally from Ramelton) founded the Presbyterian Church in America. David Steele, from Upper Creevaugh, was a prominent Reformed Presbyterian, or Covenanter, minister who emigrated to the United States in 1824. Charles Inglis, who was the first Church of England bishop of the Diocese of Nova Scotia, was the third son of Archibald Inglis, the Rector in Glencolmcille.
Donegal was voted number one on The National Geographic Traveller (UK) 'cool list' for 2017, and the area's attractions include Glenveagh National Park (formerly part of the Glenveagh Estate), the only official "national park" anywhere in the Province of Ulster. The park is a 140 km² (about 35,000 acre) nature reserve with scenery of mountains, raised boglands, lakes and woodlands. At its heart is Glenveagh Castle, a late Victorian 'folly' that was originally built as a summer residence.
The Donegal Gaeltacht (Irish-speaking district) also attracts young people to County Donegal each year during the school summer holidays. The three-week summer Gaeltacht courses give young Irish people from other parts of the country a chance to learn the Irish language and the cultural traditions that are still prevalent in parts of Donegal. The Donegal Gaeltacht has traditionally been a very popular destination each summer for young people from Northern Ireland. Scuba diving is also very popular, with a club located in Donegal Town.
Higher education within the county is provided by Letterkenny Institute of Technology (L.Y.I.T.; popularly known locally as 'the Regional'), established in the 1970s in Letterkenny. In addition, many young people from the county attend third-level institutions elsewhere in Ireland, especially in Derry and also at the Ulster University at Coleraine (U.U.C.), Ulster University at Jordanstown (U.U.J.), Queen's University Belfast ('Queen's'), and NUI Galway. Many Donegal students also attend the Limavady Campus of the North West Regional College (popularly known as Limavady Tech) and the Omagh College of Further Education of South West College (popularly known as Omagh Tech or Omagh College).
The Gaelic Athletic Association (G.A.A.) sport of Gaelic football is very popular in County Donegal. Donegal's inter-county football team have won the All-Ireland Senior Football Championship title twice (in 1992 and 2012) and the Ulster Senior Football Championship ten times. Donegal emerged victorious from the 2012 All-Ireland Senior Football Championship Final on 23 September 2012 to take the Sam Maguire Cup for only the second time, with early goals from Michael Murphy and Colm McFadden setting up a 2–11 to 0–13 victory over Mayo. In 2007, Donegal won only their second national title by winning the National Football League. On 24 April 2011, Donegal added a third national title when they defeated Laois to capture the National Football League Division Two; they added another Division Two title in 2019. There are 16 clubs in the Donegal Senior Football Championship, with many others playing at a lower level.
Hurling (often called 'hurley' within County Donegal), handball and rounders are also played but are less widespread, as in other parts of western Ulster. The Donegal county senior hurling team won the Lory Meagher Cup in 2011 and the Nicky Rackard Cup in 2013.
There are several rugby teams in the county. These include Ulster Qualifying League Two side Letterkenny RFC, whose ground is named after Dave Gallaher, the captain of the 1905 New Zealand All Blacks touring team, who have since become known as The Originals. He was born in nearby Ramelton.
Ulster Qualifying League Three sides include Ballyshannon RFC, Donegal Town RFC and Inishowen RFC. Finn Valley RFC and Tir Chonaill RFC both compete in the Ulster Minor League North.
Finn Harps play in the League of Ireland and won promotion to the Premier Division in 2015 following a 2–1 aggregate win over Limerick F.C. in the playoff final. They retained their status in the Premier Division in the 2016 season. Harps' main rivals are Derry City F.C., with whom they contest Ireland's "North-West Derby". Finn Harps are Donegal's only League of Ireland club, with the county's other clubs playing in either the Ulster Senior League or the local junior leagues.
There are a number of golf courses in the county, such as Ballyliffin Golf Club on the Inishowen peninsula. Other courses of note are Murvagh (outside Donegal Town) and Rosapenna (Sandy Hills) at Downings (near Carrigart). The Glashedy Links was ranked 6th in a recent "Golf Digest" ranking of the best courses in Ireland; the Old Links was ranked 28th, Murvagh 36th and Sandy Hills 38th.
Cricket is chiefly confined to The Laggan district and the Finn Valley in the east of the county. The town of Raphoe and the nearby village of St Johnston, both in The Laggan, are the traditional strongholds of cricket within the county. The game is mainly played and followed by members of the Ulster Protestants of Co. Donegal. St Johnston Cricket Club play in the North West Senior League, while Letterkenny Cricket Club play in the Derry Midweek League.
Donegal's rugged landscape and coastline lend themselves to active sports such as climbing, mountain biking, hillwalking, surfing and kite-flying.
Duchy of Schleswig
The Duchy of Schleswig (Low German: "Hartogdom Sleswig"; North Frisian: "Härtochduum Slaswik") was a duchy in Southern Jutland ("Sønderjylland") covering the area between about 60 km (35 miles) north and 70 km (45 miles) south of the current border between Germany and Denmark. The territory has been divided between the two countries since 1920, with Northern Schleswig in Denmark and Southern Schleswig in Germany. The region is also called Sleswick in English.
From early medieval times, the area's significance lay in being the buffer province of Scandinavia and the Danish Realm towards the powerful Holy Roman Empire to the south, as well as being a transit area for the transfer of goods between the North Sea and the Baltic Sea, connecting the trade route through Russia with the trade routes along the Rhine and the Atlantic coast (see also Kiel Canal).
Roman sources place the homeland of the tribe of Jutes north of the river Eider and that of the Angles south of it. The Angles in turn bordered the neighbouring Saxons. By the early Middle Ages, the region was inhabited by three groups.
During the 14th century, the population on Schwansen began to speak Low German alongside Danish, but otherwise the ethno-linguistic borders remained remarkably stable until around 1800, with the exception of the population in the towns that became increasingly German from the 14th century onwards.
During the early Viking Age, Haithabu – Scandinavia's biggest trading centre – was located in this region, which is also the location of the interlocking fortifications known as the "Danewerk" or "Danevirke". Its construction, and in particular its great expansion around 737, has been interpreted as an indication of the emergence of a unified Danish state. In May 1931, scientists of the National Museum of Denmark announced that they had unearthed eighteen Viking graves containing the remains of eighteen men. The discovery came during excavations in Schleswig. The skeletons indicated that the men were of larger build than twentieth-century Danish men. Each of the graves was laid out from east to west. Researchers surmised that the bodies had originally been entombed in wooden coffins, but only the iron nails remained. Towards the end of the Early Middle Ages, Schleswig formed part of the historical Lands of Denmark, as Denmark unified out of a number of petty chiefdoms in the 8th to 10th centuries in the wake of Viking expansion.
The southern boundary of Denmark in the region of the Eider River and the Danevirke was a source of continuous dispute. The Treaty of Heiligen was signed in 811 between the Danish King Hemming and Charlemagne, by which the border was established at the Eider. During the 10th century, there were several wars between East Francia and Denmark. In 1027, Conrad II and Canute the Great again fixed their mutual border at the Eider.
In 1115, King Niels created his nephew Canute Lavard – a son of his predecessor Eric I – Earl of Schleswig, a title used for only a short time before the recipient began to style himself Duke.
In the 1230s, Southern Jutland (the Duchy of Slesvig) was allotted as an appanage to Abel Valdemarsen, Canute's great-grandson, a younger son of Valdemar II of Denmark. Abel, having wrested the Danish throne for himself for a brief period, left his duchy to his sons and their successors, who pressed claims to the throne of Denmark for much of the next century, so that the Danish kings were at odds with their cousins, the dukes of Slesvig. Feuds and marital alliances brought the Abel dynasty into a close connection with the German Duchy of Holstein by the 15th century. The latter was a fief subordinate to the Holy Roman Empire, while Schleswig remained a Danish fief. These dual loyalties were to become a main root of the dispute between the German states and Denmark in the 19th century, when the ideas of romantic nationalism and the nation-state gained popular support.
The title of Duke of Schleswig was inherited in 1460 by the hereditary kings of Norway, who were regularly also elected kings of Denmark, and by their sons (unlike the Danish crown, which was elective rather than hereditary). This was an anomaly: a king holding a ducal title of which he, as king, was himself the fount and liege lord. The title and the anomaly survived, presumably because the dukedom was already co-regally held by the king's sons. Between 1544 and 1713/20, the ducal reign had become a condominium, with the royal House of Oldenburg and its cadet branch, the House of Holstein-Gottorp, jointly holding the stake. A third branch in the condominium, the short-lived House of Haderslev, became extinct in 1580 on the death of John the Elder.
Following the Protestant Reformation, when Latin was replaced as the medium of church service by the vernacular languages, the diocese of Schleswig was divided and an autonomous archdeaconry of Haderslev created. On the west coast, the Danish diocese of Ribe ended about 5 km (3 miles) north of the present border. This created a new cultural dividing line in the duchy because German was used for church services and teaching in the diocese of Schleswig and Danish was used in the diocese of Ribe and the archdeaconry of Haderslev. This line corresponds remarkably closely with the present border.
In the 17th century a series of wars between Denmark and Sweden—which Denmark lost—devastated the region economically. However, the nobility responded with a new agricultural system that restored prosperity. In the period 1600 to 1800 the region experienced the growth of manorialism of the sort common in the rye-growing regions of eastern Germany. The manors were large holdings with the work done by feudal peasant farmers. They specialized in high quality dairy products. Feudal lordship was combined with technical modernization, and the distinction between unfree labour and paid work was often vague. The feudal system was gradually abolished in the late 18th century, starting with the crown lands in 1765 and later the estates of the nobility. In 1805 all serfdom was abolished and land tenure reforms allowed former peasants to own their own farms.
From around 1800 to 1840, the Danish-speaking population on the Angeln peninsula between Schleswig and Flensburg began to switch to Low German and in the same period many North Frisians also switched to Low German. This linguistic change created a new de facto dividing line between German and Danish speakers north of Tønder and south of Flensburg.
From around 1830, large segments of the population began to identify with either German or Danish nationality and mobilized politically. In Denmark, the National Liberal Party used the Schleswig Question as part of their agitation and demanded that the Duchy be incorporated into the Danish kingdom under the slogan "Denmark to the Eider". This caused a conflict between Denmark and the German states over Schleswig and Holstein, which led to the Schleswig-Holstein Question of the 19th century. When the National Liberals came to power in Denmark in 1848, it provoked an uprising of ethnic Germans who supported Schleswig's ties with Holstein. This led to the First War of Schleswig. Denmark was victorious and the Prussian troops were ordered to pull out of Schleswig and Holstein following the London Protocol of 1852.
Denmark again attempted to integrate Schleswig by creating a new common constitution (the so-called November Constitution) for Denmark and Schleswig in 1863, but the German Confederation, led by Prussia and Austria, defeated the Danes in the Second War of Schleswig the following year. Prussia and Austria then assumed administration of Schleswig and Holstein respectively under the Gastein Convention of 14 August 1865. However, tensions between the two powers culminated in the Austro-Prussian War of 1866. In the Peace of Prague, the victorious Prussians annexed both Schleswig and Holstein, creating the province of Schleswig-Holstein. Provision for the cession of northern Schleswig to Denmark was made pending a popular vote in favour of this. In 1878, however, Austria went back on this provision, and Denmark recognized in a Treaty of 1907 with Germany that, by the agreement between Austria and Prussia, the frontier between Prussia and Denmark had finally been settled.
The Treaty of Versailles provided for plebiscites to determine the ownership of the region. Thus, two referenda were held in 1920, resulting in the partition of the region. Northern Schleswig voted by a majority of 75% to join Denmark, whereas Central Schleswig voted by a majority of 80% to remain part of Germany. In Southern Schleswig, no referendum was held, as the likely outcome was apparent. The name Southern Schleswig is now used for all of German Schleswig. This decision left substantial minorities on both sides of the new border.
Following the Second World War, a substantial part of the German population in Southern Schleswig changed their nationality and declared themselves as Danish. This change was caused by a number of factors, most importantly the German defeat and an influx of a large number of refugees from eastern Germany, whose culture and appearance differed from the local Germans, who were mostly descendants of Danish families who had changed their nationality in the 19th century. The change created a temporary Danish majority in the region and a demand for a new referendum from the Danish population in South Schleswig and some Danish politicians, including prime minister Knud Kristensen. However, the majority in the Danish parliament refused to support a referendum in South Schleswig, fearing that the "new Danes" were not genuine in their change of nationality. This proved to be the case and, from 1948 the Danish population began to shrink again. By the early 1950s, it had nevertheless stabilised at a level four times higher than the pre-war number.
In the Copenhagen-Bonn declaration of 1955, West Germany (later Germany as a whole) and Denmark promised to uphold the rights of each other's minority population. Today, both parts co-operate as a Euroregion, despite a national border dividing the former duchy. As Denmark and Germany are both part of the Schengen Area, for many years, there were no controls at the border. However, in response to the 2016 European migrant crisis, border checks were reintroduced.
In the 19th century, there was a naming dispute concerning the use of "Schleswig" or "Slesvig" and "Sønderjylland" (Southern Jutland). Originally the duchy was called "Sønderjylland" (Southern Jutland) but in the late 14th century the name of the city Slesvig (now Schleswig) started to be used for the whole territory. The term "Sønderjylland" was hardly used between the 16th and 19th centuries, and in this period the name "Schleswig" had no special political connotations. But around 1830, some Danes started to re-introduce the archaic term Sønderjylland to emphasize the area's history before its association with Holstein and its connection with the rest of Jutland. Its revival and widespread use in the 19th century therefore had a clear Danish nationalist connotation of laying a claim to the territory and objecting to the German claims. "Olsen's Map", published by the Danish cartographer Olsen in the 1830s, used this term, arousing a storm of protests by the duchy's German inhabitants. Even though many Danish nationalists, such as the National Liberal ideologue and agitator Orla Lehmann, used the name "Schleswig", it began to assume a clear German nationalist character in the mid 19th century – especially when included in the combined term "Schleswig-Holstein". A central element of the German nationalistic claim was the insistence on Schleswig and Holstein being a single, indivisible entity. Since Holstein was legally part of the German Confederation, and ethnically entirely German with no Danish population, use of that name implied that both provinces should belong to Germany and that their connection with Denmark should be weakened or altogether severed.
After the German conquest in 1864, the term Sønderjylland became increasingly dominant among the Danish population, even though most Danes still had no objection to the use of "Schleswig" as such (it is etymologically of Danish origin) and many of them still used it themselves in its Danish version "Slesvig". An example is the founding of De Nordslesvigske Landboforeninger (The North Schleswig Farmers Association). In 1866 Schleswig and Holstein were legally merged into the Prussian province of Schleswig-Holstein.
The naming dispute was resolved with the 1920 plebiscites and partition, each side applying its preferred name to the part of the territory remaining in its possession – though both terms can, in principle, still refer to the entire region. Northern Schleswig was, after the 1920 plebiscites, officially named the Southern Jutland districts ("de sønderjyske landsdele"), while Southern Schleswig then remained a part of the Prussian province, which became the German state of Schleswig-Holstein in 1946.
Biodefense
Biodefense refers to measures to restore biosecurity to a group of organisms who are, or may be, subject to biological threats or infectious diseases. Biodefense is frequently discussed in the context of biowar or bioterrorism, and is generally considered a military or emergency response term.
Biodefense applies to two distinct target populations: civilian non-combatants and military combatants (troops in the field). Protection of water and food supplies is often a critical part of biodefense.
Military biodefense in the United States began with the United States Army Medical Unit (USAMU) at Fort Detrick, Maryland, in 1956. (In contrast to the U.S. Army Biological Warfare Laboratories [1943–1969], also at Fort Detrick, the USAMU's mission was purely to develop defensive measures against bio-agents, as opposed to weapons development.) The USAMU was disestablished in 1969 and succeeded by today's United States Army Medical Research Institute of Infectious Diseases (USAMRIID).
The United States Department of Defense (or "DoD") has focused since at least 1998 on the development and application of vaccine-based biodefenses. In a July 2001 report commissioned by the DoD, the "DoD-critical products" were stated as vaccines against anthrax (AVA and Next Generation), smallpox, plague, tularemia, botulinum, ricin, and equine encephalitis. Note that two of these targets are toxins (botulinum and ricin) while the remainder are infectious agents.
Notably, all of the classical and modern biological weapons agents cause diseases of animals as well as of humans, the only exception being smallpox. Thus, in any use of biological weapons, it is highly likely that animals will become ill either simultaneously with, or perhaps earlier than, humans.
Indeed, in the largest known biological weapons accident – the 1979 anthrax outbreak in Sverdlovsk (now Yekaterinburg) in the Soviet Union – sheep became ill with anthrax as far as 200 kilometers from the release point of the organism, a military facility in the southeastern portion of the city (known as Compound 19 and still off limits to visitors today; see Sverdlovsk anthrax leak).
Thus, a robust surveillance system involving human clinicians and veterinarians may identify a bioweapons attack early in the course of an epidemic, permitting the prophylaxis of disease in the vast majority of people (and/or animals) exposed but not yet ill.
For example, in the case of anthrax, it is likely that by 24–36 hours after an attack, some small percentage of individuals (those with compromised immune system or who had received a large dose of the organism due to proximity to the release point) will become ill with classical symptoms and signs (including a virtually unique chest X-ray finding, often recognized by public health officials if they receive timely reports). By making these data available to local public health officials in real time, most models of anthrax epidemics indicate that more than 80% of an exposed population can receive antibiotic treatment before becoming symptomatic, and thus avoid the moderately high mortality of the disease.
The goal of biodefense is to integrate the sustained efforts of the national and homeland security, medical, public health, intelligence, diplomatic, and law enforcement communities. Health care providers and public health officers are among the first lines of defense. In some countries private, local, and provincial (state) capabilities are being augmented by and coordinated with federal assets, to provide layered defenses against biological weapons attacks. During the first Gulf War the United Nations activated a biological and chemical response team, Task Force Scorpio, to respond to any potential use of weapons of mass destruction on civilians.
The traditional approach toward protecting agriculture, food, and water, which focuses on the natural or unintentional introduction of a disease, is being strengthened by focused efforts to address current and anticipated future biological weapons threats that may be deliberate, multiple, and repetitive.
The growing threat of biowarfare agents and bioterrorism has led to the development of specific field tools that perform on-the-spot analysis and identification of encountered suspect materials. One such technology, being developed by researchers from the Lawrence Livermore National Laboratory (LLNL), employs a "sandwich immunoassay", in which fluorescent dye-labeled antibodies aimed at specific pathogens are attached to silver and gold nanowires.
The U.S. National Institute of Allergy and Infectious Diseases (NIAID) also participates in the identification and prevention of biowarfare and first released a strategy for biodefense in 2002, periodically releasing updates as new pathogens are becoming topics of discussion. Within this list of strategies, responses for specific infectious agents are provided, along with the classification of these agents. NIAID provides countermeasures after the U.S. Department of Homeland Security details which pathogens hold the most threat.
Planning may involve the development of biological identification systems. Until recently in the United States, most biological defense strategies have been geared to protecting soldiers on the battlefield rather than ordinary people in cities. Financial cutbacks have limited the tracking of disease outbreaks. Some outbreaks, such as food poisoning due to "E. coli" or "Salmonella", could be of either natural or deliberate origin.
Preparedness
Biological agents are relatively easy for terrorists to obtain and are becoming more threatening in the U.S., and laboratories are working on advanced detection systems to provide early warning, identify contaminated areas and populations at risk, and facilitate prompt treatment. Methods for predicting the use of biological agents in urban areas, as well as for assessing an area for the hazards associated with a biological attack, are being established in major cities. In addition, forensic technologies are being developed to identify biological agents, their geographical origins, and/or their initial source. Efforts include decontamination technologies to restore facilities without causing additional environmental concerns.
Early detection and rapid response to bioterrorism depend on close cooperation between public health authorities and law enforcement; however, such cooperation is currently lacking. National detection assets and vaccine stockpiles are not useful if local and state officials do not have access to them.
United States strategy
In September 2018, President Trump and his administration unveiled a new comprehensive plan, the National Biodefense Strategy, for how the government will oversee bioterrorism defense. Currently, 15 federal departments and agencies and 16 branches of the intelligence community work against biological threats, and the work of these groups often overlaps. One of the goals of the National Biodefense Strategy is therefore to streamline the efforts of these agencies and prevent overlapping responsibilities.
The group of people in charge of overseeing biodefense policy will be the U.S. National Security Council. The Department of Health and Human Services will be in charge of carrying out the plan. Additionally, each year a special steering committee will review the policy and update changes and make budget requests as necessary.
The U.S. government had a comprehensive defense strategy against bioterror attacks in 2004, when then-President George W. Bush signed Homeland Security Presidential Directive 10. The directive laid out the country's 21st-century biodefense system and assigned various tasks to federal agencies to prevent, protect against, and mitigate biological attacks on the U.S. homeland and global interests. Since that time, however, the federal government has not had a comprehensive biodefense strategy. Daniel Gerstein, a senior policy researcher at the RAND Corporation and former acting undersecretary and deputy undersecretary of the Department of Homeland Security's Science and Technology Directorate, said, "...we haven't had any major bioterror attacks [since the anthrax attacks of 2001] so this sort of leaves the public's consciousness and that's when complacency sets in."
However, one viable strategy proposed by Gregory Parnell, Christopher Smith, and Frederick Moxley, three Professors from the United States Military Academy at West Point, suggests the feasibility of modeling intelligent adversary risk using a defender-attacker-defender analysis approach which does not require a major intelligent adversary program – only the willingness to change.
Biosurveillance
In 1999, the University of Pittsburgh's Center for Biomedical Informatics deployed the first automated bioterrorism detection system, called RODS (Real-Time Outbreak Disease Surveillance). RODS is designed to collect data from many data sources and use them to perform signal detection, that is, to detect a possible bioterrorism event at the earliest possible moment. RODS, and other systems like it, collect data from sources including clinic data, laboratory data, and data from over-the-counter drug sales. In 2000, Michael Wagner, the codirector of the RODS laboratory, and Ron Aryel, a subcontractor, conceived the idea of obtaining live data feeds from "non-traditional" (non-health-care) data sources. The RODS laboratory's first efforts eventually led to the establishment of the National Retail Data Monitor, a system which collects data from 20,000 retail locations nationwide.
On February 5, 2002, George W. Bush visited the RODS laboratory and used it as a model for a $300 million spending proposal to equip all 50 states with biosurveillance systems. In a speech delivered at the nearby Masonic temple, Bush compared the RODS system to a modern "DEW" line (referring to the Cold War-era Distant Early Warning Line).
The principles and practices of biosurveillance, a new interdisciplinary science, were defined and described in the "Handbook of Biosurveillance", edited by Michael Wagner, Andrew Moore and Ron Aryel, and published in 2006. Biosurveillance is the science of real-time disease outbreak detection. Its principles apply to both natural and man-made epidemics (bioterrorism).
Data which potentially could assist in early detection of a bioterrorism event include many categories of information. Health-related data such as that from hospital computer systems, clinical laboratories, electronic health record systems, medical examiner record-keeping systems, 911 call center computers, and veterinary medical record systems could be of help; researchers are also considering the utility of data generated by ranching and feedlot operations, food processors, drinking water systems, school attendance recording, and physiologic monitors, among others. Intuitively, one would expect systems which collect more than one type of data to be more useful than systems which collect only one type of information (such as single-purpose laboratory or 911 call-center based systems), and be less prone to false alarms, and this appears to be the case.
In Europe, disease surveillance is beginning to be organized on the continent-wide scale needed to track a biological emergency. The system not only monitors infected persons, but attempts to discern the origin of the outbreak.
Researchers are experimenting with devices to detect the existence of a threat:
New research shows that ultraviolet avalanche photodiodes offer the high gain, reliability and robustness needed to detect anthrax and other bioterrorism agents in the air. The fabrication methods and device characteristics were described at the 50th Electronic Materials Conference in Santa Barbara on June 25, 2008. Details of the photodiodes were also published in the February 14, 2008 issue of the journal Electronics Letters and the November 2007 issue of the journal IEEE Photonics Technology Letters.
The United States Department of Defense conducts global biosurveillance through several programs, including the Global Emerging Infections Surveillance and Response System.
Government agencies which would be called on to respond to a bioterrorism incident would include law enforcement, hazardous materials/decontamination units and emergency medical units. The US military has specialized units, which can respond to a bioterrorism event; among them are the United States Marine Corps' Chemical Biological Incident Response Force and the U.S. Army's 20th Support Command (CBRNE), which can detect, identify, and neutralize threats, and decontaminate victims exposed to bioterror agents.
There are four hospitals capable of caring for anyone exposed to a BSL-3 or BSL-4 pathogen; the special clinical studies unit at the National Institutes of Health is one of them. The National Institutes of Health built the facility in April 2010. The unit has state-of-the-art isolation capabilities with a unique airflow system. Its staff are also trained to care for patients who are ill due to a highly infectious pathogen outbreak, such as Ebola. The doctors work closely with USAMRIID, NBACC and IRF. Special trainings take place regularly to maintain a high level of confidence in caring for these patients.
In 2015, the global biodefense market was estimated at $9.8 billion. Experts attributed the large marketplace to an increase in government attention and support as a result of rising bioterrorism threats worldwide. The government's heightened interest is anticipated to expand the industry into the foreseeable future. According to Medgadget.com, "Many government legislations like Project Bioshield offers nations with counter measures against chemical, radiological, nuclear and biological attack."
Project Bioshield offers accessible biological countermeasures targeting various strains of smallpox and anthrax. "Main goal of the project is creating funding authority to build next generation counter measures, make innovative research & development programs and create a body like FDA (Food & Drug Administration) that can effectively use treatments in case of emergencies." Increased funding, in addition to public health organizations' elevated consideration in biodefense technology investments, could trigger growth in the global biodefense market.
The global biodefense market is divided into geographical regions such as APAC, Latin America, Europe, MEA, and North America. The biodefense industry in North America led the global industry by a large margin, holding the highest regional revenue share for 2015 and contributing approximately $8.91 billion of revenue that year, owing to immense funding and government reinforcement. The biodefense market in Europe is predicted to register a CAGR of 11.41% over the forecast timeline. The United Kingdom's Ministry of Defence granted $75.67 million designated for defense and civilian research, the highest regional industry share for 2012.
Recently, Global Market Insights released a report covering the new trends in the biodefense market backed by detailed, scientific data. Industry leaders profiled in the report include the following corporations: Emergent Biosolutions, SIGA Technologies, Ichor Medical Systems Incorporation, PharmaAthene, Cleveland BioLabs Incorporation, Achaogen, Alnylam Pharmaceuticals, Xoma Corporation, Dynavax Technologies Incorporation, Elusys Therapeutics, DynPort Vaccine Company LLC, Bavarian Nordic and Nanotherapeutics Incorporation.
During the 115th Congress in July 2018, four Members of Congress, both Republican and Democrat (Anna Eshoo, Susan Brooks, Frank Pallone and Greg Walden) introduced biodefense legislation called the Pandemic and All Hazards Preparedness and Advancing Innovation Act (PAHPA) (H.R. 6378). The bill strengthens the federal government's preparedness to deal with a wide range of public health emergencies, whether created through an act of bioterrorism or occurring through a natural disaster. The bill reauthorizes funding to improve bioterrorism and other public health emergency preparedness and response activities such as the Hospital Preparedness Program, the Public Health Emergency Preparedness Cooperative Agreement, Project BioShield, and BARDA for the advanced research and development of medical countermeasures (MCMs).
H.R. 6378 has 24 cosponsors from both political parties. On September 25, 2018, the House of Representatives passed the bill.
Indifference curve
In economics, an indifference curve connects points on a graph representing different quantities of two goods, points between which a consumer is "indifferent". That is, any combinations of two products indicated by the curve will provide the consumer with equal levels of utility, and the consumer has no preference for one combination or bundle of goods over a different combination on the same curve. One can also refer to each point on the indifference curve as rendering the same level of utility (satisfaction) for the consumer. In other words, an indifference curve is the locus of various points showing different combinations of two goods providing equal utility to the consumer. Utility is then a device to represent preferences rather than something from which preferences come. The main use of indifference curves is in the representation of potentially observable demand patterns for individual consumers over commodity bundles.
There are infinitely many indifference curves: one passes through each combination. A collection of (selected) indifference curves, illustrated graphically, is referred to as an indifference map.
The theory of indifference curves was developed by Francis Ysidro Edgeworth, who explained in his 1881 book the mathematics needed for their drawing; later on, Vilfredo Pareto was the first author to actually draw these curves, in his 1906 book. The theory can be derived from William Stanley Jevons' ordinal utility theory, which posits that individuals can always rank any consumption bundles by order of preference.
A graph of indifference curves for several utility levels of an individual consumer is called an indifference map. Points yielding different utility levels are each associated with distinct indifference curves and these indifference curves on the indifference map are like contour lines on a topographical graph. Each point on the curve represents the same elevation. If you move "off" an indifference curve traveling in a northeast direction (assuming positive marginal utility for the goods) you are essentially climbing a mound of utility. The higher you go the greater the level of utility. The non-satiation requirement means that you will never reach the "top," or a "bliss point," a consumption bundle that is preferred to all others.
Indifference curves are typically represented to be:
It also implies that the commodities are good rather than bad. Examples of bad commodities can be disease, pollution etc. because we always desire less of such things.
Consumer theory uses indifference curves and budget constraints to generate consumer demand curves. For a single consumer, this is a relatively simple process. First, let one good be an example market e.g., carrots, and let the other be a composite of all other goods. Budget constraints give a straight line on the indifference map showing all the possible distributions between the two goods; the point of maximum utility is then the point at which an indifference curve is tangent to the budget line (illustrated). This follows from common sense: if the market values a good more than the household, the household will sell it; if the market values a good less than the household, the household will buy it. The process then continues until the market's and household's marginal rates of substitution are equal. Now, if the price of carrots were to change, and the price of all other goods were to remain constant, the gradient of the budget line would also change, leading to a different point of tangency and a different quantity demanded. These price / quantity combinations can then be used to deduce a full demand curve. A line connecting all points of tangency between the indifference curve and the budget constraint is called the expansion path.
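The tangency condition described above can be illustrated numerically. The sketch below is a minimal Python example, not part of the original article: it assumes a Cobb-Douglas utility U = x^a * y^(1-a) with hypothetical prices and income, uses the well-known closed-form demands that follow from the tangency condition, and checks that the marginal rate of substitution at the optimum equals the price ratio.

```python
# Illustrative sketch with assumed example values (a, prices, income are
# hypothetical, chosen only for demonstration).

def demand(a, px, py, m):
    """Utility-maximising bundle for Cobb-Douglas utility U = x**a * y**(1-a)
    subject to px*x + py*y = m; these are the standard closed-form demands."""
    return (a * m / px, (1 - a) * m / py)

a, px, py, m = 0.5, 2.0, 1.0, 8.0
x_star, y_star = demand(a, px, py, m)
print(x_star, y_star)                      # 2.0 4.0 (spends half of income on each good)

# At the tangency point the MRS equals the slope of the budget line:
mrs_at_optimum = (a / (1 - a)) * (y_star / x_star)   # Cobb-Douglas MRS
print(mrs_at_optimum, px / py)                       # 2.0 2.0
```

Repeating the calculation for different values of `px` while holding income fixed traces out the price/quantity pairs from which the demand curve is deduced.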
In Figure 1, the consumer would rather be on "I3" than "I2", and would rather be on "I2" than "I1", but does not care where he/she is on a given indifference curve. The slope of an indifference curve (in absolute value), known by economists as the marginal rate of substitution, shows the rate at which consumers are willing to give up one good in exchange for more of the other good. For "most" goods the marginal rate of substitution is not constant so their indifference curves are curved. The curves are convex to the origin, describing the negative substitution effect. As price rises for a fixed money income, the consumer seeks the less expensive substitute at a lower indifference curve. The substitution effect is reinforced through the income effect of lower real income (Beattie-LaFrance). An example of a utility function that generates indifference curves of this kind is the Cobb–Douglas function formula_1. The negative slope of the indifference curve incorporates the willingness of the consumer to make trade offs.
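The non-constant marginal rate of substitution for Cobb-Douglas utility can be checked numerically. In the sketch below (not from the article), the parameter a = 0.5 and the bundle (4, 1) are assumed example values; the closed-form MRS is compared against a finite-difference estimate of the ratio of marginal utilities.

```python
# Illustrative sketch: MRS for Cobb-Douglas utility U(x, y) = x**a * y**(1-a).
# All numeric values are assumed for demonstration.

def utility(x, y, a=0.5):
    """Cobb-Douglas utility."""
    return x**a * y**(1 - a)

def mrs(x, y, a=0.5):
    """Closed-form MRS = MU_x / MU_y = (a / (1-a)) * (y / x)."""
    return (a / (1 - a)) * (y / x)

def mrs_numeric(x, y, a=0.5, h=1e-6):
    """MRS via finite-difference marginal utilities."""
    mu_x = (utility(x + h, y, a) - utility(x - h, y, a)) / (2 * h)
    mu_y = (utility(x, y + h, a) - utility(x, y - h, a)) / (2 * h)
    return mu_x / mu_y

print(mrs(4.0, 1.0))                   # 0.25
print(mrs_numeric(4.0, 1.0))           # approximately 0.25
print(mrs(1.0, 4.0))                   # 4.0 -- the MRS varies along the curve
```

That the MRS changes from bundle to bundle is exactly what makes the Cobb-Douglas indifference curves convex rather than straight.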
If two goods are perfect substitutes then the indifference curves will have a constant slope since the consumer would be willing to switch between at a fixed ratio. The marginal rate of substitution between perfect substitutes is likewise constant. An example of a utility function that is associated with indifference curves like these would be formula_2.
If two goods are perfect complements then the indifference curves will be L-shaped. Examples of perfect complements include left shoes and right shoes: the consumer is no better off having several right shoes if she has only one left shoe, since additional right shoes have zero marginal utility without more left shoes, so bundles of goods differing only in the number of right shoes they include, however many, are equally preferred. The marginal rate of substitution is either zero or infinite. An example of the type of utility function that has an indifference map like that above is the Leontief function: formula_3.
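The contrast between the two cases can be made concrete with a minimal sketch (the shoe counts and the one-for-one substitutes are assumed example values, not taken from the article):

```python
# Illustrative sketch of the two limiting cases of substitutability.

def u_substitutes(x, y):
    """Perfect substitutes: U = x + y; indifference curves are straight lines."""
    return x + y

def u_complements(x, y):
    """Leontief (perfect complements): U = min(x, y); curves are L-shaped."""
    return min(x, y)

# Extra right shoes without more left shoes add no utility:
print(u_complements(1, 1))   # 1
print(u_complements(1, 5))   # 1 -- the two bundles are equally preferred

# One-for-one substitutes: (3, 1) and (1, 3) lie on the same indifference curve.
print(u_substitutes(3, 1) == u_substitutes(1, 3))   # True
```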
The different shapes of the curves imply different responses to a change in price as shown from demand analysis in consumer theory. The results will only be stated here. A price-budget-line change that kept a consumer in equilibrium on the same indifference curve:
Choice theory formally represents consumers by a preference relation, and use this representation to derive indifference curves showing combinations of equal preference to the consumer.
Let
In the language of the example above, the set formula_4 is made of combinations of apples and bananas. The symbol formula_5 is one such combination, such as 1 apple and 4 bananas, and formula_6 is another combination, such as 2 apples and 2 bananas.
A preference relation, denoted formula_11, is a binary relation defined on the set formula_4.
The statement
is described as 'formula_5 is weakly preferred to formula_6.' That is, formula_5 is at least as good as formula_6 (in preference satisfaction).
The statement
is described as 'formula_5 is weakly preferred to formula_6, and formula_6 is weakly preferred to formula_5.' That is, one is "indifferent" to the choice of formula_5 or formula_6, meaning not that they are unwanted but that they are equally good in satisfying preferences.
The statement
is described as 'formula_5 is weakly preferred to formula_6, but formula_6 is not weakly preferred to formula_5.' One says that 'formula_5 is strictly preferred to formula_6.'
The preference relation formula_11 is complete if all pairs formula_33 can be ranked. The relation is a transitive relation if whenever formula_13 and formula_35 then formula_36.
For any element formula_37, the corresponding indifference curve, formula_38 is made up of all elements of formula_4 which are indifferent to formula_40. Formally,
formula_41.
In the example above, an element formula_5 of the set formula_4 is made of two numbers: the number of apples, call it formula_44, and the number of bananas, call it formula_45.
In utility theory, the utility function of an agent is a function that ranks "all" pairs of consumption bundles by order of preference ("completeness") such that any set of three or more bundles forms a transitive relation. This means that for each bundle formula_46 there is a unique relation, formula_47, representing the utility (satisfaction) relation associated with formula_46. The relation formula_49 is called the utility function. The range of the function is a set of real numbers. The actual values of the function have no importance. Only the ranking of those values has content for the theory. More precisely, if formula_50, then the bundle formula_46 is described as at least as good as the bundle formula_52. If formula_53, the bundle formula_46 is described as strictly preferred to the bundle formula_52.
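This correspondence between a utility function and a complete, transitive preference relation can be sketched in a few lines of Python. The bundles below reuse the apples-and-bananas example, with an assumed utility u(apples, bananas) = apples * bananas; both the utility function and the bundle values are illustrative, not from the article.

```python
# Illustrative sketch: a utility function induces a preference relation via
# "x is weakly preferred to y iff u(x) >= u(y)"; indifference is equal utility.
from itertools import permutations

def u(bundle):
    """Assumed example ordinal utility over (apples, bananas)."""
    apples, bananas = bundle
    return apples * bananas

def weakly_preferred(x, y):
    return u(x) >= u(y)

bundles = [(1, 4), (2, 2), (4, 1)]

# Completeness: every pair of bundles can be ranked one way or the other.
complete = all(weakly_preferred(x, y) or weakly_preferred(y, x)
               for x, y in permutations(bundles, 2))

# All three bundles give u = 4: they lie on the same indifference curve.
indifferent = len({u(b) for b in bundles}) == 1
print(complete, indifferent)   # True True
```

Note that only the ranking matters: replacing `u` by any strictly increasing transformation of it (say `u**3`) would leave both results unchanged, in line with the remark that the actual values of the function have no importance.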
Consider a particular bundle formula_56 and take the total derivative of formula_47 about this point:
or, without loss of generality,
where formula_60 is the partial derivative of formula_47 with respect to its first argument, evaluated at formula_46. (Likewise for formula_63)
The indifference curve through formula_56 must deliver at each bundle on the curve the same utility level as bundle formula_56. That is, when preferences are represented by a utility function, the indifference curves are the level curves of the utility function. Therefore, if one is to change the quantity of formula_66 by formula_67, without moving off the indifference curve, one must also change the quantity of formula_68 by an amount formula_69 such that, in the end, there is no change in "U":
Thus, the ratio of marginal utilities gives the absolute value of the slope of the indifference curve at point formula_56. This ratio is called the marginal rate of substitution between formula_66 and formula_68.
If the utility function is of the form formula_75 then the marginal utility of formula_66 is formula_77 and the marginal utility of formula_68 is formula_79. The slope of the indifference curve is, therefore,
Observe that the slope does not depend on formula_66 or formula_68: the indifference curves are straight lines.
If the utility function is of the form formula_83 then the marginal utility of formula_66 is formula_85 and the marginal utility of formula_68 is formula_87, where formula_88. The slope of the indifference curve, and therefore the negative of the marginal rate of substitution, is then
A general CES (Constant Elasticity of Substitution) form is
where formula_91 and formula_92. (The Cobb–Douglas is a special case of the CES utility, with formula_93.) The marginal utilities are given by
and
Therefore, along an indifference curve,
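Since the CES formulas appear above only as placeholders, the following sketch assumes one common parameterization, U(x, y) = (a*x**r + (1-a)*y**r)**(1/r) with r < 1, r != 0, whose marginal rate of substitution works out to (a/(1-a)) * (x/y)**(r-1). The code (all values assumed for illustration) checks that this MRS approaches the Cobb-Douglas value as r approaches 0, matching the remark that Cobb-Douglas is a special case of CES.

```python
# Illustrative sketch of the CES marginal rate of substitution under the
# assumed parameterization U = (a*x**r + (1-a)*y**r)**(1/r).

def mrs_ces(x, y, a=0.5, r=-1.0):
    """MRS = MU_x / MU_y for the CES form above."""
    return (a / (1 - a)) * (x / y)**(r - 1)

def mrs_cobb_douglas(x, y, a=0.5):
    """Cobb-Douglas MRS, the r -> 0 limit of the CES case."""
    return (a / (1 - a)) * (y / x)

x, y = 4.0, 1.0
print(mrs_ces(x, y, r=-1.0))     # 0.0625, i.e. (1/4)**2
print(mrs_ces(x, y, r=1e-9))     # approximately 0.25, near the limit
print(mrs_cobb_douglas(x, y))    # 0.25
```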
These examples might be useful for modelling individual or aggregate demand.
As used in biology, the indifference curve is a model for how animals 'decide' whether to perform a particular behavior, based on changes in two variables which can increase in intensity, one along the x-axis and the other along the y-axis. For example, the x-axis may measure the quantity of food available while the y-axis measures the risk involved in obtaining it. The indifference curve is drawn to predict the animal's behavior at various levels of risk and food availability.
Indifference curves inherit the criticisms directed at utility more generally.
Herbert Hovenkamp (1991) has argued that the presence of an endowment effect has significant implications for law and economics, particularly in regard to welfare economics. He argues that the presence of an endowment effect indicates that a person has no indifference curve (see however Hanemann, 1991) rendering the neoclassical tools of welfare analysis useless, concluding that courts should instead use WTA as a measure of value. Fischel (1995) however, raises the counterpoint that using WTA as a measure of value would deter the development of a nation's infrastructure and economic growth.
Bartolomeu Dias
Bartolomeu Dias (Anglicized: Bartholomew Diaz; c. 1450 – 29 May 1500), a nobleman of the Portuguese royal household, was a Portuguese explorer. In 1488 he became the first European to sail around the southernmost tip of Africa, opening the sea route from Europe to Asia. Dias was also the first European of the Age of Discovery to anchor on the coast of what is present-day South Africa.
Bartolomeu Dias was a squire of the royal court, superintendent of the royal warehouses, and sailing-master of the man-of-war "São Cristóvão" (Saint Christopher). Very little is known of his early life. King John II of Portugal appointed him, on 10 October 1486, to head an expedition to sail around the southern tip of Africa in the hope of finding a trade route to India. Dias was also charged with searching for the lands ruled by Prester John, a fabled Christian priest and ruler of territory somewhere beyond Europe. He left 10 months later in August 1487. In the previous decades Portuguese mariners, most famously Prince Henry the Navigator (whose contribution was more as a patron and sponsor of voyages of discovery than as a sailor), had explored the areas of the Atlantic Ocean off Southern Europe and Western Africa as far as the Cape Verde Islands and modern-day Sierra Leone, and had gained sufficient knowledge of oceanic shipping and wind patterns to enable subsequent voyages of greater distance. In the early 1480s Diogo Cão in two voyages (he died towards the end of the second) had explored the mouth of the Congo River and sailed south of the Equator to present-day Angola and Namibia.
"São Cristóvão" was piloted by Pêro de Alenquer. A second caravel, the "São Pantaleão", was commanded by João Infante and piloted by Álvaro Martins. Dias's brother Pêro Dias was the captain of the square-rigged support ship with João de Santiago as pilot.
The expedition sailed down the west coast of Africa; provisions were picked up on the way at the Portuguese fortress of São Jorge de Mina on the Gold Coast. After sailing south of modern-day Angola, Dias reached the Golfo da Conceicão (Walvis Bay, in modern Namibia) by December. Continuing south, he discovered Angra dos Ilheus, then was hit by a violent storm. Thirteen days later, from the open ocean, he searched the coast again to the east, discovering and using the westerlies winds—the ocean gyre, but finding just ocean. Having rounded the Cape of Good Hope at a considerable distance to the west and southwest, he turned east, and taking advantage of the winds of Antarctica that blow strongly in the South Atlantic, sailed northeast. After 30 days without seeing land, he entered what he named Aguada de São Brás (Bay of Saint Blaise)—later renamed Mossel Bay—on 4 February 1488. Dias's expedition reached its furthest point on 12 March 1488 when it anchored at Kwaaihoek, near the mouth of the Boesmans River, where a padrão—the Padrão de São Gregório—was erected before turning back. Dias wanted to continue to India, but he was forced to turn back when his crew refused to go further and the rest of the officers unanimously favoured returning to Portugal. It was only on the return voyage that he actually discovered the Cape of Good Hope, in May 1488. Dias returned to Lisbon in December of that year, after an absence of 16 months and 17 days.
The discovery of the passage around southern Africa was significant because, for the first time, Europeans could trade directly with India and the Far East, bypassing the overland Euro-Asian route with its expensive European, Middle Eastern and Central Asian middlemen. The official report of the expedition has been lost.
Dias originally named the Cape of Good Hope the Cape of Storms ("Cabo das Tormentas"). It was later renamed (by King John II of Portugal) the Cape of Good Hope ("Cabo da Boa Esperança") because it represented the opening of a route to the east.
After these early attempts, the Portuguese took a decade-long break from Indian Ocean exploration. During that hiatus, it is likely that they received valuable information from a secret agent, Pêro da Covilhã, who had been sent overland to India and returned with reports useful to their navigators.
Using his experience with explorative travel, Dias helped in the construction of the "São Gabriel" and its sister ship the "São Rafael" that were used in 1498 by Vasco da Gama to sail past the Cape of Good Hope and continue to India. Dias only participated in the first leg of Da Gama's voyage, until the Cape Verde Islands. Two years later he was one of the captains of the second Indian expedition, headed by Pedro Álvares Cabral. This flotilla first reached the coast of Brazil, landing there in 1500, and then continued east to India. Dias perished near the Cape of Good Hope that he presciently had named "Cape of Storms". Four ships, including Dias's, encountered a huge storm off the cape and were lost on 29 May 1500. A shipwreck found in 2008 by the Namdeb Diamond Corporation off Namibia was at first thought to be Dias's ship, but recovered coins come from a later time.
Dias was married and had two children:
Thomas E. Dewey
Thomas Edmund Dewey (March 24, 1902 – March 16, 1971) was an American lawyer, prosecutor, and politician. Raised in Owosso, Michigan, Dewey was a member of the Republican Party. He served as the 47th governor of New York from 1943 to 1954. In 1944, he was the Republican Party's nominee for president, but lost the election to incumbent Franklin D. Roosevelt in the closest of Roosevelt's four presidential elections. He was again the Republican presidential nominee in 1948, but lost to President Harry S. Truman in one of the greatest upsets in presidential election history. Dewey played a large role in winning the Republican presidential nomination for Dwight D. Eisenhower in 1952, and helped Eisenhower win the presidential election that year. He also played a large part in the choice of Richard M. Nixon as the Republican vice-presidential nominee in 1952 and 1956.
As a New York City prosecutor and District Attorney in the 1930s and early 1940s, Dewey was relentless in his effort to curb the power of the American Mafia and of organized crime in general. Most famously, he successfully prosecuted Mafioso kingpin Charles "Lucky" Luciano on charges of forced prostitution in 1936. Luciano was given a thirty to fifty year prison sentence. He also prosecuted and convicted Waxey Gordon, another prominent New York City gangster and bootlegger, on charges of tax evasion. Dewey almost succeeded in apprehending Jewish mobster Dutch Schultz as well, but Schultz was murdered in 1935 in a hit ordered by The Commission itself; he had disobeyed The Commission's order forbidding him from making an attempt on Dewey's life.
Dewey led the moderate faction of the Republican Party during the 1940s and 1950s, in opposition to conservative Ohio Senator Robert A. Taft. Dewey was an advocate for the professional and business community of the Northeastern United States, which would later be called the Eastern Establishment. This group consisted of internationalists who were in favor of the United Nations and the Cold War fight against communism and the Soviet Union, and it supported most of the New Deal social-welfare reforms enacted during the administration of Democrat Franklin D. Roosevelt.
Following his political retirement, Dewey served from 1955 to 1971 as a corporate lawyer and senior partner in his law firm Dewey Ballantine in New York City. In March 1971, while on a golfing vacation in Miami, Florida, he died from a heart attack. Following a public memorial ceremony at St. James' Episcopal Church in New York City, Dewey was buried in the town cemetery of Pawling, New York.
Dewey was born and raised in Owosso, Michigan, where his father, George Martin Dewey, owned, edited, and published the local newspaper, the "Owosso Times." His mother, Annie (Thomas), whom he called "Mater," bequeathed her son "a healthy respect for common sense and the average man or woman who possessed it." She also left "a headstrong assertiveness that many took for conceit, a set of small-town values never entirely erased by exposure to the sophisticated East, and a sense of proportion that moderated triumph and eased defeat." One journalist noted that "[as a boy] he did show leadership and ambition above the average; by the time he was thirteen, he had a crew of nine other youngsters working for him" selling newspapers and magazines in Owosso. In his senior year in high school he served as the president of his class, and was the chief editor of the school yearbook. His senior caption in the yearbook stated "First in the council hall to steer the state, and ever foremost in a tongue debate", and a biographer wrote that "the bent of his mind, from his earliest days, was towards debate." He received his B.A. degree from the University of Michigan in 1923, and his J.D. degree from Columbia Law School in 1925.
While at the University of Michigan, Dewey joined Phi Mu Alpha Sinfonia, a national fraternity for men of music, and was a member of the Men's Glee Club. While growing up in Owosso, he was a member of the choir at Christ Episcopal Church. He was an excellent singer with a deep baritone voice, and in 1923 he finished in third place in the National Singing Contest. He briefly considered a career as a professional singer, but decided against it after a temporary throat ailment convinced him that such a career would be risky. He then decided to pursue a career as a lawyer. He also wrote for "The Michigan Daily," the university's student newspaper.
On June 16, 1928, Dewey married Frances Eileen Hutt. A native of Sherman, Texas, she was a stage actress; after their marriage she dropped her acting career. They had two sons, Thomas E. Dewey Jr. and John Martin Dewey. Although Dewey served as a prosecutor and District Attorney in New York City for many years, his home from 1939 until his death was a large farm, called "Dapplemere," located near the town of Pawling, north of New York City. According to biographer Richard Norton Smith, Dewey "loved Dapplemere as [he did] no other place", and Dewey was once quoted as saying, "I work like a horse five days and five nights a week for the privilege of getting to the country on the weekend." In 1945, Dewey told a reporter that "my farm is my roots ... the heart of this nation is the rural small town." Dapplemere was part of a tight-knit rural community called Quaker Hill, which was known as a haven for the prominent and well-to-do. Among Dewey's neighbors on Quaker Hill were the famous reporter and radio broadcaster Lowell Thomas, the Reverend Norman Vincent Peale, and the legendary CBS News journalist Edward R. Murrow. During his twelve years as governor, Dewey also kept a New York City residence and office in Suite 1527 of the Roosevelt Hotel. Dewey was an active, lifelong member of the Episcopal Church.
Dewey was a lifelong Republican, and in the 1920s and 1930s, he was a party worker in New York City, eventually rising to become Chair of The New York Young Republican Club in 1931. When asked in 1946 why he was a Republican, Dewey replied, "I believe that the Republican Party is the best instrument for bringing sound government into the hands of competent men and by this means preserving our liberties... But there is another reason why I am a Republican. I was born one."
Dewey first served as a federal prosecutor, then started a lucrative private practice on Wall Street; however, he left his practice for an appointment as special prosecutor to look into corruption in New York City—with the official title of Chief Assistant U.S. Attorney for the Southern District of New York. It was in this role that he first achieved headlines in the early 1930s, when he prosecuted bootlegger Waxey Gordon.
Dewey had used his excellent recall of details of crimes to trip up witnesses as a federal prosecutor; as a state prosecutor, he used telephone taps (which were perfectly legal at the time under "Olmstead v. United States" (1928)) to gather evidence, with the ultimate goal of bringing down entire criminal organizations. On that account, Dewey successfully lobbied for an overhaul of New York's criminal procedure law, which at that time required separate trials for each count of an indictment. Dewey's thoroughness and attention to detail became legendary; for one case he and his staff sifted "through 100,000 telephone slips to convict a Prohibition-era bootlegger."
Dewey became famous in 1935, when he was appointed special prosecutor in New York County (Manhattan) by Governor Herbert H. Lehman. A "runaway grand jury" had publicly complained that William C. Dodge, the District Attorney, was not aggressively pursuing the mob and political corruption. Lehman, to avoid charges of partisanship, asked four prominent Republicans to serve as special prosecutor. All four refused and recommended Dewey.
Dewey moved ahead vigorously. He recruited a staff of over 60 assistants, investigators, process servers, stenographers, and clerks. New York Mayor Fiorello H. La Guardia assigned a picked squad of 63 police officers to Dewey's office. Dewey's targets were "organized" racketeering: the large-scale criminal enterprises, especially extortion, the "numbers racket" and prostitution. One writer stated that "Dewey ... put on a very impressive show. All the paraphernalia, the hideouts and tapped telephones and so on, became famous. More than any other American of his generation except [Charles] Lindbergh, Dewey became a creature of folklore and a national hero. What he appealed to most was the great American love of "results." People were much more interested in his ends than in his means. Another key to all this may be expressed in a single word: honesty. Dewey was honest."
One of his biggest prizes was gangster Dutch Schultz, whom he had battled as both a federal and a state prosecutor. Schultz's first trial ended in a deadlock. Before his second trial, Schultz had the venue moved to Malone, New York, then moved there himself and won the townspeople's sympathy through charitable acts, so that when the trial came, the jury acquitted him, liking him too much to convict.
Dewey and La Guardia threatened Schultz with instant arrest and further charges. Schultz now proposed to murder Dewey. Dewey would be killed while he made his daily morning call to his office from a pay phone near his home. However, New York crime boss Lucky Luciano and the "Mafia Commission" decided that Dewey's murder would provoke an all-out crackdown. Instead they had Schultz killed. Schultz was shot to death in the restroom of a bar in Newark.
Dewey's legal team turned their attention to Lucky Luciano. Assistant DA Eunice Carter oversaw investigations into prostitution racketeering. She raided 80 houses of prostitution in the New York City area and arrested hundreds of prostitutes and "madams". Carter had developed trust with many of these women, and through her coaching, many of the arrested prostitutes – some of whom told of being beaten and abused by Mafia thugs – were willing to testify to avoid prison time. Three implicated Luciano as controller of organized prostitution in the New York/New Jersey area – one of the largest prostitution rings in American history. Carter's investigation was the first to link Luciano to a crime. Dewey prosecuted the case, and in the greatest victory of his legal career, he won the conviction of Luciano for the prostitution racket, with a sentence of 30 to 50 years on June 18, 1936.
In January 1937, Dewey successfully prosecuted Tootsie Herbert, the leader of New York's poultry racket, for embezzlement. Following his conviction, New York's poultry "marketplace returned to normal, and New York consumers saved $5 million in 1938 alone." That same month, Dewey, his staff, and New York City police made a series of dramatic raids that led to the arrest of 65 of New York's leading operators in various rackets, including the bakery racket, numbers racket, and restaurant racket. The "New York Times" ran an editorial praising Dewey for breaking up the "shadow government" of New York's racketeers, and the "Philadelphia Inquirer" wrote "If you don't think Dewey is Public Hero No. 1, listen to the applause he gets every time he is shown in a newsreel."
In 1936 Dewey received The Hundred Year Association of New York's Gold Medal Award "in recognition of outstanding contributions to the City of New York".
In 1937 Dewey was elected New York County District Attorney (Manhattan), defeating the Democratic nominee after Dodge decided not to run for re-election. Dewey was such a popular candidate for District Attorney that "election officials in Brooklyn posted large signs at polling places reading 'Dewey Isn't Running in This County'."
As District Attorney, Dewey successfully prosecuted and convicted Richard Whitney, former president of the New York Stock Exchange, for embezzlement. Whitney was given a five-year prison sentence. Dewey also successfully prosecuted Tammany Hall political boss James Joseph Hines on thirteen counts of racketeering. Following the favorable national publicity he received after his conviction of Hines, a May 1939 Gallup poll showed Dewey as the frontrunner for the 1940 Republican presidential nomination, and gave him a lead of 58% to 42% over President Franklin D. Roosevelt in a potential 1940 presidential campaign. In 1939, Dewey also tried and convicted American Nazi leader Fritz Julius Kuhn for embezzlement, crippling Kuhn's organization and limiting its ability to support Nazi Germany in World War II.
During his four years as District Attorney, Dewey and his staff compiled a 94 percent conviction rate of defendants brought to trial, created new bureaus for Fraud, Rackets, and Juvenile Detention, and led an investigation into tenement houses with inadequate fire safety features that reduced "their number from 13,000 to 3,500" in a single year. When he left the District Attorney's office in 1942 to run for governor, Dewey said that "It has been learned in high places that clean government can also be good politics...I don't like Republican thieves any more than Democratic ones."
By the late 1930s Dewey's successful efforts against organized crime—and especially his conviction of Lucky Luciano—had turned him into a national celebrity. His nickname, the "Gangbuster", was used for the popular 1930s "Gang Busters" radio series based on his fight against the mob. Hollywood film studios made several movies inspired by his exploits; "Marked Woman" starred Humphrey Bogart as a Dewey-like DA and Bette Davis as a "party girl" whose testimony helps convict the mob boss. A popular story from the time, possibly apocryphal, featured a young girl who told her father that she wanted to sue God to stop a prolonged spell of rain. When her father replied "you can't sue God and win", the girl said "I can if Dewey is my lawyer."
The journalists Neal Peirce and Jerry Hagstrom summarized Dewey's governorship by writing that "for sheer administrative talent, it is difficult to think of a twentieth-century governor who has excelled Thomas E. Dewey ... hundreds of thousands of New York youngsters owe Dewey thanks for his leadership in creating a state university ... a vigorous health-department program virtually eradicated tuberculosis in New York, highway building was pushed forward, and the state's mental hygiene program was thoroughly reorganized." Dewey also created a powerful political organization that allowed him to dominate New York state politics and influence national politics.
In 1938 Edwin Jaeckle, the New York Republican Party Chairman, selected Dewey to run for Governor of New York against the Democratic incumbent, Herbert H. Lehman. Dewey was only 36 years of age. He based his campaign on his record as a famous prosecutor of organized-crime figures in New York City. Although he was defeated, Dewey's surprisingly strong showing against the popular Lehman (he lost by only 1.4%) brought him national political attention and made him a frontrunner for the 1940 Republican presidential nomination.
Jaeckle was one of Dewey's top advisors and mentors for the remainder of his political career.
In 1942, Dewey ran for governor again and won by a wide margin over Democrat John J. Bennett Jr., the outgoing state attorney general. Bennett was not endorsed by the American Labor Party, whose candidate, Dean Alfange, drew almost 10 percent of the ballots cast. The ALP endorsed for re-election incumbent lieutenant governor Charles Poletti, who lost narrowly to Dewey's running mate Thomas W. Wallace.
In 1946, Dewey was re-elected by the greatest margin in state history to that point, almost 700,000 votes.
In 1950, he was elected to a third term by 572,000 votes.
Usually regarded as an honest and highly effective governor, Dewey doubled state aid to education, increased salaries for state employees and still reduced the state's debt by over $100 million. He referred to his program as "pay-as-you-go liberalism ... government can be progressive and solvent at the same time." Additionally he put through the first state law in the country that prohibited racial discrimination in employment. As governor, Dewey signed legislation that created the State University of New York. Shortly after becoming governor in 1943, Dewey learned that some state workers and teachers were being paid only $900 a year, leading him to give "hefty raises, some as high as 150%" to state workers and teachers.
Dewey played a leading role in securing support and funding for the New York State Thruway, which was eventually named in his honor. Dewey also streamlined and consolidated many state agencies to make them more efficient. During the Second World War construction in New York was limited, which allowed Dewey to create a $623 million budget surplus, which he placed into his "Postwar Reconstruction Fund." The fund would eventually create 14,000 new beds in the state's mental health system, provide public housing for 30,000 families, allow for the reforestation of 34 million trees, create a water pollution program, provide slum clearance, and pay for a "model veterans' program." His governorship was also "friendlier by far than his [Democratic] predecessors to the private sector", as Dewey created a state Department of Commerce to "lure new businesses and tourists to the Empire State, ease the shift from wartime boom, and steer small businessmen, in particular, through the maze of federal regulation and restriction." Between 1945 and 1948, 135,000 new businesses were started in New York.
Dewey supported the decision of the New York legislature to end state funding for child care centers, which were established during the war. The child care centers allowed mothers to participate in wartime industries. The state was forced to provide funding for local communities that could not obtain money under the Lanham Act. Although working mothers, helped by various civic and social groups, fought to retain funding, federal support for child care facilities was considered temporary and ended on March 1, 1946. New York state aid to child care ended on January 1, 1948. When protesters asked Dewey to keep the child care centers open, he called them "Communists."
He also strongly supported the death penalty. During his twelve years as governor, more than ninety people were electrocuted under New York authority. Among these were several of the mob-affiliated hitmen belonging to the murder-for-hire group Murder, Inc., which was headed by major mob leaders Louis "Lepke" Buchalter and Albert Anastasia. Buchalter himself went to the chair in 1944.
Dewey sought the 1940 Republican presidential nomination. He was considered the early favorite for the nomination, but his support ebbed in the late spring of 1940, as World War II suddenly became much more dangerous for America.
Some Republican leaders considered Dewey to be too young (at 38, just three years above the minimum age required by the US Constitution) and too inexperienced to lead the nation in wartime. Furthermore, Dewey's non-interventionist stance became problematic when Germany quickly conquered France, and seemed poised to invade Britain. As a result, many Republicans switched to Wendell Willkie, who was a decade older and supported aid to the Allies fighting Germany. Willkie lost to Franklin D. Roosevelt in the general election.
Dewey's foreign-policy position evolved during the 1940s; by 1944 he was considered an internationalist and a supporter of projects such as the United Nations. It was in 1940 that Dewey first clashed with Robert A. Taft. Taft—who maintained his non-interventionist views and economic conservatism to his death—became Dewey's great rival for control of the Republican Party in the 1940s and early 1950s. Dewey became the leader of moderate Republicans, who were based in the Eastern states, while Taft became the leader of conservative Republicans who dominated most of the Midwest.
Dewey was the frontrunner for the 1944 Republican nomination. In April 1944 he won the key Wisconsin primary, where he defeated Wendell Willkie and former Minnesota governor Harold Stassen. Willkie's poor showing in Wisconsin forced him to quit the race. At the 1944 Republican Convention, Dewey's chief rivals—Stassen and Ohio governor John W. Bricker—both withdrew and Dewey was nominated almost unanimously. Dewey then made Bricker (who was supported by Taft) his running mate. This made Dewey the first major-party presidential nominee born in the 20th century. As of 2019, he was also the youngest Republican presidential nominee.
In the general election campaign, Dewey crusaded against the alleged inefficiencies, corruption, and Communist influences in incumbent president Roosevelt's New Deal programs, but mostly avoided military and foreign policy debates. Dewey had considered charging publicly that Roosevelt knew about the attack on Pearl Harbor beforehand and allowed it to happen, and planned to say: "...and instead of being re-elected he should be impeached." The allegation would have hinted at the then-secret fact that the U.S. had broken the Purple code still in use by the Japanese military. Dewey eventually yielded to Army Chief of Staff George C. Marshall's urging not to touch this topic. Marshall informed Harry Hopkins of his action in late October that year; Hopkins then told the president. Roosevelt reasoned that "Dewey would not, for political purposes, give secret and vital information to the enemy". Dewey lost the election to Roosevelt on November 7, 1944, polling 45.9% of the popular vote to Roosevelt's 53.4%, a stronger showing against FDR than any previous Republican opponent had managed. In the Electoral College, Roosevelt defeated Dewey by a margin of 432 to 99.
Dewey was the Republican candidate again in the 1948 presidential election and was almost universally projected to win against incumbent Harry S. Truman, who had succeeded to the presidency upon Roosevelt's death in 1945.
During the primaries, Dewey was repeatedly urged to engage in red-baiting, but he refused. In a debate before the Oregon primary with Harold Stassen, Dewey argued against outlawing the Communist Party of the United States of America, saying "you can't shoot an idea with a gun." He later told Styles Bridges, the Republican national campaign manager, that he was not "going around looking under beds".
Given Truman's sinking popularity and the Democratic Party's three-way split (the left-winger Henry A. Wallace and the Southern segregationist Strom Thurmond ran third-party campaigns), Dewey seemed unstoppable. Republicans believed that all they had to do to win was to avoid making any major mistakes, and as such Dewey took no risks. He spoke in platitudes, trying to transcend politics. Speech after speech was filled with empty statements of the obvious, such as the famous quote: "You know that your future is still ahead of you." An editorial in the "Louisville Courier-Journal" summed it up:
No presidential candidate in the future will be so inept that four of his major speeches can be boiled down to these historic four sentences: Agriculture is important. Our rivers are full of fish. You cannot have freedom without liberty. Our future lies ahead.
Another reason Dewey ran such a cautious, vague campaign came from his experience as a presidential candidate in 1944. In that election, Dewey felt that he had allowed Roosevelt to draw him into a partisan, verbal "mudslinging" match, and he believed that this had cost him votes. Dewey was convinced in 1948 to appear as non-partisan as possible, and to emphasize the positive aspects of his campaign while ignoring his opponent. This strategy proved to be a major mistake, as it allowed Truman to repeatedly criticize and ridicule Dewey, while Dewey never answered any of Truman's criticisms.
Although Dewey was not as conservative as the Republican-controlled 80th Congress, the association proved problematic. Truman tied Dewey to the "do-nothing" Congress.
Near the end of the campaign, Dewey considered adopting a more aggressive style and responding directly to Truman's criticisms, going so far as to tell his aides one evening that he wanted to "tear to shreds" a speech draft and make it more critical of the Democratic ticket. However, nearly all his major advisors insisted that it would be a mistake to change tactics. Dewey's wife Frances strongly opposed her husband changing tactics, telling him, "If I have to stay up all night to see that you don't tear up that speech [draft], I will." Dewey relented and continued to ignore Truman's attacks and to focus on positive generalities instead of issue specifics.
The "Chicago Daily Tribune" printed "DEWEY DEFEATS TRUMAN" as its post-election headline, issuing 150,000 copies before the returns showed Truman winning.
Dewey received 45.1% of the popular vote to Truman's 49.6%. In the Electoral College, Dewey won 16 states with 189 electoral votes, Truman 28 states with 303 electoral votes, and Thurmond four states (all in the South) with 39 electoral votes. The key states in the election were Illinois, California, and Ohio, which together had a combined 78 electoral votes. Truman won each of these three states by less than one percentage point. Had Dewey won all three states, he would have won the election in the Electoral College. Summarizing Dewey's campaign, a biographer wrote that "Dewey had swept the industrial Northeast, pared Democratic margins in the big cities by a third, run better than any Republican since Herbert Hoover in the South—and still lost decisively." After the election, Dewey told publisher Henry Luce that "you can analyze figures from now to kingdom come, and all they will show is that we lost the farm vote which we had in 1944 and that lost us the election."
A biographer noted that Dewey "rarely mentioned 1948 in the years thereafter. It was like a locked room in a musty mansion whose master never entered ... he seemed a bit bewildered at the unanimous front put up by his Albany advisers [during the campaign], regretted not having taken a final poll when his own senses detected slippage, and couldn't resist a potshot at 'that bastard Truman' for having successfully exploited farmers' fears of a new depression." Dewey remains the only Republican presidential candidate to have been nominated twice and to have lost on both occasions.
Dewey did not run for president in 1952, but he played a key role in securing the Republican nomination for General Dwight D. Eisenhower. Taft was an announced candidate and, given his age, he freely admitted 1952 would be his last chance to win the presidency. Once Eisenhower became a candidate, Dewey used his powerful political machine to win Eisenhower the support of delegates in New York and elsewhere.
The 1952 campaign culminated in a climactic moment in the fierce rivalry between Dewey and Taft for control of the Republican Party. At the Republican Convention, pro-Taft delegates and speakers verbally attacked Dewey as the real power behind Eisenhower, but Dewey had the satisfaction of seeing Eisenhower win the nomination and end Taft's presidential hopes for the last time.
Dewey played a major role in helping California Senator Richard Nixon become Eisenhower's running mate. When Eisenhower won the presidency later that year, many of Dewey's closest aides and advisors became leading figures in the Eisenhower Administration. Among them were Herbert Brownell, who would become Eisenhower's Attorney General; James Hagerty, who would become White House Press Secretary; and John Foster Dulles, who would become Eisenhower's Secretary of State.
Dewey's biographer Richard Norton Smith wrote, "For fifteen years ... these two combatants waged political warfare. Their dispute pitted East against Midwest, city against countryside, internationalist against isolationist, pragmatic liberals against principled conservatives. Each man thought himself the genuine spokesman of the future; each denounced the other as a political heretic."
In a 1949 speech, Dewey criticized Taft and his followers by saying that "we have in our party some fine, high-minded patriotic people who honestly oppose farm price supports, unemployment insurance, old age benefits, slum clearance, and other social programs... these people believe in a laissez-faire society and look back wistfully to the miscalled 'good old days' of the nineteenth century... if such efforts to turn back the clock are actually pursued, you can bury the Republican Party as the deadest pigeon in the country." He added that people who opposed such social programs should "go out and try to get elected in a typical American community and see what happens to them. But they ought not to do it as Republicans."
However, in the speech Dewey added that the Republican Party believed in social progress "under a flourishing, competitive system of private enterprise where every human right is expanded ... we are opposed to delivering the nation into the hands of any group who will have the power to tell the American people whether they may have food or fuel, shelter or jobs." Dewey believed in what he called "compassionate capitalism", and argued that "in the modern age, man's needs include as much economic security as is consistent with individual freedom." When Taft and his supporters criticized Dewey's policies as liberal "me-tooism", or "aping the New Deal in a vain attempt to outbid Roosevelt's heirs", Dewey responded that he was following in the tradition of Republicans such as Abraham Lincoln and Theodore Roosevelt, and that "it was conservative reforms like anti-trust laws and federal regulation of railroads ... that retained the allegiance of the people for a capitalist system combining private incentive and public conscience."
In May 1953, Governor Dewey set up a nine-member Advisory Board to help the State Safety Division's Bureau of Safety and Accident Prevention and appointed Edward Burton Hughes (the Deputy New York State Superintendent of Public Works) as Chairman. The Advisory Board was formed to draft accident prevention policies and programs.
Dewey's third term as governor of New York expired at the end of 1954, after which he retired from public service and returned to his law practice, Dewey Ballantine, although he remained a power broker behind the scenes in the Republican Party. In 1956, when Eisenhower mulled not running for a second term, he suggested Dewey as his choice as successor, but party leaders made it plain that they would not entrust the nomination to Dewey yet again, and ultimately Eisenhower decided to run for re-election. Dewey also played a major role that year in convincing Eisenhower to keep Nixon as his running mate; Eisenhower had considered dropping Nixon from the Republican ticket and picking someone he felt would be less partisan and controversial. However, Dewey argued that dropping Nixon from the ticket would only anger Republican voters while winning Eisenhower few votes from the Democrats. Dewey's arguments helped convince Eisenhower to keep Nixon on the ticket. In 1960, Dewey would strongly support Nixon's ultimately unsuccessful presidential campaign against Democrat John F. Kennedy.
Although Dewey publicly supported Nelson Rockefeller in all four of his campaigns for Governor of New York, and backed Rockefeller in his losing 1964 bid for the Republican presidential nomination against Arizona Senator Barry Goldwater, he did privately express concern and disappointment with what he regarded as Rockefeller's "spendthrift" methods as governor, and once told him "I like you Nelson, but I don't think I can afford you." In 1968, when both Rockefeller and Nixon were competing for the Republican presidential nomination, Dewey was publicly neutral, but "privately, according to close friends, he favored Nixon."
By the 1960s, as the conservative wing assumed more and more power within the Republican Party, Dewey removed himself further and further from party matters. When the Republicans in 1964 gave the conservative Senator Goldwater their presidential nomination Dewey declined to even attend the Republican Convention in San Francisco; it was the first Republican Convention he had missed since 1936.
Although closely identified with the Republican Party for virtually his entire adult life, Dewey was a close friend of Democratic Senator Hubert H. Humphrey, and Dewey aided Humphrey in being named as the Democratic nominee for vice-president in 1964, advising President Lyndon Johnson on ways to block efforts at the party convention by Kennedy loyalists to stampede Robert Kennedy onto the ticket as Johnson's running mate.
In the mid-1960s, President Johnson tried to convince Dewey to accept positions on several government commissions, especially a national crime commission, which Johnson wanted Dewey to chair. After Nixon won the presidency in 1968, there were rumors that Dewey would be offered a cabinet position, or a seat on the U.S. Supreme Court. However, Dewey declined all offers to return to government service, preferring instead to concentrate on his highly profitable law firm. By the early 1960s, his share of the firm's profits had made him a millionaire, and his net worth at the time of his death was estimated at over $3 million (or $19 million in 2019 dollars).
Dewey was offered the position of Chief Justice of the United States by Dwight D. Eisenhower and again by Richard Nixon in 1969. He declined the offer both times.
Dewey's wife Frances died in July 1970, after battling breast cancer for six years. In the autumn of 1970, Dewey began to date actress Kitty Carlisle, and there was talk of marriage between them. On March 15, 1971, Dewey traveled to Miami, Florida for a brief golfing vacation with friend Dwayne Andreas and other associates.
On March 16, following a round of golf with Boston Red Sox player Carl Yastrzemski, he returned to his room in the Seaview Hotel to pack; he was due that evening at the White House in Washington to help celebrate the engagement of President Nixon's daughter, Tricia. When Dewey failed to appear for his ride to the Miami airport, a concerned Andreas convinced the hotel management to take him to Dewey's room. They found Dewey, fully dressed, lying on his back across the bed, and packed to leave. An autopsy determined that he had died suddenly from a massive heart attack eight days before his 69th birthday.
Following a public memorial service at Saint James' Episcopal Church in New York City, which was attended by President Nixon, former vice president Hubert Humphrey, New York governor Nelson Rockefeller, and other prominent politicians, Dewey was buried next to his wife Frances in the town cemetery of Pawling, New York. After his death, his farm of Dapplemere was sold and renamed "Dewey Lane Farm" in his honor.
Dewey received varied reactions from the public and fellow politicians, with praise for his good intentions, honesty, administrative talents, and inspiring speeches, but most also criticizing his ambition and perceived stiffness in public. One of his biographers wrote that he had "a personality that attracted contempt and adulation in equal proportion."
Dewey was a forceful and inspiring speaker, traveling the whole country during his presidential campaigns and attracting uncommonly huge crowds. His friend and neighbor Lowell Thomas believed that Dewey was "an authentic colossus" whose "appetite for excellence [tended] to frighten less obsessive types", and his 1948 running mate Earl Warren "professed little personal affection for Dewey, but [believed] him a born executive who would make a great president." The pollster George Gallup once described Dewey as "the ablest public figure of his lifetime... the most misunderstood man in recent American history."
On the other hand, President Franklin D. Roosevelt privately called Dewey "the little man" and a "son of a bitch", and to Robert Taft and other conservative Republicans Dewey "became synonymous with ... New York newspapers, New York banks, New York arrogance – the very city Taft's America loves to hate." A Taft supporter once referred to Dewey as "that snooty little governor of New York."
Dewey grew his mustache when he was dating Frances, and because "she liked it, the mustache stayed, to delight cartoonists and dismay political advisers for twenty years." During the 1944 election campaign, Dewey suffered an unexpected blow when Alice Roosevelt Longworth was reported as having mocked Dewey as "the little man on the wedding cake", alluding to his neat mustache and dapper dress. It was ridicule he could never shake.
Roger Masters, a professor of government at Dartmouth College, wrote: "The shaved face has become a reflection of the Protestant ethic. Politicians are supposed to control nature in some sense, so beards and mustaches, which imply a reluctance to control nature, are now reserved for artisans or academics."
Dewey alienated former Republican president Herbert Hoover, who confided to a friend "Dewey has no inner reservoir of knowledge on which to draw for his thinking," elaborating that "A man couldn't wear a mustache like that without having it affect his mind."
Several commentators and analysts in 1948 attributed the falloff in Dewey's popularity late in his presidential campaign, in part, to his distinctive mustache and resemblance to actor Clark Gable, which was said to raise doubts with voters as to the seriousness of Dewey as prospective leader of the Free World.
Dewey had a tendency towards pomposity and was considered stiff and unapproachable in public, with his aide Ruth McCormick Simms once describing him as "cold, cold as a February iceberg". She added that "he was brilliant and thoroughly honest."
During his governorship, one writer observed: "A blunt fact about Mr. Dewey should be faced: it is that many people do not like him. He is, unfortunately, one of the least seductive personalities in public life. That he has made an excellent record as governor is indisputable. Even so, people resent what they call his vindictiveness, the 'metallic' nature of his efficiency, his cockiness (which actually conceals a nature basically shy), and his suspiciousness. People say... that he is as devoid of charm as a rivet or a lump of stone."
However, Dewey's friends considered him a warm and friendly companion. Journalist Irwin Ross noted that, "more than most politicians, [Dewey] displayed an enormous gap between his private and his public manner. To friends and colleagues he was warm and gracious, considerate of others' views… He could tell a joke and was not dismayed by an off-color story. In public, however, he tended to freeze up, either out of diffidence or too stern a sense of the dignity of office. The smiles would seem forced… the glad-handing gesture awkward."
A magazine writer described the difference between Dewey's private and public behavior by noting that, "Till he gets to the door, he may be cracking jokes and laughing like a schoolboy. But the moment he enters a room he ceases to be Tom Dewey and becomes what he thinks the Governor of New York ought to be."
Leo O'Brien, a reporter for the United Press International (UPI), recalled Dewey in an interview by saying that "I hated his guts when he first came to Albany, and I loved him by the time he left. It was almost tragic – how he put on a pose that alienated people. Behind a pretty thin veneer he was a wonderful guy." John Gunther wrote in 1947 that many supporters were fiercely loyal to Dewey.
Dewey's presidential campaigns were hampered by Dewey's habit of not being "prematurely specific" on controversial issues. President Truman poked fun at Dewey's vague campaign by joking that G.O.P. actually stood for "grand old platitudes."
Dewey's frequent refusal to discuss specific issues and proposals in his campaigns was based partly on his belief in public opinion polls; one biographer claimed that he "had an almost religious belief in the revolutionary science of public-opinion sampling." He was the first presidential candidate to employ his own team of pollsters, and when a worried businessman told Dewey in the 1948 presidential campaign that he was losing ground to Truman and urged him to "talk specifics in his closing speeches", Dewey and his aide Paul Lockwood displayed polling data that showed Dewey still well ahead of Truman, and Dewey told the businessman "when you're leading, don't talk."
Walter Lippmann regarded Dewey as an opportunist, who "changes his views from hour to hour… always more concerned with taking the popular position than he is in dealing with the real issues."
The journalist John Gunther wrote that "There are plenty of vain and ambitious and uncharming politicians. This would not be enough to cause Dewey's lack of popularity. What counts more is that so many people think of him as opportunistic. Dewey seldom goes out on a limb by taking a personal position which may be unpopular... every step is carefully calculated and prepared."
As governor, Dewey had a reputation for ruthless treatment of New York legislators and political opponents. "[Dewey] cracked the whip ruthlessly on [Republican] legislators who strayed from the party fold. Assemblymen have found themselves under investigation by the State Tax Department after opposing the Governor over an insurance regulation bill. Others discover job-rich construction projects, state buildings, even highways, directed to friendlier [legislators]... [He] forced the legislature his own party dominates to reform its comfortable ways of payroll padding. Now legislative workers must verify in writing every two weeks what they have been doing to earn their salary; every state senator and assemblyman must verify that [they] are telling the truth. All this has occasioned more than grumbling. Some Assemblymen have quit in protest. Others have been denied renomination by Dewey's formidable political organization. Reporters mutter among themselves about government by blackmail."
Dewey received positive publicity for his reputation for honesty and integrity. The newspaper editor William Allen White praised Dewey as "an honest cop with the mind of an honest cop."
He insisted on having every candidate for a job paying $2,500 or more rigorously probed by state police. He was so concerned about the elected public official being motivated by the wealth his position could produce that he frequently said, "No man should be in public office who can't make more money in private life." Dewey accepted no anonymous campaign contributions and had every large contributor not known personally to him investigated "for motive." When he signed autographs, he would date them so that no one could imply a closer relationship than actually existed.
A journalist noted in 1947 that Dewey "has never made the slightest attempt to capitalize on his enormous fame, except politically. Even when temporarily out of office, in the middle 1930s, he rigorously resisted any temptation to be vulgarized or exploited...he could easily have become a millionaire several times over by succumbing to various movie and radio offers. He would have had to do nothing except give permission for movies or radio serials to be built around his career and name. Be it said to his honor, he never did so."
In 1964, the New York State legislature officially renamed the New York State Thruway in honor of Dewey. Signs on Interstate 95 between the end of the Bruckner Expressway (in the Bronx) and the Connecticut state line, as well as on the Thruway mainline (Interstate 87 between the Bronx-Westchester line and Albany, and Interstate 90 between Albany and the New York-Pennsylvania line) designate the name as "Governor Thomas E. Dewey Thruway," though this official designation is rarely used in reference to these roads.
Dewey's official papers from his years in politics and public life were given to the University of Rochester; they are housed in the university library and are available to historians and other writers.
In 2005, the New York City Bar Association named an award after Dewey. The Thomas E. Dewey Medal, formerly sponsored by the law firm of Dewey & LeBoeuf LLP, is awarded annually to one outstanding Assistant District Attorney in each of New York City's five counties (New York, Kings, Queens, Bronx, and Richmond). The Medal was first awarded on November 29, 2005. The Thomas E. Dewey Medal is now sponsored by the law firm Dewey Pegno & Kramarsky LLP.
In May 2012, Dewey & LeBoeuf (the successor firm to Dewey Ballantine) filed for bankruptcy. | https://en.wikipedia.org/wiki?curid=45596 |
Strepsiptera
The Strepsiptera are an endopterygote order of insects with nine extant families that include about 600 described species. They are endoparasites in other insects, such as bees, wasps, leafhoppers, silverfish, and cockroaches. Females of most species never emerge from the host after entering its body, finally dying inside it. The early-stage larvae do emerge because they must find an unoccupied living host, and the short-lived males must emerge to seek a receptive female in her host.
The order is not well known to non-specialists, and the nearest they have to a common name is "stylops". The name of the order translates to "twisted wing", giving rise to another name used for the order, twisted-wing insects.
Males of the Strepsiptera have wings, legs, eyes, and antennae, though their mouthparts cannot be used for feeding. Many have mouthparts modified into sensory structures. To the uninitiated the males superficially look like flies. Adult males are very short-lived, usually surviving less than five hours, and do not feed. Females, in all families except the Mengenillidae, are not known to leave their hosts and are neotenic in form, lacking wings, legs, and eyes. Virgin females release a pheromone which the males use to locate them.
In the Stylopidia, the female's anterior region protrudes out of the host body and the male mates by rupturing the female's brood canal opening, which lies between the head and prothorax. Sperm passes through the opening in a process termed hypodermic insemination. The offspring consume their mother from the inside in a process known as haemocoelous viviparity. Each female then produces many thousands of triungulin larvae that emerge from the brood opening on the head, which protrudes outside the host body. These larvae have legs and actively search out new hosts. Their legs are partly vestigial in that they lack a trochanter (the leg segment that forms the articulation between the basal coxa and the femur).
Strepsiptera of various species have been documented to attack hosts in many orders, including members of the orders Zygentoma, Orthoptera, Blattodea, Mantodea, Heteroptera, Hymenoptera, and Diptera. In the strepsipteran family Myrmecolacidae, the males parasitize ants, while the females parasitize Orthoptera.
Strepsiptera eggs hatch inside the female, and the planidium larvae can move around freely within the female's haemocoel; this behavior is unique to these insects. The larvae escape through the female's brood canal, which communicates with the outside world. The larvae are very active, because they only have a limited amount of time to find a host before they exhaust their food reserves. These first-instar larvae have stemmata (simple, single-lens eyes). When the larvae latch onto a host, they enter it by secreting enzymes that soften the cuticle, usually in the abdominal region of the host. Some species have been reported to enter the eggs of hosts. Larvae of "Stichotrema dallatorreanum" Hofeneder from Papua New Guinea were found to enter their orthopteran host's tarsus (foot). Once inside the host, they undergo hypermetamorphosis and become a less-mobile, legless larval form. They induce the host to produce a bag-like structure inside which they feed and grow. This structure, made from host tissue, protects them from the immune defences of the host. Larvae go through four more instars, and in each moult the older cuticle separates but is not discarded ("apolysis without ecdysis"), so multiple layers form around the larvae.
Male larvae pupate after the last moult, but females directly become neotenous adults. The colour and shape of the host's abdomen may be changed and the host usually becomes sterile. The parasites then undergo pupation to become adults. Adult males emerge from the host bodies, while females stay inside. Females may occupy up to 90% of the abdominal volume of their hosts.
Adult male Strepsiptera have eyes unlike those of any other insect, resembling the schizochroal eyes found in the trilobite group known as the Phacopina. Instead of a compound eye consisting of hundreds to thousands of ommatidia that each produce a pixel of the entire image, the strepsipteran eyes consist of only a few dozen "eyelets", each of which produces a complete image. These eyelets are separated by cuticle and/or setae, giving the cluster eye as a whole a blackberry-like appearance.
Very rarely, multiple females may live within a single stylopized host; multiple males within a single host are somewhat more common. Adult males are rarely observed, however, although specimens may be lured using cages containing virgin females. Nocturnal specimens can also be collected at light traps.
Strepsiptera of the family Myrmecolacidae can cause their ant hosts to linger on the tips of grass leaves, increasing the chance of being found by the parasite's males (in case of females) and putting them in a good position for male emergence (in case of males).
The order, named by William Kirby in 1813, is named for the hind wings, which are held at a twisted angle when at rest (from Greek στρέφειν ("strephein"), to twist, and πτερόν ("pteron"), wing). The fore wings are reduced to halteres.
Strepsiptera were once believed to be the sister group to the beetle families Meloidae and Ripiphoridae, which have similar parasitic development and forewing reduction; early molecular research suggested their inclusion as a sister group to the flies, in a clade called the halteria, which have one pair of the wings modified into halteres, and failed to support their relationship to the beetles. Further molecular studies, however, suggested they are outside the clade Mecopterida (containing the Diptera and Lepidoptera), but found no strong evidence for affinity with any other extant group. Study of their evolutionary position has been problematic due to difficulties in phylogenetic analysis arising from long branch attraction. The most basal strepsipteran is the fossil "Protoxenos janzeni" discovered in Baltic amber, while the most basal living strepsipteran is "Bahiaxenos relictus", the sole member of the family Bahiaxenidae. The earliest known strepsipteran fossil is that of "Cretostylops engeli", discovered in middle Cretaceous amber from Myanmar.
In 2012, a fresh molecular study revived the assertion that the Strepsiptera are the sister group of the Coleoptera (beetles).
The Strepsiptera have two major groups: the Stylopidia and Mengenillidia. The Mengenillidia include three extinct families (Cretostylopidae, Protoxenidae, and Mengeidae) plus two extant families (Bahiaxenidae and Mengenillidae; the latter, however, is not monophyletic). They are considered more primitive, and the known females (Mengenillidae only) are free-living, with rudimentary legs and antennae. The females have a single genital opening. The males have strong mandibles, a distinct labrum, and more than five antennal segments.
The other group, the Stylopidia, includes seven families: the Corioxenidae, Halictophagidae, Callipharixenidae, Bohartillidae, Elenchidae, Myrmecolacidae, and Stylopidae. All Stylopidia have endoparasitic females having multiple genital openings.
The Stylopidae have four-segmented tarsi and four- to six-segmented antennae, with the third segment having a lateral process. The family Stylopidae may be paraphyletic. The Elenchidae have two-segmented tarsi and four-segmented antennae, with the third segment having a lateral process. The Halictophagidae have three-segmented tarsi and seven-segmented antennae, with lateral processes from the third and fourth segments.
The Stylopidae mostly parasitize wasps and bees, the Elenchidae are known to parasitize Fulgoroidea, while the Halictophagidae are found on leafhoppers, treehoppers, and mole cricket hosts.
Strepsipteran insects in the genus "Xenos" parasitize "Polistes carnifex", a species of social wasps. These obligate parasites infect the developing wasp larvae in the nest and are present within the abdomens of female wasps when they hatch out. Here they remain until they thrust through the cuticle and pupate (males) or release infective first-instar larvae onto flowers (females). These larvae are transported back to their nests by foraging wasps.
Some insects which have been considered as pests may have strepsipteran endoparasites. Inoculation of a pest population with the corresponding parasitoid may sometimes aid in reducing the impact of such pests, although no strepsipterans have ever been tested for use in this capacity, let alone being available for such purposes, either commercially or experimentally. In India the species "Halictophagus palmae" was first described as a new species in 2000, and in the original description the authors mused about the possible future uses of their discovery. A 2011 book later mentioned these musings. | https://en.wikipedia.org/wiki?curid=45598 |
Surgery
Surgery is a medical specialty that uses operative manual and instrumental techniques on a person to investigate or treat a pathological condition such as a disease or injury, to help improve bodily function or appearance, or to repair ruptured or damaged areas.
The act of performing surgery may be called a surgical procedure, operation, or simply "surgery". In this context, the verb "operate" means to perform surgery. The adjective surgical means pertaining to surgery; e.g. surgical instruments or surgical nurse. The person or subject on which the surgery is performed can be a person or an animal. A surgeon is a person who practices surgery and a surgeon's assistant is a person who practices surgical assistance. A surgical team is made up of surgeon, surgeon's assistant, anaesthetist, circulating nurse and surgical technologist. Surgery usually spans minutes to hours, but it is typically not an ongoing or periodic type of treatment. The term "surgery" can also refer to the place where surgery is performed, or, in British English, simply the office of a physician, dentist, or veterinarian.
Surgery is an invasive technique with the fundamental principle of physical intervention on organs/organ systems/tissues for diagnostic or therapeutic reasons.
As a general rule, a procedure is considered surgical when it involves cutting of a person's tissues or closure of a previously sustained wound. Other procedures that do not necessarily fall under this rubric, such as angioplasty or endoscopy, may be considered surgery if they involve "common" surgical procedure or settings, such as use of a sterile environment, anesthesia, antiseptic conditions, typical surgical instruments, and suturing or stapling. All forms of surgery are considered invasive procedures; so-called "noninvasive surgery" usually refers to an excision that does not penetrate the structure being excised (e.g. laser ablation of the cornea) or to a radiosurgical procedure (e.g. irradiation of a tumor).
Surgical procedures are commonly categorized by urgency, type of procedure, body system involved, the degree of invasiveness, and special instrumentation.
Inpatient surgery is performed in a hospital, and the person undergoing surgery stays at least one night in the hospital after the surgery. Outpatient surgery occurs in a hospital outpatient department or freestanding ambulatory surgery center, and the person who had surgery is discharged the same working day. Office surgery occurs in a physician's office, and the person is discharged the same working day.
At a hospital, modern surgery is often performed in an operating theater using surgical instruments, an operating table, and other equipment. Among United States hospitalizations for nonmaternal and nonneonatal conditions in 2012, more than one-fourth of stays and half of hospital costs involved stays that included operating room (OR) procedures. The environment and procedures used in surgery are governed by the principles of aseptic technique: the strict separation of "sterile" (free of microorganisms) things from "unsterile" or "contaminated" things. All surgical instruments must be sterilized, and an instrument must be replaced or re-sterilized if it becomes contaminated (i.e. handled in an unsterile manner, or allowed to touch an unsterile surface). Operating room staff must wear sterile attire (scrubs, a scrub cap, a sterile surgical gown, sterile latex or non-latex polymer gloves and a surgical mask), and they must scrub hands and arms with an approved disinfectant agent before each procedure.
Prior to surgery, the person is given a medical examination, receives certain pre-operative tests, and their physical status is rated according to the ASA physical status classification system. If these results are satisfactory, the person requiring surgery signs a consent form and is given a surgical clearance. If the procedure is expected to result in significant blood loss, an autologous blood donation may be made some weeks prior to surgery. If the surgery involves the digestive system, the person requiring surgery may be instructed to perform a bowel prep by drinking a solution of polyethylene glycol the night before the procedure. People preparing for surgery are also instructed to abstain from food or drink (an NPO order after midnight on the night before the procedure), to minimize the effect of stomach contents on pre-operative medications and reduce the risk of aspiration if the person vomits during or after the procedure.
Some medical systems have a practice of routinely performing chest x-rays before surgery. The premise behind this practice is that the physician might discover some unknown medical condition which would complicate the surgery, and that upon discovering this with the chest x-ray, the physician would adapt the surgery practice accordingly. | https://en.wikipedia.org/wiki?curid=45599 |
Reed–Solomon error correction
Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960.
They have many applications, the most prominent of which include consumer technologies such as CDs, DVDs, Blu-ray discs, QR codes, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6.
Reed–Solomon codes operate on a block of data treated as a set of finite field elements called symbols. Reed–Solomon codes are able to detect and correct multiple symbol errors. By adding "t" check symbols to the data, a Reed–Solomon code can detect (but not correct) any combination of up to and including "t" erroneous symbols, or locate and correct up to and including ⌊"t"/2⌋ erroneous symbols at unknown locations. As an erasure code, it can correct up to and including "t" erasures at locations that are known and provided to the algorithm, or it can detect and correct combinations of errors and erasures. Reed–Solomon codes are also suitable as multiple-burst bit-error correcting codes, since a sequence of "b" + 1 consecutive bit errors can affect at most two symbols of size "b". The choice of "t" is up to the designer of the code, and may be selected within wide limits.
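To make these limits concrete, they can be sketched in a few lines of Python (an illustrative sketch, not from the article; the `rs_budget` name is my own, and the (255, 223) example is simply a commonly cited configuration):

```python
# Hedged sketch: the error-handling budget of an RS(n, k) code with
# t = n - k check symbols, per the limits stated in the text.

def rs_budget(n, k):
    """Return (detectable_errors, correctable_errors, correctable_erasures)."""
    t = n - k            # number of check symbols
    return t, t // 2, t  # detect t, correct floor(t/2), fill t erasures

# Example: 255-symbol blocks carrying 223 data symbols leave 32 check symbols,
# so the code can detect 32 errors, correct 16, or fill in 32 known erasures.
detect, correct, erase = rs_budget(255, 223)
print(detect, correct, erase)  # 32 16 32
```

The same arithmetic applies to any choice of "n" and "k" within the limits described below.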
There are two basic types of Reed–Solomon codes, original view and BCH view, with BCH view being the more common, as BCH view decoders are faster and require less working storage than original view decoders.
Reed–Solomon codes were developed in 1960 by Irving S. Reed and Gustave Solomon, who were then staff members of MIT Lincoln Laboratory. Their seminal article was titled "Polynomial Codes over Certain Finite Fields". The original encoding scheme described in the Reed & Solomon article used a variable polynomial based on the message to be encoded, where only a fixed set of values (evaluation points) to be encoded are known to encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets of "k" (unencoded message length) out of "n" (encoded message length) values of a received message, choosing the most popular polynomial as the correct one, which was impractical for all but the simplest of cases. This was initially resolved by changing the original scheme to a BCH-code-like scheme based on a fixed polynomial known to both encoder and decoder, but later, practical decoders based on the original scheme were developed, although slower than the BCH schemes. The result is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme, and ones that use the BCH encoding scheme.
Also in 1960, a practical fixed polynomial decoder for BCH codes developed by Daniel Gorenstein and Neal Zierler was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961. The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book "Error Correcting Codes" by W. Wesley Peterson (1961). By 1963 (or possibly earlier), J. J. Stone (and others) recognized that Reed–Solomon codes could use the BCH scheme of using a fixed generator polynomial, making such codes a special class of BCH codes; but Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes and, depending on the set of evaluation points, they are not even cyclic codes.
In 1969, an improved BCH scheme decoder was developed by Elwyn Berlekamp and James Massey, and has since been known as the Berlekamp–Massey decoding algorithm.
In 1975, another improved BCH scheme decoder was developed by Yasuo Sugiyama, based on the extended Euclidean algorithm.
In 1977, Reed–Solomon codes were implemented in the Voyager program in the form of concatenated error correction codes. The first commercial application in mass-produced consumer products appeared in 1982 with the compact disc, where two interleaved Reed–Solomon codes are used. Today, Reed–Solomon codes are widely implemented in digital storage devices and digital communication standards, though they are being slowly replaced by more modern low-density parity-check (LDPC) codes or turbo codes. For example, Reed–Solomon codes are used in the Digital Video Broadcasting (DVB) standard DVB-S, but LDPC codes are used in its successor, DVB-S2.
In 1986, an original scheme decoder known as the Berlekamp–Welch algorithm was developed.
In 1996, variations of original scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders – see "Guruswami–Sudan list decoding algorithm".
In 2002, another original scheme decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.
Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors associated with media defects.
Reed–Solomon coding is a key component of the compact disc. It was the first use of strong error correction coding in a mass-produced consumer product, and DAT and DVD use similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional interleaver yield a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block. More importantly, it flags as erasures any uncorrectable blocks, i.e., blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks. The outer code easily corrects this, since it can handle up to 4 such erasures per block.
The result is a CIRC that can completely correct error bursts up to 4000 bits, or about 2.5 mm on the disc surface. This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts.
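The burst-to-erasure spreading described above can be illustrated with a toy deinterleaver (a hypothetical sketch with simplified row/column indexing, not the actual CIRC interleaver geometry):

```python
# Toy model: 28 outer-code blocks (rows) of 28 bytes each, with inner-code
# blocks written column-wise. True marks a byte erased by the inner decoder.
ROWS, COLS = 28, 28
erased_column = 5  # one whole inner block flagged as an erasure

grid = [[col == erased_column for col in range(COLS)] for _ in range(ROWS)]

# After deinterleaving, each row is one outer-code block: the 28-byte burst
# has become exactly one erased byte per outer block.
erasures_per_outer_block = [sum(row) for row in grid]
assert all(e == 1 for e in erasures_per_outer_block)
assert max(erasures_per_outer_block) <= 4  # within the outer code's erasure limit
```

The design choice is that the (28,24) outer code never sees the burst concentrated in one block, so its modest 4-erasure budget suffices.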
DVDs use a similar scheme, but with much larger blocks, a (208,192) inner code, and a (182,172) outer code.
Reed–Solomon error correction is also used in parchive files which are commonly posted accompanying multimedia files on USENET. The distributed online storage service Wuala (discontinued in 2015) also made use of Reed–Solomon when breaking up files.
Almost all two-dimensional bar codes such as PDF-417, MaxiCode, Datamatrix, QR Code, and Aztec Code use Reed–Solomon error correction to allow correct reading even if a portion of the bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure.
Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar symbology.
Specialized forms of Reed–Solomon codes, specifically Cauchy-RS and Vandermonde-RS, can be used to overcome the unreliable nature of data transmission over erasure channels. The encoding process assumes a code RS("N", "K"), which generates "N" codewords of length "N" symbols, each storing "K" symbols of data; these are then sent over an erasure channel.
Any combination of "K" codewords received at the other end is enough to reconstruct all of the "N" codewords. The code rate is generally set to 1/2 unless the channel's erasure likelihood can be adequately modelled and is seen to be less. Consequently, "N" is usually 2"K", meaning that at least half of all the codewords sent must be received in order to reconstruct all of the codewords sent.
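A minimal sketch of this any-"K"-of-"N" property, using the polynomial-evaluation view over a small prime field (the Cauchy and Vandermonde constructions named above work over GF(2^m); the prime modulus here is an assumption made only to keep the arithmetic short):

```python
P = 257  # small prime modulus, chosen only for illustration

def encode(data, n):
    """Codeword values: the data polynomial evaluated at x = 0 .. n-1 (mod P)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(data)) % P
            for x in range(n)]

def interpolate_at(points, x):
    """Evaluate the unique low-degree polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            num = num * (x - xj) % P
            den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

# RS(6, 3): three data symbols, six codeword values, rate 1/2.
data = [5, 42, 7]
sent = encode(data, 6)
survivors = [(1, sent[1]), (3, sent[3]), (5, sent[5])]  # half the channel erased
recovered = [interpolate_at(survivors, x) for x in range(6)]
assert recovered == sent  # any K = 3 survivors rebuild all N = 6 values
```

Because a polynomial of degree less than "K" is determined by any "K" of its values, which "K" codewords survive is irrelevant, only how many.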
Reed–Solomon codes are also used in xDSL systems and CCSDS's Space Communications Protocol Specifications as a form of forward error correction.
One significant application of Reed–Solomon coding was to encode the digital pictures sent back by the Voyager space probe.
Voyager introduced Reed–Solomon coding concatenated with convolutional codes, a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications.
Viterbi decoders tend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes.
Modern versions of concatenated Reed–Solomon/Viterbi-decoded convolutional coding were and are used on the Mars Pathfinder, Galileo, Mars Exploration Rover and Cassini missions, where they perform within about 1–1.5 dB of the ultimate limit, the Shannon capacity.
These concatenated codes are now being replaced by more powerful turbo codes.
The Reed–Solomon code is actually a family of codes, where every code is characterised by three parameters: an alphabet size "q", a block length "n", and a message length "k", with "k" < "n" ≤ "q". The set of alphabet symbols is interpreted as the finite field of order "q", and thus, "q" must be a prime power. In the most useful parameterizations of the Reed–Solomon code, the block length is usually some constant multiple of the message length, that is, the rate "k"/"n" is some constant, and furthermore, the block length is equal to or one less than the alphabet size, that is, "n" = "q" or "n" = "q" − 1.
There are different encoding procedures for the Reed–Solomon code, and thus, there are different ways to describe the set of all codewords.
In the original view of Reed and Solomon, every codeword of the Reed–Solomon code is a sequence of function values of a polynomial of degree less than "k". In order to obtain a codeword of the Reed–Solomon code, the message is interpreted as the description of a polynomial "p" of degree less than "k" over the finite field "F" with "q" elements.
In turn, the polynomial "p" is evaluated at "n" ≤ "q" distinct points "a"1, ..., "a""n" of the field "F", and the sequence of values is the corresponding codeword. Common choices for a set of evaluation points include {0, 1, 2, ..., "n" − 1}, {0, 1, "α", "α"^2, ..., "α"^("n"−2)}, or, for "n" < "q", {1, "α", "α"^2, ..., "α"^("n"−1)}, where "α" is a primitive element of "F".
Formally, the set "C" of codewords of the Reed–Solomon code is the set of all value sequences ("p"("a"1), ..., "p"("a""n")), where "p" ranges over the polynomials over "F" of degree less than "k".
Since any two "distinct" polynomials of degree less than formula_4 agree in at most formula_5 points, this means that any two codewords of the Reed–Solomon code disagree in at least formula_6 positions.
Furthermore, there are two polynomials that do agree in formula_5 points but are not equal, and thus, the distance of the Reed–Solomon code is exactly formula_8.
Then the relative distance is formula_9, where formula_10 is the rate.
This trade-off between the relative distance and the rate is asymptotically optimal since, by the Singleton bound, "every" code satisfies formula_11.
Being a code that achieves this optimal trade-off, the Reed–Solomon code belongs to the class of maximum distance separable codes.
While the number of different polynomials of degree less than "k" and the number of different messages are both equal to "q"^"k", and thus every message can be uniquely mapped to such a polynomial, there are different ways of doing this encoding.
The original construction of Reed and Solomon interprets the message "x" as the "coefficients" of the polynomial "p", whereas subsequent constructions interpret the message as the "values" of the polynomial at the first "k" points "a"1, ..., "a""k" and obtain the polynomial "p" by interpolating these values with a polynomial of degree less than "k".
The latter encoding procedure, while being slightly less efficient, has the advantage that it gives rise to a systematic code, that is, the original message is always contained as a subsequence of the codeword.
In the original construction of , the message formula_14 is mapped to the polynomial formula_15 with
The codeword of formula_17 is obtained by evaluating formula_15 at formula_19 different points formula_1 of the field formula_21.
Thus the classical encoding function formula_22 for the Reed–Solomon code is defined as follows:
This function formula_24 is a linear mapping, that is, it satisfies formula_25 for the following formula_26-matrix formula_27 with elements from formula_21:
This matrix is the transpose of a Vandermonde matrix over formula_21. In other words, the Reed–Solomon code is a linear code, and in the classical encoding procedure, its generator matrix is formula_27.
There is an alternative encoding procedure that also produces the Reed–Solomon code, but that does so in a systematic way. Here, the mapping from the message formula_17 to the polynomial formula_15 works differently: the polynomial formula_15 is now defined as the unique polynomial of degree less than formula_4 such that
To compute this polynomial formula_15 from formula_17, one can use Lagrange interpolation.
Once it has been found, it is evaluated at the other points formula_40 of the field.
The alternative encoding function formula_22 for the Reed–Solomon code is then again just the sequence of values:
Since the first "k" entries of each codeword coincide with the message, this encoding procedure is indeed systematic.
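A sketch of this systematic variant in Python (again over the toy prime field GF(929)): the message is taken as the values of "p" at the points 0, ..., "k" − 1, "p" is recovered by Lagrange interpolation, and the remaining codeword entries are the values of "p" at the other points.

```python
# Systematic original-view RS encoding over GF(929): the message appears
# verbatim as the first k codeword symbols; the rest are values of the
# interpolated polynomial p at the remaining evaluation points.
P = 929

def lagrange_interpolate(points):
    """Coefficients (lowest first) of the unique polynomial of degree
    < len(points) passing through the given (x, y) pairs, mod P."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                basis = [(b - xj * a) % P
                         for a, b in zip(basis + [0], [0] + basis)]
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return coeffs

def rs_encode_systematic(message, n):
    k = len(message)
    p = lagrange_interpolate(list(enumerate(message)))
    def ev(x):
        y = 0
        for c in reversed(p):
            y = (y * x + c) % P
        return y
    return message + [ev(x) for x in range(k, n)]

cw = rs_encode_systematic([1, 6, 17], 7)
assert cw[:3] == [1, 6, 17]      # the message is a prefix of the codeword
```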
Since Lagrange interpolation is a linear transformation, formula_46 is a linear mapping. In fact, we have formula_47, where
A discrete Fourier transform is essentially the same as the encoding procedure; it uses the generator polynomial "p"(x) to map a set of evaluation points into the message values as shown above:
The inverse Fourier transform could be used to convert an error free set of "n" < "q" message values back into the encoding polynomial of "k" coefficients, with the constraint that in order for this to work, the set of evaluation points used to encode the message must be a set of increasing powers of "α":
However, Lagrange interpolation performs the same conversion without the constraint on the set of evaluation points or the requirement of an error free set of message values and is used for systematic encoding, and in one of the steps of the Gao decoder.
In this view, the sender again maps the message formula_17 to a polynomial formula_15, and for this, any of the two mappings just described can be used (where the message is either interpreted as the coefficients of formula_15 or as the initial sequence of values of formula_15). Once the sender has constructed the polynomial formula_15 in some way, however, instead of sending the "values" of formula_15 at all points, the sender computes some related polynomial formula_58 of degree at most formula_59 for formula_60 and sends the formula_19 "coefficients" of that polynomial. The polynomial formula_62 is constructed by multiplying the message polynomial formula_63, which has degree at most formula_5, with a generator polynomial formula_65 of degree formula_66 that is known to both the sender and the receiver. The generator polynomial formula_67 is defined as the polynomial whose roots are exactly formula_68, i.e.,
The transmitter sends the formula_60 coefficients of formula_71. Thus, in the BCH view of Reed–Solomon codes, the set formula_2 of codewords is defined for formula_60 as follows:
The encoding procedure for the BCH view of Reed–Solomon codes can be modified to yield a systematic encoding procedure, in which each codeword contains the message as a prefix, and simply appends error correcting symbols as a suffix. Here, instead of sending formula_75, the encoder constructs the transmitted polynomial formula_76 such that the coefficients of the formula_4 largest monomials are equal to the corresponding coefficients of formula_78, and the lower-order coefficients of formula_76 are chosen exactly in such a way that formula_76 becomes divisible by formula_67. Then the coefficients of formula_78 are a subsequence (specifically, a prefix) of the coefficients of formula_76. To get a code that is overall systematic, we construct the message polynomial formula_78 by interpreting the message as the sequence of its coefficients.
Formally, the construction is done by multiplying formula_78 by formula_86 to make room for the formula_87 check symbols, dividing that product by formula_67 to find the remainder, and then compensating for that remainder by subtracting it. The formula_89 check symbols are created by computing the remainder formula_90:
The remainder has degree at most formula_92, whereas the coefficients of formula_93 in the polynomial formula_94 are zero. Therefore, the following definition of the codeword formula_76 has the property that the first formula_4 coefficients are identical to the coefficients of formula_78:
As a result, the codewords formula_76 are indeed elements of formula_2, that is, they are divisible by the generator polynomial formula_67:
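The systematic BCH-view construction above can be sketched in Python over the toy prime field GF(929) with "α" = 3 (the field used in this article's worked examples); the choice of generator roots "α"^1, ..., "α"^("n"−"k") is one common convention:

```python
# BCH-view systematic encoding over GF(929), alpha = 3, for an RS(7,3)
# code: s(x) = x^(n-k) p(x) - (x^(n-k) p(x) mod g(x)) is divisible by
# g(x), and the message coefficients survive as the high-order prefix.
P, alpha, n, k = 929, 3, 7, 3

def poly_mul(a, b):             # coefficients lowest-order first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_mod(a, m):
    a = a[:]
    for i in range(len(a) - 1, len(m) - 2, -1):
        q = a[i] * pow(m[-1], -1, P) % P
        for j, mj in enumerate(m):
            a[i - len(m) + 1 + j] = (a[i - len(m) + 1 + j] - q * mj) % P
    return a[:len(m) - 1]

# generator with roots alpha^1 .. alpha^(n-k)
g = [1]
for i in range(1, n - k + 1):
    g = poly_mul(g, [(-pow(alpha, i, P)) % P, 1])

def rs_encode_bch(message):     # message = coefficients of p(x), lowest first
    shifted = [0] * (n - k) + message            # x^(n-k) * p(x)
    r = poly_mod(shifted, g)                     # remainder to cancel
    return [(c - rc) % P for c, rc in zip(shifted, r + [0] * k)]

s = rs_encode_bch([1, 2, 3])
assert poly_mod(s, g) == [0, 0, 0, 0]   # s(x) is divisible by g(x)
assert s[n - k:] == [1, 2, 3]           # message is the high-order prefix
```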
The Reed–Solomon code is a ["n", "k", "n" − "k" + 1] code; in other words, it is a linear block code of length "n" (over "F") with dimension "k" and minimum Hamming distance formula_103 The Reed–Solomon code is optimal in the sense that the minimum distance has the maximum value possible for a linear code of size ("n", "k"); this is known as the Singleton bound. Such a code is also called a maximum distance separable (MDS) code.
The error-correcting ability of a Reed–Solomon code is determined by its minimum distance, or equivalently, by "n" − "k", the measure of redundancy in the block. If the locations of the error symbols are not known in advance, then a Reed–Solomon code can correct up to ⌊("n" − "k")/2⌋ erroneous symbols, i.e., it can correct half as many errors as there are redundant symbols added to the block. Sometimes error locations are known in advance (e.g., "side information" in demodulator signal-to-noise ratios); these are called erasures. A Reed–Solomon code (like any MDS code) is able to correct twice as many erasures as errors, and any combination of errors and erasures can be corrected as long as the relation 2"E" + "S" ≤ "n" − "k" is satisfied, where "E" is the number of errors and "S" is the number of erasures in the block.
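The combined condition is easy to check mechanically; a small helper, using the common (255,223) parameters as an example:

```python
# E errors and S erasures are simultaneously correctable by an (n, k)
# Reed-Solomon code whenever 2E + S <= n - k.
def correctable(n, k, errors, erasures):
    return 2 * errors + erasures <= n - k

assert correctable(255, 223, 16, 0)      # up to 16 unknown-location errors
assert correctable(255, 223, 0, 32)      # or twice as many erasures
assert not correctable(255, 223, 17, 0)  # one error too many
```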
The theoretical error bound can be described via the following formula for the AWGN channel for FSK:
and for other modulation schemes:
where formula_110, formula_111, formula_112, formula_58 is the symbol error rate in uncoded AWGN case and formula_114 is the modulation order.
For practical uses of Reed–Solomon codes, it is common to use a finite field "F" with 2^"m" elements. In this case, each symbol can be represented as an "m"-bit value.
The sender sends the data points as encoded blocks, and the number of symbols in the encoded block is "n" = 2^"m" − 1. Thus a Reed–Solomon code operating on 8-bit symbols has "n" = 2^8 − 1 = 255 symbols per block. (This is a very popular value because of the prevalence of byte-oriented computer systems.) The number "k", with "k" < "n", of "data" symbols in the block is a design parameter. A commonly used code encodes "k" = 223 eight-bit data symbols plus 32 eight-bit parity symbols in an "n" = 255-symbol block; this is denoted as a (255,223) code, and is capable of correcting up to 16 symbol errors per block.
The Reed–Solomon code properties discussed above make them especially well-suited to applications where errors occur in bursts. This is because it does not matter to the code how many bits in a symbol are in error — if multiple bits in a symbol are corrupted it only counts as a single error. Conversely, if a data stream is not characterized by error bursts or drop-outs but by random single bit errors, a Reed–Solomon code is usually a poor choice compared to a binary code.
The Reed–Solomon code, like the convolutional code, is a transparent code. This means that if the channel symbols have been inverted somewhere along the line, the decoders will still operate. The result will be the inversion of the original data. However, the Reed–Solomon code loses its transparency when the code is shortened. The "missing" bits in a shortened code need to be filled by either zeros or ones, depending on whether the data is complemented or not. (To put it another way, if the symbols are inverted, then the zero-fill needs to be inverted to a one-fill.) For this reason it is mandatory that the sense of the data (i.e., true or complemented) be resolved before Reed–Solomon decoding.
Whether the Reed–Solomon code is cyclic or not depends on subtle details of the construction. In the original view of Reed and Solomon, where the codewords are the values of a polynomial, one can choose the sequence of evaluation points in such a way as to make the code cyclic. In particular, if formula_125 is a primitive root of the field formula_21, then by definition all non-zero elements of formula_21 take the form formula_128 for formula_129, where formula_130. Each polynomial formula_131 over formula_21 gives rise to a codeword formula_133. Since the function formula_134 is also a polynomial of the same degree, this function gives rise to a codeword formula_135; since formula_136 holds, this codeword is the cyclic left-shift of the original codeword derived from formula_131. So choosing a sequence of primitive root powers as the evaluation points makes the original view Reed–Solomon code cyclic. Reed–Solomon codes in the BCH view are always cyclic because BCH codes are cyclic.
Designers are not required to use the "natural" sizes of Reed–Solomon code blocks. A technique known as "shortening" can produce a smaller code of any desired size from a larger code. For example, the widely used (255,223) code can be converted to a (160,128) code by padding the unused portion of the source block with 95 binary zeroes and not transmitting them. At the decoder, the same portion of the block is loaded locally with binary zeroes. The Delsarte–Goethals–Seidel theorem illustrates an example of an application of shortened Reed–Solomon codes. In parallel to shortening, a technique known as puncturing allows omitting some of the encoded parity symbols.
The decoders described in this section use the Reed–Solomon original view of a codeword as a sequence of polynomial values, where the polynomial is based on the message to be encoded. The same set of fixed evaluation points is used by the encoder and decoder, and the decoder recovers the encoding polynomial (and optionally an error locating polynomial) from the received message.
Reed and Solomon (1960) described a theoretical decoder that corrected errors by finding the most popular message polynomial. The decoder only knows the received sequence of values and which encoding method was used to generate the codeword's sequence of values. The original message, the polynomial, and any errors are unknown. A decoding procedure could use a method like Lagrange interpolation on various subsets of "n" codeword values taken "k" at a time to repeatedly produce potential polynomials, until a sufficient number of matching polynomials are produced to reasonably eliminate any errors in the received codeword. Once a polynomial is determined, any errors in the codeword can be corrected by recalculating the corresponding codeword values. Unfortunately, in all but the simplest of cases, there are too many subsets, so the algorithm is impractical. The number of subsets is the binomial coefficient "C"("n", "k"), which is infeasible for even modest codes. For a (255,249) code that can correct 3 errors, the naive theoretical decoder would examine 359 billion subsets.
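The subset count is simple to reproduce (assuming, as the 3-error figure implies, a (255,249) code):

```python
# Number of k-subsets of codeword positions the naive decoder would try.
import math

subsets = math.comb(255, 249)    # same as math.comb(255, 6)
print(subsets)                   # -> 359895314625, about 359 billion
```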
In 1986, a decoder known as the Berlekamp–Welch algorithm was developed. It recovers both the original message polynomial and an error "locator" polynomial that produces zeroes for the input values that correspond to errors, with time complexity O("n"^3), where "n" is the number of values in a message. The recovered polynomial is then used to recover (recalculate as needed) the original message.
Using RS(7,3), GF(929), and the set of evaluation points "a""i" = "i" − 1
If the message polynomial is
The codeword is
Errors in transmission might cause this to be received instead.
The key equations are:
Assume maximum number of errors: "e" = 2. The key equations become:
Using Gaussian elimination:
Recalculate where to correct resulting in the corrected codeword:
In 2002, an improved decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.
Using the same data as the Berlekamp–Welch example above:
divide "Q"(x) and "E"(x) by most significant coefficient of "E"(x) = 708. (Optional)
Recalculate where to correct resulting in the corrected codeword:
The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder.
Daniel Gorenstein and Neal Zierler developed a decoder that was described in a MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961. The Gorenstein–Zierler decoder and the related work on BCH codes are described in a book "Error Correcting Codes" by W. Wesley Peterson (1961).
The transmitted message, formula_151, is viewed as the coefficients of a polynomial "s"("x") that is divisible by a generator polynomial "g"("x").
where "α" is a primitive root.
Since "s"("x") is a multiple of the generator "g"("x"), it follows that it "inherits" all its roots:
The transmitted polynomial is corrupted in transit by an error polynomial "e"("x") to produce the received polynomial "r"("x").
where "ei" is the coefficient for the "i"-th power of "x". Coefficient "ei" will be zero if there is no error at that power of "x" and nonzero if there is an error. If there are "ν" errors at distinct powers "ik" of "x", then
The goal of the decoder is to find the number of errors ("ν"), the positions of the errors ("ik"), and the error values at those positions ("eik"). From those, "e"("x") can be calculated and subtracted from "r"("x") to get the originally sent message "s"("x").
The decoder starts by evaluating the polynomial as received at certain points. We call the results of that evaluation the "syndromes", "S""j". They are defined as:
The advantage of looking at the syndromes is that the message polynomial drops out. In other words, the syndromes only relate to the error, and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit.
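A quick Python sketch over GF(929) with "α" = 3 and "n" − "k" = 4 shows this behaviour: the syndromes of a multiple of the generator polynomial are all zero, and corrupting a single coefficient makes them nonzero.

```python
# Syndromes S_j = r(alpha^j), j = 1..(n-k), over GF(929) with alpha = 3.
# A valid codeword (a multiple of the generator g) gives all-zero
# syndromes; any corruption shows up as nonzero syndromes.
P, alpha, nk = 929, 3, 4

def syndromes(r):                 # r = received coefficients, lowest first
    out = []
    for j in range(1, nk + 1):
        x, y = pow(alpha, j, P), 0
        for c in reversed(r):
            y = (y * x + c) % P
        out.append(y)
    return out

# g(x) = (x - 3)(x - 9)(x - 27)(x - 81) mod 929, padded to length n = 7
g = [1]
for i in range(1, nk + 1):
    root = pow(alpha, i, P)
    g = [(b - root * a) % P for a, b in zip(g + [0], [0] + g)]

clean = g + [0, 0]                # g itself is a (trivial) codeword
assert syndromes(clean) == [0, 0, 0, 0]

bad = clean[:]
bad[5] = (bad[5] + 1) % P         # corrupt one symbol
assert any(s != 0 for s in syndromes(bad))
```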
For convenience, define the error locators "Xk" and error values "Yk" as:
Then the syndromes can be written in terms of the error locators and error values as
This definition of the syndrome values is equivalent to the previous since formula_161.
The syndromes give a system of "n" − "k" ≥ 2"ν" equations in 2"ν" unknowns, but that system of equations is nonlinear in the "Xk" and does not have an obvious solution. However, if the "Xk" were known (see below), then the syndrome equations provide a linear system of equations that can easily be solved for the "Yk" error values.
Consequently, the problem is finding the "X""k", because then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding the error values "Y""k".
In the variant of this algorithm where the locations of the errors are already known (when it is being used as an erasure code), this is the end. The error locations ("X""k") are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up to "n" − "k" errors can be corrected.
The rest of the algorithm serves to locate the errors and will require syndrome values up to 2"ν", instead of just the "ν" used thus far. This is why twice as many error-correcting symbols need to be added as errors can be corrected without knowing their locations.
There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies those error locations "Xk".
Define the error locator polynomial Λ("x") as
The zeros of Λ("x") are the reciprocals formula_167. This follows from the above product notation construction since if formula_168 then one of the multiplied terms will be zero formula_169, making the whole polynomial evaluate to zero.
Let "j" be any integer such that 1 ≤ "j" ≤ "ν". Multiply both sides by "Y""k" "X""k"^("j"+"ν") and it will still be zero.
Sum for "k" = 1 to "ν"
Collect each term into its own sum, and extract the constant coefficients Λ"i" that are unaffected by the summation
These summations are now equivalent to the syndrome values, which we know and can substitute in. This therefore reduces to
Subtracting the syndrome "S""j"+"ν" from both sides yields
Recall that "j" was chosen to be any integer between 1 and "v" inclusive, and this equivalence is true for any and all such values. Therefore, we have "v" linear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λ"i" of the error location polynomial:
The above assumes the decoder knows the number of errors "ν", but that number has not been determined yet. The PGZ decoder does not determine "ν" directly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trial "ν" and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial "ν" is reduced by one and the next smaller system is examined.
Use the coefficients Λ"i" found in the last step to build the error location polynomial. The roots of the error location polynomial can be found by exhaustive search. The error locators "Xk" are the reciprocals of those roots. The order of coefficients of the error location polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators formula_182 (not their reciprocals formula_167). Chien search is an efficient implementation of this step.
Once the error locators "Xk" are known, the error values can be determined. This can be done by direct solution for "Yk" in the error equations matrix given above, or using the Forney algorithm.
Calculate "ik" by taking the log base formula_125 of "Xk". This is generally done using a precomputed lookup table.
Finally, e(x) is generated from "ik" and "eik" and then is subtracted from r(x) to get the originally sent message s(x), with errors corrected.
Consider the Reed–Solomon code defined in GF(929) with "α" = 3 (this is used in PDF417 barcodes) for an RS(7,3) code. The generator polynomial is
If the message polynomial is , then a systematic codeword is encoded as follows.
Errors in transmission might cause this to be received instead.
The syndromes are calculated by evaluating "r" at powers of "α".
Using Gaussian elimination:
The coefficients can be reversed to produce roots with positive exponents, but typically this isn't used:
with the log of the roots corresponding to the error locations (right to left, location 0 is the last term in the codeword).
To calculate the error values, apply the Forney algorithm.
Subtracting "e"1 "x"3 and "e"2 "x"4 from the received polynomial "r" reproduces the original codeword "s".
The Berlekamp–Massey algorithm is an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ(x) with an assumed number of errors "e":
and then adjusts Λ("x") and "e" so that a recalculated Δ would be zero. The article Berlekamp–Massey algorithm has a detailed description of the procedure. In the following example, "C"("x") is used to represent Λ("x").
Using the same data as the Peterson–Gorenstein–Zierler example above:
The final value of "C" is the error locator polynomial, Λ("x").
Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Sugiyama's adaptation of the extended Euclidean algorithm .
Define S(x), Λ(x), and Ω(x) for "t" syndromes and "e" errors:
The key equation is:
For "t" = 6 and "e" = 3:
The middle terms are zero due to the relationship between Λ and syndromes.
The extended Euclidean algorithm can find a series of polynomials of the form
where the degree of "R" decreases as "i" increases. Once the degree of "R""i"("x") < "t"/2, then
Ai(x) = Λ(x)
Bi(x) = −Q(x)
Ri(x) = Ω(x).
B(x) and Q(x) don't need to be saved, so the algorithm becomes:
To set the low-order term of Λ("x") to 1, divide Λ("x") and Ω("x") by "A""i"(0):
Ai(0) is the constant (low order) term of Ai.
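The procedure can be sketched in Python over GF(929); the syndromes below are those of a hypothetical two-error pattern (magnitudes 5 and 11 at positions 1 and 4, "α" = 3), so the expected locator polynomial is (1 − 3"x")(1 − 81"x"):

```python
# Sugiyama-style solver over GF(929): run the extended Euclidean
# algorithm on x^(2t) and S(x) until deg R < t; the Bezout coefficient
# of S(x) is Lambda(x) and the remainder is Omega(x), up to a scalar.
P = 929

def poly_deg(a):
    d = len(a) - 1
    while d >= 0 and a[d] == 0:
        d -= 1
    return d

def poly_divmod(a, b):
    a, db = a[:], poly_deg(b)
    inv = pow(b[db], -1, P)
    q = [0] * max(1, len(a) - db)
    for i in range(poly_deg(a), db - 1, -1):
        c = a[i] * inv % P
        q[i - db] = c
        for j in range(db + 1):
            a[i - db + j] = (a[i - db + j] - c * b[j]) % P
    return q, a

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def sugiyama(S, t):
    r_prev, r = [0] * (2 * t) + [1], S[:]     # x^(2t) and S(x)
    a_prev, a = [0], [1]                      # Bezout coefficients of S
    while poly_deg(r) >= t:
        q, rem = poly_divmod(r_prev, r)
        r_prev, r = r, rem
        qa = poly_mul(q, a)
        a_prev, a = a, [(u - v) % P for u, v in
                        zip(a_prev + [0] * len(qa), qa + [0] * len(a_prev))]
    return a, r                               # Lambda, Omega (scaled)

# Syndromes of two errors: magnitudes 5, 11 at positions 1, 4 (alpha = 3)
S = [906, 683, 718, 249]
lam, omega = sugiyama(S, 2)
inv0 = pow(lam[0], -1, P)                     # divide by A_i(0), as above
lam = [c * inv0 % P for c in lam]
omega = [c * inv0 % P for c in omega]
assert lam[:3] == [1, 845, 243]               # (1 - 3x)(1 - 81x) mod 929
assert omega[:2] == [906, 757]
```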
Using the same data as the Peterson–Gorenstein–Zierler example above:
A discrete Fourier transform can be used for decoding. To avoid conflict with syndrome names, let "c"("x") = "s"("x"), the encoded codeword. "r"("x") and "e"("x") are the same as above. Define "C"("x"), "E"("x"), and "R"("x") as the discrete Fourier transforms of "c"("x"), "e"("x"), and "r"("x"). Since "r"("x") = "c"("x") + "e"("x"), and since a discrete Fourier transform is a linear operator, "R"("x") = "C"("x") + "E"("x").
Transform "r"("x") to "R"("x") using discrete Fourier transform. Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes, "t" coefficients of "R"("x") and "E"("x") are the same as the syndromes:
Use formula_201 through formula_202 as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders.
Let v = number of errors. Generate E(x) using the known coefficients formula_203 to formula_204, the error locator polynomial, and these formulas
Then calculate "C"("x") = "R"("x") − "E"("x") and take the inverse transform (polynomial interpolation) of "C"("x") to produce "c"("x").
The Singleton bound states that the minimum distance "d" of a linear block code of size ("n","k") is upper-bounded by "n" − "k" + 1. The distance "d" was usually understood to limit the error-correction capability to ⌊"d"/2⌋. The Reed–Solomon code achieves this bound with equality, and can thus correct up to ⌊("n" − "k" + 1)/2⌋ errors. However, this error-correction bound is not exact.
In 1999, Madhu Sudan and Venkatesan Guruswami at MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes" introducing an algorithm that allowed for the correction of errors beyond half the minimum distance of the code. It applies to Reed–Solomon codes and more generally to algebraic geometric codes. This algorithm produces a list of codewords (it is a list-decoding algorithm) and is based on interpolation and factorization of polynomials over formula_208 and its extensions.
The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. In contrast, a soft-decision decoder could associate with each symbol an additional value corresponding to the channel demodulator's confidence in the correctness of the symbol. The advent of LDPC and turbo codes, which employ iterated soft-decision belief propagation decoding methods to achieve error-correction performance close to the theoretical limit, has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter and Alexander Vardy presented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami.
In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder.
Here we present a simple MATLAB implementation of an encoder.
function [ encoded ] = rsEncoder( msg, m, prim_poly, n, k )
end
% Find the Reed-Solomon generating polynomial g(x); by the way, this is
% the same as the rsgenpoly function in MATLAB
function g = genpoly(k, n, alpha)
end
Now the decoding part:
function [ decoded, error_pos, error_mag, g, S ] = rsDecoder( encoded, m, prim_poly, n, k )
end
% Remove leading zeros from Galois array
function gt = trim(g)
end
% Add leading zeros
function xpad = pad(x,k)
end
Big Four accounting firms
The Big Four is the nickname used to refer collectively to the four largest professional services networks in the world, consisting of Deloitte, Ernst & Young, KPMG, and PricewaterhouseCoopers. The four networks are often grouped together for a number of reasons; they are each comparable in size relative to the rest of the market, both in terms of revenue and workforce; they are each considered equal in their ability to provide a wide scope of quality professional services to their clients; and, among those looking to start a career in professional services, particularly accounting, they are considered equally attractive networks to work in, because of the frequency with which these firms engage with "Fortune" 500 companies.
The Big Four each offer audit, assurance, taxation, management consulting, actuarial, corporate finance, and legal services to their clients. A significant majority of the audits of public companies, as well as many audits of private companies, are conducted by these four networks.
Until the late 20th century, the market for professional services was dominated by eight networks, nicknamed the "Big Eight". The Big Eight consisted of Arthur Andersen, Coopers & Lybrand, Deloitte Haskins & Sells, Ernst & Whinney, Peat Marwick Mitchell, Price Waterhouse, Touche Ross, and Arthur Young.
The Big Eight gradually shrank through mergers between these firms, as well as the 2002 collapse of Arthur Andersen, leaving four networks dominating the market at the turn of the 21st century. In the United Kingdom in 2011, it was reported that the Big Four account for the audits of 99% of the companies in the FTSE 100, and 96% of the companies in the FTSE 250 Index, an index of the leading mid-cap listed companies. Such a high level of industry concentration has caused concern, and a desire among some in the investment community for the Competition and Markets Authority to consider breaking up the Big Four. In October 2018, the CMA announced it would launch a detailed study of the Big Four's dominance of the audit sector.
None of the "firms" within the Big Four is actually a single firm; rather, they are professional services networks. Each is a network of firms, owned and managed independently, which have entered into agreements with the other member firms in the network to share a common name, brand, intellectual property, and quality standards. Each network has established a global entity to co-ordinate the activities of the network. In the case of KPMG, the co-ordinating entity is a Swiss association, and in the cases of Deloitte, PricewaterhouseCoopers and Ernst & Young, the co-ordinating entity is a UK limited company. Those entities "do not" themselves perform external professional services, nor do they own or control the member firms. Nevertheless, these networks colloquially are referred to as "firms" for the sake of simplicity and to reduce confusion with lay-people. These accounting and professional services networks are similar in nature to how law firm networks in the legal profession work.
In many cases, each member firm practices in a single country, and is structured to comply with the regulatory environment in that country.
Ernst & Young also includes separate legal entities which manage three of its four geographic areas: the Americas, Asia-Pacific, and EMEIA (Europe, the Middle East, India and Africa) groups, the fourth area being Japan, which has no larger co-ordination branch. These entities coordinate services performed by local firms within their respective areas but "do not" perform services or hold ownership in the local entities. There are rare exceptions to this convention; in 2007, KPMG announced a merger of four internationally distinct member firms (in the United Kingdom, Germany, Switzerland and Liechtenstein) to form a single firm.
Since the 1980s, numerous mergers and one major scandal involving Arthur Andersen have reduced the number of major professional-services firms from eight to four.
The firms were called the Big Eight for most of the 20th century, reflecting the international dominance of the eight largest firms:
Most of the Big Eight originated in alliances formed between British and U.S. audit firms in the 19th or early 20th centuries. The firms' initial international expansion was driven by the needs of British- and American-based multinationals for worldwide service. They expanded by forming local partnerships, or by forming alliances with local firms. Arthur Andersen was the exception: the firm originated in the United States, and then expanded internationally by establishing its own offices in other markets, including the United Kingdom.
Price Waterhouse was a U.K. firm which opened a U.S. office in 1890, and later established a separate U.S. partnership. The UK and U.S. Peat Marwick Mitchell firms adopted a common name in 1925. Other firms used separate names for domestic business, and did not adopt common names until much later. For instance, Touche Ross was named such in 1960, Arthur Young, McLelland, Moores & Co in 1968, Coopers & Lybrand in 1973, Deloitte Haskins & Sells in 1978, and Ernst & Whinney in 1979.
In the 1980s the Big 8, each with global branding, adopted modern marketing and grew rapidly. They merged with many smaller firms. KPMG was the result of one of the largest of these mergers: in 1987, Peat Marwick merged with the Klynveld Main Goerdeler group to become KPMG Peat Marwick, later known simply as KPMG. This was not, however, a merger between two of the Big Eight.
Competition among these firms intensified, and the Big Eight became the Big Six in 1989. In that year, Ernst & Whinney merged with Arthur Young to form Ernst & Young in June, and Deloitte, Haskins & Sells merged with Touche Ross to form Deloitte & Touche in August.
The Big Six after both mergers occurred were:
There has been some merging of ancestor firms, in some localities, which would aggregate brands belonging to the Big Four today, but in different combinations than the present-day names would otherwise suggest. For example, the United Kingdom local firm of Deloitte, Haskins & Sells merged instead with the United Kingdom firm of Coopers & Lybrand. The resulting firm was called Coopers & Lybrand Deloitte, and the local firm of Touche Ross kept its original name. It was not until the mid-1990s that both UK firms changed their names to match those of their respective international organizations. Meanwhile, in Australia, the local firm of Touche Ross merged instead with KPMG. It is for these reasons that the Deloitte & Touche international organization was known as DRT International (later DTT International), to avoid use of names which would have been ambiguous, as well as contested, in certain markets.
The Big Six became the Big Five, in July 1998, when Price Waterhouse merged with Coopers & Lybrand to form PricewaterhouseCoopers.
The Big Five at this point in time were:
Finally, the insolvency of Arthur Andersen, stemming from its involvement in the 2001 Enron scandal, produced the Big Four:
The Enron collapse and ensuing investigation prompted scrutiny of the company's financial reporting, which that year was audited by Arthur Andersen. Arthur Andersen was eventually indicted for obstruction of justice for shredding documents related to the audit in the Enron scandal. The resulting conviction, although it was later overturned, still effectively meant the end of Arthur Andersen, because the firm was not allowed to take on new clients while it was under investigation. Most of its country practices around the world were sold to members of what is now the Big Four, notably Ernst & Young (now known as EY) globally; Deloitte & Touche in the United Kingdom, Canada, Spain, and Brazil; and PricewaterhouseCoopers (now known as PwC) in China and Hong Kong.
In 2010, Deloitte, with its 1.8% growth, was able to outpace PricewaterhouseCoopers' 1.5% growth, gaining "first place" in revenue size, and became the largest firm in the professional services industry. In 2011, PwC re-gained first place with 10% revenue growth. In 2013, these two firms claimed the top two spots with only a $200 million revenue difference, that is, within half a percent. However, Deloitte saw faster growth than PwC over the next few years (largely due to acquisitions) and reclaimed the title of largest of the Big Four in Fiscal Year 2016.
It was estimated that the Big Four had about a 67% share of the global accountancy market in 2012, while most of the rest was divided among so called mid-tier players, such as Grant Thornton, BDO, and Crowe Global.
In Australia, the heads of the big four firms have met regularly for dinner, a parliamentary committee was told in 2018. The revelation was among issues which led to an inquiry by the Australian Competition and Consumer Commission into possible collusion in the selling of audit and other services. However, Ernst & Young told the inquiry that the dinners, which were held once or twice a year, were to discuss industry trends and issues of corporate culture such as inclusion and diversity.
According to Australian taxation expert George Rozvany, the Big Four are "the masterminds of multinational tax avoidance and the architects of tax schemes which cost governments and their taxpayers an estimated $US1 trillion a year". At the same time they are advising governments on tax reforms, they are advising their multinational clients how to avoid taxes.
In the wake of industry concentration and the occasional firm failure, the issue of a credible alternative industry structure has been raised. The limiting factor on the expansion of the Big Four to include additional firms is that, although some of the firms in the next tier have become quite large or have formed international networks, effectively all large public companies insist on having an audit performed by a Big Four network. As a result, smaller firms cannot compete well enough to break into the top end of the market.
In 2011, the House of Lords of the United Kingdom completed an inquiry into the financial crisis, and called for an Office of Fair Trading investigation into the dominance of the Big Four. It is reported that the Big Four audit all but one of the companies that constitute the FTSE 100, and 240 of the companies in the FTSE 250, an index of the leading mid-cap listed companies.
Documents published in June 2010 show that some UK companies' banking covenants required them to use one of the Big Four. This approach from the lender prevents firms in the next tier from competing for audit work for such companies. The British Bankers' Association said that such clauses are rare. Current discussions in the UK consider outlawing such clauses.
In Ireland, the Director of Corporate Enforcement said in February 2011 that auditors "report surprisingly few types of company law offences to us", with the so-called "big four" auditing firms reporting the least often to his office, at just 5% of all reports.
The January 2018 collapse of the UK construction and services company Carillion raised further questions about the Big Four, all of which had advised the company before its liquidation. On 13 February 2018, the Big Four were described by MP and chair of the Work and Pensions Select Committee Frank Field as "feasting on what was soon to become a carcass" after collecting fees of £72m for Carillion work during the years leading up to its collapse. The final report of a Parliamentary inquiry into the collapse of Carillion, published on 16 May 2018, described the Big Four accounting firms as a "cosy club", with KPMG singled out for its "complicity" in signing off Carillion's "increasingly fantastical figures" and internal auditor Deloitte accused of failing to identify, or ignoring, "terminal failings". The report recommended the Government refer the statutory audit market to the Competition and Markets Authority (CMA), urging consideration of breaking up the Big Four. In September 2018, Business Secretary Greg Clark announced he had asked the CMA to conduct an inquiry into competition in the audit sector, and on 9 October 2018, the CMA announced it had launched a detailed study. Following the collapses of Carillion and BHS, MPs have called for the "Big Four" to be separated into multiple units, arguing that this would help provide the professional skepticism required to furnish high-quality audits.
A 2019 analysis by the PCAOB in the United States observed that the big four accounting firms had bungled almost 31% of their audits since 2009. Another study, by the Project On Government Oversight, found that auditors tended to present audit reports that pleased their clients, and lost business when they did not. Despite this record, the PCAOB in its 16-year history has brought only 18 enforcement cases against the "big four". Although these auditors have failed audits in 31% of inspected cases (808 cases in total), they have faced action by the PCAOB in only 6.6% of those cases. KPMG has never been fined, despite having the worst audit failure rate, at 36.6%.
Branding list
Showing year of formation through merger, or adoption of single brand name. | https://en.wikipedia.org/wiki?curid=38798 |
Galanthus
Galanthus (snowdrop; Greek "gála" "milk", "ánthos" "flower") is a small genus of approximately 20 species of bulbous perennial herbaceous plants in the family Amaryllidaceae. The plants have two linear leaves and a single small white drooping bell-shaped flower with six petal-like (petaloid) tepals in two circles (whorls). The smaller inner petals have green markings.
Snowdrops have been known since the earliest times under various names, but were named "Galanthus" in 1753. As the number of recognised species increased, various attempts were made to divide the species into subgroups, usually on the basis of the pattern of the emerging leaves (vernation). In the era of molecular phylogenetics this characteristic has been shown to be unreliable and now seven molecularly defined clades are recognised that correspond to the biogeographical distribution of species. New species continue to be discovered.
Most species flower in winter, before the vernal equinox (20 or 21 March in the Northern Hemisphere), but some flower in early spring and late autumn. Sometimes snowdrops are confused with the two related genera within the tribe Galantheae, snowflakes "Leucojum" and "Acis".
All species of "Galanthus" are perennial petaloid herbaceous bulbous (growing from bulbs) monocot plants. The genus is characterised by the presence of two leaves, pendulous white flowers with six free perianth segments in two whorls. The inner whorl is smaller than the outer whorl and has green markings.
These are basal, emerging from the bulb initially enclosed in a tubular membranous sheath of cataphylls. Generally, these are two (sometimes three) in number and linear, strap-shaped, or oblanceolate. Vernation, the arrangement of the emerging leaves relative to each other, varies among species. These may be applanate (flat), supervolute (convolute), or explicative (pleated). In applanate vernation the two leaf blades are pressed flat to each other within the bud and as they emerge; explicative leaves are also pressed flat against each other, but the edges of the leaves are folded back (externally recurved) or sometimes rolled; in supervolute plants, one leaf is tightly clasped around the other within the bud and generally remains at the point where the leaves emerge from the soil (for illustration, see Stearn and Davis). In the past, this feature has been used to distinguish between species and to determine the parentage of hybrids, but now has been shown to be homoplasious, and not useful in this regard.
The scape (flowering stalk) is erect, leafless, terete, or compressed.
The scape bears at the top a pair of bract-like spathe valves usually fused down one side and joined by a papery membrane, appearing monophyllous (single). From between them emerges a solitary (rarely two), pendulous, nodding, bell-shaped white flower, held on a slender pedicel. The flower bears six free perianth segments (tepals) rather than petals, arranged in two whorls of three, the outer whorl being larger and more convex than the inner series. The outer tepals are acute to more or less obtuse, spathulate or oblanceolate to narrowly obovate or linear, shortly clawed, and erect spreading. The inner tepals are much shorter (half to two thirds as long), oblong, spathulate or oblanceolate, somewhat unguiculate (claw like) and tapered to the base and erect. These tepals also bear green markings at the base, the apex, or both; when at the apex, the marking is bridge-shaped over the small sinus (notch) at the tip of each emarginate tepal. Occasionally the markings are green-yellow, yellow, or absent, and their shape and size vary by species.
The six stamens are inserted at the base of the perianth, and are very short (shorter than the inner perianth segments), the anthers basifixed (attached at their base) with filaments much shorter than the anthers and dehisce (open) by terminal pores or short slits.
The inferior ovary is three-celled. The style is slender and longer than the anthers, the stigma are minutely capitate. The ovary ripens into a three-celled capsule fruit. This fruit is fleshy, ellipsoid or almost spherical, opening by three flaps with seeds that are light brown to white and oblong with a small appendage or tail (elaiosome) containing substances attractive to ants, which distribute the seeds.
The chromosome number is 2n=24.
Floral formula: ✶ P3+3 A3+3 G(3), ovary inferior
The genus "Galanthus" is native to Europe and the Middle East, from the Spanish and French Pyrenees in the west through to the Caucasus and Iran in the east, and south to Sicily, the Peloponnese, the Aegean, Turkey, Lebanon, and Syria. The northern limit is uncertain because "G. nivalis" has been widely introduced and cultivated throughout Europe. "G. nivalis" and some other species valued as ornamentals have become widely naturalised in Europe, North America, and other regions.
"Galanthus nivalis" is the best-known and most widespread representative of the genus "Galanthus". It is native to a large area of Europe, stretching from the Pyrenees in the west, through France and Germany to Poland in the north, Italy, northern Greece, Bulgaria, Romania, Ukraine, and European Turkey. It has been introduced and is widely naturalised elsewhere. Although it is often thought of as a British native wild flower, or to have been brought to the British Isles by the Romans, it probably was introduced around the early sixteenth century and is currently not a protected species in the UK. It was first recorded as naturalised in the UK in Worcestershire and Gloucestershire in 1770. Most other "Galanthus" species are from the eastern Mediterranean, but several are found in southern Russia, Georgia, Armenia, and Azerbaijan. "Galanthus fosteri" comes from Jordan, Lebanon, Syria, Turkey, and perhaps, Palestine.
"Galanthus" grows best in woodland, in acid or alkaline soil, although some are grassland or mountain species.
Snowdrops have been known since early times, being described by the classical Greek author Theophrastus in the fourth century BC in his "Περὶ φυτῶν ἱστορία" (Latin: "Historia plantarum", "Enquiry into plants"). He gave it, and similar plants, the name λευκόἲον (λευκος, leukos "white" and ἰόν, ion "violet"), from which the later name "Leucojum" was derived. He described the plant as "ἑπεἰ τοῖς γε χρώμασι λευκἂ καἱ οὐ λεπυριώδη" (in colour white and bulbs without scales) and of their habits "Ἰῶν δ' ἁνθῶν τὀ μἑν πρῶτον ἑκφαἱνεται τὁ λευκόἲον, ὅπου μἑν ό ἀἠρ μαλακώτερος εὐθὑς τοῦ χειμῶνος, ὅπου δἐ σκληρότερος ὕστερον, ἑνιαχοῡ τοῡ ἣρος" (Of the flowers, the first to appear is the white violet. Where the climate is mild, it appears with the first sign of winter, but in more severe climes, later in spring).
Rembert Dodoens, a Flemish botanist, had described and illustrated this plant in 1583 as did Gerard in England in 1597 (probably using much of Dodoens' material), calling it, "Leucojum bulbosum praecox" (Timely bulbous violet). Gerard refers to Theophrastus' description as "Viola alba" or "Viola bulbosa" using Pliny's translation, and comments that the plant had originated in Italy and had "taken possession" in England "many years past". The genus was formally named "Galanthus" and described by Carl Linnaeus in 1753, with the single species, "Galanthus nivalis", which is the type species. Consequently, Linnaeus is granted the botanical authority. In doing so, he distinguished this genus and species from "Leucojum" ("Leucojum bulbosum trifolium minus"), a name by which it previously had been known.
In 1763 Adanson began a system of arranging genera in families. Using the synonym "Acrocorion" (also spelt "Akrokorion"), he placed "Galanthus" in the family Liliaceae, section Narcissi. Lamarck provided a description of the genus in his encyclopedia (1786), and later, "Illustrations des genres" (1793). In 1789 de Jussieu, who is credited with the modern concept of genera organised in families, placed "Galanthus" and related genera within a division of Monocotyledons, using a modified form of Linnaeus' sexual classification, but with the respective topography of stamens to carpels rather than just their numbers. In doing so he restored the name "Galanthus" and retained their placement under Narcissi, this time as a family (known as "Ordo" at that time), and referred to the French vernacular name, "perce-neige" (Snow-pierce), based on the plant's tendency to push through early spring snow (see Ecology for illustration). The modern family of Amaryllidaceae, in which "Galanthus" is placed, dates to Jaume Saint-Hilaire (1805) who replaced Jussieu's Narcissi with "Amaryllidées". In 1810 Brown proposed that a subgroup of Liliaceae be distinguished on the basis of the position of the ovaries and be referred to as Amaryllideae, and in 1813, de Candolle separated them by describing Liliacées Juss. and Amaryllidées Brown as two quite separate families. However, in his comprehensive survey of the Flora of France (Flore française, 1805–1815) he divided Liliaceae into a series of "Ordres", and placed Galanthus into the Narcissi "Ordre". This relationship of Galanthus to either liliaceous or amaryllidaceous taxa (see Taxonomy of Liliaceae) was to last for another two centuries until the two were formally divided at the end of the twentieth century. Lindley (1830) followed this general pattern, placing "Galanthus" and related genera such as "Amaryllis" and "Narcissus" in his Amaryllideae (which he called The Narcissus Tribe in English).
By 1853, the number of known plants had increased considerably, and Lindley revised his schema in his last work, placing "Galanthus" and the other two genera of the modern Galantheae together in tribe Amarylleae, order Amaryllidaceae, alliance Narcissales. These three genera have been treated together taxonomically by most authors, on the basis of an inferior ovary. As the number of plant species increased, so did the taxonomic complexity. By the time Bentham and Hooker published their "Genera plantarum" (1862–1883), ordo Amaryllideae contained five tribes, and tribe Amarylleae three subtribes (see Bentham & Hooker system). They placed "Galanthus" in subtribe Genuinae and included three species.
"Galanthus" is one of three closely related genera making up the tribe Galantheae within subfamily Amaryllidoideae (family Amaryllidaceae). Sometimes snowdrops are confused with the other two genera, "Leucojum" and "Acis" (both called snowflakes). "Leucojum" species are much larger and flower in spring (or early summer, depending on the species), with all six tepals in the flower being the same size, although some "poculiform" (goblet- or cup-shaped) "Galanthus" species may have inner segments similar in shape and length to the outer ones. Galantheae are likely to have arisen in the Caucasus.
"Galanthus" has approximately 20 species, but new species continue to be described. "G. trojanus" was identified in Turkey in 2001. "G. panjutinii" (Panjutin's snowdrop) was discovered in 2012 in five locations in a small area (estimated at 20 km2) of the northern Colchis area (western Transcaucasus) of Georgia and Russia. "G. samothracicus" was identified in Greece in 2014. Since it has not been subjected to genetic sequencing, it remains unplaced. It resembles "G. nivalis", but is outside the distribution of that species.
Many species are difficult to identify, however, and traditional infrageneric classifications based on morphology alone, such as those of Stern (1956), Traub (1963) and Davis (1999, 2001), have not reflected what is known about the genus's evolutionary history, owing to the morphological similarities among the species and the relative lack of easily discernible distinguishing characteristics. Stern divided the genus into three series according to leaf vernation (the way the leaves are folded in the bud, when viewed in transverse section, see Description);
Stern further utilised characteristics such as the markings of the inner segments, length of the pedicels in relation to the spathe, and the colour and shape of the leaves in identifying and classifying species.
Traub considered them as subgenera;
By contrast Davis, with much more information and specimens, included biogeography in addition to vernation, forming two series. He used somewhat different terminology for vernation, namely applanate (flat), explicative (plicate), and supervolute (convolute). He merged "Nivalis" and "Plicati" into series "Galanthus", and divided "Latifolii" into two subseries, "Glaucaefolii" (Kem.-Nath) A.P.Davis and "Viridifolii" (Kem.-Nath) A.P.Davis.
Early molecular phylogenetic studies confirmed the genus was monophyletic and suggested four clades, which were labelled as series, and showed that Davis' subseries were not monophyletic. An expanded study in 2013 demonstrated seven major clades corresponding to biogeographical distribution. This study used nuclear encoded nrITS (nuclear ribosomal internal transcribed spacer), and plastid encoded "matK" (maturase K), "trnL-F", "ndhF", and "psbK–psbI", and examined all species recognised at the time, as well as two naturally occurring putative hybrids. The morphological characteristic of vernation that earlier authors had mainly relied on was shown to be highly homoplasious. A number of species, such as "G. nivalis" and "G. elwesii", demonstrated intraspecific biogeographical clades, indicating problems with speciation and a possible need for recircumscription. These clades were assigned names, partly according to Davis' previous groupings. In this model clade Platyphyllus is sister to the rest of the genus.
By contrast another study performed at the same time, using both nuclear and chloroplast DNA, but limited to the 14 species found in Turkey, largely confirmed Davis' series and subseries, and with biogeographical correlation. Series "Galanthus" in this study corresponded to clade nivalis, subseries "Glaucaefolii" with clade Elwesii and subseries "Viridifolii" with clades Woronowii and Alpinus. However, the model did not provide complete resolution.
"sensu" Ronsted et al. 2013
"Galanthus" is derived from the Greek γάλα ("gala"), meaning "milk" and ἄνθος ("anthos") meaning "flower", alluding to the colour of the flowers. The epithet "nivalis" is derived from the Latin, meaning "of the snow". The word "Snowdrop" may be derived from the German "Schneetropfen" (Snow-drop), the tear drop shaped pearl earrings popular in the sixteenth and seventeenth centuries. Other, earlier, common names include Candlemas bells, Fair maids of February, and White ladies (see Symbols).
Snowdrops are hardy herbaceous plants that perennate by underground bulbs. They are among the earliest spring bulbs to bloom, although a few forms of "G. nivalis" are autumn flowering. In colder climates, they will emerge through snow (see illustration). They naturalise relatively easily forming large drifts. These are often sterile, found near human habitation, and also former monastic sites. The leaves die back a few weeks after the flowers have faded. "Galanthus" plants are relatively vigorous and may spread rapidly by forming bulb offsets. They also spread by dispersal of seed, animals disturbing bulbs, and water if disturbed by floods.
Some snowdrop species are threatened in their wild habitats, due to habitat destruction, illegal collecting, and climate change. In most countries collecting bulbs from the wild is now illegal. Under CITES regulations, international trade in any quantity of "Galanthus", whether bulbs, live plants, or even dead ones, is illegal without a CITES permit. This applies to hybrids and named cultivars, as well as species. CITES lists all species, but allows a limited trade in wild-collected bulbs of just three species ("G. nivalis", "G. elwesii", and "G. woronowii") from Turkey and Georgia (see Horticulture). A number of species are on the IUCN Red List of threatened species: "G. trojanus" is critically endangered, four species are vulnerable, "G. nivalis" is near threatened, and several species show decreasing populations. "G. panjutinii" is considered endangered. One of its five known sites, at Sochi, was destroyed by preparations for the 2014 Winter Olympics.
"Galanthus" species and cultivars are extremely popular as symbols of spring and are traded more than any other wild-source ornamental bulb genus. Millions of bulbs are exported annually from Turkey and Georgia. For instance, export quotas for 2016 for "G. elwesii" are 7 million for Turkey and 15 million for Georgia; the figure for "G. woronowii" is 15 million for Georgia. These figures include both wild-taken and artificially propagated bulbs.
Celebrated as a sign of spring, snowdrops may form impressive carpets of white in areas where they are native or have been naturalised. These displays may attract large numbers of sightseers. There are a number of snowdrop gardens in England, Wales, Scotland, and Ireland. Several gardens open specially in February for visitors to admire the flowers. Sixty gardens took part in Scotland's first Snowdrop Festival (1 Feb–11 March 2007). Several gardens in England open during snowdrop season for the National Gardens Scheme (NGS) and in Scotland for Scotland's Gardens. Colesbourne Park in Gloucestershire is one of the best known of the English snowdrop gardens, being the home of Henry John Elwes, a collector of Galanthus specimens, and after whom "Galanthus elwesii" is named.
Numerous single- and double-flowered cultivars of "Galanthus nivalis" are known, and also of several other "Galanthus" species, particularly "G. plicatus" and "G. elwesii". Also, many hybrids between these and other species exist (more than 500 cultivars are described in Bishop, Davis, and Grimshaw's book, plus lists of many cultivars that have now been lost, and others not seen by the authors). They differ particularly in the size, shape, and markings of the flower, the period of flowering, and other characteristics, mainly of interest to the keen (even fanatical) snowdrop collectors, known as "galanthophiles", who hold meetings where the scarcer cultivars change hands. Double-flowered cultivars and forms, such as the extremely common "Galanthus nivalis" f. "pleniflorus" 'Flore Pleno', may be less attractive to some people, but they can have greater visual impact in a garden setting. Many hybrids have also occurred in cultivation.
In the UK these species:
and these cultivars:
have gained the Royal Horticultural Society's Award of Garden Merit.
Propagation is by offset bulbs, either by careful division of clumps in full growth ("in the green"), or removed when the plants are dormant, immediately after the leaves have withered; or by seeds sown either when ripe, or in spring. Professional growers and keen amateurs also use such methods as "twin-scaling" to increase the stock of choice cultivars quickly.
Snowdrops contain an active lectin or agglutinin named GNA for "Galanthus nivalis" agglutinin.
In 1995, Árpád Pusztai genetically modified potatoes to express the GNA gene; he discussed the work in a radio interview in 1998 and published it in the "Lancet" in 1999. These remarks started the so-called Pusztai affair. In this early research, the GNA was expressed in the edible part of the plant, i.e. the potato tuber.
Using improved techniques 22 years later, in 2017, a research team at the Gansu Agricultural University, Lanzhou, China created another transgenic potato plant. These plants express GNA in their leaves, stems, and roots, but produce tubers that do not contain any GNA. They show a reduction of up to 50% in the number of potato aphids and peach-potato aphids per plant.
In 1983, Andreas Plaitakis and Roger Duvoisin suggested that the mysterious magical herb, moly, that appears in Homer's "Odyssey" is the snowdrop. An active substance in snowdrop is called galantamine, which, as an acetylcholinesterase inhibitor, could have acted as an antidote to Circe's poisons. Further supporting this notion are notes made during the fourth century BC by the Greek scholar Theophrastus who wrote in "Historia plantarum" that moly was "used as an antidote against poisons" although which specific poisons it was effective against remains unclear. Galantamine (or galanthamine) may be helpful in the treatment of Alzheimer's disease, although it is not a cure; the substance also occurs naturally in daffodils and other narcissi.
Snowdrops figure prominently in art and literature, often as a symbol in poetry of spring, purity, and religion (see Symbols), such as Walter de la Mare's poem "The Snowdrop" (1929). In this poem, he likened the triple tepals in each whorl ("A triplet of green-pencilled snow") to the Holy Trinity. He used snowdrop imagery several times in his poetry, such as "Blow, Northern Wind" (1950) – see Box. Another instance is the poem by Letitia Elizabeth Landon in which she asks "Thou fairy gift from summer, Why art thou blooming now?"
Early names refer to the association with the religious feast of Candlemas (February 2), the optimum flowering time, when white-robed young women walked in the procession associated with the Purification, an alternative name for the feast day. The French name of "Violette de la Chandeleur" refers to Candlemas, while an Italian name, "Fiore della purificazione", refers to purification. The German name of "Schneeglöckchen" (Little snow bells) also invokes the symbol of bells.
In the Language of flowers, the "Snowdrop" is synonymous with 'Hope', as it blooms in early springtime, just before the vernal equinox, and so, is seen as 'heralding' the new spring and new year.
In more recent times, the snowdrop was adopted as a symbol of sorrow and of hope following the Dunblane massacre in Scotland, and lent its name to the subsequent campaign to restrict the legal ownership of handguns in the UK. | https://en.wikipedia.org/wiki?curid=38799 |
Algebraic topology
Algebraic topology is a branch of mathematics that uses tools from abstract algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify spaces only up to homotopy equivalence.
Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group (the Nielsen–Schreier theorem).
Below are some of the main areas studied in algebraic topology:
In mathematics, homotopy groups are used in algebraic topology to classify topological spaces. The first and simplest homotopy group is the fundamental group, which records information about loops in a space. Intuitively, homotopy groups record information about the basic shape, or holes, of a topological space.
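These ideas can be made concrete with the standard textbook example (a well-known fact, not specific to this article): the n-th homotopy group collects based maps from the n-sphere up to homotopy, and for the circle the fundamental group records the winding number of a loop:

```latex
% n-th homotopy group: based homotopy classes of maps from the n-sphere
\pi_n(X, x_0) = \bigl[\,(S^n, s_0),\,(X, x_0)\,\bigr]
% the fundamental group is the case n = 1; for the circle,
% a loop is classified by how many times it winds around:
\pi_1(S^1) \cong \mathbb{Z}
```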
In algebraic topology and abstract algebra, homology (in part from Greek ὁμός "homos" "identical") is a certain general procedure to associate a sequence of abelian groups or modules with a given mathematical object such as a topological space or a group.
In homology theory and algebraic topology, cohomology is a general term for a sequence of abelian groups defined from a co-chain complex. That is, cohomology is defined as the abstract study of cochains, cocycles, and coboundaries. Cohomology can be viewed as a method of assigning algebraic invariants to a topological space that has a more refined algebraic structure than does homology. Cohomology arises from the algebraic dualization of the construction of homology. In less abstract language, cochains in the fundamental sense should assign 'quantities' to the "chains" of homology theory.
A manifold is a topological space that near each point resembles Euclidean space. Examples include the plane, the sphere, and the torus, which can all be realized in three dimensions, but also the Klein bottle and real projective plane which cannot be realized in three dimensions, but can be realized in four dimensions. Typically, results in algebraic topology focus on global, non-differentiable aspects of manifolds; for example Poincaré duality.
Knot theory is the study of mathematical knots. While inspired by knots that appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined together so that it cannot be undone. In precise mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, ℝ³. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of ℝ³ upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself.
A simplicial complex is a topological space of a certain kind, constructed by "gluing together" points, line segments, triangles, and their "n"-dimensional counterparts (see illustration). Simplicial complexes should not be confused with the more abstract notion of a simplicial set appearing in modern simplicial homotopy theory. The purely combinatorial counterpart to a simplicial complex is an abstract simplicial complex.
A CW complex is a type of topological space introduced by J. H. C. Whitehead to meet the needs of homotopy theory. This class of spaces is broader and has some better categorical properties than simplicial complexes, but still retains a combinatorial nature that allows for computation (often with a much smaller complex).
An older name for the subject was combinatorial topology, implying an emphasis on how a space X was constructed from simpler ones (the modern standard tool for such construction is the CW complex). In the 1920s and 1930s, there was growing emphasis on investigating topological spaces by finding correspondences from them to algebraic groups, which led to the change of name to algebraic topology. The combinatorial topology name is still sometimes used to emphasize an algorithmic approach based on decomposition of spaces.
In the algebraic approach, one finds a correspondence between spaces and groups that respects the relation of homeomorphism (or more general homotopy) of spaces. This allows one to recast statements about topological spaces into statements about groups, which have a great deal of manageable structure, often making these statements easier to prove.
Two major ways in which this can be done are through fundamental groups, or more generally homotopy theory, and through homology and cohomology groups. The fundamental groups give us basic information about the structure of a topological space, but they are often nonabelian and can be difficult to work with. The fundamental group of a (finite) simplicial complex does have a finite presentation.
Homology and cohomology groups, on the other hand, are abelian and in many important cases finitely generated. Finitely generated abelian groups are completely classified and are particularly easy to work with.
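As an illustration of how tractable these groups are (a standard computation, added here for concreteness, not part of the original text): over a field, the Betti numbers of a finite simplicial complex can be read off from the ranks of its boundary matrices. The sketch below uses GF(2) coefficients to sidestep orientation signs; the resulting mod-2 Betti numbers agree with the ordinary ones for the torsion-free examples used here. All function names are illustrative.

```python
# Betti numbers of a finite simplicial complex over GF(2):
# beta_k = dim C_k - rank(d_k) - rank(d_{k+1}).
from itertools import combinations

def gf2_rank(rows):
    """Rank of a GF(2) matrix whose rows are given as int bitmasks."""
    pivots = {}  # leading-bit position -> reduced row
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead in pivots:
                cur ^= pivots[lead]
            else:
                pivots[lead] = cur
                break
    return len(pivots)

def faces(simplex):
    """All nonempty faces of a simplex given as a vertex tuple."""
    s = tuple(sorted(simplex))
    for k in range(1, len(s) + 1):
        for f in combinations(s, k):
            yield f

def betti_numbers(maximal_simplices):
    """Mod-2 Betti numbers, computed from boundary-matrix ranks."""
    all_faces = set()
    for m in maximal_simplices:
        all_faces.update(faces(m))
    by_dim = {}
    for s in all_faces:
        by_dim.setdefault(len(s) - 1, []).append(s)
    top = max(by_dim)
    rank = {0: 0, top + 1: 0}  # d_0 and d_{top+1} are zero maps
    for k in range(1, top + 1):
        index = {s: i for i, s in enumerate(by_dim[k - 1])}
        rows = []
        for s in by_dim[k]:
            mask = 0
            for f in combinations(s, k):  # the k+1 facets of a k-simplex
                mask |= 1 << index[f]
            rows.append(mask)
        rank[k] = gf2_rank(rows)
    return [len(by_dim[k]) - rank[k] - rank[k + 1] for k in range(top + 1)]
```

For the hollow triangle (a combinatorial circle) this returns [1, 1]: one connected component and one 1-dimensional hole.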
In general, all constructions of algebraic topology are functorial; the notions of category, functor and natural transformation originated here. Fundamental groups and homology and cohomology groups are not only "invariants" of the underlying topological space, in the sense that two topological spaces which are homeomorphic have the same associated groups, but their associated morphisms also correspond — a continuous mapping of spaces induces a group homomorphism on the associated groups, and these homomorphisms can be used to show non-existence (or, much more deeply, existence) of mappings.
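A classic concrete instance of this functoriality, added here as a standard textbook illustration (it is not part of the surrounding text): the fundamental group proves that no retraction of the disk D² onto its boundary circle S¹ exists.

```latex
% Suppose r : D^2 \to S^1 were a retraction, i.e. r \circ i = \mathrm{id}_{S^1},
% where i : S^1 \hookrightarrow D^2 is the inclusion. Functoriality gives
\pi_1(S^1) \xrightarrow{\,i_*\,} \pi_1(D^2) \xrightarrow{\,r_*\,} \pi_1(S^1),
\qquad r_* \circ i_* = \mathrm{id}.
% But \pi_1(S^1) \cong \mathbb{Z} while \pi_1(D^2) = 0, so the composite
% factors through the trivial group and cannot be the identity of \mathbb{Z}.
% Hence no such retraction exists (and Brouwer's fixed-point theorem for
% the disk follows by the usual argument).
```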
One of the first mathematicians to work with different types of cohomology was Georges de Rham. One can use the differential structure of smooth manifolds via de Rham cohomology, or Čech or sheaf cohomology to investigate the solvability of differential equations defined on the manifold in question. De Rham showed that all of these approaches were interrelated and that, for a closed, oriented manifold, the Betti numbers derived through simplicial homology were the same Betti numbers as those derived through de Rham cohomology. This was extended in the 1950s, when Samuel Eilenberg and Norman Steenrod generalized this approach. They defined homology and cohomology as functors equipped with natural transformations subject to certain axioms (e.g., a weak equivalence of spaces passes to an isomorphism of homology groups), verified that all existing (co)homology theories satisfied these axioms, and then proved that such an axiomatization uniquely characterized the theory.
Classic applications of algebraic topology include: | https://en.wikipedia.org/wiki?curid=38801 |
Louis IV, Holy Roman Emperor
Louis IV (1 April 1282 – 11 October 1347), called the Bavarian, of the house of Wittelsbach, was King of the Romans from 1314, King of Italy from 1327, and Holy Roman Emperor from 1328.
Louis IV was Duke of Upper Bavaria from 1294/1301 together with his elder brother Rudolf I, served as Margrave of Brandenburg until 1323, as Count Palatine of the Rhine until 1329, and he became Duke of Lower Bavaria in 1340. He obtained the titles Count of Hainaut, Holland, Zeeland, and Friesland in 1345 when his wife Margaret inherited them.
Louis was born in Munich, the son of Louis II, Duke of Upper Bavaria and Count Palatine of the Rhine, and Matilda, a daughter of King Rudolph I.
Though Louis was partly educated in Vienna and became co-regent of his brother Rudolf I in Upper Bavaria in 1301 with the support of his Habsburg mother and her brother, King Albert I, he quarrelled with the Habsburgs from 1307 over possessions in Lower Bavaria. A civil war against his brother Rudolf due to new disputes on the partition of their lands was ended in 1313, when peace was made at Munich.
On 9 November of the same year, Louis defeated his Habsburg cousin Frederick the Fair, who was aided by Duke Leopold I, at the Battle of Gammelsdorf. Originally Louis and Frederick had been friends, having been raised together, but armed conflict arose when the guardianship over the young Dukes of Lower Bavaria (Henry XIV, Otto IV, and Henry XV) was entrusted to Frederick, even though the late Duke Otto III, the former King of Hungary, had chosen Louis. After his defeat Frederick had to renounce the tutelage. The victory caused a stir within the Holy Roman Empire and increased the reputation of the Bavarian duke.
The death of Holy Roman Emperor Henry VII in August 1313 necessitated the election of a successor. Henry's son John, King of Bohemia since 1310, was considered by many prince-electors to be too young, and by others to be already too powerful. One alternative was Frederick the Fair, the son of Henry's predecessor, Albert I, of the House of Habsburg. In reaction, the pro-Luxembourg party among the prince electors settled on Louis as its candidate to prevent Frederick's election.
On 19 October 1314, Archbishop Henry II of Cologne chaired an assembly of four electors at Sachsenhausen, south of Frankfurt. The participants were Louis's brother Rudolph I of the Palatinate (who objected to the election of his younger brother), Duke Rudolph I of Saxe-Wittenberg, and Henry of Carinthia, whom the Luxembourgs had deposed as King of Bohemia. These four electors chose Frederick as King.
The Luxembourg party did not accept this election and the next day a second election was held. Upon the instigation of Peter of Aspelt, Archbishop of Mainz, five different electors convened at Frankfurt and elected Louis as King. These electors were Archbishop Peter himself, Archbishop Baldwin of Trier and King John of Bohemia - both of the House of Luxembourg - Margrave Waldemar of Brandenburg and Duke John II of Saxe-Lauenburg, who contested Rudolph of Wittenberg's claim to the electoral vote.
This double election was quickly followed by two coronations: Louis was crowned at Aachen, the customary site of coronations, by Archbishop Peter of Mainz, while the Archbishop of Cologne, who by custom had the right to crown the new king, crowned Frederick at Bonn. In the ensuing conflict between the kings, Louis recognized the independence of Switzerland from the Habsburg dynasty in 1316.
After several years of bloody war, victory finally seemed within the grasp of Frederick, who was strongly supported by his brother Leopold. However, Frederick's army was decisively defeated in the Battle of Mühldorf on 28 September 1322 on the Ampfing Heath, where Frederick and 1300 nobles from Austria and Salzburg were captured.
Louis held Frederick captive in Trausnitz Castle (Schwandorf) for three years, but the determined resistance by Frederick's brother Leopold, the retreat of John of Bohemia from his alliance, and the Pope's ban induced Louis to release Frederick in the Treaty of Trausnitz of 13 March 1325. In this agreement, Frederick recognized Louis as legitimate ruler and undertook to return to captivity if he did not succeed in convincing his brothers to submit to Louis.
As he did not manage to overcome Leopold's obstinacy, Frederick returned to Munich as a prisoner, even though the Pope had released him from his oath. Louis, who was impressed by such nobility, renewed the old friendship with Frederick, and they agreed to rule the Empire jointly. Since the Pope and the electors strongly objected to this agreement, another treaty was signed at Ulm on 7 January 1326, according to which Frederick would administer Germany as King of the Romans, while Louis would be crowned as Holy Roman Emperor in Italy. However, after Leopold's death in 1326, Frederick withdrew from the regency of the Empire and returned to rule only Austria. He died on 13 January 1330.
Despite Louis' victory, Pope John XXII still refused to ratify his election, and in 1324 he excommunicated Louis, but the sanction had less effect than in earlier disputes between emperors and the papacy.
After the reconciliation with the Habsburgs in 1326, Louis marched to Italy and was crowned King of Italy in Milan in 1327. Already in 1323, Louis had sent an army to Italy to protect Milan against the Kingdom of Naples, which was together with France the strongest ally of the papacy. But now the Lord of Milan Galeazzo I Visconti was deposed since he was suspected of conspiring with the pope.
In January 1328, Louis entered Rome and had himself crowned emperor by the aged senator Sciarra Colonna, called "captain of the Roman people". Three months later, Louis published a decree declaring Pope John XXII ("Jacques Duèze") deposed on grounds of heresy. He then installed the Spiritual Franciscan Pietro Rainalducci as Nicholas V, but both left Rome in August 1328. In the meantime, Robert, King of Naples, had sent both a fleet and an army against Louis and his ally Frederick II of Sicily. Louis spent the winter of 1328/29 in Pisa and then stayed in Northern Italy until his co-ruler Frederick of Habsburg died. In fulfillment of an oath, Louis founded Ettal Abbey on 28 April 1330 on his return from Italy.
Franciscan theologians Michael of Cesena and William of Ockham, and the philosopher Marsilius of Padua, who were all on bad terms with the Pope as well, joined Emperor Louis in Italy and accompanied him to his court at Alter Hof in Munich which became the first imperial residence of the Holy Roman Empire.
In 1333, Emperor Louis sought to counter French influence in the southwest of the empire so he offered Humbert II of Viennois the Kingdom of Arles which was an opportunity to gain full authority over Savoy, Provence, and its surrounding territories. Humbert was reluctant to take the crown due to the conflict that would follow with all around him, so he declined, telling the emperor that he should make peace with the church first.
Emperor Louis also allied with King Edward III of England in 1337 against King Philip VI of France, the protector of the new Pope Benedict XII in Avignon. Philip VI had prevented any agreement between the Emperor and the Pope, and the failure of negotiations with the papacy led to the declaration at Rhense in 1338 by six electors to the effect that election by all or the majority of the electors automatically conferred the royal title and rule over the empire, without papal confirmation. Edward III was the Emperor's guest at the Imperial Diet in the Kastorkirche at Coblence in 1338 and was named Vicar-General of the Holy Roman Empire. In 1341, however, the Emperor deserted Edward III and came to terms with Philip VI, though only temporarily, since the expected English payments had failed to arrive and Louis still hoped to reach an agreement with the Pope.
Louis IV was a protector of the Teutonic Knights. In 1337 he allegedly bestowed upon the Teutonic Order a privilege to conquer Lithuania and Russia, although the Order had only petitioned for three small territories. Later he forbade the Order to stand trial before foreign courts in their territorial conflicts with foreign rulers.
Louis also concentrated his energies on the economic development of the cities of the empire, so his name can be found in many city chronicles for the privileges he granted. In 1330, for example, the emperor permitted the Frankfurt Trade Fair, and in 1340 Lübeck, as the most powerful member of the future Hanseatic League, received the coinage prerogative for golden gulden.
In 1323 Louis gave Brandenburg as a fiefdom to his eldest son Louis V after the Brandenburg branch of the House of Ascania had died out. With the Treaty of Pavia in 1329 the emperor reconciled the sons of his late brother Rudolph and returned the Palatinate to his nephews Rudolf and Rupert. After the death of Henry of Bohemia, the duchy of Carinthia was released as an imperial fief on 2 May 1335 in Linz to his Habsburg cousins Albert II, Duke of Austria, and Otto, Duke of Austria, while Tyrol was first placed into Luxemburg hands.
With the death of Duke John I in 1340, Louis inherited Lower Bavaria and reunited the duchy of Bavaria. John's mother, a member of the Luxemburg dynasty, had to return to Bohemia. In 1342 Louis also acquired Tyrol for the Wittelsbachs by voiding the first marriage of Margarete Maultasch with John Henry of Bohemia and marrying her to his own son Louis V, thus alienating the House of Luxemburg even more.
In 1345 the emperor further antagonized the lay princes by conferring Hainaut, Holland, Zeeland, and Friesland upon his wife, Margaret II of Hainaut, ignoring the hereditary claims of Margaret's sisters, one of whom was the queen of England. In the face of the dangerous hostility of the Luxemburgs, Louis had ruthlessly increased his power base.
The acquisition of these territories and his restless foreign policy had earned Louis many enemies among the German princes. In the summer of 1346 the Luxemburg Charles IV was elected rival king, with the support of Pope Clement VI. Louis himself obtained much support from the Imperial Free Cities and the knights and successfully resisted Charles, who was widely regarded as a papal puppet ("rex clericorum" as William of Ockham called him). Also the Habsburg dukes stayed loyal to Louis. In the Battle of Crécy Charles' father John of Luxemburg was killed; Charles himself also took part in the battle but escaped.
Louis's sudden death averted a longer civil war: he died in October 1347 from a stroke suffered during a bear hunt in Puch near Fürstenfeldbruck, and is buried in the Frauenkirche in Munich. His sons supported Günther von Schwarzburg as a new rival king to Charles, but finally joined the Luxemburg party after Günther's early death in 1349 and divided the Wittelsbach possessions amongst themselves again. In continuance of the conflict of the House of Wittelsbach with the House of Luxemburg, the Wittelsbachs returned to power in the Holy Roman Empire in 1400 with King Rupert of Germany, a great-grandnephew of Louis.
In 1308 Louis IV married his first wife, Beatrix of Świdnica (1290-1320). Their children were:
In 1324 he married his second wife, Margaret II, Countess of Hainaut and Holland (1308-1356).
Their children were: | https://en.wikipedia.org/wiki?curid=38802 |
GNU Privacy Guard
GNU Privacy Guard (GnuPG or GPG) is a free-software replacement for Symantec's PGP cryptographic software suite, and is compliant with RFC 4880, the IETF standards-track specification of OpenPGP. Modern versions of PGP are interoperable with GnuPG and other OpenPGP-compliant systems.
GnuPG is part of the GNU Project, and has received major funding from the German government.
GnuPG is a hybrid-encryption software program because it uses a combination of conventional symmetric-key cryptography for speed, and public-key cryptography for ease of secure key exchange, typically by using the recipient's public key to encrypt a session key which is used only once. This mode of operation is part of the OpenPGP standard and has been part of PGP from its first version.
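The structure described above can be sketched in a toy program. Everything below is an insecure stand-in chosen for brevity (textbook RSA with tiny demo numbers, and a SHA-256-derived XOR keystream in place of a real cipher); it mirrors only the shape of the OpenPGP hybrid scheme, not GnuPG's actual algorithms or packet format.

```python
# Toy sketch of OpenPGP-style hybrid encryption (STRUCTURE ONLY --
# the primitives here are insecure stand-ins, not what GnuPG uses).
import hashlib
import secrets

# --- "public-key" part: textbook RSA with tiny demo numbers ----------
P, Q = 61, 53                       # hypothetical demo primes
N = P * Q                           # real OpenPGP keys are 2048+ bits
E = 17
D = pow(E, -1, (P - 1) * (Q - 1))   # private exponent

def rsa_encrypt(m: int) -> int: return pow(m, E, N)
def rsa_decrypt(c: int) -> int: return pow(c, D, N)

# --- "symmetric" part: XOR keystream derived from the session key ----
def keystream(key: int, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

def sym(data: bytes, key: int) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# --- hybrid scheme: symmetric for the bulk, public-key for the key ---
def hybrid_encrypt(plaintext: bytes):
    session_key = secrets.randbelow(N - 2) + 2  # one-time session key
    return rsa_encrypt(session_key), sym(plaintext, session_key)

def hybrid_decrypt(enc_key: int, ciphertext: bytes) -> bytes:
    return sym(ciphertext, rsa_decrypt(enc_key))
```

Only the short session key goes through the (slow) public-key step; the bulk plaintext is handled by the fast symmetric layer, which is the point of the hybrid design.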
The GnuPG 1.x series uses an integrated cryptographic library, while the GnuPG 2.x series replaces this with Libgcrypt.
GnuPG encrypts messages using asymmetric key pairs individually generated by GnuPG users. The resulting public keys may be exchanged with other users in a variety of ways, such as Internet key servers. They must always be exchanged carefully to prevent identity spoofing by corrupting public key ↔ "owner" identity correspondences. It is also possible to add a cryptographic digital signature to a message, so the message integrity and sender can be verified, if a particular correspondence relied upon has not been corrupted.
GnuPG also supports symmetric encryption algorithms. By default, GnuPG uses the AES symmetrical algorithm since version 2.1, CAST5 was used in earlier versions. GnuPG does not use patented or otherwise restricted software or algorithms. Instead, GnuPG uses a variety of other, non-patented algorithms.
For a long time GnuPG did not support the IDEA encryption algorithm used in PGP. It was in fact possible to use IDEA in GnuPG by downloading a plugin for it; however, this might have required a license for some uses in countries in which IDEA was patented. Starting with versions 1.4.13 and 2.0.20, GnuPG supports IDEA, because the last patent on IDEA expired in 2012. Support of IDEA is intended "to get rid of all the questions from folks either trying to decrypt old data or migrating keys from PGP to GnuPG", and hence it is not recommended for regular use.
As of versions 2.0.26 and 1.4.18, GnuPG supports the following algorithms:
More recent releases of GnuPG 2.x ("modern" and the now deprecated "stable" series) expose most cryptographic functions and algorithms Libgcrypt (its cryptography library) provides, including support for elliptic curve cryptography (ECDSA, ECDH and EdDSA) in the "modern" series (i.e. since GnuPG 2.1).
GnuPG was initially developed by Werner Koch. The first production version, version 1.0.0, was released on September 7, 1999, almost two years after the first GnuPG release (version 0.0.0). The German Federal Ministry of Economics and Technology funded the documentation and the port to Microsoft Windows in 2000.
GnuPG is a system compliant to the OpenPGP standard, thus the history of OpenPGP is of importance; it was designed to interoperate with PGP, an email encryption program initially designed and developed by Phil Zimmermann.
On February 7, 2014, a GnuPG crowdfunding effort closed, raising €36,732 for a new Web site and infrastructure improvements.
There are two actively maintained branches of GnuPG:
Different GnuPG 2.x versions (e.g. from the 2.2 and 2.0 branches) cannot be installed at the same time. However, it is possible to install a "classic" GnuPG version (i.e. from the 1.4 branch) along with any GnuPG 2.x version.
Before the release of GnuPG 2.2 ("modern"), the now deprecated "stable" branch (2.0), initially released on November 13, 2006, was recommended for general use. This branch reached its end of life on December 31, 2017; its last version is 2.0.31, released on December 29, 2017.
Before the release of GnuPG 2.0, all stable releases originated from a single branch; i.e., before November 13, 2006, no multiple release branches were maintained in parallel. These former, sequentially succeeding (up to 1.4) release branches were:
Although the basic GnuPG program has a command-line interface, there exist various front-ends that provide it with a graphical user interface. For example, GnuPG encryption support has been integrated into KMail and Evolution, the graphical email clients found in KDE and GNOME, the most popular Linux desktops. There are also graphical GnuPG front-ends such as Seahorse for GNOME and KGPG for KDE.
The GPG Suite project provides a number of Aqua front-ends for OS integration of encryption and key management as well as GnuPG installations via Installer packages for macOS. Furthermore, the GPG Suite Installer installs all related OpenPGP applications (GPG Keychain Access), plugins (GPGMail) and dependencies (MacGPG) to use GnuPG based encryption.
Instant messaging applications such as Psi and Fire can automatically secure messages when GnuPG is installed and configured. Web-based software such as Horde also makes use of it. The cross-platform extension Enigmail provides GnuPG support for Mozilla Thunderbird and SeaMonkey. Similarly, Enigform provides GnuPG support for Mozilla Firefox. FireGPG was discontinued June 7, 2010.
In 2005, g10 Code GmbH and Intevation GmbH released Gpg4win, a software suite that includes GnuPG for Windows, GNU Privacy Assistant, and GnuPG plug-ins for Windows Explorer and Outlook. These tools are wrapped in a standard Windows installer, making it easier for GnuPG to be installed and used on Windows systems.
As a command-line-based system, GnuPG 1.x is not written as an API that may be incorporated into other software. To overcome this, "GPGME" (abbreviated from "GnuPG Made Easy") was created as an API wrapper around GnuPG that parses the output of GnuPG and provides a stable and maintainable API between the components. This currently requires an out-of-process call to the GnuPG executable for many GPGME API calls; as a result, possible security problems in an application do not propagate to the actual crypto code due to the process barrier. Various graphical front-ends based on GPGME have been created.
Since GnuPG 2.0, many of GnuPG's functions are available directly as C APIs in Libgcrypt.
The OpenPGP standard specifies several methods of digitally signing messages. In 2003, due to an error in a change to GnuPG intended to make one of those methods more efficient, a security vulnerability was introduced. It affected only one method of digitally signing messages, only for some releases of GnuPG (1.0.2 through 1.2.3), and there were fewer than 1000 such keys listed on the key servers. Most people did not use this method, and were in any case discouraged from doing so, so the damage caused (if any, since none has been publicly reported) would appear to have been minimal. Support for this method has been removed from GnuPG versions released after this discovery (1.2.4 and later).
Two further vulnerabilities were discovered in early 2006: the first was that scripted uses of GnuPG for signature verification might result in false positives; the second was that non-MIME messages were vulnerable to the injection of data which, while not covered by the digital signature, would be reported as being part of the signed message. In both cases updated versions of GnuPG were made available at the time of the announcement.
In June 2017, a vulnerability (CVE-2017-7526) was discovered by Bernstein, Breitner and others within Libgcrypt, a library used by GnuPG, which enabled full key recovery for RSA-1024 keys and for more than an eighth of RSA-2048 keys. This side-channel attack exploited the fact that Libgcrypt used a sliding-window method for exponentiation, which leads to the leakage of exponent bits and to full key recovery. Again, an updated version of GnuPG was made available at the time of the announcement.
In October 2017, the ROCA vulnerability was announced that affects RSA keys generated by YubiKey 4 tokens, which often are used with PGP/GPG. Many published PGP keys were found to be susceptible.
Around June 2018, the SigSpoof attacks were announced. These allowed an attacker to convincingly spoof digital signatures.
Notable applications, front ends and browser extensions that support GPG include the following:
In May 2014, "The Washington Post" reported on a 12-minute video guide "GPG for Journalists" posted to Vimeo in January 2013 by a user named anon108. The "Post" identified anon108 as fugitive NSA whistleblower Edward Snowden, who it said made the tutorial—"narrated by a digitally disguised voice whose speech patterns sound similar to those of Snowden"—to teach journalist Glenn Greenwald email encryption. Greenwald said that he could not confirm the authorship of the video. | https://en.wikipedia.org/wiki?curid=38809 |
Proline
Proline (symbol Pro or P) is a proteinogenic amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated NH2+ form under biological conditions), an α-carboxylic acid group (which is in the deprotonated −COO− form under biological conditions), and a side chain pyrrolidine, classifying it as a nonpolar (at physiological pH), aliphatic amino acid. It is non-essential in humans, meaning the body can synthesize it from the non-essential amino acid L-glutamate. It is encoded by all the codons starting with CC (CCU, CCC, CCA, and CCG).
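The codon pattern just described is easy to check mechanically. The sketch below hard-codes only the proline row of the standard RNA codon table (everything else is omitted) and scans a reading frame for proline codons; the example sequence in the usage note is made up for illustration.

```python
# Proline is encoded by all four codons that start with CC (i.e. CCN).
PROLINE_CODONS = {"CCU", "CCC", "CCA", "CCG"}

def proline_positions(mrna: str):
    """Return the 0-based codon indices in frame 0 that encode proline."""
    mrna = mrna.upper().replace("T", "U")  # accept DNA-style input too
    return [i // 3 for i in range(0, len(mrna) - 2, 3)
            if mrna[i:i + 3] in PROLINE_CODONS]

# Every CC-prefixed codon over the RNA alphabet is a proline codon:
assert {"CC" + b for b in "UCAG"} == PROLINE_CODONS
```

For instance, in the made-up transcript "AUGCCUGGGCCA" the second and fourth codons (indices 1 and 3) encode proline.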
Proline is the only proteinogenic amino acid with a secondary amine, in that the alpha-amino group is attached directly to the main chain, making the α carbon a direct substituent of the side chain.
Proline was first isolated in 1900 by Richard Willstätter, who obtained the amino acid while studying N-methylproline. The year after, Emil Fischer published a synthesis of proline from phthalimide propylmalonic ester. The name proline comes from pyrrolidine, one of its constituents.
Proline is biosynthetically derived from the amino acid L-glutamate. Glutamate-5-semialdehyde is first formed by glutamate 5-kinase (ATP-dependent) and glutamate-5-semialdehyde dehydrogenase (which requires NADH or NADPH). This can then either spontaneously cyclize to form 1-pyrroline-5-carboxylic acid, which is reduced to proline by pyrroline-5-carboxylate reductase (using NADH or NADPH), or turned into ornithine by ornithine aminotransferase, followed by cyclisation by ornithine cyclodeaminase to form proline.
L-Proline has been found to act as a weak agonist of the glycine receptor and of both NMDA and non-NMDA (AMPA/kainate) ionotropic glutamate receptors. It has been proposed to be a potential endogenous excitotoxin. In plants, proline accumulation is a common physiological response to various stresses but is also part of the developmental program in generative tissues (e.g. pollen).
The distinctive cyclic structure of proline's side chain gives proline an exceptional conformational rigidity compared to other amino acids. It also affects the rate of peptide bond formation between proline and other amino acids. When proline is bound as an amide in a peptide bond, its nitrogen is not bound to any hydrogen, meaning it cannot act as a hydrogen bond donor, but can be a hydrogen bond acceptor.
Peptide bond formation with incoming Pro-tRNAPro is considerably slower than with any other tRNAs, which is a general feature of N-alkylamino acids. Peptide bond formation is also slow between an incoming tRNA and a chain ending in proline; with the creation of proline-proline bonds slowest of all.
The exceptional conformational rigidity of proline affects the secondary structure of proteins near a proline residue and may account for proline's higher prevalence in the proteins of thermophilic organisms. Protein secondary structure can be described in terms of the dihedral angles φ, ψ and ω of the protein backbone. The cyclic structure of proline's side chain locks the angle φ at approximately −65°.
Proline acts as a structural disruptor in the middle of regular secondary structure elements such as alpha helices and beta sheets; however, proline is commonly found as the first residue of an alpha helix and also in the edge strands of beta sheets. Proline is also commonly found in turns (another kind of secondary structure), and aids in the formation of beta turns. This may account for the curious fact that proline is usually solvent-exposed, despite having a completely aliphatic side chain.
Multiple prolines and/or hydroxyprolines in a row can create a polyproline helix, the predominant secondary structure in collagen. The hydroxylation of proline by prolyl hydroxylase (or other additions of electron-withdrawing substituents such as fluorine) increases the conformational stability of collagen significantly. Hence, the hydroxylation of proline is a critical biochemical process for maintaining the connective tissue of higher organisms. Severe diseases such as scurvy can result from defects in this hydroxylation, e.g., mutations in the enzyme prolyl hydroxylase or lack of the necessary ascorbate (vitamin C) cofactor.
Peptide bonds to proline, and to other "N"-substituted amino acids (such as sarcosine), are able to populate both the "cis" and "trans" isomers. Most peptide bonds overwhelmingly adopt the "trans" isomer (typically 99.9% under unstrained conditions), chiefly because the amide hydrogen ("trans" isomer) offers less steric repulsion to the preceding Cα atom than does the following Cα atom ("cis" isomer). By contrast, the "cis" and "trans" isomers of the X-Pro peptide bond (where X represents any amino acid) both experience steric clashes with the neighboring substitution and have a much lower energy difference. Hence, the fraction of X-Pro peptide bonds in the "cis" isomer under unstrained conditions is significantly elevated, with "cis" fractions typically in the range of 3-10%. However, these values depend on the preceding amino acid, with Gly and aromatic residues yielding increased fractions of the "cis" isomer. "Cis" fractions up to 40% have been identified for Aromatic-Pro peptide bonds.
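The populations quoted above follow from a simple two-state Boltzmann relation between the cis–trans free-energy gap and the cis fraction. The sketch below computes it in both directions; the ΔG values in the final comment are illustrative round numbers chosen to reproduce the percentages in the text, not measured constants.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 298.0     # room temperature in K

def cis_fraction(dG_kcal: float) -> float:
    """Equilibrium cis population for a gap dG = G_cis - G_trans (kcal/mol)."""
    w = math.exp(-dG_kcal / (R * T))
    return w / (1.0 + w)

def dG_from_cis(fraction: float) -> float:
    """Free-energy gap implied by an observed cis fraction."""
    return -R * T * math.log(fraction / (1.0 - fraction))

# A gap of ~4.1 kcal/mol gives the ~0.1% cis typical of ordinary peptide
# bonds, while ~1.75 kcal/mol gives the ~5% typical of X-Pro bonds.
```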
From a kinetic standpoint, "cis"-"trans" proline isomerization is a very slow process that can impede the progress of protein folding by trapping one or more proline residues crucial for folding in the non-native isomer, especially when the native protein requires the "cis" isomer. This is because proline residues are exclusively synthesized in the ribosome as the "trans" isomer form. All organisms possess prolyl isomerase enzymes to catalyze this isomerization, and some bacteria have specialized prolyl isomerases associated with the ribosome. However, not all prolines are essential for folding, and protein folding may proceed at a normal rate despite having non-native conformers of many X-Pro peptide bonds.
Proline and its derivatives are often used as asymmetric catalysts in proline organocatalysis reactions. The CBS reduction and the proline-catalysed aldol condensation are prominent examples.
In brewing, proteins rich in proline combine with polyphenols to produce haze (turbidity).
L-Proline is an osmoprotectant and is therefore used in many pharmaceutical and biotechnological applications.
The growth medium used in plant tissue culture may be supplemented with proline. This can increase growth, perhaps because it helps the plant tolerate the stresses of tissue culture.
Proline is one of the two amino acids that do not follow the typical Ramachandran plot, the other being glycine. Because its side chain forms a ring back to the backbone nitrogen, the ψ and φ angles about the peptide bond have fewer allowable degrees of rotation. As a result, proline is often found in "turns" of proteins: the ring already restricts its flexibility in the unfolded state, so the entropy change (ΔS) between the folded and unfolded forms is smaller for proline than for other amino acids. Furthermore, proline is rarely found in α and β structures, as it would reduce their stability: its backbone nitrogen lacks an amide hydrogen and so cannot donate the hydrogen bond these structures require.
Additionally, proline is the only amino acid that does not form a red/purple colour when developed by spraying with ninhydrin for use in chromatography. Instead, proline produces an orange/yellow colour.
Richard Willstätter synthesized proline by the reaction of the sodium salt of diethyl malonate with 1,3-dibromopropane in 1900. In 1901, Hermann Emil Fischer isolated proline from casein and from the decomposition products of γ-phthalimido-propylmalonic ester.
Racemic proline can be synthesized from diethyl malonate and acrylonitrile: | https://en.wikipedia.org/wiki?curid=38811 |
Coven
A coven usually refers to a group or gathering of witches. The word "coven" (from Anglo-Norman "covent, cuvent", from Old French "covent", from Latin "conventum" = convention) remained largely unused in English until 1921 when Margaret Murray promoted the idea that all witches across Europe met in groups of thirteen which they called "covens".
In Wicca and other similar forms of neopagan witchcraft, such as Stregheria and Feri, a coven is a gathering or community of witches, like an affinity group, engagement group, or small covenant group. It is composed of a group of practitioners who gather together for rituals such as Drawing Down the Moon, or celebrating the Sabbats. The place at which they generally meet is called a covenstead.
The number of people involved may vary. Although some consider thirteen to be ideal (probably in deference to Murray's theories), any group of at least three can be a coven. A group of two is usually called a "working couple" (regardless of their gender). Within the community, many believe that a coven larger than thirteen is unwieldy, citing difficult group dynamics and an unfair burden on the leadership. When a coven has grown too large to be manageable, it may split, or "hive". In Wicca, this may also occur when a newly initiated High Priest or High Priestess (a third-degree initiation) leaves to start their own coven.
Wiccan covens are usually jointly led by a High Priestess and a High Priest, although some are led by only one or the other, and some by a same-sex couple. In more recent forms of neopagan witchcraft, covens are sometimes run as democracies with a rotating leadership.
With the rise of the Internet as a platform for collaborative discussion and media dissemination, it became popular for adherents and practitioners of Wicca to establish "online covens" which remotely teach tradition-specific crafts to students in a similar method of education as non-religious virtual online schools. One of the first online covens to take this route is the Coven of the Far Flung Net (CFFN), which was established in 1998 as the online arm of the Church of Universal Eclectic Wicca.
However, because of potentially-unwieldy membership sizes, many online covens limit their memberships to anywhere between 10 and 100 students. The CFFN, in particular, tried to devolve its structure into a system of sub-coven clans (which governed their own application processes), a system which ended in 2003 due to fears by the CFFN leadership that the clans were becoming communities in their own right.
The Urban Coven is a group founded on Facebook by Becca Gordon for women in Los Angeles to gather, hike, and howl at the moon. It meets monthly and is estimated to have almost 3,500 members. A January 2016 gathering at Griffith Park drew nearly 1,000 women, and was described as follows: "Many of the women ... were there in groups — mothers and daughters, friends, colleagues. Some arrived solo and struck up conversations with other women or hiked in solitude."
In popular culture, a coven is a group or gathering of witches who work spells in tandem. Such imagery can be traced back to Renaissance prints depicting witches and to the three "weird sisters" in Shakespeare's "Macbeth" (1606).
Orgiastic meetings of witches are depicted in the Robert Burns poem "Tam o' Shanter" (1791) and in the Goethe play "Faust" (1832).
Films featuring covens include "Rosemary's Baby" (1968), "Suspiria" (1977) and its 2018 remake, "The Witches of Eastwick" (1987), "Four Rooms" (1995), "The Craft" (1996), "Coven" (1997), "Underworld" (2003), "" (2006), "The Covenant" (2006), "Paranormal Activity 3" (2011), "The Witch" (2015) and "Hereditary" (2018).
In television, covens have been portrayed in the U.S. in supernatural dramas such as "Charmed", "Witches of East End", "The Vampire Diaries", "The Originals", "The Secret Circle", "True Blood", "Once Upon a Time" and "Chilling Adventures of Sabrina". The third season of "American Horror Story" is entitled "Coven", and focuses on witches.
In vampire novels such as "The Vampire Chronicles" by Anne Rice and the "Twilight" series by Stephenie Meyer, covens are families or unrelated groups of vampires who live together.
Covens feature in the video game Dishonored, specifically in the DLC's Knife of Dunwall, and The Brigmore Witches. | https://en.wikipedia.org/wiki?curid=38813 |
Amstrad
Amstrad was a British electronics company, founded in 1968 by Alan Sugar at the age of 21. The name is a contraction of Alan Michael Sugar Trading. It was first listed on the London Stock Exchange in April 1980. During the late 1980s, Amstrad had a substantial share of the PC market in the UK. Amstrad was once a FTSE 100 Index constituent but since 2007 has been wholly owned by Sky UK. Latterly, Amstrad's main business was manufacturing interactive set-top boxes for Sky UK. In 2010 Sky integrated Amstrad's satellite division into Sky so that it could make its own set-top boxes in-house.
The company had offices in Kings Road, Brentwood, Essex.
Amstrad (also known as AMSTrad) was founded in 1968 by Alan Sugar at the age of 21, the name of the original company being AMS Trading (Amstrad) Limited, derived from its founder's initials (Alan Michael Sugar). Amstrad entered the market in the field of consumer electronics. During the 1970s they were at the forefront of low-priced hi-fi, TV and car stereo cassette technologies. Lower prices were achieved by injection moulding plastic hi-fi turntable covers, undercutting competitors who used the vacuum forming process.
Amstrad expanded to the marketing of low-cost amplifiers and tuners, imported from the Far East and badged with the Amstrad name for the UK market. Their first electrical product was the Amstrad 8000 amplifier.
In 1980, Amstrad went public trading on the London Stock Exchange, and doubled in size each year during the early '80s. Amstrad began marketing its own home computers in an attempt to capture the market from Commodore and Sinclair, with the Amstrad CPC range in 1984. The CPC 464 was launched in the UK, Ireland, France, Australia, New Zealand, Germany, Spain and Italy. It was followed by the CPC 664 and CPC 6128 models. Later "Plus" variants of the 464 and 6128, launched in 1990, increased their functionality slightly.
In 1985, the popular Amstrad PCW range was introduced, which were principally word processors, complete with printer, running the LocoScript word processing program. They were also capable of running the CP/M operating system. The Amsoft division of Amstrad was set up to provide in-house software and consumables.
On 7 April 1986 Amstrad announced it had bought from Sinclair Research "the worldwide rights to sell and manufacture all existing and future Sinclair computers and computer products, together with the Sinclair brand name and those intellectual property rights where they relate to computers and computer related products", which included the ZX Spectrum, for £5 million. This included Sinclair's unsold stock of Sinclair QLs and Spectrums. Amstrad made more than £5 million on selling these surplus machines alone. Amstrad launched two new variants of the Spectrum: the ZX Spectrum +2, based on the ZX Spectrum 128, with a built-in cassette tape drive (like the CPC 464) and, the following year, the ZX Spectrum +3, with a built-in floppy disk drive (similar to the CPC 664 and 6128), taking the 3" disks that many Amstrad machines used.
In 1986 Amstrad entered the IBM PC-compatible arena with the PC1512 system. In standard Amstrad livery and priced at £399, it was a success, capturing more than 25% of the European computer market. It was MS-DOS-based, but with the GEM graphics interface, and later Windows. In 1988 Amstrad attempted to make the first affordable portable personal computer with the PPC512 and 640 models, introduced a year before the Macintosh Portable. They ran MS-DOS on an 8 MHz processor, and the built-in screen could emulate the Monochrome Display Adapter or Color Graphics Adapter. Amstrad's final (and ill-fated) attempts to exploit the Sinclair brand were based on the company's own PCs: a compact desktop PC derived from the PPC 512, branded as the Sinclair PC200, and the PC1512 rebadged as the Sinclair PC500.
Amstrad's second generation of PCs, the PC2000 series, was launched in 1989. However, due to a problem with the Seagate ST277R hard disk shipped with the PC2386 model, these had to be recalled and fitted with Western Digital controllers. Amstrad later successfully sued Seagate, but following bad press over the hard disk problems, Amstrad lost its lead in the European PC market.
In the early 1990s, Amstrad began to focus on portable computers rather than desktop computers. In 1990, Amstrad tried to enter the video game console market with the Amstrad GX4000, similar to what Commodore did at the same time with the C64 GS. The console, based on the Amstrad 464 Plus hardware, was a commercial failure, because it used outdated technology, and most games available for it were straight ports of CPC games that could be purchased for much less in their original format.
In 1993, Amstrad was licensed by Sega to produce a system similar to the Sega TeraDrive, the Amstrad Mega PC, to try to regain its image in the gaming market. The system did not succeed as well as expected, mostly due to its high initial retail price of £999. In that same year, Amstrad released the PenPad, a PDA similar to the Apple Newton, and released only weeks before it. It was a commercial failure, and had several technical and usability problems. It lacked most features that the Apple Newton included, but had a lower price at $450.
As Amstrad began to concentrate less on computers and more on communications, it purchased several telecommunications businesses, including Betacom, Dancall Telecom, Viglen Computers and Dataflex Design Communications, during the early 1990s. Amstrad has been a major supplier of set-top boxes to UK satellite TV provider Sky since its launch in 1989. Amstrad was key to the introduction of Sky, as the company was responsible for finding methods to produce the requisite equipment at an attractive price for the consumer; Alan Sugar famously approached "someone who bashes out dustbin lids" to manufacture satellite dishes cheaply. Ultimately, it was the only manufacturer producing receiver boxes and dishes at the system's launch, and has continued to manufacture set-top boxes for Sky, from analogue to digital and now including Sky's Sky+ digital video recorder.
In 1997, Amstrad PLC was wound up and its shares were split between Viglen and Betacom; Betacom PLC was then renamed Amstrad PLC.
The same year, Amstrad supplied set top boxes to Australian broadcaster Foxtel, and in 2004 to Italian broadcaster Sky Italia.
In 2000, Amstrad released the first of its combined telephony and e-mail devices, called the "E-m@iler". This was followed by the "E-m@iler Plus" in 2002, and the "E3 Videophone" in 2004. Amstrad's UK E-m@iler business is operated through a separate company, Amserve Ltd which is 89.8% owned by Amstrad and 10.2% owned by DSG International plc (formerly Dixons plc).
Amstrad has also produced a variety of home entertainment products over their history, including hi-fi, televisions, VCRs, and DVD players.
In July 2007, BSkyB announced a takeover of Amstrad for £125m, a 23.7% premium on its market capitalisation. BSkyB had been a major client of Amstrad, accounting for 75% of sales for its 'set top box' business. Having supplied BSkyB with hardware since its inception in 1988, market analysts had noted the two companies becoming increasingly close.
Sugar commented that he wished to play a part in the business, saying: "I turn 60 this year and I have had 40 years of hustling in the business, but now I have to start thinking about my team of loyal staff, many of whom have been with me for many years."
It was announced on 2 July 2008 that Sugar had stepped down as Chairman of Amstrad, which had been planned since BSkyB took over in 2007.
Amstrad was taken off the Stock Exchange on 9 October 2008.
Amstrad has since ceased operations as a trading company and exists in name only. Under Sky, Amstrad produces only satellite receivers for Sky, which allows Sky to reduce costs by cutting out the middleman. Amstrad's former offices are now a Premier Inn hotel.
Sky bought Amstrad so that it would have its own hardware development division and could develop new satellite boxes, such as Sky Q, in-house.
Power transmission
Power transmission is the movement of energy from its place of generation to a location where it is applied to perform useful work.
Power is defined formally as energy per unit time. In SI units, P = E/t, where the power P is measured in watts (joules per second).
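The definition of power as energy per unit time can be sketched in a few lines of Python; the function name and sample figures here are invented for illustration.

```python
# Illustrative sketch of P = E / t (power as energy per unit time).
def power_watts(energy_joules: float, time_seconds: float) -> float:
    """Average power in watts when `energy_joules` of energy is
    transferred over `time_seconds`."""
    return energy_joules / time_seconds

# One kilowatt-hour is 3.6 MJ, so 3.6 MJ transferred over one hour
# corresponds to an average power of 1 kW:
print(power_watts(3.6e6, 3600.0))  # 1000.0
```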
Throughout the history of technology, power transmission and storage systems have been of immense interest to technologists and technology users.
With the widespread establishment of electrical grids, power transmission is usually associated most with electric power transmission. Alternating current is normally preferred as its voltage may be easily stepped up by a transformer in order to minimize resistive loss in the conductors used to transmit power over great distances; another set of transformers is required to step it back down to safer or more usable voltage levels at the destination.
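The benefit of stepping up the voltage can be sketched numerically. Assuming a simplified single-conductor model with a fixed line resistance (the figures below are illustrative assumptions, not data for any real line), the ohmic loss falls with the square of the transmission voltage:

```python
# Hedged sketch: for a fixed power P carried over a line of resistance R,
# the current is I = P / V, so the ohmic loss is I^2 * R = (P / V)^2 * R.
# Raising the voltage tenfold cuts the resistive loss a hundredfold.

def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    current_a = power_w / voltage_v          # current drawn at this voltage
    return current_a ** 2 * resistance_ohm   # I^2 * R ohmic loss

P, R = 10e6, 5.0  # 10 MW over a line with 5 ohm total resistance (illustrative)
print(line_loss_watts(P, 11e3, R))   # loss at 11 kV
print(line_loss_watts(P, 110e3, R))  # loss at 110 kV: 100 times lower
```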
Power transmission is usually performed with overhead lines as this is the most economical way to do so. Underground transmission by high-voltage cables is chosen in crowded urban areas and in high-voltage direct-current (HVDC) submarine connections.
Power might also be transmitted by changing electromagnetic fields or by radio waves; microwave energy may be carried efficiently over short distances by a waveguide or in free space via wireless power transfer.
Electrical power transmission has replaced mechanical power transmission in all but the very shortest distances.
From the 16th century through the industrial revolution to the end of the 19th century mechanical power transmission was the norm. The oldest long-distance power transmission technology involved systems of push-rods or jerker lines ("stängenkunst" or "feldstängen") connecting waterwheels to distant mine-drainage and brine-well pumps. A surviving example from 1780 exists at Bad Kösen that transmits power approximately 200 meters from a waterwheel to a salt well, and from there, an additional 150 meters to a brine evaporator. This technology survived into the 21st century in a handful of oilfields in the US, transmitting power from a central pumping engine to the numerous pump-jacks in the oil field.
Mechanical power may be transmitted directly using a solid structure such as a driveshaft; transmission gears can adjust the amount of torque or force vs. speed in much the same way an electrical transformer adjusts voltage vs current. Factories were fitted with overhead line shafts providing rotary power. Short line-shaft systems were described by Agricola, connecting a waterwheel to numerous ore-processing machines. While the machines described by Agricola used geared connections from the shafts to the machinery, by the 19th century, drivebelts would become the norm for linking individual machines to the line shafts. One mid 19th century factory had 1,948 feet of line shafting with 541 pulleys.
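The gear/transformer analogy above can be sketched as follows; the ideal, lossless gear model and the sample numbers are assumptions for illustration:

```python
# Hedged sketch of the gear/transformer analogy: an ideal gear pair
# conserves power, trading rotational speed for torque just as an
# ideal transformer trades voltage for current.

def through_gears(torque_nm: float, speed_rad_s: float, ratio: float):
    """Ideal (lossless) gear reduction: output speed drops by `ratio`,
    output torque rises by `ratio`."""
    return torque_nm * ratio, speed_rad_s / ratio

t_in, w_in = 10.0, 300.0  # 10 N*m at 300 rad/s, i.e. 3 kW of input power
t_out, w_out = through_gears(t_in, w_in, 5.0)
print(t_out, w_out)                  # 50.0 N*m at 60.0 rad/s
print(t_out * w_out == t_in * w_in)  # power is conserved: True
```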
Hydraulic systems use liquid under pressure to transmit power; canals and hydroelectric power generation facilities harness natural water power to lift ships or generate electricity. Pumping water or pushing mass uphill, for instance with windmill pumps, is one possible means of energy storage. London had a hydraulic network powered by five pumping stations operated by the London Hydraulic Power Company, with a total capacity of 5 MW.
Pneumatic systems use gases under pressure to transmit power; compressed air is commonly used to operate pneumatic tools in factories and repair garages. A pneumatic wrench (for instance) is used to remove and install automotive tires far more quickly than could be done with standard manual hand tools. A pneumatic system was proposed by proponents of Edison's direct current as the basis of the power grid: compressed air generated at Niagara Falls would drive distant generators of DC power. The war of the currents ended with alternating current (AC) as the only means of long-distance power transmission.
Thermal power can be transported in pipelines containing a high heat capacity fluid such as oil or water as used in district heating systems, or by physically transporting material items, such as bottle cars, or in the ice trade.
While not technically power transmission, energy is commonly transported by shipping chemical or nuclear fuels. Possible artificial fuels include radioactive isotopes, wood alcohol, grain alcohol, methane, synthetic gas, hydrogen gas (H2), cryogenic gas, and liquefied natural gas (LNG). | https://en.wikipedia.org/wiki?curid=38822 |
Fulda
Fulda (historically in English called Fuld) is a city in Hesse, Germany; it is located on the river Fulda and is the administrative seat of the Fulda district ("Kreis"). In 1990, the town hosted the 30th Hessentag state festival.
In 744 Saint Sturm, a disciple of Saint Boniface, founded the Benedictine monastery of Fulda as one of Boniface's outposts in the reorganization of the church in Germany. It later served as a base from which missionaries could accompany Charlemagne's armies in their political and military campaigns to fully conquer and convert pagan Saxony.
The initial grant for the abbey was signed by Carloman, Mayor of the Palace in Austrasia (in office 741–47), the son of Charles Martel. The support of the Mayors of the Palace, and later of the early Pippinid and Carolingian rulers, was important to Boniface's success. Fulda also received support from many of the leading families of the Carolingian world. Sturm, whose tenure as abbot lasted from 747 until 779, was most likely related to the Agilolfing dukes of Bavaria.
Fulda also received large and constant donations from the Etichonids, a leading family in Alsace, and from the Conradines, predecessors of the Salian Holy Roman Emperors. Under Sturm, the donations Fulda received from these and other important families helped in the establishment of daughter-houses near Fulda.
Between 790 and 819 the community rebuilt the main monastery church to more fittingly house the relics of Saint Boniface. They based their new basilica on the original 4th-century (since demolished) Old St. Peter's Basilica in Rome, using the transept and crypt plan of that great pilgrimage church to frame their own saint as the "Apostle to the Germans".
The crypt of the original abbey church still holds those relics, but the church itself has been subsumed into a Baroque renovation. A small, 9th-century chapel remains standing within walking distance of the church, as do the foundations of a later women's abbey. Rabanus Maurus served as abbot at Fulda from 822 to 842.
Prince-abbot Balthasar von Dernbach adopted a policy of counterreformation. In 1571 he called in the Jesuits to found a school and college. He insisted that the members of the chapter should return to a monastic form of life. Whereas his predecessors had tolerated Protestantism, resulting in most of the citizenry of Fulda and a large portion of the principality's countryside professing Lutheranism, Balthasar ordered his subjects either to return to the Catholic faith or leave his territories.
The foundation of the abbey of Fulda and its territory originated with an Imperial grant, and the sovereign principality therefore was subject only to the German emperor. Fulda became a bishopric in 1752 and the prince-abbots were given the additional title of prince-bishop. The prince-abbots (and later prince-bishops) ruled Fulda and the surrounding region until the bishopric was forcibly dissolved by Napoleon I in 1802.
The city went through a Baroque building campaign in the 18th century, resulting in its current status as a "Baroque city". This included a remodeling of Fulda Cathedral (1704–12) and of the "Stadtschloss" (Fulda city palace, 1707–12) by Johann Dientzenhofer. The city parish church, St. Blasius, was built between 1771 and 1785. In 1764 a porcelain factory was started in Fulda under Prince-Bishop and Prince-Abbot Heinrich von Bibra, but shortly after his death it was closed down in 1789 by his successor, Prince-Bishop and Prince-Abbot Adalbert von Harstall.
The city was given to Prince William Frederick of Orange-Nassau (the later King William I of the Netherlands) in 1803 (as part of the short-lived Principality of Nassau-Orange-Fulda), was annexed to the Grand Duchy of Berg in 1806, and in 1809 to the Principality of Frankfurt. After the Congress of Vienna of 1814–15, most of the territory went to the Electorate of Hesse, which Prussia annexed in 1866.
Fulda lends its name to the Fulda Gap, a traditional east-west invasion route used by Napoleon I and others. During the Cold War, it was presumed to be an invasion route for any conventional war between NATO and Soviet forces. Downs Barracks in Fulda was the headquarters of the American 14th Armored Cavalry Regiment, later replaced by the 11th Armored Cavalry Regiment. The cavalry had as many as 3,000 soldiers from the end of World War II until 1993. Not all of those soldiers were in Fulda proper, but scattered over observation posts and in the cities of Bad Kissingen and Bad Hersfeld. The strategic importance of this region, along the border between East and West Germany, led to a large United States and Soviet military presence.
Department I (head and personnel administration, finance, committee work, culture, business development, city marketing, investments)
Department II (public security and order, family, youth, schools, sports, social affairs, seniors)
Fulda station is a transport hub and interchange point between local and long distance traffic of the German railway network, and is classified by Deutsche Bahn as a category 2 station. It is on the Hanover–Würzburg high-speed railway; the North-South line ("Nord-Süd-Strecke"), comprising the Bebra-Fulda line north of Fulda, and the Kinzig Valley Railway and Fulda-Main Railway to the south; the Vogelsberg Railway, which connects to the hills of the Vogelsberg in the west; and the Fulda–Gersfeld Railway (Rhön Railway) to Gersfeld in the Rhön Mountains to the east.
Fulda is on the Bundesautobahn 7 (BAB 7). Bundesautobahn 66 starts at the interchange with the BAB 7, heading south towards Frankfurt. Fulda is also on the Bundesstraße 27.
Fulda is twinned with: | https://en.wikipedia.org/wiki?curid=38823 |
Electric power transmission
Electric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant, to an electrical substation. The interconnected lines which facilitate this movement are known as a "transmission network". This is distinct from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution. The combined transmission and distribution network is part of electricity delivery, known as the "power grid" in North America, or just "the grid". In the United Kingdom, India, Tanzania, Myanmar, Malaysia and New Zealand, the network is known as the National Grid.
A wide area synchronous grid, also known as an "interconnection" in North America, directly connects many generators delivering AC power at the same relative frequency to many consumers. For example, there are four major interconnections in North America (the Western Interconnection, the Eastern Interconnection, the Quebec Interconnection and the Electric Reliability Council of Texas (ERCOT) grid). In Europe one large grid connects most of continental Europe.
Historically, transmission and distribution lines were owned by the same company, but starting in the 1990s, many countries have liberalized the regulation of the electricity market in ways that have led to the separation of the electricity transmission business from the distribution business.
Most transmission lines are high-voltage three-phase alternating current (AC), although single phase AC is sometimes used in railway electrification systems. High-voltage direct-current (HVDC) technology is used for greater efficiency over very long distances (typically hundreds of miles). HVDC technology is also used in submarine power cables (typically longer than 30 miles (50 km)), and in the interchange of power between grids that are not mutually synchronized. HVDC links are used to stabilize large power distribution networks where sudden new loads, or blackouts, in one part of a network can result in synchronization problems and cascading failures.
Electricity is transmitted at high voltages (66 kV or above) to reduce the energy loss which occurs in long-distance transmission. Power is usually transmitted through overhead power lines. Underground power transmission has a significantly higher installation cost and greater operational limitations, but reduced maintenance costs. Underground transmission is sometimes used in urban areas or environmentally sensitive locations.
A lack of electrical energy storage facilities in transmission systems leads to a key limitation. Electrical energy must be generated at the same rate at which it is consumed. A sophisticated control system is required to ensure that the power generation very closely matches the demand. If the demand for power exceeds supply, the imbalance can cause generation plant(s) and transmission equipment to automatically disconnect or shut down to prevent damage. In the worst case, this may lead to a cascading series of shut downs and a major regional blackout. Examples include the US Northeast blackouts of 1965, 1977, 2003, and major blackouts in other US regions in 1996 and 2011. Electric transmission networks are interconnected into regional, national, and even continent wide networks to reduce the risk of such a failure by providing multiple redundant, alternative routes for power to flow should such shut downs occur. Transmission companies determine the maximum reliable capacity of each line (ordinarily less than its physical or thermal limit) to ensure that spare capacity is available in the event of a failure in another part of the network.
High-voltage overhead conductors are not covered by insulation. The conductor material is nearly always an aluminum alloy, made into several strands and possibly reinforced with steel strands. Copper was sometimes used for overhead transmission, but aluminum is lighter, yields only marginally reduced performance and costs much less. Overhead conductors are a commodity supplied by several companies worldwide. Improved conductor material and shapes are regularly used to allow increased capacity and modernize transmission circuits. Conductor sizes range from 12 mm2 (#6 American wire gauge) to 750 mm2 (1,590,000 circular mils area), with varying resistance and current-carrying capacity. For large conductors (more than a few centimetres in diameter) at power frequency, much of the current flow is concentrated near the surface due to the skin effect. The center part of the conductor carries little current, but contributes weight and cost to the conductor. Because of this current limitation, multiple parallel cables (called bundle conductors) are used when higher capacity is needed. Bundle conductors are also used at high voltages to reduce energy loss caused by corona discharge.
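The skin effect mentioned above can be quantified with the standard skin-depth formula for a good conductor, delta = sqrt(2*rho/(omega*mu)). The following sketch uses approximate textbook values for aluminium; it is illustrative rather than a design calculation:

```python
import math

# Hedged sketch: skin depth delta = sqrt(2*rho / (omega*mu)) for a good
# conductor, with mu taken as the permeability of free space.
def skin_depth_m(resistivity_ohm_m: float, freq_hz: float,
                 mu: float = 4e-7 * math.pi) -> float:
    omega = 2 * math.pi * freq_hz  # angular frequency in rad/s
    return math.sqrt(2 * resistivity_ohm_m / (omega * mu))

# Aluminium (rho ~ 2.8e-8 ohm*m) at 50 Hz: current density falls off
# over roughly a centimetre, which is why the core of a very thick
# single conductor carries little current.
print(round(skin_depth_m(2.8e-8, 50.0) * 1000, 1))  # skin depth in mm, ~11.9
```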
Today, transmission-level voltages are usually considered to be 110 kV and above. Lower voltages, such as 66 kV and 33 kV, are usually considered subtransmission voltages, but are occasionally used on long lines with light loads. Voltages less than 33 kV are usually used for distribution. Voltages above 765 kV are considered extra high voltage and require different designs compared to equipment used at lower voltages.
Since overhead transmission wires depend on air for insulation, the design of these lines requires minimum clearances to be observed to maintain safety. Adverse weather conditions, such as high winds and low temperatures, can lead to power outages. Even relatively low wind speeds can permit conductors to encroach on operating clearances, resulting in a flashover and loss of supply.
Oscillatory motion of the physical line can be termed conductor gallop or flutter depending on the frequency and amplitude of oscillation.
Electric power can also be transmitted by underground power cables instead of overhead power lines. Underground cables take up less right-of-way than overhead lines, have lower visibility, and are less affected by bad weather. However, costs of insulated cable and excavation are much higher than overhead construction. Faults in buried transmission lines take longer to locate and repair.
In some metropolitan areas, underground transmission cables are enclosed by metal pipe and insulated with dielectric fluid (usually an oil) that is either static or circulated via pumps. If an electric fault damages the pipe and produces a dielectric leak into the surrounding soil, liquid nitrogen trucks are mobilized to freeze portions of the pipe to enable the draining and repair of the damaged pipe location. This type of underground transmission cable can prolong the repair period and increase repair costs. The temperature of the pipe and soil are usually monitored constantly throughout the repair period.
Underground lines are strictly limited by their thermal capacity, which permits less overload or re-rating than overhead lines. Long underground AC cables have significant capacitance, which may reduce their ability to provide useful power to distant loads. DC cables are not limited in length by their capacitance; however, they require HVDC converter stations at both ends of the line to convert from DC to AC before being interconnected with the transmission network.
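The capacitance limitation can be sketched with the charging-current relation I_c = omega * C' * V * L: the longer the cable, the more of its current rating is consumed simply charging its own capacitance each half-cycle. All figures below (capacitance per kilometre, operating voltage, current rating) are invented for illustration, not data for any real cable.

```python
import math

# Hedged sketch: capacitive charging current of a long AC cable,
# I_c = omega * C_per_km * length_km * V_to_ground.
def charging_current_a(cap_per_km_f: float, volts_to_ground: float,
                       length_km: float, freq_hz: float = 50.0) -> float:
    omega = 2 * math.pi * freq_hz
    return omega * cap_per_km_f * length_km * volts_to_ground

# Assume 0.2 uF/km at 76 kV to ground: each kilometre "uses up" a few
# amps of the cable's rating just charging its capacitance.
per_km = charging_current_a(0.2e-6, 76e3, 1.0)
print(round(per_km, 2))     # ~4.78 A per km
print(round(500 / per_km))  # rough length (km) at which a 500 A rating
                            # would be consumed entirely by charging current
```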
In the early days of commercial electric power, transmission of electric power at the same voltage as used by lighting and mechanical loads restricted the distance between generating plant and consumers. In 1882, generation was with direct current (DC), which could not easily be increased in voltage for long-distance transmission. Different classes of loads (for example, lighting, fixed motors, and traction/railway systems) required different voltages, and so used different generators and circuits.
Due to this specialization of lines and because transmission was inefficient for low-voltage high-current circuits, generators needed to be near their loads. It seemed, at the time, that the industry would develop into what is now known as a distributed generation system with large numbers of small generators located near their loads.
The transmission of electric power with alternating current (AC) became possible after Lucien Gaulard and John Dixon Gibbs built what they called the secondary generator, an early transformer provided with 1:1 turn ratio and open magnetic circuit, in 1881.
The first long-distance AC line was 34 km (21 mi) long, built for the 1884 International Exhibition of Turin, Italy. It was powered by a 2 kV, 130 Hz Siemens & Halske alternator and featured several Gaulard secondary generators with their primary windings connected in series, which fed incandescent lamps. The system proved the feasibility of AC electric power transmission over long distances.
The very first AC system to operate was in service in 1885 in via dei Cerchi, Rome, Italy, for public lighting. It was powered by two Siemens & Halske alternators rated 30 hp (22 kW), 2 kV at 120 Hz and used 19 km of cables and 200 parallel-connected 2 kV to 20 V step-down transformers provided with a closed magnetic circuit, one for each lamp. A few months later it was followed by the first British AC system, which was put into service at the Grosvenor Gallery, London. It also featured Siemens alternators and 2.4 kV to 100 V step-down transformers – one per user – with shunt-connected primaries.
Working from what he considered an impractical Gaulard-Gibbs design, electrical engineer William Stanley, Jr. developed what is considered the first practical series AC transformer in 1885. Working with the support of George Westinghouse, in 1886 he demonstrated a transformer-based alternating current lighting system in Great Barrington, Massachusetts. Powered by a steam engine driven 500 V Siemens generator, voltage was stepped down to 100 volts using the new Stanley transformer to power incandescent lamps at 23 businesses along main street with very little power loss over a distance of 4,000 feet (1,200 m). This practical demonstration of a transformer and alternating current lighting system would lead Westinghouse to begin installing AC based systems later that year.
1888 saw designs for a functional AC motor, something these systems had lacked until then. These were induction motors running on polyphase current, independently invented by Galileo Ferraris and Nikola Tesla (with Tesla's design being licensed by Westinghouse in the US). This design was further developed into the modern practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Practical use of these types of motors would be delayed many years by development problems and the scarcity of polyphase power systems needed to power them.
The late 1880s and early 1890s would see the financial merger of smaller electric companies into a few larger corporations such as Ganz and AEG in Europe and General Electric and Westinghouse Electric in the US. These companies continued to develop AC systems but the technical difference between direct and alternating current systems would follow a much longer technical merger. Due to innovation in the US and Europe, alternating current's economy of scale with very large generating plants linked to loads via long-distance transmission was slowly being combined with the ability to link it up with all of the existing systems that needed to be supplied. These included single phase AC systems, poly-phase AC systems, low voltage incandescent lighting, high voltage arc lighting, and existing DC motors in factories and street cars. In what was becoming a "universal system", these technological differences were temporarily being bridged via the development of rotary converters and motor-generators that would allow the large number of legacy systems to be connected to the AC grid. These stopgaps would slowly be replaced as older systems were retired or upgraded.
The first transmission of single-phase alternating current using high voltage took place in Oregon in 1890 when power was delivered from a hydroelectric plant at Willamette Falls to the city of Portland downriver. The first transmission of three-phase alternating current using high voltage took place in 1891 during the international electricity exhibition in Frankfurt. A 15 kV transmission line, approximately 175 km long, connected Lauffen on the Neckar and Frankfurt.
Voltages used for electric power transmission increased throughout the 20th century. By 1914, fifty-five transmission systems each operating at more than 70 kV were in service. The highest voltage then used was 150 kV.
By allowing multiple generating plants to be interconnected over a wide area, electricity production cost was reduced. The most efficient available plants could be used to supply the varying loads during the day. Reliability was improved and capital investment cost was reduced, since stand-by generating capacity could be shared over many more customers and a wider geographic area. Remote and low-cost sources of energy, such as hydroelectric power or mine-mouth coal, could be exploited to lower energy production cost.
The rapid industrialization in the 20th century made electrical transmission lines and grids critical infrastructure items in most industrialized nations. The interconnection of local generation plants and small distribution networks was spurred by the requirements of World War I, with large electrical generating plants built by governments to provide power to munitions factories. Later these generating plants were connected to supply civil loads through long-distance transmission.
Engineers design transmission networks to transport the energy as efficiently as possible, while at the same time taking into account the economic factors, network safety and redundancy. These networks use components such as power lines, cables, circuit breakers, switches and transformers. The transmission network is usually administered on a regional basis by an entity such as a regional transmission organization or transmission system operator.
Transmission efficiency is greatly improved by devices that increase the voltage (and thereby proportionately reduce the current) in the line conductors, thus allowing power to be transmitted with acceptable losses. The reduced current flowing through the line reduces the heating losses in the conductors. According to Joule's law, energy losses are directly proportional to the square of the current. Thus, reducing the current by a factor of two will lower the energy lost to conductor resistance by a factor of four for any given size of conductor.
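The square-law relationship can be illustrated with a short sketch (the current and resistance figures here are illustrative, not data from any particular line):

```python
def line_loss_w(current_a: float, resistance_ohm: float) -> float:
    """Joule's law: power dissipated as heat in a conductor is I**2 * R."""
    return current_a ** 2 * resistance_ohm

# Halving the current (e.g., by doubling the voltage for the same power)
# cuts the resistive loss to a quarter for the same conductor.
full = line_loss_w(500.0, 2.0)   # 500 A through 2 ohms -> 500,000 W
half = line_loss_w(250.0, 2.0)   # 250 A through the same conductor -> 125,000 W
assert full == 4 * half
```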
The optimum size of a conductor for a given voltage and current can be estimated by Kelvin's law for conductor size, which states that the size is at its optimum when the annual cost of energy wasted in the resistance is equal to the annual capital charges of providing the conductor. At times of lower interest rates, Kelvin's law indicates that thicker wires are optimal, while when metals are expensive, thinner conductors are indicated. However, power lines are designed for long-term use, so Kelvin's law must be used in conjunction with long-term estimates of the price of copper and aluminum, as well as interest rates for capital.
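Kelvin's law can be sketched numerically by scanning conductor cross-sections for the one that minimizes total annual cost; every price and rate below is an assumption for illustration, not engineering data:

```python
# Kelvin's law sketch: the optimum cross-section is where the annual cost of
# resistive energy loss equals the annual capital charge for the conductor.
RHO_AL = 2.8e-8      # resistivity of aluminium, ohm*m
LENGTH_M = 10_000.0  # line length, m
CURRENT_A = 400.0    # RMS line current, A
PRICE_KWH = 0.05     # cost of one lost kWh (assumed)
CAPITAL = 12.0       # annual capital charge per mm^2 of cross-section (assumed)

def annual_loss_cost(area_mm2: float) -> float:
    resistance = RHO_AL * LENGTH_M / (area_mm2 * 1e-6)   # ohms
    return CURRENT_A ** 2 * resistance * 8760 / 1000 * PRICE_KWH

def annual_capital_cost(area_mm2: float) -> float:
    return CAPITAL * area_mm2

best = min(range(100, 2001),
           key=lambda a: annual_loss_cost(a) + annual_capital_cost(a))
# At the optimum the two annual costs are (nearly) equal, as Kelvin's law states.
assert abs(annual_loss_cost(best) - annual_capital_cost(best)) < 0.01 * annual_capital_cost(best)
```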
The increase in voltage is achieved in AC circuits by using a "step-up transformer". HVDC systems require relatively costly conversion equipment which may be economically justified for particular projects such as submarine cables and longer distance high capacity point-to-point transmission. HVDC is necessary for the import and export of energy between grid systems that are not synchronized with each other.
A transmission grid is a network of power stations, transmission lines, and substations. Energy is usually transmitted within a grid with three-phase AC. Single-phase AC is used only for distribution to end users since it is not usable for large polyphase induction motors. In the 19th century, two-phase transmission was used but required either four wires or three wires with unequal currents. Higher order phase systems require more than three wires, but deliver little or no benefit.
The price of electric power station capacity is high, and electric demand is variable, so it is often cheaper to import some portion of the needed power than to generate it locally. Because loads are often regionally correlated (hot weather in the Southwest portion of the US might cause many people to use air conditioners), electric power often comes from distant sources. Because of the economic benefits of load sharing between regions, wide area transmission grids now span countries and even continents. The web of interconnections between power producers and consumers should enable power to flow, even if some links are inoperative.
The unvarying (or slowly varying over many hours) portion of the electric demand is known as the "base load" and is generally served by large facilities (which are more efficient due to economies of scale) with fixed costs for fuel and operation. Such facilities are typically nuclear, coal-fired, or hydroelectric, while other energy sources such as concentrated solar thermal and geothermal power have the potential to provide base load power. Renewable energy sources, such as solar photovoltaics, wind, wave, and tidal, are, due to their intermittency, not considered as supplying "base load" but will still add power to the grid. The remaining or "peak" power demand is supplied by peaking power plants, which are typically smaller, faster-responding, and higher cost sources, such as combined cycle or combustion turbine plants fueled by natural gas.
Long-distance transmission of electricity (hundreds of kilometers) is cheap and efficient, with costs of US$0.005–0.02 per kWh (compared to annual averaged large producer costs of US$0.01–0.025 per kWh, retail rates upwards of US$0.10 per kWh, and multiples of retail for instantaneous suppliers at unpredicted highest demand moments). Thus distant suppliers can be cheaper than local sources (e.g., New York often buys over 1000 MW of electricity from Canada). Multiple local sources (even if more expensive and infrequently used) can make the transmission grid more fault tolerant to weather and other disasters that can disconnect distant suppliers.
Long-distance transmission allows remote renewable energy resources to be used to displace fossil fuel consumption. Hydro and wind sources cannot be moved closer to populous cities, and solar costs are lowest in remote areas where local power needs are minimal. Connection costs alone can determine whether any particular renewable alternative is economically sensible. Costs can be prohibitive for transmission lines, but various proposals for massive infrastructure investment in high capacity, very long distance super grid transmission networks could be recovered with modest usage fees.
At the power stations, the power is produced at a relatively low voltage between about 2.3 kV and 30 kV, depending on the size of the unit. The generator terminal voltage is then stepped up by the power station transformer to a higher voltage (115 kV to 765 kV AC, varying by the transmission system and by the country) for transmission over long distances.
In the United States, power transmission is, variously, 230 kV to 500 kV, with less than 230 kV or more than 500 kV being local exceptions.
For example, the Western System has two primary interchange voltages: 500 kV AC at 60 Hz, and ±500 kV (1,000 kV net) DC from North to South (Columbia River to Southern California) and Northeast to Southwest (Utah to Southern California). The 287.5 kV (Hoover to Los Angeles line, via Victorville) and 345 kV (APS line) are local standards, both of which were implemented before 500 kV became practical and, thereafter, the Western System standard for long-distance AC power transmission.
Transmitting electricity at high voltage reduces the fraction of energy lost to resistance, which varies depending on the specific conductors, the current flowing, and the length of the transmission line. For example, a 100-mile (160 km) span at 765 kV carrying 1000 MW of power can have losses of 1.1% to 0.5%. A 345 kV line carrying the same load across the same distance has losses of 4.2%. For a given amount of power, a higher voltage reduces the current and thus the resistive losses in the conductor. For example, raising the voltage by a factor of 10 reduces the current by a corresponding factor of 10 and therefore the I²R losses by a factor of 100, provided the same sized conductors are used in both cases. Even if the conductor size (cross-sectional area) is decreased ten-fold to match the lower current, the I²R losses are still reduced ten-fold. Long-distance transmission is typically done with overhead lines at voltages of 115 to 1,200 kV. At extremely high voltages, where more than 2,000 kV exists between conductor and ground, corona discharge losses are so large that they can offset the lower resistive losses in the line conductors. Measures to reduce corona losses include conductors of larger diameter, often hollow to save weight, or bundles of two or more conductors.
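The factor-of-100 claim can be checked numerically; a minimal sketch, assuming a total line resistance (the 10-ohm figure is illustrative):

```python
def loss_fraction(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Share of transmitted power lost to conductor resistance, with I = P / V."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm / power_w

P, R = 1000e6, 10.0  # 1000 MW over a line with 10 ohms total resistance (assumed)
ratio = loss_fraction(P, 76.5e3, R) / loss_fraction(P, 765e3, R)
# Ten times the voltage -> one hundredth of the resistive loss.
assert abs(ratio - 100.0) < 1e-9
```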
Factors that affect the resistance, and thus loss, of conductors used in transmission and distribution lines include temperature, spiraling, and the skin effect. The resistance of a conductor increases with its temperature. Temperature changes in electric power lines can have a significant effect on power losses in the line. Spiraling, which refers to the way stranded conductors spiral about the center, also contributes to increases in conductor resistance. The skin effect causes the effective resistance of a conductor to increase at higher alternating current frequencies. Corona and resistive losses can be estimated using a mathematical model.
Transmission and distribution losses in the USA were estimated at 6.6% in 1997, 6.5% in 2007 and 5% from 2013 to 2019. In general, losses are estimated from the discrepancy between power produced (as reported by power plants) and power sold to the end customers; the difference between what is produced and what is consumed constitutes transmission and distribution losses, assuming no utility theft occurs.
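The estimation method described above amounts to a one-line calculation; a minimal sketch with made-up energy totals chosen to land in the historical 5-7% range:

```python
def td_loss_fraction(generated_mwh: float, sold_mwh: float) -> float:
    """T&D losses estimated as the share of generated energy never billed
    (assumes no utility theft, as noted above)."""
    return (generated_mwh - sold_mwh) / generated_mwh

# Made-up totals: 4000 MWh generated, 3740 MWh sold -> 6.5% estimated losses.
assert abs(td_loss_fraction(4000.0, 3740.0) - 0.065) < 1e-12
```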
As of 1980, the longest cost-effective distance for direct-current transmission was determined to be 7,000 km (4,300 mi). For alternating current it was 4,000 km (2,500 mi), though all transmission lines in use today are substantially shorter than this.
In any alternating current transmission line, the inductance and capacitance of the conductors can be significant. Currents that flow solely in "reaction" to these properties of the circuit (which together with the resistance define the impedance) constitute reactive power flow, which transmits no "real" power to the load. These reactive currents, however, are very real and cause extra heating losses in the transmission circuit. The ratio of "real" power (transmitted to the load) to "apparent" power (the product of a circuit's voltage and current, without reference to phase angle) is the power factor. As reactive current increases, the reactive power increases and the power factor decreases. For transmission systems with low power factor, losses are higher than for systems with high power factor. Utilities add capacitor banks, reactors and other components (such as phase-shifting transformers; static VAR compensators; and flexible AC transmission systems, FACTS) throughout the system to help compensate for the reactive power flow, reduce the losses in power transmission, and stabilize system voltages. These measures are collectively called "reactive support".
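The relationship between real, reactive, and apparent power can be sketched as the standard power triangle (voltage, current, and phase angle values below are illustrative):

```python
import math

def power_triangle(v_rms: float, i_rms: float, phase_deg: float):
    """Apparent (VA), real (W), and reactive (VAR) power for a sinusoidal circuit."""
    s = v_rms * i_rms                          # apparent power
    p = s * math.cos(math.radians(phase_deg))  # real power
    q = s * math.sin(math.radians(phase_deg))  # reactive power
    return s, p, q

s, p, q = power_triangle(230.0, 10.0, 30.0)
power_factor = p / s  # equals cos(phase angle)
# A larger phase angle (more reactive current) means a lower power factor.
assert power_factor < power_triangle(230.0, 10.0, 10.0)[1] / 2300.0
```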
Current flowing through transmission lines induces a magnetic field that surrounds the lines of each phase and affects the inductance of the surrounding conductors of other phases. The mutual inductance of the conductors is partially dependent on the physical orientation of the lines with respect to each other. Three-phase power transmission lines are conventionally strung with phases separated on different vertical levels. The mutual inductance seen by a conductor of the phase in the middle of the other two phases will be different than the inductance seen by the conductors on the top or bottom. An imbalanced inductance among the three conductors is problematic because it may result in the middle line carrying a disproportionate amount of the total power transmitted. Similarly, an imbalanced load may occur if one line is consistently closest to the ground and operating at a lower impedance. Because of this phenomenon, conductors must be periodically transposed along the length of the transmission line so that each phase sees equal time in each relative position to balance out the mutual inductance seen by all three phases. To accomplish this, line position is swapped at specially designed transposition towers at regular intervals along the length of the transmission line in various transposition schemes.
Subtransmission is part of an electric power transmission system that runs at relatively lower voltages. It is uneconomical to connect all distribution substations to the high main transmission voltage, because the equipment is larger and more expensive. Typically, only larger substations connect with this high voltage. It is stepped down and sent to smaller substations in towns and neighborhoods. Subtransmission circuits are usually arranged in loops so that a single line failure does not cut off service to many customers for more than a short time. Loops can be "normally closed", where loss of one circuit should result in no interruption, or "normally open" where substations can switch to a backup supply. While subtransmission circuits are usually carried on overhead lines, in urban areas buried cable may be used. The lower-voltage subtransmission lines use less right-of-way and simpler structures; it is much more feasible to put them underground where needed. Higher-voltage lines require more space and are usually above-ground since putting them underground is very expensive.
There is no fixed cutoff between subtransmission and transmission, or subtransmission and distribution. The voltage ranges overlap somewhat. Voltages of 69 kV, 115 kV, and 138 kV are often used for subtransmission in North America. As power systems evolved, voltages formerly used for transmission were used for subtransmission, and subtransmission voltages became distribution voltages. Like transmission, subtransmission moves relatively large amounts of power, and like distribution, subtransmission covers an area instead of just point-to-point.
At the substations, transformers reduce the voltage to a lower level for distribution to commercial and residential users. This distribution is accomplished with a combination of sub-transmission (33 to 132 kV) and distribution (3.3 to 25 kV). Finally, at the point of use, the energy is transformed to low voltage (varying by country and customer requirements – see Mains electricity by country).
High-voltage power transmission allows for lesser resistive losses over long distances in the wiring. This efficiency of high voltage transmission allows for the transmission of a larger proportion of the generated power to the substations and in turn to the loads, translating to operational cost savings.
In a very simplified model, assume the electrical grid delivers electricity from a generator (modelled as an ideal voltage source with voltage V, delivering a power P) to a single point of consumption, modelled by a pure resistance R, where the wires are long enough to have a significant resistance R_w.
If the resistances are simply in series without any transformer between them, the circuit acts as a voltage divider, because the same current I = V / (R_w + R) runs through the wire resistance and the powered device. As a consequence, the useful power (used at the point of consumption) is:

P_1 = V² R / (R_w + R)²
Assume now that a transformer converts high-voltage, low-current electricity transported by the wires into low-voltage, high-current electricity for use at the consumption point. If we suppose it is an ideal transformer with a voltage ratio of a (i.e., the voltage is divided by a and the current is multiplied by a in the secondary branch, compared to the primary branch), then the circuit is again equivalent to a voltage divider, but the transmission wires now have an apparent resistance of only R_w / a². The useful power is then:

P_2 = (V/a)² R / (R_w/a² + R)²
For a > 1 (i.e. conversion of high voltage to low voltage near the consumption point), a larger fraction of the generator's power is transmitted to the consumption point and a lesser fraction is lost to Joule heating.
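The effect of the transformer in this simplified model can be sketched numerically; the delivered fraction of generator power is R / (R_w/a² + R), and the resistance values below are illustrative:

```python
def delivered_fraction(r_wire: float, r_load: float, a: float = 1.0) -> float:
    """Fraction of the generator's power reaching the load in the
    voltage-divider model above. An ideal transformer of voltage ratio a
    makes the wires look like r_wire / a**2 from the load side."""
    return r_load / (r_wire / a ** 2 + r_load)

# Without a transformer most power heats the wires; high-voltage transmission
# with step-down at the consumption point (a > 1) delivers nearly all of it.
assert delivered_fraction(5.0, 1.0) < 0.2
assert delivered_fraction(5.0, 1.0, a=100.0) > 0.999
```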
Oftentimes, we are only interested in the terminal characteristics of the transmission line, which are the voltage and current at the sending and receiving ends. The transmission line itself is then modeled as a "black box" and a 2 by 2 transmission matrix is used to model its behavior, as follows:

    | V_S |   | A  B | | V_R |
    | I_S | = | C  D | | I_R |

where V_S and I_S are the voltage and current at the sending end, and V_R and I_R are the voltage and current at the receiving end.
The line is assumed to be a reciprocal, symmetrical network, meaning that the receiving and sending labels can be switched with no consequence. The transmission matrix T also has the following properties: its determinant is unity (AD − BC = 1), and by symmetry A = D.
The parameters "A", "B", "C", and "D" differ depending on how the desired model handles the line's resistance ("R"), inductance ("L"), capacitance ("C"), and shunt (parallel, leak) conductance "G". The four main models are the short line approximation, the medium line approximation, the long line approximation (with distributed parameters), and the lossless line. In all models described, a capital letter such as "R" refers to the total quantity summed over the line and a lowercase letter such as "c" refers to the per-unit-length quantity.
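As a concrete instance of these models, the short line approximation keeps only the total series impedance Z, giving A = D = 1, B = Z, C = 0; a sketch with illustrative phasor values:

```python
def sending_end(v_r: complex, i_r: complex, z: complex):
    """Sending-end voltage and current from receiving-end phasors using the
    short-line ABCD parameters: A = D = 1, B = Z, C = 0."""
    a, b, c, d = 1.0, z, 0.0, 1.0
    v_s = a * v_r + b * i_r
    i_s = c * v_r + d * i_r
    return v_s, i_s

# 345 kV at the receiving end, a lagging current, and Z = 5 + j40 ohms (assumed).
v_s, i_s = sending_end(complex(345e3, 0.0), complex(400.0, -100.0), complex(5.0, 40.0))
# With C = 0 the current is the same at both ends, and A*D - B*C = 1 holds.
assert i_s == complex(400.0, -100.0)
```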
The lossless line approximation is the least accurate model; it is often used on short lines when the inductance of the line is much greater than its resistance. For this approximation, the voltage and current are identical at the sending and receiving ends.
The characteristic impedance of a lossless line is purely real, i.e. resistive, and is often called the surge impedance. When a lossless line is terminated by its surge impedance, there is no voltage drop. Though the phase angles of voltage and current are rotated, the magnitudes of voltage and current remain constant along the length of the line. For load > SIL (surge impedance loading), the voltage will drop from the sending end and the line will "consume" VARs. For load < SIL, the voltage will rise from the sending end and the line will "generate" VARs. (Donald G. Fink, H. Wayne Beatty, "Standard Handbook for Electrical Engineers", 11th edition, McGraw Hill, 1978, pages 15-57 and 15-58.)

High-voltage direct current (HVDC) is used for long submarine links, where the capacitance of AC cables would be prohibitive; in these cases special high-voltage cables for DC are used. Submarine HVDC systems are often used to connect the electricity grids of islands, for example, between Great Britain and continental Europe, between Great Britain and Ireland, between Tasmania and the Australian mainland, between the North and South Islands of New Zealand, between New Jersey and New York City, and between New Jersey and Long Island. Submarine connections up to 600 km (370 mi) in length are presently in use.
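Returning to the lossless-line model, the surge impedance and the surge impedance loading follow directly from the per-unit-length inductance and capacitance; a sketch with assumed, typical overhead-line values:

```python
import math

def surge_impedance(l_h_per_m: float, c_f_per_m: float) -> float:
    """Characteristic (surge) impedance of a lossless line: Zc = sqrt(l / c)."""
    return math.sqrt(l_h_per_m / c_f_per_m)

def surge_impedance_loading(v_line_v: float, z_c: float) -> float:
    """SIL = V**2 / Zc: the load at which the line neither generates nor consumes VARs."""
    return v_line_v ** 2 / z_c

# Assumed per-metre inductance and capacitance, giving Zc of roughly 300 ohms.
z_c = surge_impedance(1.0e-6, 11.1e-12)
sil_w = surge_impedance_loading(500e3, z_c)  # SIL of a 500 kV line, in watts
assert 250 < z_c < 350
```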
HVDC links can be used to control problems in the grid with AC electricity flow. The power transmitted by an AC line increases as the phase angle between source end voltage and destination ends increases, but too large a phase angle will allow the systems at either end of the line to fall out of step. Since the power flow in a DC link is controlled independently of the phases of the AC networks at either end of the link, this phase angle limit does not exist, and a DC link is always able to transfer its full rated power. A DC link therefore stabilizes the AC grid at either end, since power flow and phase angle can then be controlled independently.
As an example, to adjust the flow of AC power on a hypothetical line between Seattle and Boston would require adjustment of the relative phase of the two regional electrical grids. This is an everyday occurrence in AC systems, but one that can become disrupted when AC system components fail and place unexpected loads on the remaining working grid system. With an HVDC line instead, such an interconnection would convert AC in Seattle into HVDC, use HVDC for the cross-country transmission, and convert the HVDC to locally synchronized AC in Boston (and possibly in other cooperating cities along the transmission route). Such a system could be less prone to failure if parts of it were suddenly shut down. One example of a long DC transmission line is the Pacific DC Intertie located in the Western United States.
The amount of power that can be sent over a transmission line is limited. The origins of the limits vary depending on the length of the line. For a short line, the heating of conductors due to line losses sets a thermal limit. If too much current is drawn, conductors may sag too close to the ground, or conductors and equipment may be damaged by overheating. For intermediate-length lines on the order of 100 km (62 miles), the limit is set by the voltage drop in the line. For longer AC lines, system stability sets the limit to the power that can be transferred. Approximately, the power flowing over an AC line is proportional to the sine of the phase angle between the voltages at the receiving and transmitting ends. This angle varies depending on system loading and generation. It is undesirable for the angle to approach 90 degrees, as the power flowing decreases but the resistive losses remain. Very approximately, the allowable product of line length and maximum load is proportional to the square of the system voltage. Series capacitors or phase-shifting transformers are used on long lines to improve stability. High-voltage direct current lines are restricted only by thermal and voltage drop limits, since the phase angle is not material to their operation.
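The stability limit above can be sketched with the standard lossless power-angle relation, P = V_S · V_R · sin(δ) / X (voltages and line reactance below are illustrative):

```python
import math

def power_transfer_w(v_s: float, v_r: float, x_ohm: float, delta_deg: float) -> float:
    """Standard lossless power-angle relation: P = Vs * Vr * sin(delta) / X."""
    return v_s * v_r * math.sin(math.radians(delta_deg)) / x_ohm

# Power rises with the angle only up to 90 degrees, the steady-state
# stability limit; pushing the angle further reduces transferable power.
p30 = power_transfer_w(345e3, 345e3, 100.0, 30.0)
p90 = power_transfer_w(345e3, 345e3, 100.0, 90.0)
assert p30 < p90
```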
Up to now, it has been almost impossible to foresee the temperature distribution along the cable route, so that the maximum applicable current load was usually set as a compromise between understanding of operation conditions and risk minimization. The availability of industrial distributed temperature sensing (DTS) systems that measure in real time temperatures all along the cable is a first step in monitoring the transmission system capacity. This monitoring solution is based on using passive optical fibers as temperature sensors, either integrated directly inside a high voltage cable or mounted externally on the cable insulation. A solution for overhead lines is also available. In this case the optical fiber is integrated into the core of a phase wire of overhead transmission lines (OPPC). The integrated Dynamic Cable Rating (DCR) solution, also called Real Time Thermal Rating (RTTR), makes it possible not only to continuously monitor the temperature of a high voltage cable circuit in real time, but also to safely utilize the existing network capacity to its maximum. Furthermore, it gives the operator the ability to predict the behavior of the transmission system upon major changes made to its initial operating conditions.
To ensure safe and predictable operation, the components of the transmission system are controlled with generators, switches, circuit breakers and loads. The voltage, power, frequency, load factor, and reliability capabilities of the transmission system are designed to provide cost effective performance for the customers.
The transmission system provides for base load and peak load capability, with safety and fault tolerance margins. The peak load times vary by region largely due to the industry mix. In very hot and very cold climates home air conditioning and heating loads have an effect on the overall load. They are typically highest in the late afternoon in the hottest part of the year and in mid-mornings and mid-evenings in the coldest part of the year. This makes the power requirements vary by the season and the time of day. Distribution system designs always take the base load and the peak load into consideration.
The transmission system usually does not have a large buffering capability to match the loads with the generation. Thus generation has to be kept matched to the load, to prevent overloading failures of the generation equipment.
Multiple sources and loads can be connected to the transmission system and they must be controlled to provide orderly transfer of power. In centralized power generation, only local control of generation is necessary, and it involves synchronization of the generation units, to prevent large transients and overload conditions.
In distributed power generation the generators are geographically distributed and the process to bring them online and offline must be carefully controlled. The load control signals can either be sent on separate lines or on the power lines themselves. Voltage and frequency can be used as signalling mechanisms to balance the loads.
In voltage signaling, the variation of voltage is used to increase generation. The power added by any system increases as the line voltage decreases. This arrangement is stable in principle. Voltage-based regulation is complex to use in mesh networks, since the individual components and setpoints would need to be reconfigured every time a new generator is added to the mesh.
In frequency signaling, the generating units match the frequency of the power transmission system. In droop speed control, if the frequency decreases, the power is increased. (The drop in line frequency is an indication that the increased load is causing the generators to slow down.)
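Droop speed control can be sketched as a simple linear rule; the droop value, nominal frequency, and setpoint below are common conventions chosen for illustration:

```python
def droop_output_mw(f_hz: float, f_nom: float = 50.0,
                    p_set_mw: float = 100.0, droop: float = 0.05) -> float:
    """Droop speed control: output rises as grid frequency falls below nominal.

    droop is the per-unit frequency deviation that swings output by 100%
    of the setpoint (a common convention; all values are illustrative)."""
    return p_set_mw * (1.0 + (f_nom - f_hz) / (droop * f_nom))

assert droop_output_mw(50.0) == 100.0                 # at nominal frequency: setpoint
assert droop_output_mw(49.9) > droop_output_mw(50.0)  # falling frequency -> more power
```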
Wind turbines, vehicle-to-grid and other locally distributed storage and generation systems can be connected to the power grid, and interact with it to improve system operation. Internationally, the trend has been a slow move from a heavily centralized power system to a decentralized power system. The main draw of locally distributed generation systems, which involve a number of new and innovative solutions, is that they reduce transmission losses by leading to consumption of electricity closer to where it was produced.
Under excess load conditions, the system can be designed to fail gracefully rather than all at once. Brownouts occur when the supply power drops below the demand. Blackouts occur when the supply fails completely.
Rolling blackouts (also called load shedding) are intentionally engineered electrical power outages, used to distribute insufficient power when the demand for electricity exceeds the supply.
Operators of long transmission lines require reliable communications for control of the power grid and, often, associated generation and distribution facilities. Fault-sensing protective relays at each end of the line must communicate to monitor the flow of power into and out of the protected line section so that faulted conductors or equipment can be quickly de-energized and the balance of the system restored. Protection of the transmission line from short circuits and other faults is usually so critical that common carrier telecommunications are insufficiently reliable, and in remote areas a common carrier may not be available. Communication systems associated with a transmission project may use:
Rarely, and for short distances, a utility will use pilot-wires strung along the transmission line path. Leased circuits from common carriers are not preferred since availability is not under control of the electric power transmission organization.
Transmission lines can also be used to carry data: this is called power-line carrier, or PLC. PLC signals can be easily received with a radio for the long wave range.
Optical fibers can be included in the stranded conductors of a transmission line, in the overhead shield wires. These cables are known as optical ground wire ("OPGW"). Sometimes a standalone cable is used, all-dielectric self-supporting ("ADSS") cable, attached to the transmission line cross arms.
Some jurisdictions, such as Minnesota, prohibit energy transmission companies from selling surplus communication bandwidth or acting as a telecommunications common carrier. Where the regulatory structure permits, the utility can sell capacity in extra dark fibers to a common carrier, providing another revenue stream.
Some regulators regard electric transmission to be a natural monopoly and there are moves in many countries to separately regulate transmission (see electricity market).
Spain was the first country to establish a regional transmission organization. In that country, transmission operations and market operations are controlled by separate companies. The transmission system operator is Red Eléctrica de España (REE) and the wholesale electricity market operator is Operador del Mercado Ibérico de Energía – Polo Español, S.A. (OMEL). Spain's transmission system is interconnected with those of France, Portugal, and Morocco.
The establishment of RTOs in the United States was spurred by the FERC's Order 888, "Promoting Wholesale Competition Through Open Access Non-discriminatory Transmission Services by Public Utilities; Recovery of Stranded Costs by Public Utilities and Transmitting Utilities", issued in 1996.
In the United States and parts of Canada, several electric transmission companies operate independently of generation companies, but there are still regions, such as the Southern United States, where vertical integration of the electric system is intact. In regions of separation, transmission owners and generation owners continue to interact with each other as market participants with voting rights within their RTO. RTOs in the United States are regulated by the Federal Energy Regulatory Commission.
The cost of high-voltage electricity transmission (as opposed to the costs of electric power distribution) is low compared to all other costs arising in a consumer's electricity bill. In the UK, transmission costs are about 0.2p per kWh, compared to a delivered domestic price of around 10p per kWh.
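As a back-of-the-envelope check of the UK figures quoted above (an illustration only, not an official tariff calculation), the transmission charge works out to roughly 2% of the delivered domestic price:

```python
# Share of transmission cost in a UK domestic bill, using the
# illustrative figures quoted above (pence per kWh).
transmission_cost_p_per_kwh = 0.2
delivered_price_p_per_kwh = 10.0

share = transmission_cost_p_per_kwh / delivered_price_p_per_kwh
print(f"Transmission is about {share:.0%} of the delivered price")
```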
One research estimate put capital expenditure in the electric power T&D equipment market at $128.9 billion in 2011.
Merchant transmission is an arrangement where a third party constructs and operates electric transmission lines through the franchise area of an unrelated incumbent utility.
Operating merchant transmission projects in the United States include the Cross Sound Cable from Shoreham, New York to New Haven, Connecticut, Neptune RTS Transmission Line from Sayreville, New Jersey to New Bridge, New York, and Path 15 in California. Additional projects are in development or have been proposed throughout the United States, including the Lake Erie Connector, an underwater transmission line proposed by ITC Holdings Corp., connecting Ontario to load serving entities in the PJM Interconnection region.
There is only one unregulated or market interconnector in Australia: Basslink between Tasmania and Victoria. Two DC links originally implemented as market interconnectors, Directlink and Murraylink, have been converted to regulated interconnectors.
A major barrier to wider adoption of merchant transmission is the difficulty in identifying who benefits from the facility so that the beneficiaries will pay the toll. It is also difficult for a merchant transmission line to compete when alternative transmission lines are subsidized by incumbent utility businesses with a monopolized and regulated rate base. In the United States, the FERC's Order 1000, issued in 2011, attempts to reduce barriers to third-party investment and creation of merchant transmission lines where a public policy need is found.
Several large studies, including one in the United States, have failed to find any link between living near power lines and developing any illness or disease, such as cancer. A 1997 study found that no matter how close one lived to a power line or substation, there was no increased risk of cancer or illness.
The mainstream scientific evidence suggests that the low-power, low-frequency electromagnetic radiation associated with household currents and high-voltage transmission lines does not constitute a short- or long-term health hazard. Some studies, however, have found statistical correlations between various diseases and living or working near power lines. No adverse health effects have been substantiated for people not living close to powerlines.
The New York State Public Service Commission conducted a study, documented in "Opinion No. 78-13" (issued June 19, 1978), to evaluate potential health effects of electric fields. The study's case number is too old to be listed in the commission's online database, DMM, so the original study can be difficult to find. The study adopted the electric field strength measured at the edge of an existing (but newly built) right-of-way on a 765 kV transmission line from New York to Canada, 1.6 kV/m, as the interim standard maximum electric field at the edge of any new transmission line right-of-way built in New York State after issuance of the order. The opinion also limited the voltage of all new transmission lines built in New York to 345 kV. On September 11, 1990, after a similar study of magnetic field strengths, the NYSPSC issued its "Interim Policy Statement on Magnetic Fields". This study established an interim magnetic field standard of 200 mG at the edge of the right-of-way, using the winter-normal conductor rating. This later document can also be difficult to find in the NYSPSC's online database, since it predates the database system. For comparison with everyday items, a hair dryer or electric blanket produces a 100 mG – 500 mG magnetic field, and an electric razor can produce an electric field of 2.6 kV/m. Whereas electric fields can be shielded, magnetic fields cannot; they are usually minimized by optimizing the location of each phase of a circuit in cross-section.
When a new transmission line is proposed, within the application to the applicable regulatory body (usually a public utility commission), there is often an analysis of electric and magnetic field levels at the edge of rights-of-way. These analyses are performed by a utility or by an electrical engineering consultant using modelling software. At least one state public utility commission has access to software developed by an engineer or engineers at the Bonneville Power Administration to analyze electric and magnetic fields at edge of rights-of-way for proposed transmission lines. Often, public utility commissions will not comment on any health impacts due to electric and magnetic fields and will refer information seekers to the state's affiliated department of health.
There are established biological effects for acute "high" level exposure to magnetic fields well above 100 µT (1 G) (1,000 mG). In a residential setting, there is "limited evidence of carcinogenicity in humans and less than sufficient evidence for carcinogenicity in experimental animals", in particular, childhood leukemia, "associated with" average exposure to residential power-frequency magnetic field above 0.3 µT (3 mG) to 0.4 µT (4 mG). These levels exceed average residential power-frequency magnetic fields in homes, which are about 0.07 µT (0.7 mG) in Europe and 0.11 µT (1.1 mG) in North America.
The Earth's natural geomagnetic field strength varies over the surface of the planet between 0.035 mT and 0.07 mT (35 µT - 70 µT or 350 mG - 700 mG) while the International Standard for the continuous exposure limit is set at 40 mT (400,000 mG or 400 G) for the general public.
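The mix of units in the figures above (mT, µT, mG, G) invites conversion errors. A minimal sketch, using only the standard relations 1 G = 10⁻⁴ T (so 1 µT = 10 mG and 1 mT = 10 G), reproduces the conversions quoted in this section:

```python
# Unit conversions for magnetic flux density between tesla- and
# gauss-based units: 1 G = 1e-4 T, hence 1 uT = 10 mG and 1 mT = 10 G.
def microtesla_to_milligauss(ut):
    return ut * 10.0  # 1 uT = 10 mG

def millitesla_to_gauss(mt):
    return mt * 10.0  # 1 mT = 10 G

# Figures quoted above:
print(microtesla_to_milligauss(0.4))  # residential threshold, 0.4 uT -> 4 mG
print(microtesla_to_milligauss(70))   # upper geomagnetic field, 70 uT -> 700 mG
print(millitesla_to_gauss(40))        # continuous exposure limit, 40 mT -> 400 G
```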
Tree growth regulators and herbicides may be used in transmission line rights-of-way, and these may have health effects.
The Federal Energy Regulatory Commission (FERC) is the primary regulatory agency of electric power transmission and wholesale electricity sales within the United States. It was originally established by Congress in 1920 as the Federal Power Commission and has since undergone multiple name and responsibility modifications. That which is not regulated by FERC, primarily electric power distribution and the retail sale of power, is under the jurisdiction of state authority.
Two of the more notable U.S. energy policies impacting electricity transmission are Order No. 888 and the Energy Policy Act of 2005.
Order No. 888 adopted by FERC on 24 April 1996, was “designed to remove impediments to competition in the wholesale bulk power marketplace and to bring more efficient, lower cost power to the Nation’s electricity consumers. The legal and policy cornerstone of these rules is to remedy undue discrimination in access to the monopoly owned transmission wires that control whether and to whom electricity can be transported in interstate commerce.” Order No. 888 required all public utilities that own, control, or operate facilities used for transmitting electric energy in interstate commerce, to have open access non-discriminatory transmission tariffs. These tariffs allow any electricity generator to utilize the already existing power lines for the transmission of the power that they generate. Order No. 888 also permits public utilities to recover the costs associated with providing their power lines as an open access service.
The Energy Policy Act of 2005 (EPAct), signed into law on 8 August 2005, further expanded federal authority over power transmission. EPAct gave FERC significant new responsibilities, including but not limited to the enforcement of electric transmission reliability standards and the establishment of rate incentives to encourage investment in electric transmission.
Historically, local governments have exercised authority over the grid and have significant disincentives to encourage actions that would benefit states other than their own. Localities with cheap electricity have a disincentive to encourage making interstate commerce in electricity trading easier, since other regions will be able to compete for local energy and drive up rates. For example, some regulators in Maine do not wish to address congestion problems because the congestion serves to keep Maine rates low. Further, vocal local constituencies can block or slow permitting by pointing to visual impact, environmental, and perceived health concerns. In the US, generation is growing four times faster than transmission, but big transmission upgrades require the coordination of multiple states, a multitude of interlocking permits, and cooperation between a significant portion of the 500 companies that own the grid. From a policy perspective, the control of the grid is balkanized, and even former energy secretary Bill Richardson refers to it as a "third world grid". There have been efforts in the EU and US to confront the problem. The US national security interest in significantly growing transmission capacity drove passage of the 2005 energy act giving the Department of Energy the authority to approve transmission if states refuse to act. However, soon after the Department of Energy used its power to designate two National Interest Electric Transmission Corridors, 14 senators signed a letter stating the DOE was being too aggressive.
In some countries where electric locomotives or electric multiple units run on low-frequency AC power, there are separate single-phase traction power networks operated by the railways. Prime examples are countries in Europe (including Austria, Germany and Switzerland) which utilize the older AC technology based on 16⅔ Hz (Norway and Sweden also use this frequency but use conversion from the 50 Hz public supply; Sweden has a 16⅔ Hz traction grid, but only for part of the system).
High-temperature superconductors (HTS) promise to revolutionize power distribution by providing lossless transmission of electrical power. The development of superconductors with transition temperatures higher than the boiling point of liquid nitrogen has made the concept of superconducting power lines commercially feasible, at least for high-load applications. It has been estimated that the waste would be halved using this method, since the necessary refrigeration equipment would consume about half the power saved by the elimination of the majority of resistive losses. Some companies such as Consolidated Edison and American Superconductor have already begun commercial production of such systems. In one hypothetical future system called a SuperGrid, the cost of cooling would be eliminated by coupling the transmission line with a liquid hydrogen pipeline.
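The "waste would be halved" estimate above can be sketched numerically. The loss figure below is hypothetical (actual resistive losses and refrigeration loads vary by installation); the only assumption carried over from the text is that cooling consumes about half of the power saved:

```python
# Hypothetical illustration of the HTS savings estimate quoted above:
# a superconducting cable eliminates resistive losses, but the
# refrigeration plant consumes roughly half of the power saved,
# so the net waste is only about halved.
resistive_loss_mw = 10.0                     # assumed loss of a conventional line
refrigeration_mw = 0.5 * resistive_loss_mw   # cooling uses ~half the saved power

net_saving_mw = resistive_loss_mw - refrigeration_mw
print(f"Net saving: {net_saving_mw} MW of {resistive_loss_mw} MW "
      f"({net_saving_mw / resistive_loss_mw:.0%} of the original loss)")
```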
Superconducting cables are particularly suited to high load density areas such as the business district of large cities, where purchase of an easement for cables would be very costly.
Single-wire earth return (SWER) or single wire ground return is a single-wire transmission line for supplying single-phase electrical power for an electrical grid to remote areas at low cost. It is principally used for rural electrification, but also finds use for larger isolated loads such as water pumps. Single wire earth return is also used for HVDC over submarine power cables.
Both Nikola Tesla and Hidetsugu Yagi attempted to devise systems for large scale wireless power transmission in the late 1800s and early 1900s, with no commercial success.
In November 2009, LaserMotive won the NASA 2009 Power Beaming Challenge by powering a cable climber 1 km vertically using a ground-based laser transmitter. The system produced up to 1 kW of power at the receiver end. In August 2010, NASA contracted with private companies to pursue the design of laser power beaming systems to power low earth orbit satellites and to launch rockets using laser power beams.
Wireless power transmission has been studied for transmission of power from solar power satellites to the earth. A high power array of microwave or laser transmitters would beam power to a rectenna. Major engineering and economic challenges face any solar power satellite project.
The Federal government of the United States admits that the power grid is susceptible to cyber-warfare. The United States Department of Homeland Security works with industry to identify vulnerabilities and to help industry enhance the security of control system networks. The federal government is also working to ensure that security is built in as the U.S. develops the next generation of 'smart grid' networks.
In June 2019, Russia conceded that it is "possible" its electrical grid is under cyber-attack by the United States. "The New York Times" reported that American hackers from the United States Cyber Command had planted malware potentially capable of disrupting the Russian electrical grid.
Wenceslaus IV of Bohemia
Wenceslaus (also "Wenceslas", nicknamed "der Faule" ("the Idle"); 26 February 1361 – 16 August 1419) was, by inheritance, King of Bohemia (as "Wenceslaus IV") from 1363 and, by election, German King (formally King of the Romans) from 1376. He was the third Bohemian and fourth German monarch of the Luxembourg dynasty. Wenceslaus was deposed in 1400 as King of the Romans, but continued to rule as Bohemian king until his death.
Wenceslaus was born in the Imperial city of Nuremberg, the son of Emperor Charles IV by his third wife Anna von Schweidnitz, a scion of the Silesian Piasts, and baptized at St. Sebaldus Church. He was raised by the Prague Archbishops Arnošt of Pardubice and Jan Očko of Vlašim. His father had the two-year-old crowned King of Bohemia in June 1363 and in 1373 also obtained for him the Electoral Margraviate of Brandenburg. When on 10 June 1376 Charles IV asserted Wenceslaus' election as King of the Romans by the prince-electors, two of seven votes, those of Brandenburg and Bohemia, were held by the emperor and his son themselves. Wenceslaus was crowned at Aix-la-Chapelle on 6 July.
In order to secure the election of his son, Charles IV revoked the privileges of many Imperial Cities that he had earlier granted, and mortgaged them to various nobles. The cities, however, were not powerless, and as executors of the public peace, they had developed into a potent military force. Moreover, as Charles IV had organised the cities into leagues, he had made it possible for them to cooperate in large-scale endeavors. Indeed, on 4 July 1376, fourteen Swabian cities bound together into the independent Swabian League of Cities to defend their rights against the newly elected King, attacking the lands of Eberhard II, Count of Württemberg. The city league soon attracted other members and until 1389 acted as an autonomous state within the Empire.
Wenceslaus took some part in government during his father's lifetime, and on Charles' death in 1378, he inherited the Crown of Bohemia and as Emperor-elect assumed the government of the Holy Roman Empire. In the cathedral of Monza there is preserved a series of reliefs depicting the coronations of the kings of Italy with the Iron Crown of Lombardy. The seventh of these depicts Wenceslaus being crowned in the presence of six electors, he himself being the seventh. The depiction is probably not accurate and was likely made solely to reinforce the claims of the cathedral on the custody of the Iron Crown.
In 1387 a quarrel between Frederick, Duke of Bavaria, and the cities of the Swabian League allied with the Archbishop of Salzburg gave the signal for a general war in Swabia, in which the cities, weakened by their isolation, mutual jealousies and internal conflicts, were defeated by the forces of Eberhard II, Count of Württemberg, at Döffingen, near Grafenau, on 24 August 1388. The cities were taken severally and devastated. Most of them quietly acquiesced when King Wenceslaus proclaimed an ambivalent arrangement at Cheb ("Eger") in 1389 that prohibited all leagues between cities while confirming their political autonomy. This settlement provided a modicum of stability for the next several decades; however, the cities ceased to serve as a basis of central Imperial authority.
During his long reign, Wenceslaus held a tenuous grip on power at best, as he came into repeated conflicts with the Bohemian nobility led by the House of Rosenberg. On two occasions he was even imprisoned for lengthy spells by rebellious nobles.
But the greatest liability for Wenceslaus proved to be his own family. Charles IV had divided his holdings among his sons and other relatives. Although Wenceslaus upon his father's death retained Bohemia, his younger half-brother Sigismund inherited Brandenburg, while John received the newly established Duchy of Görlitz in Upper Lusatia. The March of Moravia was divided between his cousins Jobst and Procopius, and his uncle Wenceslaus I had already been made Duke of Luxembourg. Hence the young king was left without the resources his father had enjoyed, although he inherited the duchy of Luxembourg from his uncle in 1383. In 1386, Sigismund became king of Hungary and became involved in affairs further east.
Wenceslaus also faced serious opposition from the Bohemian nobles and even from his chancellor, the Prague archbishop Jan of Jenštejn. In a conflict surrounding the investiture of the abbot of Kladruby, the torture and murder of the archbishop's vicar-general John of Nepomuk by royal officials in 1393 sparked a noble rebellion. In 1394 Wenceslaus' cousin Jobst of Moravia was named regent, while Wenceslaus was arrested at Králův Dvůr. King Sigismund of Hungary arranged a truce in 1396, and for his efforts he was recognized as heir to Wenceslaus.
In the Papal Schism, Wenceslaus had supported the Roman Pope Urban VI. As Bohemian king he sought to protect the religious reformer Jan Hus and his followers against the demands of the Roman Catholic Church for their suppression as heretics. This caused many Germans to withdraw from the University of Prague, and set up their own university at Leipzig.
He then met Charles VI of France at Reims, where the two monarchs decided to persuade the rival popes, now Benedict XIII and Boniface IX, to resign, and to end the papal schisms by the election of a new pontiff. Many of the princes were angry at this abandonment of Boniface by Wenceslaus, who had also aroused much indignation by his long absence from Germany and by selling the title of duke of Milan to Gian Galeazzo Visconti.
Hus was eventually executed in Konstanz in 1415, and the rest of Wenceslaus' reign in Bohemia featured precursors of the Hussite Wars that would follow his death, including the First Defenestration of Prague in July 1419.
In view of his troubles in Bohemia, Wenceslaus did not seek a coronation ceremony as Holy Roman Emperor. This did little to endear him to the pope. He also was long absent from the German lands. Consequently, he faced anger at the "Reichstag" diets of Nuremberg (1397) and Frankfurt (1398). The four Rhenish electors, Count Palatine Rupert III and the Archbishops of Mainz, Cologne and Trier, accused him of failing to maintain the public peace or to resolve the Schism. They demanded that Wenceslaus appear before them to answer to the charges in June 1400. Wenceslaus demurred, in large part because of renewed hostilities in Bohemia. When he failed to appear, the electors meeting at Lahneck Castle declared him deposed on 20 August 1400 on account of "futility, idleness, negligence and ignobility". The next day they chose Rupert as their king at Rhens. Although Wenceslaus refused to acknowledge this successor's decade-long reign, he made no move against Rupert.
On 29 June 1402 Wenceslaus was captured by Sigismund, who at first intended to escort him to Rome to have him crowned emperor, but Rupert heard of this plan and tried to prevent the passage to Italy, so that Sigismund had Wenceslaus imprisoned, at first in Schaumberg, and from 16 August in Vienna, in the charge of William, Duke of Austria.
On 20 November, Wenceslaus was forced to sign his renunciation of all his powers to Sigismund and the Dukes of Austria. In exchange, the conditions of his imprisonment were relaxed.
In early 1403, Rupert made diplomatic overtures to Sigismund, attempting to get him to forgo his attempt to secure the imperial crown. But Sigismund invaded Bohemia with Hungarian forces, looting and imposing heavy taxes, and persecuting the supporters of Wenceslaus. He also plundered the royal treasury to pay for his military campaigns against the supporters of Rupert and of Jobst of Moravia. An armistice between Sigismund and Jobst was agreed to be in effect from 14 April until 20 May. This gave Sigismund's opponents time to prepare, and after the end of the armistice, Sigismund could make no further gains and retreated from Bohemia, reaching Bratislava on 24 July.
On 1 October 1403, Pope Boniface IX finally acknowledged the deposition of Wenceslaus and the election of Rupert as King of the Romans. As a coronation of Wenceslaus was now no longer a possibility, and while he was nominally still prisoner in Vienna, he was no longer under strict guard, and he managed to escape on 11 November.
He crossed the Danube and was escorted by John of Liechtenstein via Mikulov back to Bohemia, meeting his supporters in Kutná Hora before moving on Prague, which he entered on Christmas.
Among the charges raised by Rupert as the basis for his predecessor's deposition was the Papal Schism. King Rupert called the Council of Pisa in 1409, attended by defectors from both papal parties. They elected Antipope Alexander V, worsening the situation because he was not acknowledged by his two rivals, and from 1409 to 1417 there were three popes.
After the death of Rupert in 1410, his succession at first proved difficult, as both Wenceslaus' cousin Jobst of Moravia and Wenceslaus' brother Sigismund of Hungary were elected King of the Romans. Wenceslaus himself had never recognized his deposition and hence still claimed the kingship. Jobst died in 1411, and Wenceslaus agreed to give up the crown, so long as he could keep Bohemia. This settled the issue, and after 1411 Sigismund reigned as king and later also became Holy Roman Emperor.
The bishops and secular leaders, tired of the Great Schism, supported Sigismund when he called the Council of Constance in 1414. The goal of the council was to reform the church in head and members. What made it work was the translation of supreme authority from the popes to the council. In 1417, the council deposed all three popes and elected a new one, maintaining all the while that the council, and not the pope, was the supreme head of the church. By resolving the schism, Sigismund restored the honour of the imperial title and made himself the most influential monarch in the west.
Wenceslaus was married twice, first to Joanna of Bavaria, a scion of the Wittelsbach dynasty, on 29 September 1370. Following her death on 31 December 1386 (according to an unproven legend "mangled by one of Wenceslaus' beloved deer-hounds"), he married her first cousin once removed, Sofia of Bavaria, on 2 May 1389. He had no children by either wife.
Wenceslaus was described as a man of great knowledge and is known for the Wenceslas Bible, a richly illuminated manuscript he had drawn up between 1390 and 1400. However, his rule remained uncertain, varying between idleness and cruel measures as in the case of John of Nepomuk. Unlike his father, Wenceslaus relied on favouritism, which made him abhorrent to many nobles and led to increasing isolation. Moreover, he probably suffered from alcoholism, which was brought to light in 1398 when he was unable to accept an invitation by King Charles VI of France for a reception at Reims due to his drunkenness.
Wenceslaus died in 1419 of a heart attack during a hunt in the woods surrounding his castle Nový Hrad at Kunratice (today a part of Prague), leaving the country in a deep political crisis. His death was followed by almost two decades of conflict called the Hussite Wars, which were centred on greater calls for religious reform by Jan Hus and spurred by popular outrage provoked by his martyrdom.
Michael Bloomberg
Michael Rubens Bloomberg (born February 14, 1942) is an American businessman, politician, philanthropist, and author. He is the majority owner and co-founder of Bloomberg L.P. He was the mayor of New York City from 2002 to 2013, and was a candidate in the 2020 Democratic presidential primaries.
Bloomberg grew up in Medford, Massachusetts and graduated from Johns Hopkins University and Harvard Business School. He began his career at the securities brokerage Salomon Brothers before forming his own company in 1981. That company, Bloomberg L.P., is a financial information, software and media firm that is known for its Bloomberg Terminal. Bloomberg spent the next twenty years as its chairman and CEO. In 2019, "Forbes" ranked him as the ninth-richest person in the world, with an estimated net worth of $55.5 billion. Since signing The Giving Pledge, Bloomberg has given away $8.2 billion.
Bloomberg was elected the 108th mayor of New York City. First elected in 2001, he held office for three consecutive terms, winning re-election in 2005 and in 2009. Pursuing socially liberal and fiscally moderate policies, Bloomberg developed a technocratic managerial style. After a brief stint as a full-time philanthropist, he re-assumed the position of CEO at Bloomberg L.P. by the end of 2014.
As mayor of New York, Bloomberg established public charter schools, rebuilt urban infrastructure, and supported gun control, public health initiatives, and environmental protections. He also led a rezoning of large areas of New York City, which facilitated massive and widespread new commercial and residential construction after the September 11 attacks. Bloomberg is considered to have had far-reaching influence on the politics, business sector, and culture of New York City during his three terms as mayor. He has also faced significant criticism for his expansion of the city's stop and frisk program.
In November 2019, Bloomberg officially launched his campaign for the Democratic nomination for president of the United States in the 2020 election. He ended his campaign in March 2020, after having won only 61 delegates. Bloomberg self-funded $935 million on the primary campaign, setting the record for the most expensive U.S. presidential primary campaign.
Bloomberg was born at St. Elizabeth's Hospital, in Brighton, a neighborhood of Boston, Massachusetts, on February 14, 1942, to William Henry Bloomberg (1906–1963), a bookkeeper for a dairy company, and Charlotte (née Rubens) Bloomberg (1909–2011). The Bloomberg Center at the Harvard Business School was named in William Henry's honor. His family is Jewish, and he is a member of Temple Emanu-El in Manhattan. Bloomberg's paternal grandfather, Alexander "Elick" Bloomberg, was a Polish Jew. Bloomberg's maternal grandfather, Max Rubens, was a Lithuanian Jewish immigrant from present-day Belarus. His maternal grandmother was born in New York to Lithuanian Jewish parents.
The family lived in Allston until Bloomberg was two years old, followed by Brookline, Massachusetts for two years, finally settling in the Boston suburb of Medford, Massachusetts, where he lived until after he graduated from college.
Bloomberg is an Eagle Scout and he graduated from Medford High School in 1960. He went on to attend Johns Hopkins University, where he joined the fraternity Phi Kappa Psi and constructed the school mascot's (the blue jay's) costume. He graduated in 1964 with a Bachelor of Science degree in electrical engineering. In 1966, he graduated from Harvard Business School with a Master of Business Administration.
Bloomberg is a member of Kappa Beta Phi. He wrote an autobiography, with help from Bloomberg News Editor-in-Chief Matthew Winkler, called "Bloomberg by Bloomberg".
In 1973, Bloomberg became a general partner at Salomon Brothers, a large Wall Street investment bank, where he headed equity trading and, later, systems development. In 1981, Salomon Brothers was bought by Phibro Corporation, and Bloomberg was laid off from the investment bank with a $10 million cash buyout of his partnership stake in the firm.
Using this money, Bloomberg, having designed in-house computerized financial systems for Salomon, set up a data services company named Innovative Market Systems (IMS) based on his belief that Wall Street would pay a premium for high-quality business information, delivered instantaneously on computer terminals in a variety of usable formats. The company sold customized computer terminals that delivered real-time market data, financial calculations and other analytics to Wall Street firms. The terminal, first called the Market Master terminal, was released to market in December 1982.
In 1986, the company renamed itself Bloomberg L.P. Over the years, ancillary products including Bloomberg News, Bloomberg Radio, Bloomberg Message, and Bloomberg Tradebook were launched. Bloomberg, L.P. had revenues of approximately $10 billion in 2018. As of 2019, the company has more than 325,000 terminal subscribers worldwide and employs 20,000 people in dozens of locations.
The culture of the company in the 1980s and 1990s has been compared to a fraternity, with employees bragging in the company's office about their sexual exploits. The company was sued four times by female employees for sexual harassment, including one incident in which a victim claimed to have been raped. For Bloomberg's 48th birthday, colleagues published a pamphlet of sayings attributed to him; several have subsequently been criticized as sexist or misogynistic.
When he left the position of CEO to pursue a political career as the mayor of New York City, Bloomberg was replaced by Lex Fenwick and later by Daniel L. Doctoroff, after his initial service as deputy mayor under Bloomberg. After completing his final term as the mayor of New York City, Bloomberg spent his first eight months out of office as a full-time philanthropist. In fall 2014, he announced that he would return to Bloomberg L.P. as CEO at the end of 2014, succeeding Doctoroff, who had led the company since February 2008. Bloomberg resigned as CEO of Bloomberg L.P. to run for president in 2019.
In March 2009, "Forbes" reported Bloomberg's wealth at $16 billion, a gain of $4.5 billion over the previous year, the world's biggest increase in wealth from 2008 to 2009. Bloomberg moved from 142nd to 17th in the "Forbes" list of the world's billionaires in only two years. In the 2019 "Forbes" list of the world's billionaires, he was the ninth-richest person; his net worth was estimated at $55.5 billion.
Bloomberg assumed office as the 108th mayor of New York City on January 1, 2002. He won re-election in 2005 and again in 2009. As mayor, he initially struggled with approval ratings as low as 24 percent; however, he subsequently developed and maintained high approval ratings. Bloomberg joined Rudy Giuliani and Fiorello La Guardia as re-elected Republican mayors in the mostly Democratic city.
Bloomberg stated that he wanted public education reform to be the legacy of his first term and addressing poverty to be the legacy of his second.
Bloomberg chose to apply a statistical, metrics-based management approach to city government, and granted departmental commissioners broad autonomy in their decision-making. Breaking with 190 years of tradition, he implemented what "New York Times" political reporter Adam Nagourney called a "bullpen" open office plan, similar to a Wall Street trading floor, in which dozens of aides and managerial staff are seated together in a large chamber. The design is intended to promote accountability and accessibility.
Bloomberg accepted a remuneration of $1 annually in lieu of the mayoral salary.
As mayor, Bloomberg turned the city's $6 billion budget deficit into a $3 billion surplus, largely by raising property taxes. Bloomberg increased city funding for the new development of affordable housing through a plan that created and preserved an estimated 160,000 affordable homes in the city. In 2003, he implemented a successful smoking ban in all indoor workplaces, including bars and restaurants, and many other cities and states followed suit. On December 5, 2006, New York City became the first city in the United States to ban trans-fat from all restaurants. This went into effect in July 2008 and has since been adopted in many other cities and countries. Bloomberg created bicycle lanes, required chain restaurants to post calorie counts, and pedestrianized much of Times Square. In 2011, Bloomberg launched the NYC Young Men's Initiative, a $127 million initiative to support programs and policies designed to address disparities between young Black and Latino men and their peers, and personally donated $30 million to the project. In 2010, Bloomberg supported the then-controversial Islamic complex near Ground Zero.
Bloomberg greatly expanded the New York City Police Department's stop and frisk program, with a sixfold increase in documented stops. The policy was challenged in U.S. Federal Court, which ruled that the city's implementation of the policy violated citizens' rights under the Fourth Amendment of the Constitution and encouraged racial profiling. Bloomberg's administration appealed the ruling; however, his successor, Mayor Bill de Blasio, dropped the appeal and allowed the ruling to take effect. After the September 11 attacks, with assistance from the Central Intelligence Agency, Bloomberg's administration oversaw a controversial program that surveilled Muslim communities on the basis of their religion, ethnicity, and language. The program was discontinued in 2014.
In a January 2014 Quinnipiac poll, 64 percent of voters called Bloomberg's 12 years as mayor "mainly a success."
In 2001, New York's Republican mayor, Rudy Giuliani, was ineligible for re-election due to the city's limit of two consecutive terms. Bloomberg, who had been a lifelong member of the Democratic Party, decided to run for mayor on the Republican ticket. Voting in the primary began on the morning of September 11, 2001. The primary was postponed later that day due to the September 11 attacks. In the rescheduled primary, Bloomberg defeated Herman Badillo, a former Democratic congressman, to become the Republican nominee. After a runoff, the Democratic nomination went to New York City Public Advocate Mark J. Green.
Bloomberg received Giuliani's endorsement to succeed him in the 2001 election. He also had a huge campaign spending advantage. Although New York City's campaign finance law restricts the amount of contributions that a candidate can accept, Bloomberg chose not to use public funds and therefore his campaign was not subject to these restrictions. He spent $73 million of his own money on his campaign, outspending Green five to one. One of the major themes of his campaign was that, with the city's economy suffering from the effects of the World Trade Center attacks, it needed a mayor with business experience.
In addition to running on the Republican line, Bloomberg ran on the ticket of the controversial Independence Party, in which "Social Therapy" leaders Fred Newman and Lenora Fulani exerted strong influence. Bloomberg's votes on that line exceeded his margin of victory over Green. (Under New York's fusion rules, a candidate can run on more than one party's line and combine all the votes received.) Another factor was the vote in Staten Island, which has traditionally been friendlier to Republicans than the rest of the city. Bloomberg received 75 percent of the vote in Staten Island. Overall, he won 50 percent to 48 percent.
In the wake of the September 11 attacks, Bloomberg's administration made a successful bid to host the 2004 Republican National Convention. The convention drew thousands of protesters, among them New Yorkers who despised Bush and the Bush Administration's pursuit of the Iraq war.
Bloomberg was re-elected mayor in November 2005 by a margin of 20 percent, the widest margin ever for a Republican mayor of New York City. He spent almost $78 million on his campaign, exceeding the record of $74 million he spent on the previous election. In late 2004 or early 2005, Bloomberg gave the Independence Party of New York $250,000 to fund a phone bank seeking to recruit volunteers for his re-election campaign.
Former Bronx Borough President Fernando Ferrer won the Democratic nomination to oppose Bloomberg in the general election. Thomas Ognibene sought to run against Bloomberg in the Republican Party's primary election. The Bloomberg campaign successfully challenged the signatures Ognibene submitted to the Board of Elections to prevent Ognibene from appearing on ballots for the Republican primary. Instead, Ognibene ran on only the Conservative Party ticket. Ognibene accused Bloomberg of betraying Republican Party ideals, a feeling echoed by others.
Bloomberg opposed the confirmation of John Roberts as Chief Justice of the United States. Bloomberg is a staunch supporter of abortion rights and did not believe that Roberts was committed to maintaining "Roe v. Wade". In addition to Republican support, Bloomberg obtained the endorsements of several prominent Democrats: former Democratic Mayor Ed Koch; former Democratic governor Hugh Carey; former Democratic City Council Speaker Peter Vallone, and his son, Councilman Peter Vallone Jr.; former Democratic Congressman Floyd Flake (who had previously endorsed Bloomberg in 2001), and Brooklyn Borough President Marty Markowitz.
On October 2, 2008, Bloomberg announced he would seek to extend the city's term limits law and run for a third mayoral term in 2009, arguing that a leader with his experience was needed following the financial crisis of 2007–08. "Handling this financial crisis while strengthening essential services ... is a challenge I want to take on," Bloomberg said at a news conference. "So should the City Council vote to amend term limits, I plan to ask New Yorkers to look at my record of independent leadership and then decide if I have earned another term."
Ronald Lauder, who campaigned for New York City's term limits in 1993 and spent over $4 million of his own money to limit mayors to a maximum of eight years in office, sided with Bloomberg and agreed not to mount a legal challenge. In exchange, Bloomberg promised him a seat on an influential city board.
Some people and organizations objected and NYPIRG filed a complaint with the City Conflict of Interest Board. On October 23, 2008, the City Council voted 29–22 in favor of extending the term limit to three consecutive four-year terms. After two days of public hearings, Bloomberg signed the bill into law on November 3.
Bloomberg's bid for a third term generated some controversy. Civil libertarians such as former New York Civil Liberties Union Director Norman Siegel and New York Civil Rights Coalition Executive Director Michael Meyers joined with local politicians to protest the term-limit extension as undermining the democratic process.
Bloomberg's opponent was Democratic and Working Families Party nominee Bill Thompson, who had been New York City Comptroller for the previous eight years and, before that, president of the New York City Board of Education. Bloomberg defeated Thompson by a vote of 51 percent to 46 percent. Bloomberg spent $109.2 million on his 2009 campaign, outspending Thompson by a margin of more than 11 to one.
After the release of Independence Party campaign filings in January 2010, it was reported that Bloomberg had made two $600,000 contributions from his personal account to the Independence Party on October 30 and November 2, 2009. The Independence Party then paid $750,000 of that money to Republican Party political operative John Haggerty Jr.
This prompted an investigation beginning in February 2010 by the office of New York County District Attorney Cyrus Vance Jr. into possible improprieties. The Independence Party later questioned how Haggerty spent the money, which was to go to poll-watchers. Former New York State Senator Martin Connor contended that because the Bloomberg donations were made to an Independence Party housekeeping account rather than to an account meant for current campaigns, this was a violation of campaign finance laws. Haggerty also spent money from a separate $200,000 donation from Bloomberg on office space.
On September 13, 2013, Bloomberg announced that he would not endorse any of the candidates to succeed him. On his radio show, he stated, "I don't want to do anything that complicates it for the next mayor. And that's one of the reasons I've decided I'm just not going to make an endorsement in the race." He added, "I want to make sure that person is ready to succeed, to take what we've done and build on that."
Bloomberg praised "The New York Times" for its endorsement of Christine Quinn and Joe Lhota as their favorite candidates in the Democratic and Republican primaries, respectively. Quinn came in third in the Democratic primary and Lhota won the Republican primary. Bloomberg criticized Democratic mayoral candidate Bill de Blasio's campaign methods, which he initially called "racist"; Bloomberg later downplayed and partially retracted those remarks.
On January 1, 2014, de Blasio became New York City's new mayor, succeeding Bloomberg.
Bloomberg was frequently mentioned as a possible centrist candidate for the presidential elections in 2008 and 2012, as well as for governor of New York in 2010 and vice-president in 2008. He ultimately declined to seek any of these offices.
In the immediate aftermath of Hurricane Sandy in November 2012, Bloomberg penned an op-ed officially endorsing Barack Obama for president, citing Obama's policies on climate change.
On January 23, 2016, it was reported that Bloomberg was again considering a presidential run, as an independent candidate in the 2016 election, if Bernie Sanders got the Democratic party nomination. This was the first time he had officially confirmed he was considering a run. Bloomberg supporters believed that Bloomberg could run as a centrist and capture many voters who were dissatisfied with the likely Democratic and Republican nominees. However, on March 7, Bloomberg announced he would not be running for president.
In July 2016, Bloomberg delivered a speech at the 2016 Democratic National Convention in which he called Hillary Clinton "the right choice". Bloomberg warned of the dangers a Donald Trump presidency would pose. He said Trump "wants you to believe that we can solve our biggest problems by deporting Mexicans and shutting out Muslims. He wants you to believe that erecting trade barriers will bring back good jobs. He's wrong on both counts." Bloomberg also said Trump's economic plans "would make it harder for small businesses to compete" and would "erode our influence in the world". Trump responded to the speech by condemning Bloomberg in a series of tweets.
In June 2018, Bloomberg pledged $80 million to support Democratic congressional candidates in the 2018 election, with the goal of flipping control of the Republican-controlled House to Democrats. In a statement, Bloomberg said that Republican House leadership were "absolutely feckless" and had failed to govern responsibly. Bloomberg advisor Howard Wolfson was chosen to lead the effort, which was to target mainly suburban districts. By early October, Bloomberg had committed more than $100 million to returning the House and Senate to Democratic power, fueling speculation about a presidential run in 2020. On October 10, 2018, Bloomberg announced that he had returned to the Democratic Party.
On March 5, 2019, Bloomberg announced that he would not run for president in 2020. Instead, he encouraged the Democratic Party to "nominate a Democrat who will be in the strongest position to defeat Donald Trump". However, due to his dissatisfaction with the Democratic field, Bloomberg reconsidered. He officially launched his campaign for the 2020 Democratic nomination on November 24, 2019.
Bloomberg self-funded his campaign from his personal fortune, and did not accept campaign contributions.
Bloomberg's campaign suffered from his lackluster performance in two televised debates. When Bloomberg participated in his first presidential debate, Elizabeth Warren challenged him to release women from non-disclosure agreements relating to their allegations of sexual harassment at Bloomberg L.P. Two days later, Bloomberg announced that three women had made complaints concerning him, and added that he would release any of the three from their agreements if they requested it. Warren continued her attack in the second debate the next week. Others criticized Bloomberg for his wealth and campaign spending, as well as his former affiliation with the Republican Party.
As a late entrant to the race, Bloomberg skipped the first four state primaries and caucuses. He spent $676 million of his personal fortune on the primary campaign, breaking a record for the most money ever spent on a presidential primary campaign. His campaign blanketed the country with campaign advertisements on broadcast and cable television, the Internet, and radio, as well as direct mail. Bloomberg also spent heavily on campaign operations that grew to 200 field offices and more than 2,400 paid campaign staffers. His support in nationwide opinion polls hovered around 15 percent but stagnated or dropped before Super Tuesday. Bloomberg suspended his campaign on March 4, 2020, after a disappointing Super Tuesday in which he won only American Samoa. He subsequently endorsed former Vice President Joe Biden.
On March 1, a "60 Minutes" correspondent remarked that Bloomberg had spent twice what President Trump had raised and asked how much he would spend. Bloomberg replied, "I'm making an investment in this country. My investment is I'm going to remove President Trump from 1600 Pennsylvania Avenue or at least try as hard as I can."
Bloomberg was a lifelong Democrat until 2001, when he switched to the Republican Party to run for Mayor. He switched to an independent in 2007 and registered again as a Democrat in October 2018. In 2004, he endorsed the re-election of George W. Bush and spoke at the 2004 Republican National Convention. He endorsed Barack Obama's re-election in 2012, endorsed Hillary Clinton in the 2016 election, and spoke at the 2016 Democratic National Convention.
As Mayor of New York, Bloomberg supported government initiatives in public health and welfare. These included tobacco control efforts (including an increase in the legal age to purchase tobacco products, a ban on smoking in indoor workplaces, and an increase in the cigarette tax); the elimination of the use of artificial trans fats in restaurants; and bans on all flavored tobacco and e-cigarette products, including menthol flavors. Bloomberg also launched an unsuccessful effort to ban certain large (more than 16 fluid ounces) sugary sodas at restaurants and food service establishments in the city. These initiatives were supported by public health advocates but were criticized by some as "nanny state" policies.
Over his career, Bloomberg has "mingled support for progressive causes with more conservative positions on law enforcement, business regulation and school choice." Bloomberg supports gun-control measures, abortion rights, same-sex marriage, and a pathway to citizenship for illegal immigrants. He advocates for a public health insurance option that he has called "Medicare for all for people that are uncovered" rather than a universal single-payer healthcare system. He is concerned about climate change and has touted his mayoral efforts to reduce greenhouse gases. Bloomberg supported the Iraq War and opposed creating a timeline for withdrawing troops. Bloomberg has sometimes embraced the use of surveillance in efforts to deter crime and protect against terrorism.
During and after his tenure, he was a staunch supporter of stop-and-frisk. In November 2019, Bloomberg apologized for supporting it. He advocates reversing many of the Trump tax cuts. His own tax plan includes implementing a 5 percent surtax on incomes above $5 million a year and would raise federal revenue by $5 trillion over a decade. He opposes a wealth tax, saying that it would likely be found unconstitutional. He has also proposed more stringent financial regulations that include tougher oversight for big banks, a financial transactions tax, and stronger consumer protections.
Bloomberg has stated that running as a Democrat, not as an independent, is the only path he sees to defeating Donald Trump, saying: "In 2020, the great likelihood is that an independent would just split the anti-Trump vote and end up re-electing the President. That's a risk I refused to run in 2016 and we can't afford to run it now."
In August 2010, Bloomberg signed The Giving Pledge, whereby the wealthy pledge to give away at least half of their wealth. Since then, he has given away $8.2 billion.
According to a profile in "Fast Company", his Bloomberg Philanthropies foundation has five areas of focus: public health, the arts, government innovation, the environment, and education. According to the "Chronicle of Philanthropy", Bloomberg was the third-largest philanthropic donor in America in 2015. Through his Bloomberg Philanthropies Foundation, he has donated and/or pledged $240 million in 2005, $60 million in 2006, $47 million in 2007, $150 million in 2009, $332 million in 2010, $311 million in 2011, and $510 million in 2015.
2011 recipients included the Campaign for Tobacco-Free Kids; Centers for Disease Control and Prevention; Johns Hopkins Bloomberg School of Public Health; World Lung Foundation and the World Health Organization. According to "The New York Times", Bloomberg was an "anonymous donor" to the Carnegie Corporation from 2001 to 2010, with gifts ranging from $5 million to $20 million each year. The Carnegie Corporation distributed these contributions to hundreds of New York City organizations ranging from the Dance Theatre of Harlem to Gilda's Club, a non-profit organization that provides support to people and families living with cancer. He continues to support the arts through his foundation.
Bloomberg gave $254 million in 2009 to almost 1,400 nonprofit organizations, saying, "I am a big believer in giving it all away and have always said that the best financial planning ends with bouncing the check to the undertaker."
Bloomberg is an environmentalist and has advocated policy to fight climate change at least since he became the mayor of New York City. At the national level, Bloomberg has consistently pushed for transitioning the United States' energy mix from fossil fuels to clean energy. In July 2011, Bloomberg Philanthropies donated $50 million to Sierra Club's Beyond Coal campaign, allowing the campaign to expand its efforts to shut down coal-fired power plants from 15 states to 45 states. In 2015, Bloomberg announced an additional $30 million contribution to the Beyond Coal initiative, matched with another $30 million by other donors, to help secure the retirement of half of America's fleet of coal plants by 2017. In early June 2019, Bloomberg pledged $500 million to reduce climate impacts and shut remaining coal-fired power plants by 2030 via the new Beyond Carbon initiative.
Bloomberg Philanthropies awarded a $6 million grant to the Environmental Defense Fund in support of strict regulations on fracking in the 14 states with the heaviest natural gas production.
In 2013, Bloomberg and Bloomberg Philanthropies launched the Risky Business initiative with former Treasury Secretary Hank Paulson and hedge-fund billionaire Tom Steyer. The joint effort worked to convince the business community of the need for more sustainable energy and development policies, by quantifying and publicizing the economic risks the United States faces from the impact of climate change. In January 2015, Bloomberg led Bloomberg Philanthropies in a $48-million partnership with the Heising-Simons family to launch the Clean Energy Initiative. The initiative supports state-based solutions aimed at ensuring America has a clean, reliable, and affordable energy system.
Since 2010, Bloomberg has taken an increasingly global role on environmental issues. From 2010 to 2013, he served as the chairman of the C40 Cities Climate Leadership Group, a network of the world's biggest cities working together to reduce carbon emissions. During his tenure, Bloomberg worked with President Bill Clinton to merge C40 with the Clinton Climate Initiative, with the goal of amplifying their efforts in the global fight against climate change. He serves as the president of the board of C40 Cities. In January 2014, Bloomberg began a five-year commitment totaling $53 million through Bloomberg Philanthropies to the Vibrant Oceans Initiative. The initiative partners Bloomberg Philanthropies with Oceana, Rare, and Encourage Capital to help reform fisheries and increase sustainable fish populations worldwide. In 2018, Bloomberg joined Ray Dalio in announcing a commitment of $185 million towards protecting the oceans.
In 2014, United Nations Secretary General Ban Ki-moon appointed Bloomberg as his first Special Envoy for Cities and Climate Change to help the United Nations work with cities to prevent climate change. In September 2014, Bloomberg convened with Ban and global leaders at the UN Climate Summit to announce definite action to fight climate change in 2015. In 2018, Ban's successor António Guterres appointed Bloomberg as UN envoy for climate action. He resigned in November 2019, in the run-up to his presidential campaign.
In late 2014, Bloomberg, Ban Ki-moon, and global city networks ICLEI-Local Governments for Sustainability (ICLEI), C40 Cities Climate Leadership Group (C40) and United Cities and Local Governments (UCLG), with support from UN-Habitat, launched the Compact of Mayors, a global coalition of mayors and city officials pledging to reduce local greenhouse gas emissions, enhance climate resilience, and track their progress transparently. To date, over 250 cities representing more than 300 million people worldwide and 4.1 percent of the total global population, have committed to the Compact of Mayors, which was merged with the Covenant of Mayors in June 2016.
In 2015, Bloomberg and Paris mayor Anne Hidalgo created the Climate Summit for Local Leaders, which convened hundreds of city leaders from around the world at Paris City Hall to discuss fighting climate change. The Summit concluded with the presentation of the Paris Declaration, a pledge by leaders from assembled global cities to cut carbon emissions by 3.7 gigatons annually by 2030.
During the 2015 UN Climate Change Conference in Paris, Mark Carney, Governor of the Bank of England and chair of the Financial Stability Board, announced that Bloomberg would lead a new global task force designed to help industry and financial markets understand the growing risks of climate change.
Following President Donald Trump's announcement that the U.S. government would withdraw from the Paris climate accord, Bloomberg outlined a coalition of cities, states, universities and businesses that had come together to honor America's commitment under the agreement through 'America's Pledge.' Bloomberg offered up to $15 million to the UNFCCC, the UN body that assists countries with climate change efforts. About a month later, Bloomberg and California Governor Jerry Brown announced that the America's Pledge coalition would work to "quantify the actions taken by U.S. states, cities and business to drive down greenhouse gas emissions consistent with the goals of the Paris Agreement." In announcing the initiative, Bloomberg said "the American government may have pulled out of the Paris agreement, but American society remains committed to it." Two think tanks, the World Resources Institute and the Rocky Mountain Institute, were to work with America's Pledge to analyze the work cities, states and businesses do to meet the U.S. commitment to the Paris agreement.
In May 2019, Bloomberg announced a 2020 Midwestern Collegiate Climate Summit at Washington University in St. Louis, with the aim of bringing together leaders from Midwestern universities, local government and the private sector to reduce climate impacts in the region.
As of 2019, Bloomberg has given more than $3.3 billion to Johns Hopkins University, his alma mater, making him "the most generous living donor to any education institution in the United States." His first contribution, in 1965, had been $5. He made his first $1 million commitment to JHU in 1984, and subsequently became the first individual to exceed $1 billion in lifetime donations to a single U.S. institution of higher education.
Bloomberg's contributions to Johns Hopkins "fueled major improvements in the university's reputation and rankings, its competitiveness for faculty and students, and the appearance of its campus," and included construction of a children's hospital (the Charlotte R. Bloomberg Children's Center Building, named after Bloomberg's mother); a physics building, a school of public health (the Johns Hopkins Bloomberg School of Public Health), libraries, and biomedical research facilities, including the Institute for Cell Engineering, a stem-cell research institute within the School of Medicine, and the Malaria Research Institute within the School of Public Health. In 2013, Bloomberg committed $350 million to Johns Hopkins, five-sevenths of which were allocated to the Bloomberg Distinguished Professorships, endowing 50 Bloomberg Distinguished Professors (BDPs) whose interdisciplinary expertise crosses traditional academic disciplines. In 2016, on the School of Public Health's centennial, Bloomberg Philanthropies contributed $300 million to establish the Bloomberg American Health Initiative. Bloomberg also funded the launch of the Bloomberg–Kimmel Institute for Cancer Immunotherapy within the Johns Hopkins School of Medicine in East Baltimore, with a $50 million gift; an additional $50 million was given by philanthropist Sidney Kimmel, and $25 million by other donors. It will support cancer therapy research, technology and infrastructure development, and private sector partnerships. In 2016, Bloomberg joined Vice President Joe Biden for the institute's formal launch, embracing Biden's "cancer moonshot" initiative, which seeks to find a cure for cancer through national coordination of government and private sector resources. In 2018, Bloomberg contributed a further gift of $1.8 billion to Johns Hopkins, allowing the university to practice need-blind admission and meet the full financial need of admitted students.
In 2016, the Museum of Science, Boston announced a $50 million gift from Bloomberg. The donation marks Bloomberg's fourth gift to the museum, which he credits with sparking his intellectual curiosity as a patron and student during his youth in Medford, Massachusetts. The endowment supported the museum's education division, named the William and Charlotte Bloomberg Science Education Center in honor of Bloomberg's parents. It is the largest donation in the museum's 186-year history.
In 2015, Bloomberg donated $100 million to Cornell Tech, the applied sciences graduate school of Cornell University, to construct the first academic building, "The Bloomberg Center", on the school's Roosevelt Island campus.
In 1996, Bloomberg endowed the William Henry Bloomberg Professorship at Harvard University with a $3 million gift in honor of his father, who died in 1963, saying, "throughout his life, he recognized the importance of reaching out to the nonprofit sector to help better the welfare of the entire community."
In July 2011, Bloomberg launched a $24 million initiative to fund "Innovation Delivery Teams" in five cities. The teams are one of Bloomberg Philanthropies' key goals: advancing government innovation. In December 2011, Bloomberg Philanthropies launched a partnership with online ticket search engine SeatGeek to connect artists with new audiences. Called the Discover New York Arts Project, the project includes organizations HERE, New York Theatre Workshop, and the Kaufman Center.
In 2016, Bloomberg gave Harvard $32 million to create the Bloomberg Harvard City Leadership Initiative within Harvard Kennedy School's Ash Center for Democratic Governance and Innovation; the initiative provides training to mayors and their aides on innovative municipal leadership and challenges facing cities.
Bloomberg has been a longtime donor to global tobacco control efforts. Bloomberg has donated close to $1 billion to the World Health Organization (WHO) to promote anti-smoking efforts, including $125 million in 2006, $250 million in 2008, and a further $360 million, making Bloomberg Philanthropies the developing world's biggest funder of tobacco-control initiatives. In 2013, it was reported that Bloomberg had donated $109.24 million through 556 grants across 61 countries to campaigns against tobacco. Bloomberg's contributions are aimed at "getting countries to monitor tobacco use, introduce strong tobacco-control laws, and create mass media campaigns to educate the public about the dangers of tobacco use."
Bloomberg is the co-founder of Everytown for Gun Safety (formerly Mayors Against Illegal Guns), a gun control advocacy group.
In August 2016, the World Health Organization appointed Bloomberg as its Global Ambassador for Noncommunicable Diseases. In this role, Bloomberg will mobilize private sector and political leaders to help the WHO reduce deaths from preventable diseases, traffic accidents, tobacco, obesity, and alcohol. WHO Director-General Margaret Chan cited Bloomberg's ongoing support for WHO anti-smoking, drowning prevention, and road safety programs in her announcement of his new role.
In 2017, Bloomberg donated $75 million for The Shed, a new arts and cultural center in Hudson Yards, Manhattan.
Bloomberg also endowed his hometown synagogue, Temple Shalom, which was renamed for his parents as the William and Charlotte Bloomberg Jewish Community Center of Medford.
Bloomberg hosted the Global Business Forum in September 2017, during the annual meeting of the United Nations General Assembly; the gathering featured international CEOs, heads of state, and other prominent speakers.
In 1975, Bloomberg married Susan Elizabeth Barbara Brown, a British national from Yorkshire, United Kingdom. They have two daughters: Emma (born c. 1979) and Georgina (born 1983), who were featured on "Born Rich", a 2003 documentary film about the children of the extremely wealthy. Bloomberg divorced Brown in 1993, but he has said she remains his "best friend." Since 2000, Bloomberg has lived with former New York state banking superintendent Diana Taylor.
Bloomberg's younger sister, Marjorie Tiven, has been Commissioner of the New York City Commission for the United Nations, Consular Corps and Protocol, since February 2002. His daughter Emma is married to Christopher Frissora, son of multimillionaire businessman Mark Frissora.
Although he attended Hebrew school, had a bar mitzvah, and his family kept a kosher kitchen, Bloomberg today is relatively secular, attending synagogue mainly during the High Holidays and a Passover Seder with his sister, Marjorie Tiven. Neither of his daughters had bat mitzvahs.
Throughout his business career, Bloomberg has made numerous statements which have been considered by some to be insulting, derogatory, sexist or misogynistic. When working on Wall Street in the 1960s and 1970s, Bloomberg claimed in his 1997 autobiography, he had "a girlfriend in every city". On various occasions, Bloomberg allegedly commented "I'd do her", regarding certain women, some of whom were coworkers or employees. Bloomberg later said that by "do", he meant that he would have a personal relationship with the woman. Bloomberg's staff told the New York Times that he now regrets having made "disrespectful" remarks concerning women.
During his term as mayor, he lived at his own home on the Upper East Side of Manhattan instead of Gracie Mansion, the official mayoral residence. In 2013, he owned 13 properties in various countries around the world, including a $20 million Georgian mansion in Southampton, New York. In 2015, he acquired 4 Cheyne Walk, a historical property in Cheyne Walk, Chelsea, London, which once belonged to writer George Eliot. Bloomberg and his daughters own houses in Bermuda and stay there frequently.
Bloomberg stated that during his mayoralty, he rode the New York City Subway on a daily basis, particularly in the commute from his 79th Street home to his office at City Hall. An August 2007 story in "The New York Times" stated that he was often seen chauffeured by two New York Police Department-owned SUVs to an express train station to avoid having to change from the local to the express trains on the IRT Lexington Avenue Line. He supported the construction of the 7 Subway Extension and the Second Avenue Subway; in December 2013, Bloomberg took a ceremonial ride on a train to the new 34th Street station to celebrate a part of his legacy as mayor.
During his tenure as mayor, Bloomberg made cameos playing himself in the films "The Adjustment Bureau" and "New Year's Eve", as well as in episodes of "30 Rock", "Curb Your Enthusiasm", "The Good Wife", and two episodes of "Law & Order".
Bloomberg is a private pilot. He owns six airplanes: three Dassault Falcon 900s, a Beechcraft B300, a Pilatus PC-24, and a Cessna 182 Skylane. He also owns two helicopters, an AW109 and an Airbus helicopter, and as of 2012 was near the top of the waiting list for an AW609 tiltrotor aircraft. In his youth he was a licensed amateur radio operator, was proficient in Morse code, and built ham radios.
Bloomberg has received honorary degrees from Tufts University (2007), Bard College (2007), Rockefeller University (2007), the University of Pennsylvania (2008), Fordham University (2009), Williams College (2014), Harvard University (2014), the University of Michigan (2016), Villanova University (2017) and Washington University in St. Louis (2019). Bloomberg was the speaker for Princeton University's 2011 baccalaureate service.
Bloomberg has received the Yale School of Management's Award for Distinguished Leadership in Global Capital Markets (2003); Barnard College's Barnard Medal of Distinction (2008); the Robert Wood Johnson Foundation Leadership for Healthy Communities' Healthy Communities Leadership Award (2009); and the Jefferson Awards Foundation's U.S. Senator John Heinz Award for Greatest Public Service by an Elected or Appointed Official (2010). He was the inaugural laureate of the annual Genesis Prize for Jewish values in 2013, and donated the $1 million prize money to a global competition, the Genesis Generation Challenge, to identify young adults' big ideas to better the world.
Bloomberg was named the 39th most influential person in the world in the 2007 and 2008 Time 100. In 2010, "Vanity Fair" ranked him #7 in its "Vanity Fair 100" list of influential figures.
In 2014, Queen Elizabeth II appointed Bloomberg an Honorary Knight Commander of the Order of the British Empire for his "prodigious entrepreneurial and philanthropic endeavors, and the many ways in which they have benefited the United Kingdom and the U.K.-U.S. special relationship."
Bloomberg, with Matthew Winkler, wrote an autobiography, "Bloomberg by Bloomberg", published in 1997 by Wiley. A second edition was released in 2019, ahead of Bloomberg's presidential run. Bloomberg and former Sierra Club Executive Director Carl Pope co-authored "Climate of Hope: How Cities, Businesses, and Citizens Can Save the Planet" (2017), published by St. Martin's Press; the book appeared on the "New York Times" hardcover nonfiction best-seller list. Bloomberg has written a number of op-eds in the "New York Times" about various issues, including an op-ed supporting state and local efforts to fight climate change (2017); an op-ed about his donation of $1.8 billion in financial aid for college students and support for need-blind admission policies (2018); an op-ed supporting a ban on flavored e-cigarettes (2019); and an op-ed supporting policies to reduce economic inequality (2020). | https://en.wikipedia.org/wiki?curid=38828 |
Three-phase electric power
Three-phase electric power is a common method of alternating current electric power generation, transmission, and distribution. It is a type of polyphase system and is the most common method used by electrical grids worldwide to transfer power. It is also used to power large motors and other heavy loads.
A three-wire three-phase circuit is usually more economical than an equivalent two-wire single-phase circuit at the same line to ground voltage because it uses less conductor material to transmit a given amount of electrical power.
Polyphase power systems were independently invented by Galileo Ferraris, Mikhail Dolivo-Dobrovolsky, Jonas Wenström, John Hopkinson and Nikola Tesla in the late 1880s.
The conductors between a voltage source and a load are called lines, and the voltage between any two lines is called "line voltage". The voltage measured between any line and neutral is called "phase voltage". For example, for a 208/120 volt service, the line voltage is 208 volts, and the phase voltage is 120 volts.
In a symmetric three-phase power supply system, three conductors each carry an alternating current of the same frequency and voltage amplitude relative to a common reference but with a phase difference of one third of a cycle between each. The common reference is usually connected to ground and often to a current-carrying conductor called the neutral. Due to the phase difference, the voltage on any conductor reaches its peak at one third of a cycle after one of the other conductors and one third of a cycle before the remaining conductor. This phase delay gives constant power transfer to a balanced linear load. It also makes it possible to produce a rotating magnetic field in an electric motor and generate other phase arrangements using transformers (for instance, a two-phase system using a Scott-T transformer). The amplitude of the voltage difference between two phases is √3 (approximately 1.732) times the amplitude of the voltage of the individual phases.
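The constant-power property and the √3 line-to-line ratio can be checked numerically. The sketch below is illustrative only, using assumed per-unit values and an assumed 50 Hz frequency rather than anything from the text:

```python
import math

F = 50.0        # assumed supply frequency in Hz (illustrative)
V_PEAK = 1.0    # per-unit peak phase voltage
R_LOAD = 1.0    # per-unit balanced resistive load, one resistor per phase

def phase_voltages(t):
    """Instantaneous voltages of the three phases, spaced one third of a cycle apart."""
    w = 2 * math.pi * F
    return [V_PEAK * math.sin(w * t - k * 2 * math.pi / 3) for k in range(3)]

def total_power(t):
    """Instantaneous power delivered to a balanced resistive load."""
    return sum(v * v / R_LOAD for v in phase_voltages(t))

# Total power is constant at 3/2 * V_PEAK^2 / R_LOAD at every sampled instant.
power_samples = [total_power(n / (20 * F)) for n in range(20)]
assert all(abs(p - 1.5) < 1e-9 for p in power_samples)

# The line-to-line voltage peaks at sqrt(3) times the phase peak.
vll_peak = max(abs(va - vb)
               for va, vb, _ in (phase_voltages(n / (1000 * F)) for n in range(1000)))
print(round(vll_peak, 3))  # → 1.732
```

The flat power waveform is what makes three-phase motors run without the torque pulsation inherent in single-phase machines.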
The symmetric three-phase systems described here are simply referred to as "three-phase systems" because, although it is possible to design and implement asymmetric three-phase power systems (i.e., with unequal voltages or phase shifts), they are not used in practice because they lack the most important advantages of symmetric systems.
In a three-phase system feeding a balanced and linear load, the sum of the instantaneous currents of the three conductors is zero. In other words, the current in each conductor is equal in magnitude to the sum of the currents in the other two, but with the opposite sign. The return path for the current in any phase conductor is the other two phase conductors.
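This cancellation is easy to verify numerically. A minimal sketch, with arbitrary illustrative peak current and frequency:

```python
import math

def phase_currents(t, i_peak=10.0, f=60.0):
    """Balanced line currents: equal magnitudes, one-third-cycle phase spacing."""
    w = 2 * math.pi * f
    return [i_peak * math.sin(w * t - k * 2 * math.pi / 3) for k in range(3)]

# At every sampled instant the three currents cancel, so each conductor's
# current returns through the other two and no separate return wire is needed.
for n in range(100):
    ia, ib, ic = phase_currents(n / 6000)
    assert abs(ia + ib + ic) < 1e-9          # sum is zero
    assert abs(ia - (-(ib + ic))) < 1e-9     # each equals minus the sum of the others
print("balanced: instantaneous sum is zero")
```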
As compared to a single-phase AC power supply that uses two conductors (phase and neutral), a three-phase supply with no neutral and the same phase-to-ground voltage and current capacity per phase can transmit three times as much power using just 1.5 times as many wires (i.e., three instead of two). Thus, the ratio of capacity to conductor material is doubled. The ratio of capacity to conductor material increases to 3:1 with an ungrounded three-phase and center-grounded single-phase system (or 2.25:1 if both employ grounds of the same gauge as the conductors).
Constant power transfer and cancelling phase currents would in theory be possible with any number (greater than one) of phases, maintaining the capacity-to-conductor material ratio that is twice that of single-phase power. However, two-phase power results in a less smooth (pulsating) torque in a generator or motor (making smooth power transfer a challenge), and more than three phases complicates infrastructure unnecessarily.
Three-phase systems may also have a fourth wire, particularly in low-voltage distribution. This is the neutral wire. The neutral allows three separate single-phase supplies to be provided at a constant voltage and is commonly used for supplying groups of domestic properties which are each single-phase loads. The connections are arranged so that, as far as possible in each group, equal power is drawn from each phase. Further up the distribution system, the currents are usually well balanced. Transformers may be wired in a way that they have a four-wire secondary but a three-wire primary while allowing unbalanced loads and the associated secondary-side neutral currents.
Three-phase supplies have properties that make them very desirable in electric power distribution systems:
Most household loads are single-phase. In North American residences, three-phase power might feed a multiple-unit apartment block, but the household loads are connected only as single phase. In lower-density areas, only a single phase might be used for distribution. Some high-power domestic appliances such as electric stoves and clothes dryers are powered by a split-phase system at 240 volts or from two phases of a three-phase system at 208 volts.
Wiring for the three phases is typically identified by color codes which vary by country. Connection of the phases in the right order is required to ensure the intended direction of rotation of three-phase motors. For example, pumps and fans may not work in reverse. Maintaining the identity of phases is required if there is any possibility two sources can be connected at the same time; a direct interconnection between two different phases is a short-circuit.
At the power station, an electrical generator converts mechanical power into a set of three AC electric currents, one from each coil (or winding) of the generator. The windings are arranged such that the currents are at the same frequency but with the peaks and troughs of their wave forms offset to provide three complementary currents with a phase separation of one-third cycle (120° or 2π/3 radians). The generator frequency is typically 50 or 60 Hz, depending on the country.
At the power station, transformers change the voltage from generators to a level suitable for transmission in order to minimize losses.
After further voltage conversions in the transmission network, the voltage is finally transformed to the standard utilization before power is supplied to customers.
Most automotive alternators generate three-phase AC and rectify it to DC with a diode bridge.
A "delta" connected transformer winding is connected between phases of a three-phase system. A "wye" transformer connects each winding from a phase wire to a common neutral point.
A single three-phase transformer can be used, or three single-phase transformers.
In an "open delta" or "V" system, only two transformers are used. A closed delta made of three single-phase transformers can operate as an open delta if one of the transformers has failed or needs to be removed. In open delta, each transformer must carry current for its respective phases as well as current for the third phase, therefore capacity is reduced to 87%. With one of three transformers missing and the remaining two at 87% efficiency, the capacity is 58% (2/3 of 87%).
Where a delta-fed system must be grounded for detection of stray current to ground or protection from surge voltages, a grounding transformer (usually a zigzag transformer) may be connected to allow ground fault currents to return from any phase to ground. Another variation is a "corner grounded" delta system, which is a closed delta that is grounded at one of the junctions of transformers.
There are two basic three-phase configurations: wye (Y) and delta (Δ). As shown in the diagram, a delta configuration requires only three wires for transmission but a wye (star) configuration may have a fourth wire. The fourth wire, if present, is provided as a neutral and is normally grounded. The "3-wire" and "4-wire" designations do not count the ground wire present above many transmission lines, which is solely for fault protection and does not carry current under normal use.
A four-wire system with symmetrical voltages between phase and neutral is obtained when the neutral is connected to the "common star point" of all supply windings. In such a system, all three phases will have the same magnitude of voltage relative to the neutral. Other non-symmetrical systems have been used.
The four-wire wye system is used when a mixture of single-phase and three-phase loads are to be served, such as mixed lighting and motor loads. An example of application is local distribution in Europe (and elsewhere), where each customer may be fed from only one phase and the neutral (which is common to the three phases). When a group of customers sharing the neutral draw unequal phase currents, the common neutral wire carries the currents resulting from these imbalances. Electrical engineers therefore try to design the system so that, as far as possible, the power drawn from each of the three phases is the same at any one site, and to arrange the distribution network so that the load is spread over a large number of premises; on average, the point of supply then sees as nearly as possible a balanced load.
For domestic use, some countries such as the UK may supply one phase and neutral at a high current (up to 100 A) to one property, while others such as Germany may supply 3 phases and neutral to each customer, but at a lower fuse rating, typically 40–63 A per phase, and "rotated" to avoid the effect that more load tends to be put on the first phase.
In North America, a high-leg delta supply is sometimes used where one winding of a delta-connected transformer feeding the load is center-tapped and that center tap is grounded and connected as a neutral as shown in the second diagram. This setup produces three different voltages: If the voltage between the center tap (neutral) and each of the top and bottom taps (phase and anti-phase) is 120 V (100%), the voltage across the phase and anti-phase lines is 240 V (200%), and the neutral to "high leg" voltage is 120 V × √3 ≈ 208 V (173%).
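The three voltage levels follow directly from the geometry of the delta; a short phasor calculation with the 120 V figure from the text reproduces them:

```python
import math

# Place the center tap (neutral) at the origin; the two ends of the
# center-tapped winding sit at ±120 V along the real axis.
v_phase = complex(120, 0)
v_antiphase = complex(-120, 0)
# The "high leg" is the remaining corner of the equilateral delta triangle.
v_high = complex(0, 120 * math.sqrt(3))

print(round(abs(v_phase - v_antiphase)))  # → 240 (phase to anti-phase)
print(round(abs(v_high)))                 # → 208 (high leg to neutral)
print(round(abs(v_high - v_phase)))       # → 240 (high leg to either phase)
```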
The reason for providing the delta connected supply is usually to power large motors requiring a rotating field. However, the premises concerned will also require the "normal" North American 120 V supplies, two of which are derived (180 degrees "out of phase") between the "neutral" and either of the center tapped phase points.
In the perfectly balanced case all three lines share equivalent loads. Examining the circuits we can derive relationships between line voltage and current, and load voltage and current for wye and delta connected loads.
In a balanced system each line will produce equal voltage magnitudes at phase angles equally spaced from each other. With "V"1 as our reference, "V"3 lagging "V"2 lagging "V"1, and "V"LN the voltage between a line and the neutral, angle notation gives:

"V"1 = "V"LN∠0°, "V"2 = "V"LN∠−120°, "V"3 = "V"LN∠+120°.
These voltages feed into either a wye or delta connected load.
The voltage seen by the load will depend on the load connection; for the wye case, connecting each load from a line to the neutral gives the line-to-neutral (phase) voltages, so the load currents are:

"I"1 = ("V"LN/|"Z"total|)∠(−"θ"), "I"2 = ("V"LN/|"Z"total|)∠(−120° − "θ"), "I"3 = ("V"LN/|"Z"total|)∠(+120° − "θ"),
where "Z"total is the sum of line and load impedances ("Z"total = "Z"LN + "Z"Y), and "θ" is the phase of the total impedance ("Z"total).
The phase angle difference between voltage and current of each phase is not necessarily 0 and is dependent on the type of load impedance, "Z"y. Inductive and capacitive loads will cause current to either lag or lead the voltage. However, the relative phase angle between each pair of lines (1 to 2, 2 to 3, and 3 to 1) will still be −120°.
By applying Kirchhoff's current law (KCL) to the neutral node, the three phase currents sum to the total current in the neutral line. In the balanced case:

"I"N = "I"1 + "I"2 + "I"3 = 0.
In the delta circuit, loads are connected across the lines, and so loads see line-to-line voltages:

"V"12 = "V"1 − "V"2 = √3 "V"LN∠30°, "V"23 = "V"2 − "V"3 = √3 "V"LN∠−90°, "V"31 = "V"3 − "V"1 = √3 "V"LN∠150°.
Further, the load currents are:

"I"12 = (√3 "V"LN/|"Z"Δ|)∠(30° − "θ"), "I"23 = (√3 "V"LN/|"Z"Δ|)∠(−90° − "θ"), "I"31 = (√3 "V"LN/|"Z"Δ|)∠(150° − "θ"),
where "θ" is the phase of delta impedance ("Z"Δ).
Relative angles are preserved, so "I"31 lags "I"23 lags "I"12 by 120°. Calculating line currents by using KCL at each delta node gives:

"I"1 = "I"12 − "I"31 = √3 |"I"12|∠(−"θ"),
and similarly for each other line:

"I"2 = "I"23 − "I"12 = √3 |"I"23|∠(−120° − "θ"), "I"3 = "I"31 − "I"23 = √3 |"I"31|∠(+120° − "θ"),
where, again, "θ" is the phase of delta impedance ("Z"Δ).
Inspection of a phasor diagram, or conversion from phasor notation to complex notation, illuminates how the difference between two line-to-neutral voltages yields a line-to-line voltage that is greater by a factor of √3. As a delta configuration connects a load across phases of a transformer, it delivers the line-to-line voltage difference, which is √3 times greater than the line-to-neutral voltage delivered to a load in the wye configuration. As the power transferred is V²/Z, the impedance in the delta configuration must be 3 times what it would be in a wye configuration for the same power to be transferred.
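The 3:1 impedance relationship can be confirmed with a quick power calculation; the voltage and impedance values below are illustrative, not taken from the article:

```python
import math

V_LN = 120.0                 # line-to-neutral voltage (illustrative)
V_LL = V_LN * math.sqrt(3)   # line-to-line voltage, ≈ 208 V
Z_WYE = 10.0                 # per-phase wye impedance, purely resistive here
Z_DELTA = 3 * Z_WYE          # delta impedance scaled by 3, as stated above

p_wye = 3 * V_LN ** 2 / Z_WYE      # each wye load sees V_LN
p_delta = 3 * V_LL ** 2 / Z_DELTA  # each delta load sees V_LL

assert abs(p_wye - p_delta) < 1e-6
print(round(p_wye))  # → 4320 (watts, identical in both configurations)
```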
Except in a high-leg delta system, single-phase loads may be connected across any two phases, or a load can be connected from phase to neutral. Distributing single-phase loads among the phases of a three-phase system balances the load and makes most economical use of conductors and transformers.
In a symmetrical three-phase four-wire, wye system, the three phase conductors have the same voltage to the system neutral. The voltage between line conductors is √3 times the phase conductor to neutral voltage:

"V"LL = √3 "V"LN.
The currents returning from the customers' premises to the supply transformer all share the neutral wire. If the loads are evenly distributed on all three phases, the sum of the returning currents in the neutral wire is approximately zero. Any unbalanced phase loading on the secondary side of the transformer will use the transformer capacity inefficiently.
If the supply neutral is broken, phase-to-neutral voltage is no longer maintained. Phases with higher relative loading will experience reduced voltage, and phases with lower relative loading will experience elevated voltage, up to the phase-to-phase voltage.
A high-leg delta provides a phase-to-neutral relationship of "V"LL = 2 "V"LN; however, L-N load is imposed on one phase. A transformer manufacturer's page suggests that L-N loading not exceed 5% of transformer capacity.
Since √3 ≈ 1.73, defining "V"LN as 100% gives "V"LL ≈ 173%. If "V"LL is set as 100%, then "V"LN ≈ 57.7%.
When the currents on the three live wires of a three-phase system are not equal or are not at an exact 120° phase angle, the power loss is greater than for a perfectly balanced system. The method of symmetrical components is used to analyze unbalanced systems.
With linear loads, the neutral only carries the current due to imbalance between the phases. Gas-discharge lamps and devices that utilize rectifier-capacitor front-end such as switch-mode power supplies, computers, office equipment and such produce third-order harmonics that are in-phase on all the supply phases. Consequently, such harmonic currents add in the neutral in a wye system (or in the grounded (zigzag) transformer in a delta system), which can cause the neutral current to exceed the phase current.
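The in-phase addition of third harmonics can be demonstrated numerically. In this sketch (illustrative magnitudes and frequency) the fundamentals cancel in the neutral while the third harmonics add:

```python
import math

def neutral_current(t, i_fund=10.0, i_third=3.0, f=50.0):
    """Neutral current when each phase carries a fundamental plus a third
    harmonic (magnitudes and frequency are illustrative)."""
    w = 2 * math.pi * f
    total = 0.0
    for k in range(3):
        shift = k * 2 * math.pi / 3
        total += i_fund * math.sin(w * t - shift)         # fundamentals cancel
        total += i_third * math.sin(3 * (w * t - shift))  # 3*shift is a whole turn,
                                                          # so all three are in phase
    return total

# The neutral carries three times the per-phase third-harmonic amplitude.
peak = max(abs(neutral_current(n / 5000)) for n in range(100))
print(round(peak, 1))  # → 9.0 (3 × 3.0 A), though each phase carries only 3 A of it
```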
An important class of three-phase load is the electric motor. A three-phase induction motor has a simple design, inherently high starting torque and high efficiency. Such motors are applied in industry for many applications. A three-phase motor is more compact and less costly than a single-phase motor of the same voltage class and rating, and single-phase AC motors above 10 HP (7.5 kW) are uncommon. Three-phase motors also vibrate less and hence last longer than single-phase motors of the same power used under the same conditions.
Resistance heating loads such as electric boilers or space heating may be connected to three-phase systems. Electric lighting may also be similarly connected.
Line frequency flicker in light is detrimental to high speed cameras used in sports event broadcasting for slow motion replays. It can be reduced by evenly spreading line frequency operated light sources across the three phases so that the illuminated area is lit from all three phases. This technique was applied successfully at the 2008 Beijing Olympics.
Rectifiers may use a three-phase source to produce a six-pulse DC output. The output of such rectifiers is much smoother than rectified single phase and, unlike single-phase, does not drop to zero between pulses. Such rectifiers may be used for battery charging, electrolysis processes such as aluminium production or for operation of DC motors. "Zig-zag" transformers may make the equivalent of six-phase full-wave rectification, twelve pulses per cycle, and this method is occasionally employed to reduce the cost of the filtering components, while improving the quality of the resulting DC.
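A simple model shows why a six-pulse output never falls to zero: at each instant an ideal diode bridge conducts through whichever line-to-line voltage is largest. This is a sketch under assumed ideal diodes and a balanced per-unit supply, not a detailed rectifier simulation:

```python
import math

def six_pulse_output(t, v_peak=1.0, f=50.0):
    """Ideal six-pulse rectifier output: the largest absolute line-to-line
    voltage at each instant (ideal diodes, balanced per-unit supply)."""
    w = 2 * math.pi * f
    v = [v_peak * math.sin(w * t - k * 2 * math.pi / 3) for k in range(3)]
    return max(abs(v[i] - v[j]) for i in range(3) for j in range(i + 1, 3))

samples = [six_pulse_output(n / 5000) for n in range(100)]
# The output ripples between 1.5 and sqrt(3) ≈ 1.732 times the phase peak,
# never dropping to zero as rectified single-phase power does.
print(round(min(samples), 2), round(max(samples), 2))  # → 1.5 1.73
```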
One example of a three-phase load is the electric arc furnace used in steelmaking and in refining of ores.
In many European countries electric stoves are usually designed for a three-phase feed. Individual heating units are often connected between phase and neutral to allow for connection to a single-phase circuit if three-phase is not available. Other usual three-phase loads in the domestic field are tankless water heating systems and storage heaters. Homes in Europe and the UK have standardized on a nominal 230 V between any phase and ground. (Existing supplies remain near 240 V in the UK, and 220 V on much of the continent.) Most groups of houses are fed from a three-phase street transformer so that individual premises with above-average demand can be fed with a second or third phase connection.
Phase converters are used when three-phase equipment needs to be operated on a single-phase power source. They are used when three-phase power is not available or cost is not justifiable. Such converters may also allow the frequency to be varied, allowing speed control. Some railway locomotives use a single-phase source to drive three-phase motors fed through an electronic drive.
A rotary phase converter is a three-phase motor with special starting arrangements and power factor correction that produces balanced three-phase voltages. When properly designed, these rotary converters can allow satisfactory operation of a three-phase motor on a single-phase source. In such a device, the energy storage is performed by the inertia (flywheel effect) of the rotating components. An external flywheel is sometimes found on one or both ends of the shaft.
A three-phase generator can be driven by a single-phase motor. This motor-generator combination can provide a frequency changer function as well as phase conversion, but requires two machines with all their expenses and losses. The motor-generator method can also form an uninterruptible power supply when used in conjunction with a large flywheel and a battery-powered DC motor; such a combination will deliver nearly constant power, avoiding the temporary frequency drop experienced with a standby generator set until the standby generator kicks in.
Capacitors and autotransformers can be used to approximate a three-phase system in a static phase converter, but the voltage and phase angle of the additional phase may only be useful for certain loads.
Variable-frequency drives and digital phase converters use power electronic devices to synthesize a balanced three-phase supply from single-phase input power.
Conductors of a three-phase system are usually identified by a color code, to allow for balanced loading and to assure the correct phase rotation for motors. Colors used may adhere to International Standard IEC 60446 (later IEC 60445), older standards or to no standard at all and may vary even within a single installation. For example, in the U.S. and Canada, different color codes are used for grounded (earthed) and ungrounded systems. | https://en.wikipedia.org/wiki?curid=38829 |