CP/M, originally standing for Control Program/Monitor and later Control Program for Microcomputers, is a mass-market operating system created in 1974 for Intel 8080/85-based microcomputers by Gary Kildall of Digital Research, Inc. CP/M is a disk operating system and its purpose is to organize files on a magnetic storage medium, and to load and run programs stored on a disk. Initially confined to single-tasking on 8-bit processors and no more than 64 kilobytes of memory, later versions of CP/M added multi-user variations and were migrated to 16-bit processors.
CP/M eventually became the de facto standard and the dominant operating system for microcomputers, in combination with the S-100 bus computers. This computer platform was widely used in business through the late 1970s and into the mid-1980s. CP/M increased the market size for both hardware and software by greatly reducing the amount of programming required to port an application to a new manufacturer's computer. An important driver of software innovation was the advent of (comparatively) low-cost microcomputers running CP/M, as independent programmers and hackers bought them and shared their creations in user groups. CP/M was eventually displaced in popularity by DOS following the 1981 introduction of the IBM PC.
History
Early history
Gary Kildall originally developed CP/M during 1974, as an operating system to run on an Intel Intellec-8 development system, equipped with a Shugart Associates 8-inch floppy-disk drive interfaced via a custom floppy-disk controller. It was written in Kildall's own PL/M (Programming Language for Microcomputers). Various aspects of CP/M were influenced by the TOPS-10 operating system of the DECsystem-10 mainframe computer, which Kildall had used as a development environment.
CP/M supported a wide range of computers based on the 8080 and Z80 CPUs. An early outside licensee of CP/M was Gnat Computers, an early microcomputer developer out of San Diego, California. In 1977, the company was granted the license to use CP/M 1.0 for any micro they desired for $90. Within the year, demand for CP/M was so high that Digital Research was able to increase the license to tens of thousands of dollars.
Under Kildall's direction, the development of CP/M 2.0 was mostly carried out by John Pierce in 1978. Kathryn Strutynski, a friend of Kildall from Naval Postgraduate School (NPS), became the fourth employee of Digital Research Inc. in early 1979. She started by debugging CP/M 2.0, and later became influential as key developer for CP/M 2.2 and CP/M Plus. Other early developers of the CP/M base included Robert "Bob" Silberstein and David "Dave" K. Brown.
CP/M originally stood for "Control Program/Monitor", a name which implies a resident monitor—a primitive precursor to the operating system. However, during the conversion of CP/M to a commercial product, trademark registration documents filed in November 1977 gave the product's name as "Control Program for Microcomputers". The CP/M name follows a prevailing naming scheme of the time, as in Kildall's PL/M language, and Prime Computer's PL/P (Programming Language for Prime), both suggesting IBM's PL/I; and IBM's CP/CMS operating system, which Kildall had used when working at the NPS. This renaming of CP/M was part of a larger effort by Kildall and his wife and business partner, Dorothy McEwen, to convert Kildall's personal project of CP/M and the Intel-contracted PL/M compiler into a commercial enterprise. The Kildalls intended to establish the Digital Research brand and its product lines as synonymous with "microcomputer" in the consumer's mind, similar to what IBM and Microsoft together later successfully accomplished in making "personal computer" synonymous with their product offerings. Intergalactic Digital Research, Inc. was later renamed via a corporation change-of-name filing to Digital Research, Inc.
Initial success
By September 1981, Digital Research had sold more than CP/M licenses; InfoWorld stated that the actual market was likely larger because of sublicenses. Many different companies produced CP/M-based computers for many different markets; the magazine stated that "CP/M is well on its way to establishing itself as the small-computer operating system". Even companies with proprietary operating systems, such as Heath/Zenith (HDOS), offered CP/M as an alternative for their 8080/Z80-based systems; by contrast, no comparable standard existed for computers based on the also popular 6502 CPU. They supported CP/M because of its large library of software. The Xerox 820 ran the operating system because "where there are literally thousands of programs written for it, it would be unwise not to take advantage of it", Xerox said. (Xerox included a Howard W. Sams CP/M manual as compensation for Digital Research's documentation, which InfoWorld described as atrocious, incomplete, incomprehensible, and poorly indexed.) By 1984, Columbia University used the same source code to build Kermit binaries for more than a dozen different CP/M systems, plus two generic versions. The operating system was described as a "software bus", allowing multiple programs to interact with different hardware in a standardized way. Programs written for CP/M were typically portable among different machines, usually requiring only the specification of the escape sequences for control of the screen and printer. This portability made CP/M popular, and much more software was written for CP/M than for operating systems that ran on only one brand of hardware. One restriction on portability was that certain programs used the extended instruction set of the Z80 processor and would not operate on an 8080 or 8085 processor. 
Another was graphics routines, especially in games and graphics programs, which were generally machine-specific as they used direct hardware access for speed, bypassing the OS and BIOS (this was also a common problem in early DOS machines).
Bill Gates claimed that the Apple II with a Z-80 SoftCard was the single most-popular CP/M hardware platform. Many different brands of machines ran the operating system, some notable examples being the Altair 8800, the IMSAI 8080, the Osborne 1 and Kaypro luggables, and MSX computers. The best-selling CP/M-capable system of all time was probably the Amstrad PCW. In the UK, CP/M was also available on Research Machines educational computers (with the CP/M source code published as an educational resource), and for the BBC Micro when equipped with a Z80 co-processor. Furthermore, it was available for the Amstrad CPC series, the Commodore 128, TRS-80, and later models of the ZX Spectrum. CP/M 3 was also used on the NIAT, a custom handheld computer designed for A. C. Nielsen's internal use with 1 MB of SSD memory.
Multi-user
In 1979, Digital Research released MP/M, a multi-user derivative of CP/M. MP/M allowed multiple users to connect to a single computer, using multiple terminals to provide each user with a screen and keyboard. Later versions ran on 16-bit processors.
CP/M Plus
The last 8-bit version of CP/M was version 3, often called CP/M Plus, released in 1983. Its BDOS was designed by David K. Brown. It incorporated the bank switching memory management of MP/M in a single-user single-task operating system compatible with CP/M 2.2 applications. CP/M 3 could therefore use more than 64 KB of memory on an 8080 or Z80 processor. The system could be configured to support date stamping of files. The operating system distribution software also included a relocating assembler and linker. CP/M 3 was available for the last generation of 8-bit computers, notably the Amstrad PCW, the Amstrad CPC, the ZX Spectrum +3, the Commodore 128, MSX machines and the Radio Shack TRS-80 Model 4.
16-bit versions
There were versions of CP/M for some 16-bit CPUs as well.
The first version in the 16-bit family was CP/M-86 for the Intel 8086 in November 1981. Kathryn Strutynski was the project manager for the evolving CP/M-86 line of operating systems. At this point, the original 8-bit CP/M became known by the retronym CP/M-80 to avoid confusion.
CP/M-86 was expected to be the standard operating system of the new IBM PCs, but DRI and IBM were unable to negotiate development and licensing terms. IBM turned to Microsoft instead, and Microsoft delivered PC DOS based on 86-DOS. Although CP/M-86 became an option for the IBM PC after DRI threatened legal action, it never overtook Microsoft's system. Most customers were repelled by the significantly greater price IBM charged for CP/M-86 over PC DOS.
When Digital Equipment Corporation (DEC) put out the Rainbow 100 to compete with IBM, it came with CP/M-80 using a Z80 chip, CP/M-86 or MS-DOS using an 8088 microprocessor, or CP/M-86/80 using both. The Z80 and 8088 CPUs ran concurrently. A benefit of the Rainbow was that it could continue to run 8-bit CP/M software, preserving a user's possibly sizable investment as they moved into the 16-bit world of MS-DOS. A similar dual-processor adaptation for the was named CP/M 8-16. The CP/M-86 adaptation for the 8085/8088-based Zenith Z-100 also supported running programs for both of its CPUs.
Soon following CP/M-86, another 16-bit version of CP/M was CP/M-68K for the Motorola 68000. The original version of CP/M-68K in 1982 was written in Pascal/MT+68k, but it was ported to C later on. CP/M-68K, already running on the Motorola EXORmacs systems, was initially to be used in the Atari ST computer, but Atari decided to go with a newer disk operating system called GEMDOS. CP/M-68K was also used on the SORD M68 and M68MX computers.
In 1982, there was also a port from CP/M-68K to the 16-bit Zilog Z8000 for the Olivetti M20, written in C, named CP/M-8000.
These 16-bit versions of CP/M required application programs to be re-compiled for the new CPUs. Some programs written in assembly language could be automatically translated for a new processor. One tool for this was Digital Research's XLT86, which translated .ASM source code for the Intel 8080 processor into .A86 source code for the Intel 8086. The translator would also optimize the output for code size and take care of calling conventions, so that CP/M-80 and MP/M-80 programs could be ported to the CP/M-86 and MP/M-86 platforms automatically. XLT86 itself was written in PL/I-80 and was available for CP/M-80 platforms as well as for VAX/VMS.
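The register mapping underlying such translations followed Intel's published 8080-to-8086 convention (the pairs BC, DE, and HL become CX, DX, and BX; the accumulator A becomes AL). The following toy translator is only an illustration of that mapping, not XLT86 itself, and covers just a handful of opcodes:

```python
# Toy 8080 -> 8086 source translator in the spirit of XLT86 (illustrative only;
# XLT86 additionally optimized code size and handled calling conventions).
# Intel's register mapping: A->AL, B->CH, C->CL, D->DH, E->DL, H->BH, L->BL,
# and the register pairs BC->CX, DE->DX, HL->BX.
REG8 = {"A": "AL", "B": "CH", "C": "CL", "D": "DH", "E": "DL", "H": "BH", "L": "BL"}
PAIR = {"B": "CX", "D": "DX", "H": "BX"}

def translate(line: str) -> str:
    """Translate one 8080 instruction into 8086 syntax (tiny subset)."""
    op, _, args = line.strip().upper().partition(" ")
    ops = [a.strip() for a in args.split(",")] if args else []
    if op == "MOV":    # MOV dst,src -> MOV with remapped registers
        return f"MOV {REG8[ops[0]]},{REG8[ops[1]]}"
    if op == "MVI":    # MVI r,imm  -> MOV r,imm
        return f"MOV {REG8[ops[0]]},{ops[1]}"
    if op == "LXI":    # LXI rp,imm16 -> MOV pair,imm16
        return f"MOV {PAIR[ops[0]]},{ops[1]}"
    if op == "ADD":    # ADD r -> ADD AL,r (8080 ADD is always into A)
        return f"ADD AL,{REG8[ops[0]]}"
    if op == "RET":
        return "RET"
    raise ValueError(f"unsupported opcode: {op}")

print(translate("MOV A,B"))      # MOV AL,CH
print(translate("LXI H,1234H"))  # MOV BX,1234H
```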
Displacement by MS-DOS
Many expected that CP/M would be the standard operating system for 16-bit computers. In 1980 IBM approached Digital Research, at Bill Gates' suggestion, to license a forthcoming version of CP/M for its new product, the IBM Personal Computer. Upon the failure to obtain a signed non-disclosure agreement, the talks failed, and IBM instead contracted with Microsoft to provide an operating system. The resulting product, MS-DOS, soon began outselling CP/M.
Many of the basic concepts and mechanisms of early versions of MS-DOS resembled those of CP/M. Internals like file-handling data structures were identical, and both referred to disk drives with a letter (A:, B:, etc.). MS-DOS's main innovation was its FAT file system. This similarity made it easier to port popular CP/M software like WordStar and dBase. However, CP/M's concept of separate user areas for files on the same disk was never ported to MS-DOS. Since MS-DOS had access to more memory (as few IBM PCs were sold with less than 64 KB of memory, while CP/M could run in 16 KB if necessary), more commands were built into the command-line shell, making MS-DOS somewhat faster and easier to use on floppy-based computers.
Although one of the first peripherals for the IBM PC was a SoftCard-like expansion card that let it run 8-bit CP/M software, InfoWorld stated in 1984 that efforts to introduce CP/M to the home market had been largely unsuccessful and most CP/M software was too expensive for home users. In 1986 the magazine stated that Kaypro had stopped production of 8-bit CP/M-based models to concentrate on sales of MS-DOS compatible systems, long after most other vendors had ceased production of new equipment and software for CP/M. CP/M rapidly lost market share as the microcomputing market moved to the IBM-compatible platform, and it never regained its former popularity. Byte magazine, at the time one of the leading industry magazines for microcomputers, essentially ceased covering CP/M products within a few years of the introduction of the IBM PC. For example, in 1983 there were still a few advertisements for S-100 boards and articles on CP/M software, but by 1987 these were no longer found in the magazine.
Later versions of CP/M-86 made significant strides in performance and usability and were made compatible with MS-DOS. To reflect this compatibility the name was changed, and CP/M-86 became DOS Plus, which in turn became DR-DOS.
ZCPR
ZCPR (the Z80 Command Processor Replacement) was introduced on 2 February 1982 as a drop-in replacement for the standard Digital Research console command processor (CCP) and was initially written by a group of computer hobbyists who called themselves "The CCP Group". They were Frank Wancho, Keith Petersen (the archivist behind Simtel at the time), Ron Fowler, Charlie Strom, Bob Mathias, and Richard Conn. Richard was, in fact, the driving force in this group (all of whom maintained contact through email).
ZCPR1 was released on a disk put out by SIG/M (Special Interest Group/Microcomputers), a part of the Amateur Computer Club of New Jersey.
ZCPR2 was released on 14 February 1983. It was released as a set of ten disks from SIG/M. ZCPR2 was upgraded to 2.3, and also was released in 8080 code, permitting the use of ZCPR2 on 8080 and 8085 systems.
ZCPR3 was released on 14 July 1984, as a set of nine disks from SIG/M. The code for ZCPR3 could also be compiled (with reduced features) for the 8080 and would run on systems that did not have the requisite Z80 microprocessor. Features of ZCPR as of version 3 included shells, aliases, I/O redirection, flow control, named directories, search paths, custom menus, passwords, and online help. In January 1987, Richard Conn stopped developing ZCPR, and Echelon asked Jay Sage (who already had a privately enhanced ZCPR 3.1) to continue work on it. Thus, ZCPR 3.3 was developed and released. ZCPR 3.3 no longer supported the 8080 series of microprocessors, and added the most features of any upgrade in the ZCPR line. ZCPR 3.3 also included a full complement of utilities with considerably extended capabilities. While enthusiastically supported by the CP/M user base of the time, ZCPR alone was insufficient to slow the demise of CP/M.
Hardware model
A minimal 8-bit CP/M system would contain the following components:
A computer terminal using the ASCII character set
An Intel 8080 (and later the 8085) or Zilog Z80 microprocessor
The NEC V20 and V30 processors support an 8080-emulation mode that can run 8-bit CP/M on a PC-DOS/MS-DOS computer so equipped, though any PC clone could run CP/M-86.
At least 16 kilobytes of RAM, beginning at address 0
A means to bootstrap the first sector of the diskette
At least one floppy-disk drive
The only hardware system that CP/M, as sold by Digital Research, would support was the Intel 8080 Development System. Manufacturers of CP/M-compatible systems customized portions of the operating system for their own combination of installed memory, disk drives, and console devices. CP/M would also run on systems based on the Zilog Z80 processor since the Z80 was compatible with 8080 code. While the Digital Research distributed core of CP/M (BDOS, CCP, core transient commands) did not use any of the Z80-specific instructions, many Z80-based systems used Z80 code in the system-specific BIOS, and many applications were dedicated to Z80-based CP/M machines.
Digital Research subsequently partnered with Zilog and American Microsystems to produce Personal CP/M, a ROM-based version of the operating system aimed at lower-cost systems that could potentially be equipped without disk drives. First featured in the Sharp MZ-800, a cassette-based system with optional disk drives, Personal CP/M was described as having been "rewritten to take advantage of the enhanced Z-80 instruction set" as opposed to preserving portability with the 8080. American Microsystems announced a Z80-compatible microprocessor, the S83, featuring 8 KB of in-package ROM for the operating system and BIOS, together with comprehensive logic for interfacing with 64-kilobit dynamic RAM devices. Unit pricing of the S83 was quoted as $32 in 1,000 unit quantities.
On most machines the bootstrap was a minimal bootloader in ROM combined with some means of minimal bank switching or a means of injecting code on the bus (since the 8080 needs to see boot code at Address 0 for start-up, while CP/M needs RAM there); for others, this bootstrap had to be entered into memory using front-panel controls each time the system was started.
CP/M used the 7-bit ASCII set. The other 128 characters made possible by the 8-bit byte were not standardized. For example, one Kaypro used them for Greek characters, and Osborne machines used the 8th bit set to indicate an underlined character. WordStar used the 8th bit as an end-of-word marker. International CP/M systems most commonly used the ISO 646 norm for localized character sets, replacing certain ASCII characters with localized characters rather than adding them beyond the 7-bit boundary.
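WordStar's use of the eighth bit can be made concrete: a small decoder that masks off bit 7 recovers plain 7-bit ASCII from a WordStar document file. This sketch assumes only the end-of-word marker behavior described above:

```python
# WordStar document files set bit 7 on the last character of each word;
# masking the high bit recovers plain 7-bit ASCII (illustrative sketch).
def wordstar_to_ascii(data: bytes) -> str:
    return "".join(chr(b & 0x7F) for b in data)

# "The end" with bit 7 set on the final letter of each word
# ('e' = 0x65 -> 0xE5, 'd' = 0x64 -> 0xE4):
raw = bytes([0x54, 0x68, 0xE5, 0x20, 0x65, 0x6E, 0xE4])
print(wordstar_to_ascii(raw))  # The end
```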
Components
In the 8-bit versions, while running, the CP/M operating system loaded into memory has three components:
Basic Input/Output System (BIOS),
Basic Disk Operating System (BDOS),
Console Command Processor (CCP).
The BIOS and BDOS are memory-resident, while the CCP is memory-resident unless overwritten by an application, in which case it is automatically reloaded after the application finishes running. A number of transient commands for standard utilities are also provided. The transient commands reside in files with the extension .COM on disk.
The BIOS directly controls hardware components other than the CPU and main memory. It contains functions such as character input and output and the reading and writing of disk sectors. The BDOS implements the CP/M file system and some input/output abstractions (such as redirection) on top of the BIOS. The CCP takes user commands and either executes them directly (internal commands such as DIR to show a directory or ERA to delete a file) or loads and starts an executable file of the given name (transient commands such as PIP.COM to copy files or STAT.COM to show various file and system information). Third-party applications for CP/M are also essentially transient commands.
The BDOS, CCP and standard transient commands are the same in all installations of a particular revision of CP/M, but the BIOS portion is always adapted to the particular hardware.
Adding memory to a computer, for example, means that the CP/M system must be reinstalled to allow transient programs to use the additional memory space. A utility program (MOVCPM) is provided with the system distribution that allows relocating the object code to different memory areas. The utility program adjusts the addresses in absolute jump and subroutine call instructions to new addresses required by the new location of the operating system in processor memory. This newly patched version can then be saved on a new disk, allowing application programs to access the additional memory made available by moving the system components. Once installed, the operating system (BIOS, BDOS and CCP) is stored in reserved areas at the beginning of any disk which can be used to boot the system. On start-up, the bootloader (usually contained in a ROM firmware chip) loads the operating system from the disk in drive A:.
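One common relocation scheme of the era, sketched below under the assumption that a bitmap flags every byte holding the high byte of an absolute address, adds a page offset to exactly those bytes when the system image is moved (this is a simplified model, not MOVCPM's actual implementation):

```python
# Sketch of bitmap-driven relocation: each flagged byte holds the high byte
# of an absolute address and gets the page offset added when the image moves.
def relocate(image: bytes, bitmap: bytes, page_offset: int) -> bytes:
    out = bytearray(image)
    for i in range(len(out)):
        if bitmap[i // 8] & (0x80 >> (i % 8)):     # is byte i flagged?
            out[i] = (out[i] + page_offset) & 0xFF
    return bytes(out)

# JMP 0E400H assembled for a system based at E400H: C3 00 E4
# (the 8080 stores the 16-bit operand low byte first).
image  = bytes([0xC3, 0x00, 0xE4])
bitmap = bytes([0b00100000])          # only byte 2 (high address byte) flagged
moved  = relocate(image, bitmap, -0x10 & 0xFF)  # move down 16 pages (4 KB)
print(moved.hex())  # c300d4
```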
By modern standards CP/M is primitive, owing to the extreme constraints on program size. With version 1.0 there is no provision for detecting a changed disk. If a user changes disks without manually rereading the disk directory the system writes on the new disk using the old disk's directory information, ruining the data stored on the disk. From version 1.1 or 1.2 onwards, changing a disk then trying to write to it before its directory is read will cause a fatal error to be signalled. This avoids overwriting the disk but requires a reboot and loss of the data to be stored on disk.
The majority of the complexity in CP/M is isolated in the BDOS, and to a lesser extent, the CCP and transient commands. This meant that by porting the limited number of simple routines in the BIOS to a particular hardware platform, the entire OS would work. This significantly reduced the development time needed to support new machines, and was one of the main reasons for CP/M's widespread use. Today this sort of abstraction is common to most OSs (a hardware abstraction layer), but at the time of CP/M's birth, OSs were typically intended to run on only one machine platform, and multilayer designs were considered unnecessary.
Console Command Processor
The Console Command Processor, or CCP, accepts input from the keyboard and conveys results to the terminal. CP/M itself works with either a printing terminal or a video terminal. All CP/M commands have to be typed in on the command line. The console most often displays the A> prompt, to indicate the current default disk drive. When used with a video terminal, this is usually followed by a blinking cursor supplied by the terminal. The CCP awaits input from the user. A CCP internal command, of the form drive letter followed by a colon, can be used to select the default drive. For example, typing B: and pressing enter at the command prompt changes the default drive to B, and the command prompt then becomes B> to indicate this change.
CP/M's command-line interface was patterned after the operating systems from Digital Equipment, such as RT-11 for the PDP-11 and OS/8 for the PDP-8. Commands take the form of a keyword followed by a list of parameters separated by spaces or special characters. Similar to a Unix shell builtin, if an internal command is recognized, it is carried out by the CCP itself. Otherwise it attempts to find an executable file on the currently logged disk drive and (in later versions) user area, loads it, and passes it any additional parameters from the command line. These are referred to as "transient" programs. On completion, BDOS will reload the CCP if it has been overwritten by application programs — this allows transient programs a larger memory space.
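The dispatch logic described above can be modelled in a few lines. This is a toy sketch of the decision the CCP makes, not Digital Research's code; the returned strings are purely illustrative:

```python
# Toy model of CCP command dispatch: internal commands run inside the CCP;
# a bare drive letter plus colon changes the default drive; anything else is
# treated as a transient command to be loaded from a .COM file into the TPA.
INTERNAL = {"DIR", "ERA", "REN", "SAVE", "TYPE", "USER"}

def dispatch(command_line: str) -> str:
    keyword, _, tail = command_line.strip().upper().partition(" ")
    if len(keyword) == 2 and keyword[1] == ":":     # e.g. "B:" selects drive B
        return f"default drive set to {keyword[0]}"
    if keyword in INTERNAL:
        return f"internal: {keyword}"
    return f"transient: load {keyword}.COM with tail '{tail}'"

print(dispatch("DIR *.TXT"))     # internal: DIR
print(dispatch("b:"))            # default drive set to B
print(dispatch("PIP B:=A:*.*"))  # transient: load PIP.COM with tail 'B:=A:*.*'
```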
The commands themselves can sometimes be obscure. For instance, the command to duplicate files is named PIP (Peripheral-Interchange-Program), the name of the old DEC utility used for that purpose. The format of parameters given to a program was not standardized, so there is no single option character that differentiates options from file names; different programs can and do use different characters.
The CP/M Console Command Processor includes DIR, ERA, REN, SAVE, TYPE, and USER as built-in commands. Transient commands in CP/M include ASM, DDT, DUMP, ED, LOAD, PIP, STAT, SUBMIT, and SYSGEN.
CP/M Plus (CP/M Version 3) includes DIR (display list of files from a directory except those marked with the SYS attribute), DIRSYS / DIRS (list files marked with the SYS attribute in the directory), ERASE / ERA (delete a file), RENAME / REN (rename a file), TYPE / TYP (display contents of an ASCII character file), and USER / USE (change user number) as built-in commands: CP/M 3 allows the user to abbreviate the built-in commands. Transient commands in CP/M 3 include COPYSYS, DATE, DEVICE, DUMP, ED, GET, HELP, HEXCOM, INITDIR, LINK, MAC, PIP, PUT, RMAC, SET, SETDEF, SHOW, SID, SUBMIT, and XREF.
Basic Disk Operating System
The Basic Disk Operating System, or BDOS, provides access to such operations as opening a file, output to the console, or printing. Application programs load processor registers with a function code for the operation, and addresses for parameters or memory buffers, and call a fixed address in memory. Since the address is the same independent of the amount of memory in the system, application programs run the same way for any type or configuration of hardware.
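Concretely, an 8080 program puts the function number in register C and a parameter (a character or an address) in register pair DE, then CALLs location 0005H. The sketch below emulates two real BDOS functions, 2 (console output, character in E) and 9 (print a string at DE terminated by '$'), in Python rather than assembly:

```python
# Minimal emulation of the CP/M BDOS calling convention: function number in
# register C, parameter in register pair DE, entry via CALL 0005H.
import sys

MEMORY = bytearray(65536)        # the 8080's 64 KB address space

def bdos(c: int, de: int) -> None:
    if c == 2:                   # function 2: console output, character in E
        sys.stdout.write(chr(de & 0xFF))
    elif c == 9:                 # function 9: print '$'-terminated string at DE
        addr = de
        while MEMORY[addr] != ord("$"):
            sys.stdout.write(chr(MEMORY[addr]))
            addr += 1
    else:
        raise NotImplementedError(f"BDOS function {c} not modelled")

MEMORY[0x0200:0x0209] = b"Hello!\r\n$"
bdos(9, 0x0200)                  # prints Hello! followed by CR/LF
```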
Basic Input Output System
The Basic Input Output System or BIOS, provides the lowest level functions required by the operating system.
These include reading or writing single characters to the system console and reading or writing a sector of data from the disk. The BDOS handles some of the buffering of data from the diskette, but before CP/M 3.0 it assumes a disk sector size fixed at 128 bytes, as used on single-density 8-inch floppy disks. Since most 5.25-inch disk formats use larger sectors, the blocking and deblocking and the management of a disk buffer area are handled by model-specific code in the BIOS.
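The blocking arithmetic is simple: with 512-byte physical sectors, each physical sector holds four 128-byte logical sectors, and the BIOS maps a logical sector number to a physical sector plus a byte offset. A minimal sketch of the read side (writes additionally require read-modify-write of the physical sector):

```python
# Sketch of BIOS deblocking: CP/M (pre-3.0) requests 128-byte logical sectors,
# so a BIOS for a drive with 512-byte physical sectors maps each logical
# sector to a quarter of a physical one.
LOGICAL, PHYSICAL = 128, 512
PER_PHYS = PHYSICAL // LOGICAL          # 4 logical sectors per physical sector

def locate(logical_sector: int):
    """Return (physical_sector, byte_offset) for a logical sector number."""
    return logical_sector // PER_PHYS, (logical_sector % PER_PHYS) * LOGICAL

def read_logical(disk: bytes, logical_sector: int) -> bytes:
    phys, off = locate(logical_sector)
    start = phys * PHYSICAL + off
    return disk[start:start + LOGICAL]

print(locate(5))   # (1, 128): second physical sector, second 128-byte slice
```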
Customization is required because hardware choices are not constrained by compatibility with any one popular standard. For example, some manufacturers designed built-in integrated video display systems, while others relied on separate computer terminals. Serial ports for printers and modems can use different types of UART chips, and port addresses are not fixed. Some machines use memory-mapped I/O instead of the 8080 I/O address space. All of these variations in the hardware are concealed from other modules of the system by use of the BIOS, which uses standard entry points for the services required to run CP/M such as character I/O or accessing a disk block. Since support for serial communication to a modem is very rudimentary in the BIOS or may be absent altogether, it is common practice for CP/M programs that use modems to have a user-installed overlay containing all the code required to access a particular machine's serial port.
Applications
WordStar, one of the first widely used word processors, and dBase, an early and popular database program for microcomputers, were originally written for CP/M. Two early outliners, KAMAS (Knowledge and Mind Amplification System) and its cut-down successor Out-Think (without programming facilities and retooled for 8080/V20 compatibility) were also written for CP/M, though later rewritten for MS-DOS. Turbo Pascal, the ancestor of Borland Delphi, and Multiplan, the ancestor of Microsoft Excel, also debuted on CP/M before MS-DOS versions became available. VisiCalc, the first-ever spreadsheet program, was made available for CP/M. Another company, Sorcim, created its SuperCalc spreadsheet for CP/M, which would go on to become the market leader and de facto standard on CP/M. SuperCalc would go on to be a competitor in the spreadsheet market in the MS-DOS world. AutoCAD, a CAD application from Autodesk, debuted on CP/M. A host of compilers and interpreters for popular programming languages of the time (such as BASIC, Borland's Turbo Pascal, FORTRAN and even PL/I) were available, among them several of the earliest Microsoft products.
CP/M software often came with installers that adapted it to a wide variety of computers. The source code for BASIC programs was easily accessible, and most forms of copy protection were ineffective on the operating system. A Kaypro II owner, for example, would obtain software on Xerox 820 format, then copy it to and run it from Kaypro-format disks.
The lack of standardized graphics support limited video games, but various character and text-based games were ported, such as Telengard, Gorillas, Hamurabi, Lunar Lander, along with early interactive fiction including the Zork series and Colossal Cave Adventure. Text adventure specialist Infocom was one of the few publishers to consistently release their games in CP/M format. Lifeboat Associates started collecting and distributing user-written "free" software. One of the first was XMODEM, which allowed reliable file transfers via modem and phone line. Another program native to CP/M was the outline processor KAMAS.
Transient Program Area
The read/write memory between address 0100 hexadecimal and the lowest address of the BDOS was the Transient Program Area (TPA) available for CP/M application programs. Although all Z80 and 8080 processors could address 64 kilobytes of memory, the amount available for application programs could vary, depending on the design of the particular computer. Some computers used large parts of the address space for such things as BIOS ROMs, or video display memory. As a result, some systems had more TPA memory available than others. Bank switching was a common technique that allowed systems to have a large TPA while switching out ROM or video memory space as needed. CP/M 3.0 allowed parts of the BDOS to be in bank-switched memory as well.
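Since the TPA runs from 0100H up to the base of the BDOS, the space available to a transient program follows directly from where the system components sit in a given machine. The addresses in this sketch are illustrative, not taken from any specific computer:

```python
# The TPA occupies memory from 0100H up to the base of the resident BDOS,
# so its size varies with each machine's memory layout.
TPA_BASE = 0x0100

def tpa_size(bdos_base: int) -> int:
    """Bytes available to a transient program on this configuration."""
    return bdos_base - TPA_BASE

# A hypothetical 64 KB machine with the BDOS based at E400H:
print(tpa_size(0xE400))   # 58112 bytes, i.e. about 56.75 KB
```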
Debugging application
CP/M came with a Dynamic Debugging Tool, nicknamed DDT (after the insecticide, i.e. a bug-killer), which allowed memory and program modules to be examined and manipulated, and allowed a program to be executed one step at a time.
Resident programs
CP/M originally did not support the equivalent of terminate and stay resident (TSR) programs as under DOS. Programmers could write software that could intercept certain operating system calls and extend or alter their functionality. Using this capability, programmers developed and sold auxiliary desk accessory programs, such as SmartKey, a keyboard utility to assign any string of bytes to any key.
CP/M 3, however, added support for dynamically loadable Resident System Extensions (RSX). A so-called null command file could be used to allow CCP to load an RSX without a transient program. Similar solutions like RSMs (for Resident System Modules) were also retrofitted to CP/M 2.2 systems by third-parties.
Software installation
Although CP/M provided some hardware abstraction to standardize the interface to disk I/O or console I/O, application programs still typically required installation to make use of all the features of such equipment as printers and terminals. Often these were controlled by escape sequences which had to be altered for different devices. For example, the escape sequence to select bold face on a printer would have differed among manufacturers, and sometimes among models within a manufacturer's range. This procedure was not defined by the operating system; a user would typically run an installation program that would either allow selection from a range of devices, or else allow feature-by-feature editing of the escape sequences required to access a function. This had to be repeated for each application program, since there was no central operating system service provided for these devices.
The initialization codes for each model of printer had to be written into the application. To use a program such as Wordstar with more than one printer (say, a fast dot-matrix printer or a slower but presentation-quality daisy wheel printer), a separate version of Wordstar had to be prepared, and one had to load the Wordstar version that corresponded to the printer selected (and exiting and reloading to change printers).
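The installation data an application carried amounted to a per-printer table of control sequences. In this sketch the Epson entries use the real ESC/P emphasized-mode codes (ESC E on, ESC F off), while the daisy-wheel entries are hypothetical placeholders standing in for whatever sequences the installer captured:

```python
# Sketch of per-printer escape-sequence tables, as captured by an
# application's installation program. Epson codes are real ESC/P;
# the daisy-wheel values are hypothetical placeholders.
PRINTERS = {
    "epson-mx80": {"bold_on": b"\x1bE", "bold_off": b"\x1bF"},   # ESC E / ESC F
    "daisywheel": {"bold_on": b"\x1bW", "bold_off": b"\x1b&"},   # hypothetical
}

def emphasize(text: str, printer: str) -> bytes:
    """Wrap text in the selected printer's bold-on/bold-off sequences."""
    seq = PRINTERS[printer]
    return seq["bold_on"] + text.encode("ascii") + seq["bold_off"]

print(emphasize("TOTAL", "epson-mx80"))  # b'\x1bETOTAL\x1bF'
```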
Disk formats
IBM System/34 and IBM 3740's 128 byte/sector, single-density, single-sided format is CP/M's standard 8-inch floppy-disk format. No standard 5.25-inch CP/M disk format exists, with Kaypro, Morrow Designs, Osborne, and others each using their own. Certain formats were more popular than others. Most software was available in the Xerox 820 format, and other computers such as the Kaypro II were compatible with it, but InfoWorld estimated in September 1981 that "about two dozen formats were popular enough that software creators had to consider them to reach the broadest possible market". JRT Pascal, for example, provided versions on 5.25-inch disk for North Star, Osborne, Apple, Heath/Zenith hard sector and soft sector, and Superbrain, and one 8-inch version. Ellis Computing also offered its software for both Heath formats, and 16 other 5.25-inch formats including two different TRS-80 CP/M modifications.
Various formats were used depending on the characteristics of particular systems and to some degree the choices of the designers. CP/M supported options to control the size of reserved and directory areas on the disk, and the mapping between logical disk sectors (as seen by CP/M programs) and physical sectors as allocated on the disk. There were many ways to customize these parameters for every system but once they had been set, no standardized way existed for a system to load parameters from a disk formatted on another system.
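CP/M described each drive's geometry through a per-drive table of parameters (the Disk Parameter Block). As a rough sketch of how such parameters determine capacity: the field names below follow common CP/M documentation, but the class itself and the sample values are illustrative assumptions, not a real system's tables.

```python
# Sketch of how CP/M-style disk parameters determine usable capacity.
# Field names follow common Disk Parameter Block documentation; the
# class and the sample values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DiskParameterBlock:
    spt: int  # 128-byte records per track
    bsh: int  # block shift factor: allocation block = 128 << bsh bytes
    dsm: int  # highest allocation block number (block count - 1)
    drm: int  # highest directory entry number
    off: int  # reserved (system) tracks before the directory

    @property
    def block_size(self) -> int:
        return 128 << self.bsh

    @property
    def data_capacity(self) -> int:
        """Bytes in the data area: number of blocks times block size."""
        return (self.dsm + 1) * self.block_size

# Values resembling the standard IBM 8-inch SSSD format: 26 sectors per
# track, 1 KB blocks, 243 blocks, 64 directory entries, 2 system tracks.
ibm_8in = DiskParameterBlock(spt=26, bsh=3, dsm=242, drm=63, off=2)
print(ibm_8in.block_size, ibm_8in.data_capacity)  # 1024 248832
```

Once parameters like these were compiled into a system's BIOS, a disk formatted elsewhere carried no copy of them, which is exactly why formats proliferated.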
While almost every CP/M system with 8-inch drives can read the aforementioned IBM single-sided, single-density format, for other formats the degree of portability between different CP/M machines depends on the type of disk drive and controller used, since many different floppy types existed in the CP/M era in both 8-inch and 5.25-inch sizes. Disks can be hard or soft sectored, single or double density, single or double sided, 35 track, 40 track, 77 track, or 80 track, and the sector layout, size and interleave can vary widely as well.

Although translation programs can allow the user to read disk types from different machines, the drive type and controller are also factors. By 1982, soft-sector, single-sided, 40-track 5.25-inch disks had become the most popular format to distribute CP/M software on, as they were used by the most common consumer-level machines of that time, such as the Apple II, TRS-80, Osborne 1, Kaypro II, and IBM PC. A translation program allows the user to read any disks on his machine that have a similar format; for example, the Kaypro II can read TRS-80, Osborne, IBM PC, and Epson disks. Other disk types, such as 80 track or hard sectored, are completely impossible to read. The first half of double-sided disks (like those of the Epson QX-10) can be read because CP/M accessed disk tracks sequentially, with track 0 being the first (outermost) track of side 1 and track 79 (on a 40-track disk) being the last (innermost) track of side 2. Apple II users are unable to use anything but Apple's GCR format and so have to obtain CP/M software on Apple-format disks or else transfer it via serial link.
The fragmented CP/M market, requiring distributors either to stock multiple formats of disks or to invest in multiformat duplication equipment, compared with the more standardized IBM PC disk formats, was a contributing factor to the rapid obsolescence of CP/M after 1981.
One of the last notable CP/M-capable machines to appear was the Commodore 128 in 1985, which had a Z80 for CP/M support in addition to its native mode using a 6502-derivative CPU. Using CP/M required either a 1571 or 1581 disk drive which could read soft-sector 40-track MFM-format disks.
The first computer to use a 3.5-inch floppy drive, the Sony SMC-70, ran CP/M 2.2. The Commodore 128, Bondwell-2 laptop, Micromint/Ciarcia SB-180, MSX and TRS-80 Model 4 (running Montezuma CP/M 2.2) also supported the use of CP/M with 3.5-inch floppy disks. CP/AM, Applied Engineering's version of CP/M for the Apple II, also supported 3.5-inch disks (as well as RAM disks on RAM cards compatible with the Apple II Memory Expansion Card). The Amstrad PCW ran CP/M using 3-inch floppy drives at first, and later switched to the 3.5 inch drives.
File system
File names were specified as a string of up to eight characters, followed by a period, followed by a file name extension of up to three characters ("8.3" filename format). The extension usually identified the type of the file. For example, .COM indicated an executable program file, and .TXT indicated a file containing ASCII text. Characters in filenames entered at the command prompt were converted to upper case, but this was not enforced by the operating system. Programs (MBASIC is a notable example) were able to create filenames containing lower-case letters, which then could not easily be referenced at the command line.
Each disk drive was identified by a drive letter, for example, drive A and drive B. To refer to a file on a specific drive, the drive letter was prefixed to the file name, separated by a colon, e.g., A:FILE.TXT. With no drive letter prefixed, access was to files on the current default drive.
File size was specified as the number of 128-byte records (directly corresponding to disk sectors on 8-inch drives) occupied by a file on the disk. There was no generally supported way of specifying byte-exact file sizes. The current size of a file was maintained in the file's File Control Block (FCB) by the operating system. Since many application programs (such as text editors) prefer to deal with files as sequences of characters rather than as sequences of records, by convention text files were terminated with a control-Z character (ASCII SUB, hexadecimal 1A). Determining the end of a text file therefore involved examining the last record of the file to locate the terminating control-Z. This also meant that inserting a control-Z character into the middle of a file usually had the effect of truncating the text contents of the file.
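The record-oriented sizing and the control-Z convention are easy to illustrate. A minimal sketch (the function name is ours, not a CP/M API):

```python
# CP/M text files fill whole 128-byte records, so the real end of the
# text is marked by the first control-Z (ASCII SUB, 0x1A).
CTRL_Z = 0x1A

def text_from_records(data: bytes) -> str:
    """Return the text portion of a CP/M file image, cut at the first ^Z."""
    end = data.find(CTRL_Z)
    if end != -1:
        data = data[:end]
    return data.decode("ascii", errors="replace")

# One 128-byte record whose text ends before the record does:
record = b"HELLO, WORLD\r\n" + bytes([CTRL_Z]) + b"\x00" * 113
assert len(record) == 128
print(text_from_records(record).rstrip())  # HELLO, WORLD
```

This also shows why writing a 0x1A byte into the middle of a file effectively truncated it: every reader honoring the convention stops there.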
With the advent of larger removable and fixed disk drives, disk de-blocking formulas were employed which resulted in more disk blocks per logical file allocation block. While this allowed for larger file sizes, it also meant that the smallest file which could be allocated increased in size from 1 KB (on single-density drives) to 2 KB (on double-density drives) and so on, up to 32 KB for a file containing only a single byte. This made for inefficient use of disk space if the disk contained a large number of small files.
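The waste follows directly from rounding every file up to whole allocation blocks; a quick back-of-envelope check (function name ours):

```python
# Space consumed when file sizes are rounded up to whole allocation blocks.
def allocated_size(file_bytes: int, block_size: int) -> int:
    blocks = max(1, -(-file_bytes // block_size))  # ceiling division
    return blocks * block_size

# A one-byte file on 1 KB, 2 KB and 32 KB allocation blocks:
print([allocated_size(1, kb * 1024) for kb in (1, 2, 32)])
# [1024, 2048, 32768]
```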
File modification time stamps were not supported in releases up to CP/M 2.2, but were an optional feature in MP/M and CP/M 3.0.
CP/M 2.2 had no subdirectories in the file structure, but provided 16 numbered user areas to organize files on a disk. To change user one had to simply type "User X" at the command prompt, X being the user number. Security was non-existent and considered unnecessary on a personal computer. The user area concept was to make the single-user version of CP/M somewhat compatible with multi-user MP/M systems. A common patch for the CP/M and derivative operating systems was to make one user area accessible to the user independent of the currently set user area. A USER command allowed the user area to be changed to any area from 0 to 15. User 0 was the default. If one changed to another user, such as USER 1, the material saved on the disk for this user would only be available to USER 1; USER 2 would not be able to see it or access it. However, files stored in the USER 0 area were accessible to all other users; their location was specified with a prefatory path, since the files of USER 0 were only visible to someone logged in as USER 0. The user area feature arguably had little utility on small floppy disks, but it was useful for organizing files on machines with hard drives. The intent of the feature was to ease use of the same computer for different tasks. For example, a secretary could do data entry, then, after switching USER areas, another employee could use the machine to do billing without their files intermixing.
Graphics
Although graphics-capable S-100 systems existed from the commercialization of the S-100 bus, CP/M did not provide any standardized graphics support until 1982 with GSX (Graphics System Extension). Owing to the small amount of available memory, graphics was never a common feature associated with 8-bit CP/M operating systems. Most systems could only display rudimentary ASCII art charts and diagrams in text mode or by using a custom character set. Some computers in the Kaypro line and the TRS-80 Model 4 had video hardware supporting block graphics characters, and these were accessible to assembler programmers and BASIC programmers using the CHR$ command. The Model 4 could display 640 by 240 pixel graphics with an optional high resolution board.
Derivatives
Official
Some companies made official enhancements of CP/M based on Digital Research source code.
An example is IMDOS for the IMSAI 8080 computer made by IMS Associates, Inc., itself a clone of the famous Altair 8800.
Compatible
Other CP/M compatible OSes were developed independently and made no use of Digital Research code. Some contemporary examples were:
Cromemco CDOS from Cromemco
MSX-DOS for the MSX range of computers is CP/M-compatible and can run CP/M programs.
The Epson QX-10 shipped with a choice of CP/M or the compatible TPM-II or TPM-III.
The British ZX Spectrum compatible SAM Coupé had an optional CP/M-2.2 compatible OS called Pro-DOS.
The Amstrad/Schneider CPC series 6xx (disk-based) and PCW series computers were bundled with a CP/M disk pack.
The Husky portable computer ran a ROM-based menu-driven program loader called DEMOS which could run many CP/M applications.
ZSDOS is a replacement BDOS for CP/M-80 2.2 written by Harold F. Bower and Cameron W. Cotrill.
CPMish is a new FOSS CP/M 2.2-compatible operating system which originally contained no DR code. It includes ZSDOS as its BDOS and ZCPR as the command processor. Since Bryan Sparks, the president of DR owner Lineo, granted permission in 2022 to modify and redistribute CP/M code, developer David Given has been updating CPMish with some parts of the original DR CP/M.
LokiOS is a CP/M 2.2-compatible OS. Version 0.9 was publicly released in 2023 by David Kitson as a solo-written operating-system exercise intended for the Open Spectrum Project; it includes source code for the BIOS, BDOS and command-line interface as well as other supporting applications and drivers. The distribution also includes original DR source code and a utility that allows users to hot-swap OS components (e.g., BDOS, CCP) on the fly.
IS-DOS for the Enterprise computers, written by Intelligent Software.
VT-DOS for the Videoton TV Computer, written by Intelligent Software.
Enhancements
Some CP/M-compatible operating systems extended the basic functionality to the point of far exceeding the original, for example the multiprocessor-capable TurboDOS.
Eastern bloc
A number of CP/M-80 derivatives existed in the former Eastern Bloc under various names, including SCP, SCP/M, CP/A, CP/J, CP/KC, CP/KSOB, CP/L, CP/Z, MICRODOS, BCU880, ZOAZ, OS/M, TOS/M, ZSDOS, M/OS, COS-PSA, DOS-PSA, CSOC, CSOS, CZ-CPM, DAC, HC and others. There were also CP/M-86 derivatives named SCP1700, CP/K and K8918-OS. They were produced by the East German VEB Robotron and others.
Legacy
A number of behaviors exhibited by Microsoft Windows are a result of backward compatibility with MS-DOS, which in turn attempted some backward compatibility with CP/M. The drive letter and 8.3 filename conventions in MS-DOS (and early Windows versions) were originally adopted from CP/M. The wildcard matching characters used by Windows (? and *) are based on those of CP/M, as are the reserved filenames used to redirect output to a printer ("PRN:"), and the console ("CON:"). The drive names A and B were used to designate the two floppy disk drives that CP/M systems typically used; when hard drives appeared, they were designated C, which survived into MS-DOS as the C:\> command prompt. The control character ^Z marking the end of some text files can also be attributed to CP/M. Various commands in DOS were modelled after CP/M commands; some of them even carried the same name, like DIR, REN/RENAME, or TYPE (and ERA/ERASE in DR-DOS). File extensions like .TXT or .COM are still used to identify file types on many operating systems.
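The ? and * wildcard semantics that Windows inherited by way of MS-DOS can be approximated with Python's fnmatch module. Note this is only an analogy: CP/M actually matched the 8-character name and 3-character extension fields separately.

```python
# Approximating CP/M/DOS-style wildcards: ? matches exactly one
# character, * matches any run of characters.
from fnmatch import fnmatchcase

files = ["STAT.COM", "PIP.COM", "README.TXT"]
print([f for f in files if fnmatchcase(f, "*.COM")])   # ['STAT.COM', 'PIP.COM']
print([f for f in files if fnmatchcase(f, "P??.COM")]) # ['PIP.COM']
```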
In 1997 and 1998, Caldera released some CP/M 2.2 binaries and source code under an open source license, also allowing the redistribution and modification of further collected Digital Research files related to the CP/M and MP/M families through Tim Olmstead's "The Unofficial CP/M Web site" since 1997. After Olmstead's death on 12 September 2001, the distribution license was refreshed and expanded by Lineo, who had meanwhile become the owner of those Digital Research assets, on 19 October 2001.
In October 2014, to mark the 40th anniversary of the first presentation of CP/M, the Computer History Museum released early source code versions of CP/M.
There are still a number of active vintage, hobby and retro-computing people and groups, and some small commercial businesses, developing and supporting computer platforms that use CP/M (mostly 2.2) as the host operating system.
See also
Amstrad CP/M Plus character set
CPMulator
CP/NET and CP/NOS
Cromemco DOS, an operating system independently derived from CP/M
Eagle Computer
IMDOS
List of machines running CP/M
MP/M
MP/NET and MP/NOS
Multiuser DOS
Pascal/MT+
SpeedStart CP/M
86-DOS
Kenbak-1
References
Further reading
(NB. This PBS series includes the details of IBM's choice of Microsoft DOS over Digital Research's CP/M for the IBM PC)
External links
The Unofficial CP/M Web site (founded by Tim Olmstead) - Includes source code
Gaby Chaudry's Homepage for CP/M and Computer History - includes ZCPR materials
CP/M Main Page - John C. Elliott's technical information site
MaxFrame's Digital Research CP/M page
CP/M variants
Microcomputer software
Disk operating systems
Digital Research operating systems
Discontinued operating systems
Floppy disk-based operating systems
Free software operating systems
History of computing
1974 software
Formerly proprietary software
Peridot

Peridot, sometimes called chrysolite, is a yellow-green transparent variety of olivine. Peridot is one of the few gemstones that occur in only one color.
Peridot can be found in mafic and ultramafic rocks occurring in lava and peridotite xenoliths of the mantle. The gem occurs in silica-deficient rocks such as volcanic basalt and pallasitic meteorites. Along with diamonds, peridot is one of only two gems observed to be formed not in Earth's crust, but in the molten rock of the upper mantle. Gem-quality peridot is rare on Earth's surface due to its susceptibility to alteration during its movement from deep within the mantle and weathering at the surface. Peridot has the chemical formula (Mg,Fe)2SiO4.
Peridot is one of the birthstones for the month of August.
Etymology
The origin of the name peridot is uncertain. The Oxford English Dictionary suggests an alteration of an Anglo-Norman word (from classical Latin) denoting a kind of opal, rather than a derivation from the Arabic word meaning "gemstone".
The Middle English Dictionary's entry on peridot records several spelling variants, some substituting the letter y for i.
The earliest use of the word in English is possibly in the 1705 register of St. Albans Abbey. The entry is in Latin, with the English translation given as peridot. It records that on his death in 1245, Bishop John bequeathed various items, including peridot gems, to the Abbey.
Appearance
Peridot is one of the few gemstones that occur in only one color: an olive-green. The intensity and tint of the green, however, depends on the percentage of iron in the crystal structure, so the color of individual peridot gems can vary from yellow, to olive, to brownish-green. In rare cases, peridot may have a medium-dark toned, pure green with no secondary yellow hue or brown mask. Lighter-colored gems are due to lower iron concentrations.
Mineral properties
Crystal structure
The molecular structure of peridot consists of isomorphic olivine, silicate, magnesium and iron in an orthorhombic crystal system. In an alternative view, the atomic structure can be described as a hexagonal, close-packed array of oxygen ions with half of the octahedral sites occupied by magnesium or iron ions and one-eighth of the tetrahedral sites occupied by silicon ions.
Surface property
Oxidation of peridot does not occur at natural surface temperatures and pressures, but begins to occur slowly at elevated temperatures, with rates increasing with temperature. The oxidation of the olivine occurs by an initial breakdown of the fayalite component, and subsequent reaction with the forsterite component, to give magnetite and orthopyroxene.
Occurrence
Geologically
Olivine, of which peridot is a type, is a common mineral in mafic and ultramafic rocks, often found in lava and in peridotite xenoliths of the mantle, which lava carries to the surface; however, gem-quality peridot occurs in only a fraction of these settings. Peridots can also be found in meteorites.
Peridots can be differentiated by size and composition. A peridot formed as a result of volcanic activity tends to contain higher concentrations of lithium, nickel and zinc than those found in meteorites.
Olivine is an abundant mineral, but gem-quality peridot is rather rare due to its chemical instability on Earth's surface. Olivine is usually found as small grains and tends to exist in a heavily weathered state, unsuitable for decorative use. Large crystals of forsterite, the variety most often used to cut peridot gems, are rare; as a result, peridot is considered to be precious.
In the ancient world, the mining of peridot, then called topazios, on St. John's Island in the Red Sea began about 300 BC.
The principal source of peridot olivine today is the San Carlos Apache Indian Reservation in Arizona.
It is also mined at another location in Arizona, and in Arkansas, Hawaii, Nevada, and New Mexico at Kilbourne Hole, in the US; and in Australia, Brazil, China, Egypt, Kenya, Mexico, Myanmar (Burma), Norway, Pakistan, Saudi Arabia, South Africa, Sri Lanka, and Tanzania.
In meteorites
Peridot crystals have been collected from some pallasite meteorites. The most commonly studied pallasitic peridot belongs to the Indonesian Jeppara meteorite, but others exist such as the Brenham, Esquel, Fukang, and Imilac meteorites.
Pallasitic (extraterrestrial) peridot differs chemically from its earthbound counterpart, in that pallasitic peridot lacks nickel.
Gemology
Orthorhombic minerals, like peridot, have biaxial birefringence defined by three principal refractive indices, α, β and γ. Refractive index readings of faceted gems range around α = 1.651, β = 1.668, and γ = 1.689, with a biaxial positive birefringence of 0.037–0.038. With decreasing magnesium and increasing iron concentration, the specific gravity, color darkness and refractive indices increase, and the β index shifts toward the γ index. Increasing iron concentration ultimately forms the iron-rich end-member of the olivine solid solution series, fayalite.
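The quoted birefringence is simply the spread between the highest and lowest principal refractive indices, which the figures in this paragraph bear out:

```python
# Biaxial birefringence is the gap between the highest and lowest
# principal refractive indices, using the peridot values above.
alpha, beta, gamma = 1.651, 1.668, 1.689
print(round(gamma - alpha, 3))  # 0.038
```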
A study of Chinese peridot gem samples determined the hydrostatic specific gravity to be 3.36. Visible-light spectroscopy of the same samples showed absorption bands between 481.0 and 493.0 nm, with the strongest absorption at 492.0 nm.
The largest cut peridot olivine is a specimen in the gem collection of the Smithsonian Museum in Washington, D.C.
Inclusions are common in peridot crystals but their presence depends on the location where it was found and the geological conditions that led to its crystallization.
Primary negative crystals – rounded gas bubbles – form in situ with peridot, and are common in Hawaiian peridots.
Secondary negative crystals form in peridot fractures.
"Lily pad" cleavages are often seen in San Carlos peridots, and are a type of secondary negative crystal. They can easily be seen under reflected light as circular discs surrounding a negative crystal.
Silky and rod-like inclusions are common in Pakistani peridots.
The most common mineral inclusion in peridot is the chromium-rich mineral chromite.
Magnesium-rich minerals can also exist, in the form of pyrope and magnesiochromite. These two types of mineral inclusions are typically surrounded by "lily-pad" cleavages.
Biotite flakes appear flat, brown, translucent, and tabular.
Cultural history
Peridot has been prized since the earliest civilizations for its claimed protective powers to drive away fears and nightmares, according to superstitions. There is a superstition that it carries the gift of "inner radiance", sharpening the mind and opening it to new levels of awareness and growth, helping one to recognize and realize one's destiny and spiritual purpose. (There is no scientific evidence for any such claims.)
Peridot olivine is the birthstone for the month of August.
Peridot has often been mistaken for emerald beryl and other green gems. Noted gemologist G.F. Kunz discussed the confusion between beryl and peridot in many church treasures, most notably the "Three Magi treasure" in the Dom of Cologne, Germany.
Gallery
Footnotes
References
External links
Ganoksin
Mineralminers
USGS peridot data
Emporia Edu
Florida State University – Peridot
Gemstones
Silicate minerals
Orthoclase

Orthoclase, or orthoclase feldspar (endmember formula KAlSi3O8), is an important tectosilicate mineral which forms igneous rock. The name is from the Ancient Greek for "straight fracture", because its two cleavage planes are at right angles to each other. It is a type of potassium feldspar, also known as K-feldspar. The gem known as moonstone (see below) is largely composed of orthoclase.
Formation and subtypes
Orthoclase is a common constituent of most granites and other felsic igneous rocks and often forms huge crystals and masses in pegmatite.
Typically, the pure potassium endmember of orthoclase forms a solid solution with albite, the sodium endmember (NaAlSi3O8) of plagioclase. As the rock slowly cools within the earth, sodium-rich albite lamellae form by exsolution, enriching the remaining orthoclase with potassium. The resulting intergrowth of the two feldspars is called perthite.
The higher-temperature polymorph of KAlSi3O8 is sanidine. Sanidine is common in rapidly cooled volcanic rocks such as obsidian and felsic pyroclastic rocks, and is notably found in trachytes of the Drachenfels, Germany. The lower-temperature polymorph of KAlSi3O8 is microcline.
Adularia is a low temperature form of either microcline or orthoclase originally reported from the low temperature hydrothermal deposits in the Adula Alps of Switzerland. It was first described by Ermenegildo Pini in 1781. The optical effect of adularescence in moonstone is typically due to adularia.
The largest documented single crystal of orthoclase was found in the Ural Mountains in Russia. It measured around and weighed around .
Applications
Together with the other potassium feldspars, orthoclase is a common raw material for the manufacture of some glasses and some ceramics such as porcelain, and as a constituent of scouring powder.
Some intergrowths of orthoclase and albite have an attractive pale luster and are called moonstone when used in jewelry. Most moonstones are translucent and white, although grey and peach-colored varieties also occur. In gemology, their luster is called adularescence and is typically described as creamy or silvery white with a "billowy" quality. It is the state gem of Florida.
The gemstone commonly called rainbow moonstone is more properly a colorless form of labradorite and can be distinguished from "true" moonstone by its greater transparency and play of color, although their value and durability do not greatly differ.
Orthoclase is one of the ten defining minerals of the Mohs scale of mineral hardness, on which it is listed as having a hardness of 6.
NASA's Curiosity rover discovery of high levels of orthoclase in Martian sandstones suggested that some Martian rocks may have experienced complex geological processing, such as repeated melting.
See also
List of minerals
Schiller, optical effect
References
Potassium minerals
Aluminium minerals
Tectosilicates
Monoclinic minerals
Minerals in space group 12
Feldspar
Gemstones
Microcline

Microcline (KAlSi3O8) is an important igneous rock-forming tectosilicate mineral. It is a potassium-rich alkali feldspar. Microcline typically contains minor amounts of sodium. It is common in granite and pegmatites. Microcline forms during slow cooling of orthoclase; it is more stable at lower temperatures than orthoclase. Sanidine is a polymorph of alkali feldspar stable at yet higher temperature. Microcline may be clear, white, pale-yellow, brick-red, or green; it is generally characterized by cross-hatch twinning that forms as a result of the transformation of monoclinic orthoclase into triclinic microcline.
The chemical compound name is potassium aluminium silicate, and it is known as E number reference E555.
Geology
Microcline may be chemically the same as monoclinic orthoclase, but because it belongs to the triclinic crystal system, the prism angle is slightly less than right angles; hence the name "microcline" from the Greek "small slope". It is a fully ordered triclinic modification of potassium feldspar and is dimorphous with orthoclase. Microcline is identical to orthoclase in many physical properties, and can be distinguished by x-ray or optical examination. When viewed under a polarizing microscope, microcline exhibits a minute multiple twinning which forms a grating-like structure that is unmistakable.
Perthite is either microcline or orthoclase with thin lamellae of exsolved albite.
Amazon stone, or amazonite, is a green variety of microcline. It is not found anywhere in the Amazon Basin, however. The Spanish explorers who named it apparently confused it with another green mineral from that region.
The largest documented single crystals of microcline were found in Devil's Hole Beryl Mine, Colorado, US and measured ~50 × 36 × 14 m. This could be one of the largest crystals of any material found so far.
Microcline is an exceptionally active ice-nucleating agent in the atmosphere. Only recently has it become possible to understand how water binds to the microcline surface.
As food additive
The chemical compound name is potassium aluminium silicate, and it is known as E number reference E555. It was the subject in 2018 of a Call for technical and toxicological data from the EFSA.
In 2008, it (along with other aluminium compounds) was the subject of a Scientific Opinion of the Panel on Food Additives, Flavourings, Processing Aids and Food Contact Materials from the EFSA.
See also
List of minerals
References
Alkali feldspars U. Texas
Mindat
Potassium minerals
Aluminium minerals
Tectosilicates
Triclinic minerals
Feldspar
Luminescent minerals
E-number additives
Minerals in space group 2
Negative binomial distribution

In probability theory and statistics, the negative binomial distribution is a discrete probability distribution that models the number of failures in a sequence of independent and identically distributed Bernoulli trials before a specified (fixed) number of successes occurs. For example, we can define rolling a 6 on a die as a success, and rolling any other number as a failure, and ask how many failure rolls will occur before we see the third success (r = 3). In such a case, the probability distribution of the number of failures that appear will be a negative binomial distribution.
An alternative formulation is to model the number of total trials (instead of the number of failures). In fact, for a specified (non-random) number of successes (r), the number of failures (n − r) is random because the number of total trials (n) is random. For example, we could use the negative binomial distribution to model the number of days n (random) a certain machine works (specified by r) before it breaks down.
The Pascal distribution (after Blaise Pascal) and Polya distribution (for George Pólya) are special cases of the negative binomial distribution. A convention among engineers, climatologists, and others is to use "negative binomial" or "Pascal" for the case of an integer-valued stopping-time parameter () and use "Polya" for the real-valued case.
For occurrences of associated discrete events, like tornado outbreaks, the Polya distributions can be used to give more accurate models than the Poisson distribution by allowing the mean and variance to be different, unlike the Poisson. The negative binomial distribution has a variance \( \mu/p \), where \( \mu \) is the mean, with the distribution becoming identical to Poisson in the limit \( p \to 1 \) for a given mean (i.e. when the failures are increasingly rare). This can make the distribution a useful overdispersed alternative to the Poisson distribution, for example for a robust modification of Poisson regression. In epidemiology, it has been used to model disease transmission for infectious diseases where the likely number of onward infections may vary considerably from individual to individual and from setting to setting. More generally, it may be appropriate where events have positively correlated occurrences causing a larger variance than if the occurrences were independent, due to a positive covariance term.
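The convergence to the Poisson distribution can be checked numerically. A sketch, using the failure-counting convention of this article (k failures before the r-th success, success probability p, so the mean is r(1 − p)/p):

```python
# As p -> 1 with the mean held fixed, negative binomial probabilities
# approach Poisson probabilities with the same mean.
from math import comb, exp, factorial

def nb_pmf(k, r, p):
    """P(k failures before the r-th success), success probability p."""
    return comb(k + r - 1, k) * (1 - p) ** k * p ** r

def poisson_pmf(k, mu):
    return exp(-mu) * mu ** k / factorial(k)

mu = 2.0
for p in (0.5, 0.9, 0.99):
    r = round(mu * p / (1 - p))  # keeps the mean r * (1 - p) / p at mu
    print(p, abs(nb_pmf(2, r, p) - poisson_pmf(2, mu)))
# The difference shrinks as p approaches 1 (failures increasingly rare).
```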
The term "negative binomial" is likely due to the fact that a certain binomial coefficient that appears in the formula for the probability mass function of the distribution can be written more simply with negative numbers.
Definitions
Imagine a sequence of independent Bernoulli trials: each trial has two potential outcomes called "success" and "failure." In each trial the probability of success is p and of failure is 1 − p. We observe this sequence until a predefined number r of successes occurs. Then the random number of observed failures, X, follows the negative binomial (or Pascal) distribution, written X ~ NB(r, p).
Probability mass function
The probability mass function of the negative binomial distribution is

\[ f(k; r, p) \equiv \Pr(X = k) = \binom{k+r-1}{k} (1-p)^k p^r, \]

where r is the number of successes, k is the number of failures, and p is the probability of success on each trial.

Here, the quantity in parentheses is the binomial coefficient, and is equal to

\[ \binom{k+r-1}{k} = \frac{(k+r-1)!}{k!\,(r-1)!} = \frac{\Gamma(k+r)}{k!\,\Gamma(r)}. \]

Note that Γ(r) is the Gamma function.
There are k failures chosen from k + r − 1 trials rather than k + r because the last of the k + r trials is by definition a success.
This quantity can alternatively be written in the following manner, explaining the name "negative binomial":

\[ \binom{k+r-1}{k} = \frac{(k+r-1) \dotsm (r+1)\, r}{k!} = (-1)^k \binom{-r}{k}. \]
Note that by the last expression and the binomial series, for every 0 ≤ p < 1 and q = 1 − p,

\[ p^{-r} = (1-q)^{-r} = \sum_{k=0}^{\infty} \binom{-r}{k} (-q)^k = \sum_{k=0}^{\infty} \binom{k+r-1}{k} q^k, \]

hence the terms of the probability mass function indeed add up to one:

\[ \sum_{k=0}^{\infty} \binom{k+r-1}{k} (1-p)^k p^r = 1. \]
To understand the above definition of the probability mass function, note that the probability for every specific sequence of r successes and k failures is , because the outcomes of the k + r trials are supposed to happen independently. Since the rth success always comes last, it remains to choose the k trials with failures out of the remaining k + r − 1 trials. The above binomial coefficient, due to its combinatorial interpretation, gives precisely the number of all these sequences of length k + r − 1.
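A numerical check that the mass function sums to one, using the dice example from the introduction (failures before the third six, so r = 3 and p = 1/6):

```python
# The negative binomial probability mass function, summed over k.
from math import comb

def nb_pmf(k, r, p):
    return comb(k + r - 1, k) * (1 - p) ** k * p ** r

r, p = 3, 1 / 6  # failures before the third six when rolling a die
total = sum(nb_pmf(k, r, p) for k in range(500))
print(round(total, 9))  # 1.0
```

Truncating the infinite sum at k = 500 is harmless here, since the terms decay geometrically.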
Cumulative distribution function
The cumulative distribution function can be expressed in terms of the regularized incomplete beta function:

\[ F(k; r, p) \equiv \Pr(X \le k) = I_p(r, k+1). \]

(This formula uses the same parameterization as above, with r the number of successes and p the probability of success on each trial.)

It can also be expressed in terms of the cumulative distribution function of the binomial distribution:

\[ F(k; r, p) = 1 - F_{\mathrm{binomial}}(r-1;\, k+r,\, p). \]
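The binomial relation rests on a simple equivalence: the r-th success occurs within k + r trials exactly when at least r successes fall among those trials. It can be verified by brute force (helper names ours):

```python
# Check: P(X <= k) for the negative binomial equals the probability of
# at least r successes in k + r binomial trials.
from math import comb

def nb_cdf(k, r, p):
    return sum(comb(j + r - 1, j) * (1 - p) ** j * p ** r
               for j in range(k + 1))

def binom_cdf(x, n, p):
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(x + 1))

k, r, p = 5, 3, 0.4
print(abs(nb_cdf(k, r, p) - (1 - binom_cdf(r - 1, k + r, p))) < 1e-12)  # True
```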
Alternative formulations
Some sources may define the negative binomial distribution slightly differently from the primary one here. The most common variations change what the random variable X counts (failures or total trials) and whether the number of successes or the number of failures is held fixed. The four resulting definitions are discussed below.
Each of the four definitions of the negative binomial distribution can be expressed in slightly different but equivalent ways. The first alternative formulation is simply an equivalent form of the binomial coefficient, that is, \( \binom{k+r-1}{k} = \binom{k+r-1}{r-1} \). The second alternative formulation somewhat simplifies the expression by recognizing that the total number of trials is simply the number of successes and failures, that is, \( n = k + r \). These second formulations may be more intuitive to understand, but they are perhaps less practical as they have more terms.
The definition where X is the number of n trials that occur for a given number of r successes is similar to the primary definition, except that the number of trials is given instead of the number of failures. This adds r to the value of the random variable, shifting its support and mean.
The definition where X is the number of k successes (or n trials) that occur for a given number of r failures is similar to the primary definition used in this article, except that numbers of failures and successes are switched when considering what is being counted and what is given. Note however, that p still refers to the probability of "success".
The definition of the negative binomial distribution can be extended to the case where the parameter r can take on a positive real value. Although it is impossible to visualize a non-integer number of "failures", we can still formally define the distribution through its probability mass function. The problem of extending the definition to real-valued (positive) r boils down to extending the binomial coefficient to its real-valued counterpart, based on the gamma function:
C(k + r − 1, k) = (k + r − 1)(k + r − 2)⋯(r) / k! = Γ(k + r) / (k! Γ(r)).
After substituting this expression in the original definition, we say that X has a negative binomial (or Pólya) distribution if it has a probability mass function:
f(k; r, p) = P(X = k) = Γ(k + r) / (k! Γ(r)) · p^r (1 − p)^k,  for k = 0, 1, 2, ...
Here r is a real, positive number.
In negative binomial regression, the distribution is specified in terms of its mean, m = r(1 − p)/p, which is then related to explanatory variables as in linear regression or other generalized linear models. From the expression for the mean m, one can derive p = r/(r + m) and 1 − p = m/(r + m). Then, substituting these expressions in the one for the probability mass function when r is real-valued, yields this parametrization of the probability mass function in terms of m:
P(X = k) = Γ(r + k) / (k! Γ(r)) · (r/(r + m))^r · (m/(r + m))^k,  for k = 0, 1, 2, ...
The variance can then be written as m + m^2/r. Some authors prefer to set α = 1/r, and express the variance as m + αm^2. In this context, and depending on the author, either the parameter r or its reciprocal α is referred to as the "dispersion parameter", "shape parameter" or "clustering coefficient", or the "heterogeneity" or "aggregation" parameter. The term "aggregation" is particularly used in ecology when describing counts of individual organisms. Decrease of the aggregation parameter r towards zero corresponds to increasing aggregation of the organisms; increase of r towards infinity corresponds to absence of aggregation, as can be described by Poisson regression.
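The mean/dispersion parameterization can be sanity-checked by converting back to p and comparing the two variance expressions. A small sketch (function name is mine), assuming the standard relations m = r(1 − p)/p and var = r(1 − p)/p^2:

```python
def nb_moments_from_mr(m, r):
    """Convert the (mean, dispersion) parameterization back to p and
    verify the variance identity var = m + m^2 / r."""
    p = r / (r + m)          # from m = r(1 - p)/p
    mean = r * (1 - p) / p   # should recover m
    var = r * (1 - p) / p**2
    return p, mean, var

m, r = 2.5, 4.0
p, mean, var = nb_moments_from_mr(m, r)
print(p)                  # p = r/(r + m)
print(mean)               # recovers m = 2.5
print(var, m + m**2 / r)  # both 4.0625
```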
Alternative parameterizations
Sometimes the distribution is parameterized in terms of its mean μ and variance σ^2:
p = μ/σ^2,  r = μ^2/(σ^2 − μ),  P(X = k) = C(k + r − 1, k) (μ/σ^2)^r (1 − μ/σ^2)^k.
Another popular parameterization uses r and the failure odds β = (1 − p)/p:
P(X = k) = C(k + r − 1, k) (1/(1 + β))^r (β/(1 + β))^k.
Examples
Length of hospital stay
Hospital length of stay is an example of real-world data that can be modelled well with a negative binomial distribution via negative binomial regression.
Selling candy
Pat Collis is required to sell candy bars to raise money for the 6th grade field trip. Pat is (somewhat harshly) not supposed to return home until five candy bars have been sold. So the child goes door to door, selling candy bars. At each house, there is a 0.6 probability of selling one candy bar and a 0.4 probability of selling nothing.
What's the probability of selling the last candy bar at the nth house?
Successfully selling candy enough times is what defines our stopping criterion (as opposed to failing to sell it), so k in this case represents the number of failures and r represents the number of successes. Recall that the NB(r, p) distribution describes the probability of k failures and r successes in k + r Bernoulli(p) trials with success on the last trial. Selling five candy bars means getting five successes. The number of trials (i.e. houses) this takes is therefore k + 5 = n. The random variable we are interested in is the number of houses, so we substitute k = n − 5 into a NB(5, 0.6) mass function and obtain the following mass function of the distribution of houses (for n ≥ 5):
f(n) = C(n − 1, 4) · 0.6^5 · 0.4^(n − 5).
What's the probability of selling the last candy bar at the tenth house?
f(10) = C(9, 4) · 0.6^5 · 0.4^5 ≈ 0.10033.
What's the probability that Pat finishes on or before reaching the eighth house?
To finish on or before the eighth house, Pat must finish at the fifth, sixth, seventh, or eighth house. Sum those probabilities:
f(5) + f(6) + f(7) + f(8) = 0.07776 + 0.15552 + 0.18662 + 0.17418 ≈ 0.59409.
What's the probability that Pat exhausts all 30 houses that happen to stand in the neighborhood?
This can be expressed as the probability that Pat does not finish on the fifth through the thirtieth house:
1 − Σ_{n=5}^{30} f(n).
Because of the rather high probability that Pat will sell to each house (60 percent), the probability of her not fulfilling her quest is vanishingly slim.
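The three answers above can be computed directly from the mass function of houses. A stdlib-only sketch (the function name is mine), using the 0.6 per-house success probability stated in the text:

```python
from math import comb

def pmf_houses(n, r=5, p=0.6):
    # P(the r-th sale happens exactly at house n), for n >= r
    return comb(n - 1, r - 1) * p**r * (1 - p)**(n - r)

p10 = pmf_houses(10)
p_by_8 = sum(pmf_houses(n) for n in range(5, 9))
p_past_30 = 1 - sum(pmf_houses(n) for n in range(5, 31))

print(round(p10, 5))     # → 0.10033
print(round(p_by_8, 5))  # → 0.59409
print(p_past_30)         # vanishingly small, as the text says
```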
Properties
Expectation
The expected total number of trials needed to see r successes is r/p. Thus, the expected number of failures would be this value, minus the successes:
E[X] = r/p − r = r(1 − p)/p.
Expectation of failures
The expected total number of failures in a negative binomial distribution with parameters (r, p) is r(1 − p)/p. To see this, imagine an experiment simulating the negative binomial is performed many times. That is, a set of trials is performed until r successes are obtained, then another set of trials, and then another etc. Write down the number of trials performed in each experiment: a, b, c, ..., and set a + b + c + ... = N. Now we would expect about Np successes in total. Say the experiment was performed n times. Then there are nr successes in total. So we would expect Np = nr, so N/n = r/p. See that N/n is just the average number of trials per experiment. That is what we mean by "expectation". The average number of failures per experiment is N/n − r = r/p − r = r(1 − p)/p. This agrees with the mean given above.
A rigorous derivation can be done by representing the negative binomial distribution as the sum of waiting times. Let X_r ~ NB(r, p), with the convention that X_r represents the number of failures observed before r successes with the probability of success being p. And let Y_i ~ Geom(p), where Y_i represents the number of failures before seeing a success. We can think of Y_i as the waiting time (number of failures) between the (i − 1)th and ith success. Thus
X_r = Y_1 + Y_2 + ... + Y_r.
The mean is
E[X_r] = E[Y_1] + E[Y_2] + ... + E[Y_r] = r(1 − p)/p,
which follows from the fact E[Y_i] = (1 − p)/p.
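The expectation can also be checked by simulation. A seeded Monte Carlo sketch (stdlib only; names are mine) that counts failures before the r-th success and compares the empirical mean with r(1 − p)/p:

```python
import random

def sample_failures(r, p, rng):
    """One negative binomial draw: count failures before the r-th success
    in a stream of Bernoulli(p) trials."""
    successes = failures = 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

rng = random.Random(42)
r, p, n = 5, 0.4, 100_000
avg = sum(sample_failures(r, p, rng) for _ in range(n)) / n
print(avg, r * (1 - p) / p)  # empirical mean vs theoretical 7.5
```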
Variance
When counting the number of failures before the r-th success, the variance is r(1 − p)/p^2.
When counting the number of successes before the r-th failure, as in alternative formulation (3) above, the variance is rp/(1 − p)^2.
Relation to the binomial theorem
Suppose Y is a random variable with a binomial distribution with parameters n and p. Assume p + q = 1, with p, q ≥ 0, then
1 = 1^n = (p + q)^n.
Using Newton's binomial theorem, this can equally be written as:
(p + q)^n = Σ_{k=0}^∞ C(n, k) p^k q^(n − k),
in which the upper bound of summation is infinite. In this case, the binomial coefficient
C(n, k) = n(n − 1)(n − 2)⋯(n − k + 1) / k!
is defined when n is a real number, instead of just a positive integer. But in our case of the binomial distribution it is zero when k > n. We can then say, for example,
(p + q)^8.3 = Σ_{k=0}^∞ C(8.3, k) p^k q^(8.3 − k).
Now suppose r > 0 and we use a negative exponent:
1 = p^r · p^(−r) = p^r (1 − q)^(−r) = p^r Σ_{k=0}^∞ C(−r, k) (−q)^k.
Then all of the terms are positive, and the term
p^r C(−r, k) (−q)^k = C(k + r − 1, k) p^r q^k
is just the probability that the number of failures before the rth success is equal to k, provided r is an integer. (If r is a negative non-integer, so that the exponent is a positive non-integer, then some of the terms in the sum above are negative, so we do not have a probability distribution on the set of all nonnegative integers.)
Now we also allow non-integer values of r. Then we have a proper negative binomial distribution, which is a generalization of the Pascal distribution, which coincides with the Pascal distribution when r happens to be a positive integer.
Recall from above that
The sum of independent negative-binomially distributed random variables r1 and r2 with the same value for parameter p is negative-binomially distributed with the same p but with r-value r1 + r2.
This property persists when the definition is thus generalized, and affords a quick way to see that the negative binomial distribution is infinitely divisible.
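The additivity property can be verified numerically by convolving two negative binomial mass functions and comparing the result with the mass function for r1 + r2. A stdlib-only sketch (function names are mine):

```python
from math import comb

def nb_pmf(k, r, p):
    return comb(k + r - 1, k) * p**r * (1 - p)**k

def conv_pmf(k, r1, r2, p):
    # pmf of X1 + X2 for independent X1 ~ NB(r1, p), X2 ~ NB(r2, p)
    return sum(nb_pmf(j, r1, p) * nb_pmf(k - j, r2, p) for j in range(k + 1))

r1, r2, p = 2, 3, 0.55
for k in range(6):
    assert abs(conv_pmf(k, r1, r2, p) - nb_pmf(k, r1 + r2, p)) < 1e-12
print("NB(2, p) convolved with NB(3, p) matches NB(5, p)")
```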
Recurrence relations
The following recurrence relations hold:
For the probability mass function
For the moments
For the cumulants
Related distributions
The geometric distribution (on { 0, 1, 2, 3, ... }) is a special case of the negative binomial distribution, with r = 1: Geom(p) = NB(1, p).
The negative binomial distribution is a special case of the discrete phase-type distribution.
The negative binomial distribution is a special case of discrete compound Poisson distribution.
Poisson distribution
Consider a sequence of negative binomial random variables where the stopping parameter r goes to infinity, while the probability p of success in each trial goes to one, in such a way as to keep the mean of the distribution (i.e. the expected number of failures) constant. Denoting this mean as λ, the parameter p will be p = r/(r + λ).
Under this parametrization the probability mass function will be
f(k; r, p) = Γ(k + r)/(k! Γ(r)) · p^r (1 − p)^k = (λ^k / k!) · [Γ(r + k)/(Γ(r) (r + λ)^k)] · 1/(1 + λ/r)^r.
Now if we consider the limit as r → ∞, the second factor will converge to one, and the third to the exponent function:
lim_{r→∞} f(k; r, p) = (λ^k / k!) · e^(−λ),
which is the mass function of a Poisson-distributed random variable with expected value λ.
In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r.
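The convergence can be observed numerically: holding the mean λ fixed and letting r grow, the negative binomial mass function approaches the Poisson mass function. A stdlib-only sketch (names are mine):

```python
from math import comb, exp, factorial

def nb_pmf(k, r, p):
    return comb(k + r - 1, k) * p**r * (1 - p)**k

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

lam, k = 4.0, 3
diffs = []
for r in (5, 50, 5000):
    p = r / (r + lam)  # keeps the mean number of failures fixed at lam
    diffs.append(abs(nb_pmf(k, r, p) - poisson_pmf(k, lam)))
print(diffs)  # shrinks toward zero as r grows
```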
Gamma–Poisson mixture
The negative binomial distribution also arises as a continuous mixture of Poisson distributions (i.e. a compound probability distribution) where the mixing distribution of the Poisson rate is a gamma distribution. That is, we can view the negative binomial as a Poisson(λ) distribution, where λ is itself a random variable, distributed as a gamma distribution with shape r and scale θ = p/(1 − p), or correspondingly rate β = (1 − p)/p.
To display the intuition behind this statement, consider two independent Poisson processes, "Success" and "Failure", with intensities p and 1 − p. Together, the Success and Failure processes are equivalent to a single Poisson process of intensity 1, where an occurrence of the process is a success if a corresponding independent coin toss comes up heads with probability p; otherwise, it is a failure. If r is a counting number, the coin tosses show that the count of successes before the rth failure follows a negative binomial distribution with parameters r and p. The count is also, however, the count of the Success Poisson process at the random time T of the rth occurrence in the Failure Poisson process. The Success count follows a Poisson distribution with mean pT, where T is the waiting time for r occurrences in a Poisson process of intensity 1 − p, i.e., T is gamma-distributed with shape parameter r and intensity 1 − p. Thus, the negative binomial distribution is equivalent to a Poisson distribution with mean pT, where the random variate T is gamma-distributed with shape parameter r and intensity 1 − p. The preceding paragraph follows, because λ = pT is gamma-distributed with shape parameter r and intensity (1 − p)/p.
The following formal derivation (which does not depend on r being a counting number) confirms the intuition.
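The mixture claim can be confirmed numerically by integrating the Poisson mass function against a gamma density. A stdlib-only sketch (names and the quadrature grid are mine), using this subsection's convention in which the count is of successes before the rth failure, so the target pmf is C(k + r − 1, k)(1 − p)^r p^k and the assumed gamma scale is p/(1 − p):

```python
from math import comb, exp, gamma, factorial

r, p = 3, 0.4
theta = p / (1 - p)  # gamma scale (assumed, per this subsection's convention)

def gamma_pdf(x):
    return x**(r - 1) * exp(-x / theta) / (gamma(r) * theta**r)

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

def mixture_pmf(k, hi=40.0, steps=100_000):
    # crude midpoint quadrature over the mixing rate lambda
    h = hi / steps
    return sum(poisson_pmf(k, (i + 0.5) * h) * gamma_pdf((i + 0.5) * h) * h
               for i in range(steps))

def nb_pmf(k):
    return comb(k + r - 1, k) * (1 - p)**r * p**k

for k in range(4):
    print(k, round(mixture_pmf(k), 6), round(nb_pmf(k), 6))  # pairs agree
```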
Because of this, the negative binomial distribution is also known as the gamma–Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution.
Distribution of a sum of geometrically distributed random variables
If Yr is a random variable following the negative binomial distribution with parameters r and p, and support {0, 1, 2, ...}, then Yr is a sum of r independent variables following the geometric distribution (on {0, 1, 2, ...}) with parameter p. As a result of the central limit theorem, Yr (properly scaled and shifted) is therefore approximately normal for sufficiently large r.
Furthermore, if Bs+r is a random variable following the binomial distribution with parameters s + r and p, then
In this sense, the negative binomial distribution is the "inverse" of the binomial distribution.
The sum of independent negative-binomially distributed random variables r1 and r2 with the same value for parameter p is negative-binomially distributed with the same p but with r-value r1 + r2.
The negative binomial distribution is infinitely divisible, i.e., if Y has a negative binomial distribution, then for any positive integer n, there exist independent identically distributed random variables Y1, ..., Yn whose sum has the same distribution that Y has.
Representation as compound Poisson distribution
The negative binomial distribution NB(r,p) can be represented as a compound Poisson distribution: Let Y1, Y2, Y3, ... denote a sequence of independent and identically distributed random variables, each one having the logarithmic series distribution Log(p), with probability mass function
f(k) = −p^k / (k ln(1 − p)),  k = 1, 2, 3, ...
Let N be a random variable, independent of the sequence, and suppose that N has a Poisson distribution with mean λ = −r ln(1 − p). Then the random sum
X = Y1 + Y2 + ... + YN
is NB(r,p)-distributed. To prove this, we calculate the probability generating function GX of X, which is the composition of the probability generating functions GN and GY1. Using
GN(z) = exp(λ(z − 1))
and
GY1(z) = ln(1 − pz) / ln(1 − p),
we obtain
GX(z) = GN(GY1(z)) = exp(λ(ln(1 − pz)/ln(1 − p) − 1)) = exp(−r(ln(1 − pz) − ln(1 − p))) = ((1 − p)/(1 − pz))^r,
which is the probability generating function of the NB(r,p) distribution.
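The compound Poisson representation can also be checked by simulation, assuming the standard construction with Y_i ~ Log(p) (pmf −p^k/(k ln(1 − p))) and N ~ Poisson(−r ln(1 − p)); under that construction the sum has mean r p/(1 − p). A seeded stdlib-only sketch (sampler names are mine):

```python
import math, random

r, p = 2.0, 0.35
lam = -r * math.log(1 - p)  # assumed Poisson mean of the standard construction

def sample_log(rng):
    # inverse-CDF draw from the logarithmic series distribution Log(p)
    u, k, cdf = rng.random(), 1, 0.0
    while True:
        cdf += -(p**k) / (k * math.log(1 - p))
        if u <= cdf or k > 10_000:
            return k
        k += 1

def sample_poisson(rng):
    # Knuth's multiplication method (fine for small means)
    target = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= target:
            return k
        k += 1

rng = random.Random(7)
n = 100_000
total = 0
for _ in range(n):
    total += sum(sample_log(rng) for _ in range(sample_poisson(rng)))
emp_mean = total / n
print(emp_mean, r * p / (1 - p))  # theoretical mean is about 1.077
```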
The following four distributions are related to the number of successes in a sequence of draws:
– with replacement, given number of draws: binomial distribution;
– with replacement, given number of failures: negative binomial distribution;
– without replacement, given number of draws: hypergeometric distribution;
– without replacement, given number of failures: negative hypergeometric distribution.
(a,b,0) class of distributions
The negative binomial, along with the Poisson and binomial distributions, is a member of the (a,b,0) class of distributions. All three of these distributions are special cases of the Panjer distribution. They are also members of a natural exponential family.
Statistical inference
Parameter estimation
MVUE for p
Suppose p is unknown and an experiment is conducted where it is decided ahead of time that sampling will continue until r successes are found. A sufficient statistic for the experiment is k, the number of failures.
In estimating p, the minimum variance unbiased estimator is
p̂ = (r − 1)/(r + k − 1).
Maximum likelihood estimation
When r is known, the maximum likelihood estimate of p is
p̂ = r/(r + k),
but this is a biased estimate. Its inverse, (r + k)/r, is an unbiased estimate of 1/p, however.
When r is unknown, the maximum likelihood estimator for p and r together only exists for samples for which the sample variance is larger than the sample mean. The likelihood function for N iid observations (k1, ..., kN) is
L(r, p) = Π_{i=1}^N f(k_i; r, p),
from which we calculate the log-likelihood function
ℓ(r, p) = Σ_{i=1}^N ln Γ(k_i + r) − Σ_{i=1}^N ln(k_i!) − N ln Γ(r) + N r ln p + Σ_{i=1}^N k_i ln(1 − p).
To find the maximum we take the partial derivatives with respect to r and p and set them equal to zero:
∂ℓ/∂p = N r / p − Σ_{i=1}^N k_i / (1 − p) = 0
and
∂ℓ/∂r = Σ_{i=1}^N ψ(k_i + r) − N ψ(r) + N ln p = 0,
where
ψ(x) = Γ′(x)/Γ(x)
is the digamma function.
Solving the first equation for p gives:
p = N r / (N r + Σ_{i=1}^N k_i).
Substituting this in the second equation gives:
Σ_{i=1}^N ψ(k_i + r) − N ψ(r) + N ln( r / (r + Σ_{i=1}^N k_i / N) ) = 0.
This equation cannot be solved for r in closed form. If a numerical solution is desired, an iterative technique such as Newton's method can be used. Alternatively, the expectation–maximization algorithm can be used.
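In practice one maximizes the log-likelihood numerically, profiling p out via p = r/(r + mean). A stdlib-only sketch (function names, the toy data, and the crude grid search are mine; Newton's method or EM would be used in real code):

```python
from math import lgamma, log

def nb_loglik(r, ks):
    # log-likelihood with p profiled out: p = r / (r + mean(ks))
    n, mean = len(ks), sum(ks) / len(ks)
    p = r / (r + mean)
    return (sum(lgamma(k + r) - lgamma(k + 1) for k in ks)
            - n * lgamma(r) + n * r * log(p)
            + sum(k * log(1 - p) for k in ks))

# overdispersed toy data (sample variance > sample mean, so the MLE exists)
ks = [0] * 50 + [1] * 30 + [2] * 10 + [5] * 10

grid = [0.1 + 0.05 * i for i in range(600)]
r_hat = max(grid, key=lambda r: nb_loglik(r, ks))
p_hat = r_hat / (r_hat + sum(ks) / len(ks))
print(r_hat, p_hat)
```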
Occurrence and applications
Waiting time in a Bernoulli process
For the special case where r is an integer, the negative binomial distribution is known as the Pascal distribution. It is the probability distribution of a certain number of failures and successes in a series of independent and identically distributed Bernoulli trials. For k + r Bernoulli trials with success probability p, the negative binomial gives the probability of k successes and r failures, with a failure on the last trial. In other words, the negative binomial distribution is the probability distribution of the number of successes before the rth failure in a Bernoulli process, with probability p of successes on each trial. A Bernoulli process is a discrete time process, and so the number of trials, failures, and successes are integers.
Consider the following example. Suppose we repeatedly throw a die, and consider a 1 to be a failure. The probability of success on each trial is 5/6. The number of successes before the third failure belongs to the infinite set { 0, 1, 2, 3, ... }. That number of successes is a negative-binomially distributed random variable.
When r = 1 we get the probability distribution of number of successes before the first failure (i.e. the probability of the first failure occurring on the (k + 1)st trial), which is a geometric distribution:
f(k) = p^k (1 − p),  k = 0, 1, 2, ...
Overdispersed Poisson
The negative binomial distribution, especially in its alternative parameterization described above, can be used as an alternative to the Poisson distribution. It is especially useful for discrete data over an unbounded positive range whose sample variance exceeds the sample mean. In such cases, the observations are overdispersed with respect to a Poisson distribution, for which the mean is equal to the variance. Hence a Poisson distribution is not an appropriate model. Since the negative binomial distribution has one more parameter than the Poisson, the second parameter can be used to adjust the variance independently of the mean. See Cumulants of some discrete probability distributions.
An application of this is to annual counts of tropical cyclones in the North Atlantic or to monthly to 6-monthly counts of wintertime extratropical cyclones over Europe, for which the variance is greater than the mean. In the case of modest overdispersion, this may produce substantially similar results to an overdispersed Poisson distribution.
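For quick fits of overdispersed counts like these, a method-of-moments estimate follows from the parameterization given earlier: with sample mean m and variance s2, r = m^2/(s2 − m) and p = m/s2. A stdlib-only sketch (function name and the toy counts are mine, for illustration only):

```python
def nb_method_of_moments(xs):
    """Moment-match an overdispersed sample: with mean m and variance s2,
    the article's parameterization gives r = m^2/(s2 - m), p = m/s2."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    if s2 <= m:
        raise ValueError("sample is not overdispersed; a Poisson model may fit")
    return m**2 / (s2 - m), m / s2

# hypothetical annual storm counts (made up for illustration)
counts = [6, 4, 10, 3, 8, 12, 5, 9, 4, 14]
r_hat, p_hat = nb_method_of_moments(counts)
print(r_hat, p_hat)
```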
Negative binomial modeling is widely employed in ecology and biodiversity research for analyzing count data where overdispersion is very common. This is because overdispersion is indicative of biological aggregation, such as species or communities forming clusters. Ignoring overdispersion can lead to significantly inflated model parameters, resulting in misleading statistical inferences. The negative binomial distribution effectively addresses overdispersed counts by permitting the variance to vary quadratically with the mean. An additional dispersion parameter governs the slope of the quadratic term, determining the severity of overdispersion. The model's quadratic mean-variance relationship proves to be a realistic approach for handling overdispersion, as supported by empirical evidence from many studies. Overall, the NB model offers two attractive features: (1) the convenient interpretation of the dispersion parameter as an index of clustering or aggregation, and (2) its tractable form, featuring a closed expression for the probability mass function.
In genetics, the negative binomial distribution is commonly used to model data in the form of discrete sequence read counts from high-throughput RNA and DNA sequencing experiments.
In epidemiology of infectious diseases, the negative binomial has been used as a better option than the Poisson distribution to model overdispersed counts of secondary infections from one infected case (super-spreading events).
Multiplicity observations (physics)
The negative binomial distribution has been the most effective statistical model for a broad range of multiplicity observations in particle collision experiments, and is argued to be a scale-invariant property of matter, providing the best fit for astronomical observations, where it predicts the number of galaxies in a region of space. The phenomenological justification for the effectiveness of the negative binomial distribution in these contexts remained unknown for fifty years, since their first observation in 1973. In 2023, a proof from first principles was eventually demonstrated by Scott V. Tezlaf, where it was shown that the negative binomial distribution emerges from symmetries in the dynamical equations of a canonical ensemble of particles in Minkowski space. Roughly, given an expected number of trials and expected number of successes , where
an isomorphic set of equations can be identified with the parameters of a relativistic current density of a canonical ensemble of massive particles, via
where is the rest density, is the relativistic mean square density, is the relativistic mean square current density, and , where is the mean square speed of the particle ensemble and is the speed of light—such that one can establish the following bijective map:
A rigorous alternative proof of the above correspondence has also been demonstrated through quantum mechanics via the Feynman path integral.
History
This distribution was first studied in 1713 by Pierre Remond de Montmort in his Essay d'analyse sur les jeux de hazard, as the distribution of the number of trials required in an experiment to obtain a given number of successes. It had previously been mentioned by Pascal.
See also
Coupon collector's problem
Beta negative binomial distribution
Extended negative binomial distribution
Negative multinomial distribution
Binomial distribution
Poisson distribution
Compound Poisson distribution
Exponential family
Negative binomial regression
Vector generalized linear model
References
Discrete distributions
Exponential family distributions
Compound probability distributions
Factorial and binomial topics
Infinitely divisible probability distributions
Process (computing)

In computing, a process is the instance of a computer program that is being executed by one or many threads. There are many different process models, some of which are light weight, but almost all processes (even entire virtual machines) are rooted in an operating system (OS) process which comprises the program code, assigned system resources, physical and logical access permissions, and data structures to initiate, control and coordinate execution activity. Depending on the OS, a process may be made up of multiple threads of execution that execute instructions concurrently.
While a computer program is a passive collection of instructions typically stored in a file on disk, a process is the execution of those instructions after being loaded from the disk into memory. Several processes may be associated with the same program; for example, opening up several instances of the same program often results in more than one process being executed.
Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each CPU (core) executes a single process at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish (preemption). Depending on the operating system implementation, switches could be performed when tasks initiate and wait for completion of input/output operations, when a task voluntarily yields the CPU, on hardware interrupts, and when the operating system scheduler decides that a process has expired its fair share of CPU time (e.g., by the Completely Fair Scheduler of the Linux kernel).
A common form of multitasking is provided by CPU time-sharing, a method for interleaving the execution of users' processes and threads, and even of independent kernel tasks – although the latter feature is feasible only in preemptive kernels such as Linux. Preemption has an important side effect for interactive processes: they are given higher priority with respect to CPU-bound processes, so users are immediately assigned computing resources at the simple press of a key or movement of a mouse. Furthermore, applications like video and music playback are given some kind of real-time priority, preempting any other lower-priority process. In time-sharing systems, context switches are performed rapidly, which makes it seem like multiple processes are being executed simultaneously on the same processor. This seemingly simultaneous execution of multiple processes is called concurrency.
For security and reliability, most modern operating systems prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication.
Representation
In general, a computer system process consists of (or is said to own) the following resources:
An image of the executable machine code associated with a program.
Memory (typically some region of virtual memory); which includes the executable code, process-specific data (input and output), a call stack (to keep track of active subroutines and/or other events), and a heap to hold intermediate computation data generated during run time.
Operating system descriptors of resources that are allocated to the process, such as file descriptors (Unix terminology) or handles (Windows), and data sources and sinks.
Security attributes, such as the process owner and the process' set of permissions (allowable operations).
Processor state (context), such as the content of registers and physical memory addressing. The state is typically stored in computer registers when the process is executing, and in memory otherwise.
The operating system holds most of this information about active processes in data structures called process control blocks. Any subset of the resources, typically at least the processor state, may be associated with each of the process' threads in operating systems that support threads or child processes.
The operating system keeps its processes separate and allocates the resources they need, so that they are less likely to interfere with each other and cause system failures (e.g., deadlock or thrashing). The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.
Multitasking and process management
A multitasking operating system may just switch between processes to give the appearance of many processes executing simultaneously (that is, in parallel), though in fact only one process can be executing at any one time on a single CPU (unless the CPU has multiple cores, then multithreading or other similar technologies can be used).
It is usual to associate a single process with a main program, and child processes with any spin-off, parallel processes, which behave like asynchronous subroutines. A process is said to own resources, of which an image of its program (in memory) is one such resource. However, in multiprocessing systems many processes may run off of, or share, the same reentrant program at the same location in memory, but each process is said to own its own image of the program.
Processes are often called "tasks" in embedded operating systems. The sense of "process" (or task) is "something that takes up time", as opposed to "memory", which is "something that takes up space".
The above description applies to both processes managed by an operating system, and processes as defined by process calculi.
If a process requests something for which it must wait, it will be blocked. When the process is in the blocked state, it is eligible for swapping to disk, but this is transparent in a virtual memory system, where regions of a process's memory may really be on disk and not in main memory at any time. Even portions of active processes/tasks (executing programs) are eligible for swapping to disk, if the portions have not been used recently. Not all parts of an executing program and its data have to be in physical memory for the associated process to be active.
Process states
An operating system kernel that allows multitasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality.
First, the process is "created" by being loaded from a secondary storage device (hard disk drive, CD-ROM, etc.) into main memory. After that the process scheduler assigns it the "waiting" state.
While the process is "waiting", it waits for the scheduler to do a so-called context switch. The context switch loads the process into the processor and changes the state to "running" while the previously "running" process is stored in a "waiting" state.
If a process in the "running" state needs to wait for a resource (wait for user input or file to open, for example), it is assigned the "blocked" state. The process state is changed back to "waiting" when the process no longer needs to wait (in a blocked state).
Once the process finishes execution, or is terminated by the operating system, it is no longer needed. The process is removed instantly or is moved to the "terminated" state. When moved to the "terminated" state, it waits to be removed from main memory.
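The lifecycle described above can be sketched as a small state machine. An illustrative Python sketch (the state names and transition set are mine, simplified from the text; real kernels have more states):

```python
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    WAITING = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()

# transitions described in the text above
ALLOWED = {
    (State.CREATED, State.WAITING),     # loaded, then admitted by the scheduler
    (State.WAITING, State.RUNNING),     # context switch in
    (State.RUNNING, State.WAITING),     # context switch out (preemption)
    (State.RUNNING, State.BLOCKED),     # waits for I/O or another resource
    (State.BLOCKED, State.WAITING),     # the wait is over
    (State.RUNNING, State.TERMINATED),  # finished or killed
}

class Process:
    def __init__(self):
        self.state = State.CREATED

    def move(self, new_state):
        if (self.state, new_state) not in ALLOWED:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process()
for s in (State.WAITING, State.RUNNING, State.BLOCKED, State.WAITING,
          State.RUNNING, State.TERMINATED):
    p.move(s)
print(p.state)  # → State.TERMINATED
```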
Inter-process communication
When processes need to communicate with each other they must share parts of their address spaces or use other forms of inter-process communication (IPC).
For instance in a shell pipeline, the output of the first process needs to pass to the second one, and so on. Another example is a task that has been decomposed into cooperating but partially independent processes which can run simultaneously (i.e., using concurrency, or true parallelism – the latter model is a particular case of concurrent execution and is feasible whenever multiple CPU cores are available for the processes that are ready to run).
It is even possible for two or more processes to be running on different machines that may run different operating systems (OS), so some mechanisms for communication and synchronization (called communications protocols for distributed computing) are needed (e.g., the Message Passing Interface (MPI)).
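A shell-style pipeline like the one described can be reproduced with ordinary OS pipes. A minimal Python sketch using the standard subprocess module (the two one-line child programs are made up for illustration):

```python
import subprocess, sys

# producer writes a line; the consumer reads its stdin through the pipe
producer = subprocess.Popen(
    [sys.executable, "-c", "print('hello pipeline')"],
    stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    stdin=producer.stdout, stdout=subprocess.PIPE)
producer.stdout.close()  # let the consumer see EOF when the producer exits
out, _ = consumer.communicate()
print(out.decode().strip())  # → HELLO PIPELINE
```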
History
By the early 1960s, computer control software had evolved from monitor control software, for example IBSYS, to executive control software. Over time, computers got faster while computer time was still neither cheap nor fully utilized; such an environment made multiprogramming possible and necessary. Multiprogramming means that several programs run concurrently. At first, more than one program ran on a single processor, as a result of underlying uniprocessor computer architecture, and they shared scarce and limited hardware resources; consequently, the concurrency was of a serial nature. On later systems with multiple processors, multiple programs may run concurrently in parallel.
Programs consist of sequences of instructions for processors. A single processor can run only one instruction at a time: it is impossible to run more than one program on it at the same time. A program might need some resource, such as an input device, which has a large delay, or a program might start some slow operation, such as sending output to a printer. This would lead to the processor being "idle" (unused). To keep the processor busy at all times, the execution of such a program is halted and the operating system switches the processor to run another program. To the user, it will appear that the programs run at the same time (hence the term "parallel").
Shortly thereafter, the notion of a "program" was expanded to the notion of an "executing program and its context". The concept of a process was born, which also became necessary with the invention of re-entrant code. Threads came somewhat later. However, with the advent of concepts such as time-sharing, computer networks, and multiple-CPU shared memory computers, the old "multiprogramming" gave way to true multitasking, multiprocessing and, later, multithreading.
See also
Background process
Code cave
Child process
Exit
Fork
Light-weight process
Orphan process
Parent process
Process group
Wait
Working directory
Zombie process
Notes
References
Further reading
Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau (2014). "Operating Systems: Three Easy Pieces". Arpaci-Dusseau Books. Relevant chapters: "Abstraction: The Process" and "The Process API".
Gary D. Knott (1974). A proposal for certain process management and intercommunication primitives. ACM SIGOPS Operating Systems Review. Volume 8, Issue 4 (October 1974). pp. 7–44.
External links
Online Resources For Process Information
Computer Process Information Database and Forum
Process Models with Process Creation & Termination Methods
Concurrent computing
Operating system technology
Brandy

Brandy is a liquor produced by distilling wine. Brandy generally contains 35–60% alcohol by volume (70–120 US proof) and is typically consumed as an after-dinner digestif. Some brandies are aged in wooden casks. Others are coloured with caramel colouring to imitate the effect of ageing, and some are produced using a combination of ageing and colouring. Varieties of wine brandy can be found across the winemaking world. Among the most renowned are Cognac and Armagnac from south-western France.
In a broader sense, the term brandy also denotes liquors obtained from the distillation of pomace (yielding pomace brandy), or mash or wine of any other fruit (fruit brandy). These products are also called eau de vie (literally "water of life" in French).
History
The origins of brandy are tied to the development of distillation. While the process was known in classical times, it was not significantly used for beverage production until the 15th century. In the early 16th century, French brandy helped kick-start the cross-Atlantic triangle trade, taking over the central role of Portuguese fortified wine because of its higher alcohol content and ease of shipping. Canoemen and guards on the African side of the trade were generally paid in brandy. By the late 17th century, the cheaper rum had replaced brandy as the exchange alcohol of choice in the triangle trade.
Initially, wine was distilled as a preservation method and to make it easier for merchants to transport. It is also thought that wine was originally distilled to lessen the tax, which was assessed by volume; the intent was to add the water removed by distillation back to the brandy shortly before consumption. It was then discovered that the product improved over the original distilled spirit after being stored in wooden casks. In addition to removing water, the distillation process led to the formation and decomposition of numerous aromatic compounds, fundamentally altering the composition of the distillate from its source. Non-volatile substances such as pigments, sugars, and salts remained behind in the still. As a result, the distillate often tasted quite unlike the source wine.
As described in the 1728 edition of Cyclopaedia, the following method was used to distill brandy:
A cucurbit was filled half full of the liquor from which brandy was to be drawn and then raised with a little fire until about one-sixth part was distilled, or until that which falls into the receiver was entirely flammable. This liquor, distilled only once, was called the spirit of wine or brandy. Purified by another distillation (or several more), this was called spirit of wine rectified. The second distillation was made in [a] balneo mariae and in a glass cucurbit, and the liquor was distilled to about one-half the quantity. This was further rectified as long as the operator thought it necessary to produce brandy.
To shorten these several distillations, which were long and troublesome, a chemical instrument was invented that reduced them to a single distillation. A portion was ignited to test the purity of the rectified spirit of wine. The liquor was good if a fire consumed the entire contents without leaving any impurities behind. Another, better test involved putting a little gunpowder in the bottom of the spirit. The liquor was good if the gunpowder could ignite after the spirit was consumed by fire. (Hence the modern "proof" to describe alcohol content.)
As most brandies have been distilled from grapes, the regions of the world producing excellent brandies have roughly paralleled those areas producing grapes for viniculture. At the end of the 19th century, the western European markets, including by extension their overseas empires, were dominated by French and Spanish brandies and eastern Europe was dominated by brandies from the Black Sea region, including Bulgaria, the Crimea, Georgia and Armenia. In 1877, the Ararat brandy brand was established in Yerevan, Armenia. It became one of the top brandy brands over time and during the Yalta Conference, Winston Churchill was so impressed with the Armenian brandy Dvin given to him by Joseph Stalin that he asked for several cases of it to be sent to him each year. Reportedly 400 bottles of Dvin were shipped to Churchill annually. In 1884, David Sarajishvili founded a brandy factory in Tbilisi, Georgia, a crossroads for Turkish, Central Asian, and Persian trade routes and a part of the Russian Empire at the time.
Technology
Except for a few major producers, brandy production and consumption tend to have a regional character, and thus production methods significantly vary. Wine brandy is produced from a variety of grape cultivars. A special selection of cultivars, providing distinct aroma and character, is used for high-quality brandies, while cheaper ones are made from whichever wine is available.
Brandy is made from so-called base wine, which significantly differs from regular table wines. It is made from early grapes to achieve higher acid concentration and lower sugar levels. Base wine generally contains smaller amounts (up to 20 mg/L) of sulphur than regular wines, as it creates undesired copper(II) sulfate in reaction with copper in the pot stills. The yeast sediment produced during the fermentation may or may not be kept in the wine, depending on the brandy style.
Brandy is distilled from the base wine in two phases. First, a large part of water and solids is removed from the base, obtaining so-called "low wine", a concentrated wine with 28–30% ABV. In the second stage, low wine is distilled into brandy. The liquid exits the pot still in three phases, referred to as the "heads", "heart", and "tails", respectively. The first part, the "head", has an alcohol concentration of about 83% (166 US proof) and an unpleasant odour. The weak portion on the end, the "tail", is discarded along with the head, and they are generally mixed with another batch of low wine, thereby entering the distillation cycle again. The middle heart fraction, the richest in aromas and flavours, is preserved for later maturation.
Distillation does not simply enhance the alcohol content of wine. The heat under which the product is distilled and the material of the still (usually copper) cause chemical reactions during distillation. This leads to the formation of numerous new volatile aroma components, changes in relative amounts of aroma components in the wine, and the hydrolysis of components such as esters.
Brandy is usually produced in pot stills (batch distillation), but the column still can also be used for continuous distillation. The distillate obtained in this manner has a higher alcohol concentration (approximately 90% ABV) and is less aromatic. The choice of the apparatus depends on the style of brandy produced. Cognac and South African brandy are examples of brandy produced in batches while many American brandies use fractional distillation in column stills.
Aging
After distillation, the unaged brandy is placed into oak barrels to mature. Usually, brandies with a natural golden or brown colour are aged in oak casks (single-barrel ageing). Some brandies, particularly those from Spain, are aged using the solera system, where the producer changes the barrel each year. After a period of ageing, which depends on the style, class and legal requirements, the mature brandy is mixed with distilled water to reduce alcohol concentration and bottled. Some brandies have caramel colour and sugar added to simulate the appearance of barrel ageing.
Consumption
Serving
Brandy is traditionally served at room temperature (neat) from a snifter, a wine glass or a tulip glass. When drunk at room temperature, it is often slightly warmed by holding the glass cupped in the palm or by gentle heating. Excessive heating of brandy may cause the alcohol vapour to become too strong, causing its aroma to become overpowering. Brandy-drinkers who like their brandy warmed may ask for the glass to be heated before the brandy is poured.
Brandy may be added to other beverages to make several popular cocktails; these include the Brandy Sour, the Brandy Alexander, the Sidecar, the Brandy Daisy, and the Brandy Old Fashioned.
Anglo-Indian usage has "brandy-pawnee" (brandy with water).
Culinary uses
Brandy is a common deglazing liquid used in making pan sauces for steak and other meat. It creates a more intense flavour in some soups, notably onion soup.
In English Christmas cooking, brandy is a common flavouring in traditional foods such as Christmas cake, brandy butter, and Christmas pudding. It is also commonly used in drinks such as mulled wine and eggnog, drunk during the festive season.
Brandy is used to flambé dishes such as crêpe Suzette and cherries jubilee while serving. Brandy is traditionally poured over a Christmas pudding and set alight before serving. The use of flambé can retain as much as 75% of the alcohol in the brandy.
Historical medical uses
In the 19th century, brandy was often used as medical treatment due to its alleged "stimulating" qualities. It was also used by many European explorers of tropical Africa, who suggested that regular, moderate doses of brandy might help a traveller to cope with fever, depression, and stress. These views fell out of favour in the late nineteenth and early twentieth century, with suggestions that people were using brandy's "medical" qualities as an excuse for social drinking.
Terminology and legal definitions
The term brandy is a shortening of the archaic English brandewine or brandywine, which was derived from the Dutch word brandewijn, itself derived from gebrande wijn, which literally means "burned wine" and whose cognates include brännvin and brennivín. In Germany, the term Branntwein refers to any distilled spirits, while Weinbrand refers specifically to distilled wine from grapes.
In the general colloquial usage of the term, brandy may also be made from pomace and from fermented fruit other than grapes. If a beverage comes from a particular fruit (or multiple fruits) other than exclusively grapes, or from the must of such fruit, it may be referred to as a "fruit brandy" or "fruit spirit" or named using the specific fruit, such as "peach brandy", rather than just generically as "brandy". If pomace is the raw material, the beverage may be called "pomace brandy", "marc brandy", "grape marc", "fruit marc spirit", or "grape marc spirit", "marc" being the pulp residue after the juice has been pressed from the fruit.
Grape pomace brandy may be designated as "grappa" or "grappa brandy". Apple brandy may be referred to as "applejack", although the process of jacking which was originally used in its production involved no distillation. There is also a product called "grain brandy" that is made from grain spirits.
Within particular jurisdictions, specific regulatory requirements regarding the labelling of products identified as brandy exist. For example:
In the European Union, there are regulations that require products labelled as brandy, except "grain brandy", to be produced exclusively from the distillation or redistillation of grape-based wine or grape-based "wine fortified for distillation" and aged a minimum of six months in oak.
In the US, a brandy that has been produced from other than grape wine must be labelled with a clarifying description of the type of brandy production, such as "peach brandy", "fruit brandy", "dried fruit brandy", or "pomace brandy", and brandy that has not been aged in oak for at least two years must be labelled as "immature".
In Canada, the regulations regarding naming conventions for brandy are similar to those of the US (provisions B.02.050–061). According to Canadian food and drug regulations, Brandy shall be a potable alcoholic distillate, or a mixture of potable alcoholic distillates, obtained by the distillation of wine. The minimum specified ageing period is six months in wood, although not necessarily oak (provision B.02.061.2). Caramel, fruit, other botanical substances, flavourings, and flavouring preparations may also be included in a product called brandy (provisions B.02.050–059).
Within the European Union, the German term Weinbrand is legally equivalent to the English term "brandy", but outside the German-speaking countries, it is particularly used to designate brandy from Austria and Germany.
Varieties and brands
Most American grape brandy production is situated in California. Popular brands include Christian Brothers, E&J Gallo, Korbel, and Paul Masson.
Ararat has been produced since 1877 and comes from the Ararat plain in the southern part of Armenia. Bottles on the market are aged anywhere from 3 to 20 years.
In France:
Armagnac is made from grapes of the Armagnac region in the southwest of France: Gers, Landes and Lot-et-Garonne. It is distilled once in a continuous copper still and aged in oak casks from Gascony or Limousin or from the renowned Tronçais Forest in Auvergne. Armagnac was the first distilled spirit in France; its usage was first mentioned in 1310 by Vital Du Four in a book of medicine recipes. Armagnacs have a specificity: they offer vintage qualities. Popular brands are Darroze, Baron de Sigognac, Larressingle, Delord, Laubade, Gélas and Janneau.
Cognac comes from the Cognac region of France, and is double distilled using pot stills. Popular brands include Hine, Martell, Camus, Otard, Rémy Martin, Hennessy, Frapin, Delamain and Courvoisier. The European Union and some other countries legally enforce "Cognac" as the exclusive name for brandy produced and distilled in the Cognac area of France and the name "Armagnac" for brandy from the Gascony area of France. Both must also be made using traditional techniques. Since these are considered "protected designations of origin", a brandy made elsewhere may not be called Cognac in these jurisdictions, even if it was made in an identical manner.
Fine is any high-quality brandy, including Cognac and Armagnac but also Fine de Bordeaux, Fine de Bourgogne, and Fine de la Marne.
Cyprus brandy differs from other varieties in that its alcohol concentration is only 32% ABV (64 US proof).
Greek brandy is distilled from Muscat wine. Mature distillates are made from sun-dried Savatiano, Sultana, and Black Corinth grape varieties blended with an aged Muscat wine.
Brandy de Jerez originates from vineyards around Jerez de la Frontera in Andalusia, Spain. It is used in some sherries and is also available as a separate product. It has a protected designation of origin (PDO).
Kanyak (or konyak) is a variety from Turkey, whose name is both a variation of "cognac" and means "burn blood" in Turkish, a reference to its use in cold weather.
The Portuguese Lourinhã region, just north of Lisbon, is one of the few European PDOs that produce only brandy (aguardente vínica), together with Cognac, Armagnac and Jerez.
In Moldova and Romania, grape brandy is colloquially called coniac, but is officially named Divin in Moldova and Vinars in Romania. After a double distillation, the beverage is usually aged in oak barrels and labelled according to its age (VS is a minimum of 3 years old, VSOP is a minimum of 5 years old, XO is a minimum of 7 years old, and XXO is a minimum of 20 years old).
In Russia, brandy was first produced in 1885 at the Kizlyar Brandy Factory according to a recipe brought from France. Kizlyar brandy is produced according to classic cognac technology and is one of the most popular beverages in Russia. In 2008, the factory also restored its status as a member of the Kremlin Suppliers Guild.
South African brandies are, by law, made in almost exactly the same way as Cognac, using a double distillation process in copper pot stills followed by ageing in oak barrels for a minimum of three years. Because of this, South African brandies are considered to be of very high quality.
Italian brandy has been produced since the 1700s in the north of Italy, especially in Emilia-Romagna and Veneto, using grapes that are popular in winemaking such as Sangiovese and Grignolino. Colour, texture and finish resemble those of their French and Spanish counterparts. Popular brands include Stravecchio Branca. Northern Italy has also been noted since the Middle Ages for its pomace brandy, grappa, which is generally colourless but has some top-shelf varieties called barrique which are aged in oak casks and achieve the same caramel colour as regular brandies. There is a vast production of brandies and grappas in Italy, with more than 600 large, medium or small distilleries. Ticino, in Italian-speaking Switzerland, is also allowed to produce pomace brandy designated as grappa.
Labelling of grades
Brandy has a traditional age grading system, although its use is unregulated outside of Cognac and Armagnac. These indicators can usually be found on the label near the brand name:
V.S. ("very special") or ✯✯✯ (three stars) designates a blend in which the youngest brandy has been stored for at least two years in a cask.
V.S.O.P. ("very superior old pale"), Reserve or ✯✯✯✯✯ (five stars) designates a blend in which the youngest brandy is stored for at least four years in a cask.
XO ("extra old") or Napoléon designates a blend in which the youngest brandy is stored for at least six years.
Hors d'âge ('beyond age') is a designation formally equal to XO for Cognac, but for Armagnac it designates brandy that is at least ten years old. In practice, the term is used by producers to market a high-quality product beyond the official age scale.
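The age floors in the list above can be captured in a small lookup table; a sketch (the mapping follows the list, and as noted these labels are regulated only for Cognac and Armagnac):

```python
# Minimum age, in years, of the youngest brandy in the blend for each
# traditional grade, per the list above.
MIN_AGE_YEARS = {
    "VS": 2,      # "very special" / three stars
    "VSOP": 4,    # "very superior old pale" / Reserve / five stars
    "XO": 6,      # "extra old" / Napoléon
}


def meets_grade(grade, youngest_age_years):
    """Check whether a blend's youngest component satisfies the grade's floor."""
    return youngest_age_years >= MIN_AGE_YEARS[grade]


print(meets_grade("VSOP", 5), meets_grade("XO", 5))  # True False
```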
In the case of Brandy de Jerez, the regulatory council classifies it according to:
Brandy de Jerez Solera: 6 months old.
Brandy de Jerez Solera Reserva: one year old.
Brandy de Jerez Solera Gran Reserva: three years old.
Russian brandy (traditionally called "Cognac" within the country), as well as brandies from many other post-Soviet states, uses the traditional Russian grading system that is similar to the French one, but extends it significantly:
"Three stars" or ✯✯✯ designates the brandy with the youngest component cask-aged for at least two years, analogous to the French V.S.
"Four stars" or ✯✯✯✯ is for the blends where the youngest brandy is aged for at least three years.
"Five stars" means that the youngest brandy in the blend was aged four years, similar to the French V.S.O.P.
КВ/KV ("Aged Cognac") is a designation corresponding to "XO" or "Napoléon", meaning that the youngest spirit in the blend is at least six years old.
КВВК/KVVK ("Aged Cognac, Superior Quality") designates the eight-year-old blends and tends to be used only for the highest quality vintages.
КС/KS ("Old Cognac"): At least ten years of age for the youngest spirit in the blend (similar to Armagnac's hors d'âge).
ОС/OS ("Very Old"): Beyond the French system, designating blends older than 20 years.
See also
References
External links
In mathematics, the $L^p$ spaces are function spaces defined using a natural generalization of the $p$-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.
$L^p$ spaces form an important class of Banach spaces in functional analysis, and of topological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines.
Preliminaries
The $p$-norm in finite dimensions
The Euclidean length of a vector $x = (x_1, x_2, \dots, x_n)$ in the $n$-dimensional real vector space $\mathbb{R}^n$ is given by the Euclidean norm: $\|x\|_2 = \left(x_1^2 + x_2^2 + \cdots + x_n^2\right)^{1/2}.$
The Euclidean distance between two points and is the length of the straight line between the two points. In many situations, the Euclidean distance is appropriate for capturing the actual distances in a given space. In contrast, consider taxi drivers in a grid street plan who should measure distance not in terms of the length of the straight line to their destination, but in terms of the rectilinear distance, which takes into account that streets are either orthogonal or parallel to each other. The class of $p$-norms generalizes these two examples and has an abundance of applications in many parts of mathematics, physics, and computer science.
For a real number $p \geq 1,$ the $p$-norm or $L^p$-norm of $x$ is defined by $\|x\|_p = \left(|x_1|^p + |x_2|^p + \cdots + |x_n|^p\right)^{1/p}.$
The absolute value bars can be dropped when $p$ is a rational number with an even numerator in its reduced form, and $x$ is drawn from the set of real numbers, or one of its subsets.
The Euclidean norm from above falls into this class and is the $2$-norm, and the $1$-norm is the norm that corresponds to the rectilinear distance.
The $L^\infty$-norm or maximum norm (or uniform norm) is the limit of the $L^p$-norms for $p \to \infty,$ given by: $\|x\|_\infty = \max\left\{|x_1|, |x_2|, \dots, |x_n|\right\}.$
For all $p \geq 1,$ the $p$-norms and maximum norm satisfy the properties of a "length function" (or norm), that is:
only the zero vector has zero length,
the length of the vector is positive homogeneous with respect to multiplication by a scalar (positive homogeneity), and
the length of the sum of two vectors is no larger than the sum of lengths of the vectors (triangle inequality).
Abstractly speaking, this means that $\mathbb{R}^n$ together with the $p$-norm is a normed vector space. Moreover, it turns out that this space is complete, thus making it a Banach space.
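The finite-dimensional norms just defined are straightforward to compute; a small plain-Python sketch (the function name is illustrative):

```python
def p_norm(x, p):
    """p-norm of a finite-dimensional vector, for real p >= 1."""
    return sum(abs(v) ** p for v in x) ** (1.0 / p)


x = [3.0, -4.0]
print(p_norm(x, 1))            # rectilinear ("taxicab") length: 7.0
print(p_norm(x, 2))            # Euclidean length: 5.0
print(max(abs(v) for v in x))  # maximum norm: 4.0
```

For large $p$ the $p$-norm approaches the maximum norm, matching the limit statement above.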
Relations between $p$-norms
The grid distance or rectilinear distance (sometimes called the "Manhattan distance") between two points is never shorter than the length of the line segment between them (the Euclidean or "as the crow flies" distance). Formally, this means that the Euclidean norm of any vector is bounded by its 1-norm: $\|x\|_2 \leq \|x\|_1.$
This fact generalizes to $p$-norms in that the $p$-norm $\|x\|_p$ of any given vector $x$ does not grow with $p$: $\|x\|_{p+a} \leq \|x\|_p$ for any vector $x$ and real numbers $p \geq 1$ and $a \geq 0.$
For the opposite direction, the following relation between the $1$-norm and the $2$-norm is known: $\|x\|_1 \leq \sqrt{n}\,\|x\|_2.$
This inequality depends on the dimension $n$ of the underlying vector space and follows directly from the Cauchy–Schwarz inequality.
In general, for vectors in $\mathbb{C}^n$ where $0 < r < p$: $\|x\|_p \leq \|x\|_r \leq n^{1/r - 1/p}\,\|x\|_p.$
This is a consequence of Hölder's inequality.
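The two-sided bound between $p$-norms can be checked numerically on a concrete vector; a sketch with $r = 1$ and $p = 2$ (so the upper constant is $\sqrt{n}$):

```python
def p_norm(x, p):
    """p-norm of a finite-dimensional vector, for real p >= 1."""
    return sum(abs(v) ** p for v in x) ** (1.0 / p)


x = [1.0, -2.0, 3.0, 0.5]
n, r, p = len(x), 1.0, 2.0

lower = p_norm(x, p)                               # ||x||_2
middle = p_norm(x, r)                              # ||x||_1
upper = n ** (1.0 / r - 1.0 / p) * p_norm(x, p)    # sqrt(n) * ||x||_2

print(lower <= middle <= upper)  # True
```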
When $0 < p < 1$
In $\mathbb{R}^n$ for $n > 1,$ the formula $\|x\|_p = \left(|x_1|^p + |x_2|^p + \cdots + |x_n|^p\right)^{1/p}$ defines an absolutely homogeneous function for $0 < p < 1;$ however, the resulting function does not define a norm, because it is not subadditive. On the other hand, the formula $|x_1|^p + |x_2|^p + \cdots + |x_n|^p$ defines a subadditive function at the cost of losing absolute homogeneity. It does define an F-norm, though, which is homogeneous of degree $p.$
Hence, the function $d_p(x, y) = \sum_{i=1}^n |x_i - y_i|^p$ defines a metric. The metric space $(\mathbb{R}^n, d_p)$ is denoted by $\ell_n^p.$
Although the $p$-unit ball $B_n^p$ around the origin in this metric is "concave", the topology defined on $\mathbb{R}^n$ by the metric $d_p$ is the usual vector space topology of $\mathbb{R}^n,$ hence $\ell_n^p$ is a locally convex topological vector space. Beyond this qualitative statement, a quantitative way to measure the lack of convexity of $\ell_n^p$ is to denote by $C_p(n)$ the smallest constant $C$ such that the scalar multiple $C\,B_n^p$ of the $p$-unit ball contains the convex hull of $B_n^p,$ which is equal to $B_n^1.$ The fact that for fixed $p < 1$ we have $C_p(n) = n^{1/p - 1} \to \infty$ as $n \to \infty$
shows that the infinite-dimensional sequence space $\ell^p$ defined below is no longer locally convex.
When $p = 0$
There is one $\ell_0$ norm and another function called the $\ell_0$ "norm" (with quotation marks).
The mathematical definition of the $\ell_0$ norm was established by Banach's Theory of Linear Operations. The space of sequences has a complete metric topology provided by the F-norm on the product metric: $(x_n) \mapsto \sum_n 2^{-n} \frac{|x_n|}{1 + |x_n|}.$
The $\ell_0$-normed space is studied in functional analysis, probability theory, and harmonic analysis.
Another function was called the $\ell_0$ "norm" by David Donoho—whose quotation marks warn that this function is not a proper norm—and is the number of non-zero entries of the vector $x.$ Many authors abuse terminology by omitting the quotation marks. Defining $0^0 = 0,$ the zero "norm" of $x$ is equal to $|x_1|^0 + |x_2|^0 + \cdots + |x_n|^0.$
This is not a norm because it is not homogeneous. For example, scaling the vector $x$ by a positive constant does not change the "norm". Despite these defects as a mathematical norm, the non-zero counting "norm" has uses in scientific computing, information theory, and statistics—notably in compressed sensing in signal processing and computational harmonic analysis. Despite not being a norm, the associated metric, known as Hamming distance, is a valid distance, since homogeneity is not required for distances.
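The counting "norm" and its associated Hamming distance are simple to state in code; a sketch (function names are illustrative):

```python
def zero_norm(x):
    """Donoho's zero "norm": the number of non-zero entries (not a true norm)."""
    return sum(1 for v in x if v != 0)


def hamming_distance(x, y):
    """Metric induced by the zero "norm": count of positions where x, y differ."""
    return sum(1 for a, b in zip(x, y) if a != b)


x = [0, 3, 0, -1]
print(zero_norm(x))                     # 2
print(zero_norm([10 * v for v in x]))   # still 2: scaling changes nothing,
                                        # so the "norm" is not homogeneous
print(hamming_distance([1, 0, 1], [1, 1, 0]))  # 2
```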
$\ell^p$ spaces and sequence spaces
The $p$-norm can be extended to vectors that have an infinite number of components (sequences), which yields the space $\ell^p.$ This contains as special cases:
$\ell^1,$ the space of sequences whose series are absolutely convergent,
$\ell^2,$ the space of square-summable sequences, which is a Hilbert space, and
$\ell^\infty,$ the space of bounded sequences.
The space of sequences has a natural vector space structure by applying addition and scalar multiplication coordinate by coordinate. Explicitly, the vector sum and the scalar action for infinite sequences of real (or complex) numbers are given by: $(x_1, x_2, \dots) + (y_1, y_2, \dots) = (x_1 + y_1, x_2 + y_2, \dots)$ and $\lambda (x_1, x_2, \dots) = (\lambda x_1, \lambda x_2, \dots).$
Define the $p$-norm: $\|x\|_p = \left(|x_1|^p + |x_2|^p + \cdots\right)^{1/p}.$
Here, a complication arises, namely that the series on the right is not always convergent, so for example, the sequence made up of only ones, $(1, 1, 1, \dots),$ will have an infinite $p$-norm for every $1 \leq p < \infty.$ The space $\ell^p$ is then defined as the set of all infinite sequences of real (or complex) numbers such that the $p$-norm is finite.
One can check that as $p$ increases, the set $\ell^p$ grows larger. For example, the sequence $\left(1, \tfrac{1}{2}, \dots, \tfrac{1}{n}, \dots\right)$
is not in $\ell^1,$ but it is in $\ell^p$ for $p > 1,$ as the series $1^p + \tfrac{1}{2^p} + \cdots + \tfrac{1}{n^p} + \cdots$
diverges for $p = 1$ (the harmonic series), but is convergent for $p > 1.$
One also defines the $\infty$-norm using the supremum: $\|x\|_\infty = \sup\left(|x_1|, |x_2|, \dots\right)$
and the corresponding space $\ell^\infty$ of all bounded sequences. It turns out that $\|x\|_\infty = \lim_{p \to \infty} \|x\|_p$
if the right-hand side is finite, or the left-hand side is infinite. Thus, we will consider $\ell^p$ spaces for $1 \leq p \leq \infty.$
The $p$-norm thus defined on $\ell^p$ is indeed a norm, and $\ell^p$ together with this norm is a Banach space.
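The membership example above ($x_n = 1/n$ lies in $\ell^p$ for $p > 1$ but not $p = 1$) can be illustrated with partial sums; a sketch:

```python
def partial_p_power_sum(p, terms):
    """Partial sum of sum_{n=1}^{terms} 1/n^p for the sequence x_n = 1/n."""
    return sum(1.0 / n ** p for n in range(1, terms + 1))


# p = 1: the harmonic series diverges (grows like ln n, unbounded).
print(partial_p_power_sum(1, 10_000))
# p = 2: the series converges, to pi^2/6 ~ 1.6449.
print(partial_p_power_sum(2, 10_000))
```

Of course a finite partial sum proves nothing by itself; it only mirrors the divergence/convergence behaviour stated in the text.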
General ℓp-space
In complete analogy to the preceding definition one can define the space $\ell^p(I)$ over a general index set $I$ (and $1 \leq p < \infty$) as $\ell^p(I) = \left\{(x_i)_{i \in I} : \sum_{i \in I} |x_i|^p < +\infty\right\},$
where convergence on the right means that only countably many summands are nonzero (see also Unconditional convergence).
With the norm $\|x\|_p = \left(\sum_{i \in I} |x_i|^p\right)^{1/p}$
the space $\ell^p(I)$ becomes a Banach space.
In the case where $I$ is finite with $n$ elements, this construction yields $\mathbb{R}^n$ with the $p$-norm defined above.
If $I$ is countably infinite, this is exactly the sequence space $\ell^p$ defined above.
For uncountable sets $I$ this is a non-separable Banach space which can be seen as the locally convex direct limit of $\ell^p$-sequence spaces.
For $p = 2,$ the $2$-norm is even induced by a canonical inner product, which means that $\|x\|_2 = \sqrt{\langle x, x \rangle}$ holds for all vectors $x.$ This inner product can be expressed in terms of the norm by using the polarization identity.
On $\ell^2$ it can be defined by $\langle x, y \rangle = \sum_i x_i \overline{y_i}.$
Now consider the case $p = \infty.$ Define $\ell^\infty(I) = \{x : \sup_i |x_i| < \infty\},$
where $\|x\|_\infty = \sup_i |x_i|$ for all $x.$
The index set $I$ can be turned into a measure space by giving it the discrete σ-algebra and the counting measure. Then the space $\ell^p(I)$ is just a special case of the more general $L^p$-space (defined below).
Lp spaces and Lebesgue integrals
An $L^p$ space may be defined as a space of measurable functions for which the $p$-th power of the absolute value is Lebesgue integrable, where functions which agree almost everywhere are identified. More generally, let $(S, \Sigma, \mu)$ be a measure space and $1 \leq p \leq \infty.$
When $p \neq \infty,$ consider the set $\mathcal{L}^p(S, \mu)$ of all measurable functions $f$ from $S$ to $\mathbb{C}$ or $\mathbb{R}$ whose absolute value raised to the $p$-th power has a finite integral, or in symbols: $\|f\|_p = \left(\int_S |f|^p \, d\mu\right)^{1/p} < \infty.$
To define the set for $p = \infty,$ recall that two functions $f$ and $g$ defined on $S$ are said to be equal almost everywhere, written $f = g$ a.e., if the set $\{s \in S : f(s) \neq g(s)\}$ is measurable and has measure zero.
Similarly, a measurable function $f$ (and its absolute value) is bounded (or dominated) almost everywhere by a real number $C,$ written $|f| \leq C$ a.e., if the (necessarily) measurable set $\{s \in S : |f(s)| > C\}$ has measure zero.
The space $\mathcal{L}^\infty(S, \mu)$ is the set of all measurable functions $f$ that are bounded almost everywhere (by some real $C$) and $\|f\|_\infty$ is defined as the infimum of these bounds: $\|f\|_\infty = \inf\{C \geq 0 : |f| \leq C \text{ a.e.}\}.$
When $\mu(S) \neq 0,$ then this is the same as the essential supremum of the absolute value of $f$: $\|f\|_\infty = \operatorname{ess\,sup} |f|.$
For example, if $f$ is a measurable function that is equal to $0$ almost everywhere then $\|f\|_p = 0$ for every $p$ and thus $f \in \mathcal{L}^p(S, \mu)$ for all $p.$
For every positive $p,$ the value under $\|\cdot\|_p$ of a measurable function $f$ and its absolute value $|f|$ are always the same (that is, $\|f\|_p = \||f|\|_p$ for all $p$) and so a measurable function belongs to $\mathcal{L}^p(S, \mu)$ if and only if its absolute value does. Because of this, many formulas involving $p$-norms are stated only for non-negative real-valued functions. Consider for example the identity $\|f\|_{pr}^r = \||f|^r\|_p,$ which holds whenever $f \geq 0$ is measurable, $r > 0$ is real, and $0 < p \leq \infty$ (here $pr = \infty$ when $p = \infty$). The non-negativity requirement can be removed by substituting $|f|$ in for $f,$ which gives $\||f|\|_{pr}^r = \||f|^r\|_p.$
Note in particular that when $pr$ is finite then the formula relates the $pr$-norm to the $p$-norm.
Seminormed space of $p$-th power integrable functions
Each set of functions $\mathcal{L}^p(S, \mu)$ forms a vector space when addition and scalar multiplication are defined pointwise.
That the sum of two $p$-th power integrable functions $f$ and $g$ is again $p$-th power integrable follows from $|f + g|^p \leq 2^{p-1}\left(|f|^p + |g|^p\right),$
although it is also a consequence of Minkowski's inequality $\|f + g\|_p \leq \|f\|_p + \|g\|_p,$
which establishes that $\|\cdot\|_p$ satisfies the triangle inequality for $1 \leq p \leq \infty$ (the triangle inequality does not hold for $0 < p < 1$).
That $\mathcal{L}^p(S, \mu)$ is closed under scalar multiplication is due to $\|\cdot\|_p$ being absolutely homogeneous, which means that $\|s f\|_p = |s| \, \|f\|_p$ for every scalar $s$ and every function $f.$
Absolute homogeneity, the triangle inequality, and non-negativity are the defining properties of a seminorm.
Thus $\|\cdot\|_p$ is a seminorm and the set $\mathcal{L}^p(S, \mu)$ of $p$-th power integrable functions together with the function $\|\cdot\|_p$ defines a seminormed vector space. In general, the seminorm $\|\cdot\|_p$ is not a norm because there might exist measurable functions $f$ that satisfy $\|f\|_p = 0$ but are not equal to $0$ ($\|\cdot\|_p$ is a norm if and only if no such $f$ exists).
Zero sets of $p$-seminorms
If $f$ is measurable and equals $0$ a.e. then $\|f\|_p = 0$ for all positive $p \leq \infty.$
On the other hand, if $f$ is a measurable function for which there exists some $0 < p \leq \infty$ such that $\|f\|_p = 0,$ then $f = 0$ almost everywhere. When $p$ is finite then this follows from the $p = 1$ case and the formula mentioned above.
Thus if $p \leq \infty$ is positive and $f$ is any measurable function, then $\|f\|_p = 0$ if and only if $f = 0$ almost everywhere. Since the right hand side ($f = 0$ a.e.) does not mention $p,$ it follows that all $\|\cdot\|_p$ have the same zero set (it does not depend on $p$). So denote this common set by $\mathcal{N} = \{f : f = 0 \ \mu\text{-almost everywhere}\}.$
This set is a vector subspace of $\mathcal{L}^p(S, \mu)$ for every positive $p \leq \infty.$
Quotient vector space
Like every seminorm, the seminorm $\|\cdot\|_p$ induces a norm (defined shortly) on the canonical quotient vector space of $\mathcal{L}^p(S, \mu)$ by its vector subspace $\mathcal{N}.$
This normed quotient space is called $L^p(S, \mu)$ and it is the subject of this article. We begin by defining the quotient vector space.
Given any $f \in \mathcal{L}^p(S, \mu),$ the coset $f + \mathcal{N}$ consists of all measurable functions $g$ that are equal to $f$ almost everywhere.
The set of all cosets, typically denoted by $\mathcal{L}^p(S, \mu) / \mathcal{N} = \{f + \mathcal{N} : f \in \mathcal{L}^p(S, \mu)\},$
forms a vector space with origin $0 + \mathcal{N} = \mathcal{N}$ when vector addition and scalar multiplication are defined by $(f + \mathcal{N}) + (g + \mathcal{N}) = (f + g) + \mathcal{N}$ and $s(f + \mathcal{N}) = (s f) + \mathcal{N}.$
This particular quotient vector space will be denoted by $L^p(S, \mu) = \mathcal{L}^p(S, \mu) / \mathcal{N}.$
Two cosets are equal, $f + \mathcal{N} = g + \mathcal{N},$ if and only if $g \in f + \mathcal{N}$ (or equivalently, $f - g \in \mathcal{N}$), which happens if and only if $f = g$ almost everywhere; if this is the case then $f$ and $g$ are identified in the quotient space. Hence, strictly speaking $L^p(S, \mu)$ consists of equivalence classes of functions.
Given any $f \in \mathcal{L}^p(S, \mu),$ the value of the seminorm on the coset $f + \mathcal{N}$ is constant and equal to $\|f\|_p$, that is: $\|f + \mathcal{N}\|_p := \|f\|_p.$
The map $f + \mathcal{N} \mapsto \|f\|_p$ is a norm on $L^p(S, \mu)$ called the $p$-norm.
The value $\|f + \mathcal{N}\|_p$ of a coset is independent of the particular function $f$ that was chosen to represent the coset, meaning that if $\mathcal{C}$ is any coset then $\|\mathcal{C}\|_p = \|f\|_p$ for every $f \in \mathcal{C}$ (since $\|f\|_p = \|g\|_p$ for every $g \in \mathcal{C}$).
The Lebesgue space
The normed vector space $\left(L^p(S, \mu), \|\cdot\|_p\right)$ is called $L^p$ space or the Lebesgue space of $p$-th power integrable functions and it is a Banach space for every $1 \leq p \leq \infty$ (meaning that it is a complete metric space, a result that is sometimes called the Riesz–Fischer theorem).
When the underlying measure space $S$ is understood then $L^p(S, \mu)$ is often abbreviated $L^p(\mu),$ or even just $L^p.$
Depending on the author, the subscript notation $L_p$ might denote either $L^p(S, \mu)$ or $L^{1/p}(S, \mu).$
If the seminorm on $\mathcal{L}^p(S, \mu)$ happens to be a norm (which happens if and only if $\mathcal{N} = \{0\}$) then the normed space $\left(\mathcal{L}^p(S, \mu), \|\cdot\|_p\right)$ will be linearly isometrically isomorphic to the normed quotient space $\left(L^p(S, \mu), \|\cdot\|_p\right)$ via the canonical map $f \mapsto f + \mathcal{N}$ (since $f + \mathcal{N} = \{f\}$); in other words, they will be, up to a linear isometry, the same normed space and so they may both be called "$L^p$ space".
The above definitions generalize to Bochner spaces.
In general, this process cannot be reversed: there is no consistent way to define a "canonical" representative of each coset of $\mathcal{N}$ in $L^p.$ For $L^\infty,$ however, there is a theory of lifts enabling such recovery.
Special cases
For $1 \leq p \leq \infty,$ the $\ell^p$ spaces are a special case of $L^p$ spaces; when $S$ is the set $\mathbb{N}$ of natural numbers and $\mu$ is the counting measure. More generally, if one considers any set $S$ with the counting measure, the resulting $L^p$ space is denoted $\ell^p(S).$ For example, $\ell^p(\mathbb{Z})$ is the space of all sequences indexed by the integers, and when defining the $p$-norm on such a space, one sums over all the integers. The space $\ell^p(n),$ where $n$ is the set with $n$ elements, is $\mathbb{R}^n$ with its $p$-norm as defined above.
Similar to $\ell^2$ spaces, $L^2$ is the only Hilbert space among $L^p$ spaces. In the complex case, the inner product on $L^2$ is defined by $\langle f, g \rangle = \int_S f(x) \overline{g(x)} \, d\mu(x).$
Functions in $L^2$ are sometimes called square-integrable functions, quadratically integrable functions or square-summable functions, but sometimes these terms are reserved for functions that are square-integrable in some other sense, such as in the sense of a Riemann integral.
As any Hilbert space, every $L^2$ space is linearly isometric to a suitable $\ell^2(I),$ where the cardinality of the set $I$ is the cardinality of an arbitrary Hilbertian basis for this particular $L^2.$
If we use complex-valued functions, the space $L^\infty$ is a commutative C*-algebra with pointwise multiplication and conjugation. For many measure spaces, including all sigma-finite ones, it is in fact a commutative von Neumann algebra. An element of $L^\infty$ defines a bounded operator on any $L^p$ space by multiplication.
When 0 < p < 1
If $0 < p < 1$, then $L^p(\mu)$ can be defined as above: it is the vector space of those measurable functions $f$ such that
$N_p(f) = \int_S |f|^p \, d\mu < \infty.$
In this case, however, the $p$-norm $\|f\|_p = N_p(f)^{1/p}$ does not satisfy the triangle inequality and defines only a quasi-norm. The inequality $(a + b)^p \le a^p + b^p$, valid for $a, b \ge 0$, implies that $N_p(f + g) \le N_p(f) + N_p(g)$,
and so the function
$d_p(f, g) = N_p(f - g) = \|f - g\|_p^p$
is a metric on $L^p(\mu)$. The resulting metric space is complete.
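A quick numerical check (not a proof) of the failure of the triangle inequality for $p < 1$, using the standard basis vectors of $\mathbb{R}^2$; the helper `p_norm` is an illustrative name.

```python
# For p < 1 the p-"norm" on R^2 is only a quasi-norm: the triangle
# inequality fails, as this check with p = 1/2 shows.

def p_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

p = 0.5
e1, e2 = [1.0, 0.0], [0.0, 1.0]
s = [1.0, 1.0]                         # e1 + e2
print(p_norm(s, p))                    # (1 + 1)^2 = 4.0
print(p_norm(e1, p) + p_norm(e2, p))   # 1.0 + 1.0 = 2.0 < 4.0
```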
In this setting $L^p$ satisfies a reverse Minkowski inequality, that is, for non-negative $u, v \in L^p(\mu)$:
$\|u + v\|_p \ge \|u\|_p + \|v\|_p.$
This result may be used to prove Clarkson's inequalities, which are in turn used to establish the uniform convexity of the spaces $L^p$ for $1 < p < \infty$.
The space $L^p$ for $0 < p < 1$ is an F-space: it admits a complete translation-invariant metric with respect to which the vector space operations are continuous. It is the prototypical example of an F-space that, for most reasonable measure spaces, is not locally convex: in $\ell^p$ or $L^p([0, 1])$, every open convex set containing the $0$ function is unbounded for the $p$-quasi-norm; therefore, the $0$ vector does not possess a fundamental system of convex neighborhoods. Specifically, this is true if the measure space contains an infinite family of disjoint measurable sets of finite positive measure.
The only nonempty convex open set in $L^p([0, 1])$ is the entire space. Consequently, there are no nonzero continuous linear functionals on $L^p([0, 1])$: the continuous dual space is the zero space. In the case of the counting measure on the natural numbers (i.e. $\ell^p$), the bounded linear functionals on $\ell^p$ are exactly those that are bounded on $\ell^1$, i.e., those given by sequences in $\ell^\infty$. Although $\ell^p$ does contain non-trivial convex open sets, it fails to have enough of them to give a base for the topology.
Having no linear functionals is highly undesirable for the purposes of doing analysis. In case of the Lebesgue measure on $\mathbb{R}^n$, rather than work with $L^p$ for $0 < p < 1$, it is common to work with the Hardy space $H^p$ whenever possible, as this has quite a few linear functionals: enough to distinguish points from one another. However, the Hahn–Banach theorem still fails in $H^p$ for $p < 1$.
Properties
Hölder's inequality
Suppose $p, q \in [1, \infty]$ satisfy $\frac{1}{p} + \frac{1}{q} = 1$. If $f \in L^p(S, \mu)$ and $g \in L^q(S, \mu)$, then $fg \in L^1(S, \mu)$ and
$\|fg\|_1 \le \|f\|_p \, \|g\|_q.$
This inequality, called Hölder's inequality, is in some sense optimal, since if $1 \le p \le \infty$ and $f$ is a measurable function such that
$\sup \left\{ \int_S |fg| \, d\mu \ : \ \|g\|_q \le 1 \right\} < \infty,$
where the supremum is taken over the closed unit ball of $L^q(\mu)$, then $f \in L^p(\mu)$ and $\|f\|_p$ equals this supremum.
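In the discrete (counting-measure) case, Hölder's inequality can be checked numerically; the vectors and exponents below are arbitrary choices for illustration.

```python
# Sanity check of Holder's inequality ||fg||_1 <= ||f||_p * ||g||_q
# for counting measure, with conjugate exponents 1/p + 1/q = 1.

def p_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

f = [1.0, -2.0, 3.0]
g = [0.5, 1.5, -1.0]
p, q = 3.0, 1.5                        # 1/3 + 2/3 = 1
lhs = sum(abs(a * b) for a, b in zip(f, g))
rhs = p_norm(f, p) * p_norm(g, q)
print(lhs <= rhs)                      # True
```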
Atomic decomposition
If then every non-negative has an , meaning that there exist a sequence of non-negative real numbers and a sequence of non-negative functions called , whose supports are pairwise disjoint sets of measure such that
and for every integer
and
and where moreover, the sequence of functions depends only on (it is independent of ).
These inequalities guarantee that for all integers while the supports of being pairwise disjoint implies
Dual spaces
The dual space of $L^p(\mu)$ for $1 < p < \infty$ has a natural isomorphism with $L^q(\mu)$, where $q$ is such that $\frac{1}{p} + \frac{1}{q} = 1$. This isomorphism associates $g \in L^q(\mu)$ with the functional $\kappa_p(g) \in L^p(\mu)^*$ defined by
$f \mapsto \kappa_p(g)(f) = \int_S f g \, d\mu$
for every $f \in L^p(\mu)$.
$\kappa_p : L^q(\mu) \to L^p(\mu)^*$ is a well defined continuous linear mapping which is an isometry by the extremal case of Hölder's inequality. If $(S, \Sigma, \mu)$ is a $\sigma$-finite measure space, one can use the Radon–Nikodym theorem to show that any element of $L^p(\mu)^*$ can be expressed this way, i.e., $\kappa_p$ is an isometric isomorphism of Banach spaces. Hence, it is usual to say simply that $L^q(\mu)$ is the continuous dual space of $L^p(\mu)$.
For $1 < p < \infty$, the space $L^p(\mu)$ is reflexive. Let $\kappa_p$ be as above and let $\kappa_q : L^p(\mu) \to L^q(\mu)^*$ be the corresponding linear isometry. Consider the map from $L^p(\mu)$ to $L^p(\mu)^{**}$ obtained by composing $\kappa_q$ with the transpose (or adjoint) of the inverse of $\kappa_p$:
$j_p : L^p(\mu) \xrightarrow{\;\kappa_q\;} L^q(\mu)^* \xrightarrow{\;(\kappa_p^{-1})^*\;} L^p(\mu)^{**}.$
This map coincides with the canonical embedding of $L^p(\mu)$ into its bidual. Moreover, the map $j_p$ is onto, as composition of two onto isometries, and this proves reflexivity.
If the measure $\mu$ on $S$ is sigma-finite, then the dual of $L^1(\mu)$ is isometrically isomorphic to $L^\infty(\mu)$ (more precisely, the map $\kappa_1$ corresponding to $p = 1$ is an isometry from $L^\infty(\mu)$ onto $L^1(\mu)^*$).
The dual of $L^\infty(\mu)$ is subtler. Elements of $L^\infty(\mu)^*$ can be identified with bounded signed finitely additive measures on $S$ that are absolutely continuous with respect to $\mu$. See ba space for more details. If we assume the axiom of choice, this space is much bigger than $L^1(\mu)$ except in some trivial cases. However, Saharon Shelah proved that there are relatively consistent extensions of Zermelo–Fraenkel set theory (ZF + DC + "Every subset of the real numbers has the Baire property") in which the dual of $\ell^\infty$ is $\ell^1$.
Embeddings
Colloquially, if $1 \le p < q \le \infty$, then $L^p(S, \mu)$ contains functions that are more locally singular, while elements of $L^q(S, \mu)$ can be more spread out. Consider the Lebesgue measure on the half line $(0, \infty)$. A continuous function in $L^1$ might blow up near $0$ but must decay sufficiently fast toward infinity. On the other hand, continuous functions in $L^\infty$ need not decay at all but no blow-up is allowed. More formally, suppose that $0 < p < q \le \infty$. Then:
$L^q(S, \mu) \subseteq L^p(S, \mu)$ if and only if $S$ does not contain sets of finite but arbitrarily large measure (e.g. any finite measure).
$L^p(S, \mu) \subseteq L^q(S, \mu)$ if and only if $S$ does not contain sets of non-zero but arbitrarily small measure (e.g. the counting measure).
Neither condition holds for the Lebesgue measure on the real line, while both conditions hold for the counting measure on any finite set. As a consequence of the closed graph theorem, the embedding is continuous, i.e., the identity operator is a bounded linear map from $L^q$ to $L^p$ in the first case, and from $L^p$ to $L^q$ in the second. Indeed, if the domain $S$ has finite measure, one can make the following explicit calculation using Hölder's inequality:
$\|f\|_p^p = \int_S |f|^p \cdot 1 \, d\mu \le \left\| |f|^p \right\|_{q/p} \|1\|_{(q/p)'} = \|f\|_q^p \, \mu(S)^{1 - p/q},$
leading to
$\|f\|_p \le \mu(S)^{\frac{1}{p} - \frac{1}{q}} \|f\|_q.$
The constant appearing in the above inequality is optimal, in the sense that the operator norm of the identity $I : L^q(S, \mu) \to L^p(S, \mu)$ is precisely
$\|I\|_{q,p} = \mu(S)^{\frac{1}{p} - \frac{1}{q}},$
the case of equality being achieved exactly when $f = 1$ $\mu$-almost-everywhere.
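On a finite measure space, the embedding constant above can be checked numerically; the sketch below uses the counting measure on $n$ points, so $\mu(S) = n$, and an illustrative helper `p_norm`.

```python
# Check of ||f||_p <= mu(S)^(1/p - 1/q) * ||f||_q for p < q on a finite
# measure space (counting measure on n points, so mu(S) = n).

def p_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

f = [1.0, 2.0, 3.0, 4.0]
n, p, q = len(f), 1.0, 2.0
print(p_norm(f, p) <= n ** (1.0 / p - 1.0 / q) * p_norm(f, q))  # True
```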
Dense subspaces
Let $1 \le p < \infty$ and let $(S, \Sigma, \mu)$ be a measure space. Consider an integrable simple function $f$ on $S$ given by
$f = \sum_{j=1}^{n} a_j \mathbf{1}_{A_j},$
where $a_j$ are scalars, $A_j \in \Sigma$ has finite measure and $\mathbf{1}_{A_j}$ is the indicator function of the set $A_j$, for $j = 1, \dots, n$. By construction of the integral, the vector space of integrable simple functions is dense in $L^p(S, \Sigma, \mu)$.
More can be said when $S$ is a normal topological space and $\Sigma$ its Borel $\sigma$-algebra.
Suppose $V \subseteq S$ is an open set with $\mu(V) < \infty$. Then for every Borel set $A \in \Sigma$ contained in $V$, and for every $\varepsilon > 0$, there exist a closed set $F$ and an open set $U$ such that
$F \subseteq A \subseteq U \subseteq V \quad \text{and} \quad \mu(U \setminus F) < \varepsilon.$
Subsequently, there exists a Urysohn function $0 \le \varphi \le 1$ on $S$ that is $1$ on $F$ and $0$ on $S \setminus U$, with
$\int_S |\mathbf{1}_A - \varphi| \, d\mu < \varepsilon.$
If $S$ can be covered by an increasing sequence $(V_n)$ of open sets that have finite measure, then the space of $p$-integrable continuous functions is dense in $L^p(S, \Sigma, \mu)$. More precisely, one can use bounded continuous functions that vanish outside one of the open sets $V_n$.
This applies in particular when $S = \mathbb{R}^d$ and when $\mu$ is the Lebesgue measure. For example, the space of continuous and compactly supported functions as well as the space of integrable step functions are dense in $L^p(\mathbb{R}^d)$.
Closed subspaces
Suppose $0 < p < \infty$. If $(S, \Sigma, \mu)$ is a probability space and $V \subseteq L^\infty(\mu)$ is a closed subspace of $L^p(\mu)$, then $V$ is finite-dimensional.
It is crucial that the vector space $V$ be a subset of $L^\infty$, since it is possible to construct an infinite-dimensional closed vector subspace of $L^1$ which lies in every $L^q$ with $q < \infty$; take, for instance, the Lebesgue measure on the circle group divided by $2\pi$ as the probability measure.
Applications
Statistics
In statistics, measures of central tendency and statistical dispersion, such as the mean, median, and standard deviation, can be defined in terms of $L^p$ metrics, and measures of central tendency can be characterized as solutions to variational problems.
In penalized regression, "L1 penalty" and "L2 penalty" refer to penalizing either the $L^1$ norm of a solution's vector of parameter values (i.e. the sum of its absolute values), or its squared $L^2$ norm (its Euclidean length). Techniques which use an L1 penalty, like LASSO, encourage sparse solutions (where many parameters are exactly zero). Elastic net regularization uses a penalty term that is a combination of the $L^1$ norm and the squared $L^2$ norm of the parameter vector.
Hausdorff–Young inequality
The Fourier transform for the real line (or, for periodic functions, see Fourier series) maps $L^p(\mathbb{R})$ to $L^q(\mathbb{R})$ (respectively $L^p(\mathbf{T})$ to $\ell^q$), where $1 \le p \le 2$ and $\frac{1}{p} + \frac{1}{q} = 1$. This is a consequence of the Riesz–Thorin interpolation theorem, and is made precise with the Hausdorff–Young inequality.
By contrast, if $p > 2$, the Fourier transform does not map into $L^q$.
Hilbert spaces
Hilbert spaces are central to many applications, from quantum mechanics to stochastic calculus. The spaces $L^2$ and $\ell^2$ are both Hilbert spaces. In fact, by choosing a Hilbert basis $E$, i.e., a maximal orthonormal subset of $L^2$ or any Hilbert space, one sees that every Hilbert space is isometrically isomorphic to $\ell^2(E)$ (same $E$ as above), i.e., a Hilbert space of type $\ell^2$.
Generalizations and extensions
Weak Lp
Let $(S, \Sigma, \mu)$ be a measure space, and $f$ a measurable function with real or complex values on $S$. The distribution function of $f$ is defined for $t \ge 0$ by
$\lambda_f(t) = \mu\{x \in S : |f(x)| > t\}.$
If $f$ is in $L^p(S, \mu)$ for some $p$ with $1 \le p < \infty$, then by Markov's inequality,
$\lambda_f(t) \le \frac{\|f\|_p^p}{t^p}.$
A function $f$ is said to be in the space weak $L^p(S, \mu)$, or $L^{p,w}(S, \mu)$, if there is a constant $C > 0$ such that, for all $t > 0$,
$\lambda_f(t) \le \frac{C^p}{t^p}.$
The best constant $C$ for this inequality is the $L^{p,w}$-norm of $f$, and is denoted by
$\|f\|_{p,w} = \sup_{t > 0} \, t \, \lambda_f(t)^{1/p}.$
The weak $L^p$ spaces coincide with the Lorentz spaces $L^{p,\infty}$, so this notation is also used to denote them.
The $L^{p,w}$-norm is not a true norm, since the triangle inequality fails to hold. Nevertheless, for $f$ in $L^p(S, \mu)$,
$\|f\|_{p,w} \le \|f\|_p,$
and in particular $L^p(S, \mu) \subseteq L^{p,w}(S, \mu)$.
In fact, one has
$\|f\|_p^p = \int_S |f|^p \, d\mu \ge t^p \, \mu\{x : |f(x)| > t\},$
and raising to power $1/p$ and taking the supremum in $t$ one has
$\|f\|_p \ge \sup_{t > 0} \, t \, \mu\{x : |f(x)| > t\}^{1/p} = \|f\|_{p,w}.$
Under the convention that two functions are equal if they are equal $\mu$-almost everywhere, the spaces $L^{p,w}$ are complete.
For any $0 < r < p$, the expression
$\sup_{0 < \mu(E) < \infty} \mu(E)^{-\frac{1}{r} + \frac{1}{p}} \left( \int_E |f|^r \, d\mu \right)^{\frac{1}{r}}$
is comparable to the $L^{p,w}$-norm. Further, in the case $p > 1$, this expression defines a norm if $r = 1$. Hence for $p > 1$ the weak $L^p$ spaces are Banach spaces.
A major result that uses the $L^{p,w}$-spaces is the Marcinkiewicz interpolation theorem, which has broad applications to harmonic analysis and the study of singular integrals.
Weighted Lp spaces
As before, consider a measure space $(S, \Sigma, \mu)$. Let $w : S \to [0, \infty)$ be a measurable function. The $w$-weighted $L^p$ space is defined as $L^p(S, w \, d\mu)$, where $w \, d\mu$ means the measure $\nu$ defined by
$\nu(A) = \int_A w(x) \, d\mu(x), \qquad A \in \Sigma,$
or, in terms of the Radon–Nikodym derivative, $w = \frac{d\nu}{d\mu}$. The norm for $L^p(S, w \, d\mu)$ is explicitly
$\|u\|_{L^p(S, w \, d\mu)} = \left( \int_S w(x) \, |u(x)|^p \, d\mu(x) \right)^{1/p}.$
As $L^p$-spaces, the weighted spaces have nothing special, since $L^p(S, w \, d\mu)$ is equal to $L^p(S, d\nu)$. But they are the natural framework for several results in harmonic analysis; they appear for example in the Muckenhoupt theorem: for $1 < p < \infty$, the classical Hilbert transform is defined on $L^p(\mathbf{T}, \lambda)$, where $\mathbf{T}$ denotes the unit circle and $\lambda$ the Lebesgue measure; the (nonlinear) Hardy–Littlewood maximal operator is bounded on $L^p(\mathbb{R}^n, \lambda)$. Muckenhoupt's theorem describes weights $w$ such that the Hilbert transform remains bounded on $L^p(\mathbf{T}, w \, d\lambda)$ and the maximal operator on $L^p(\mathbb{R}^n, w \, d\lambda)$.
Lp spaces on manifolds
One may also define spaces $L^p(M)$ on a manifold $M$, called the intrinsic $L^p$ spaces of the manifold, using densities.
Vector-valued Lp spaces
Given a measure space and a locally convex space (here assumed to be complete), it is possible to define spaces of -integrable -valued functions on in a number of ways. One way is to define the spaces of Bochner integrable and Pettis integrable functions, and then endow them with locally convex TVS-topologies that are (each in their own way) a natural generalization of the usual topology. Another way involves topological tensor products of with Element of the vector space are finite sums of simple tensors where each simple tensor may be identified with the function that sends This tensor product is then endowed with a locally convex topology that turns it into a topological tensor product, the most common of which are the projective tensor product, denoted by and the injective tensor product, denoted by In general, neither of these space are complete so their completions are constructed, which are respectively denoted by and (this is analogous to how the space of scalar-valued simple functions on when seminormed by any is not complete so a completion is constructed which, after being quotiented by is isometrically isomorphic to the Banach space ). Alexander Grothendieck showed that when is a nuclear space (a concept he introduced), then these two constructions are, respectively, canonically TVS-isomorphic with the spaces of Bochner and Pettis integral functions mentioned earlier; in short, they are indistinguishable.
L0 space of measurable functions
The vector space of (equivalence classes of) measurable functions on $(S, \Sigma, \mu)$ is denoted $L^0(S, \Sigma, \mu)$. By definition, it contains all the $L^p$, and is equipped with the topology of convergence in measure. When $\mu$ is a probability measure (i.e., $\mu(S) = 1$), this mode of convergence is named convergence in probability. The space $L^0$ is always a topological abelian group, but is only a topological vector space if $\mu(S) < \infty$. This is because scalar multiplication is continuous if and only if $\mu(S) < \infty$. If $\mu$ is $\sigma$-finite then the weaker topology of local convergence in measure is an F-space, i.e. a completely metrizable topological vector space. Moreover, this topology is isometric to global convergence in measure for a suitable choice of probability measure.
The description is easier when is finite. If is a finite measure on the function admits for the convergence in measure the following fundamental system of neighborhoods
The topology can be defined by any metric of the form
where is bounded continuous concave and non-decreasing on with and when (for example, Such a metric is called Lévy-metric for Under this metric the space is complete. However, as mentioned above, scalar multiplication is continuous with respect to this metric only if . To see this, consider the Lebesgue measurable function defined by . Then clearly . The space is in general not locally bounded, and not locally convex.
For the infinite Lebesgue measure on the definition of the fundamental system of neighborhoods could be modified as follows
The resulting space , with the topology of local convergence in measure, is isomorphic to the space for any positive –integrable density
See also
Notes
References
External links
Proof that Lp spaces are complete
Injective function

In mathematics, an injective function (also known as injection, or one-to-one function) is a function $f$ that maps distinct elements of its domain to distinct elements of its codomain; that is, $x_1 \ne x_2$ implies $f(x_1) \ne f(x_2)$ (equivalently by contraposition, $f(x_1) = f(x_2)$ implies $x_1 = x_2$). In other words, every element of the function's codomain is the image of at most one element of its domain. The term one-to-one function must not be confused with one-to-one correspondence, which refers to bijective functions: functions such that each element in the codomain is an image of exactly one element in the domain.
A homomorphism between algebraic structures is a function that is compatible with the operations of the structures. For all common algebraic structures, and, in particular for vector spaces, an injective homomorphism is also called a monomorphism. However, in the more general context of category theory, the definition of a monomorphism differs from that of an injective homomorphism. It is thus a theorem that they are equivalent for algebraic structures; see Monomorphism for more details.
A function that is not injective is sometimes called many-to-one.
Definition
Let $f$ be a function whose domain is a set $X$. The function $f$ is said to be injective provided that for all $a$ and $b$ in $X$, if $f(a) = f(b)$, then $a = b$; that is, $f(a) = f(b)$ implies $a = b$. Equivalently, if $a \ne b$, then $f(a) \ne f(b)$ in the contrapositive statement.
Symbolically,
$\forall a, b \in X, \;\; f(a) = f(b) \Rightarrow a = b,$
which is logically equivalent to the contrapositive,
$\forall a, b \in X, \;\; a \ne b \Rightarrow f(a) \ne f(b).$
An injective function (or, more generally, a monomorphism) is often denoted by using the specialized arrows ↣ or ↪ (for example, $f : A \rightarrowtail B$ or $f : A \hookrightarrow B$), although some authors specifically reserve ↪ for an inclusion map.
Examples
For visual examples, readers are directed to the gallery section.
For any set $X$ and any subset $S \subseteq X$, the inclusion map $S \to X$ (which sends any element $s \in S$ to itself) is injective. In particular, the identity function $X \to X$ is always injective (and in fact bijective).
If the domain of a function is the empty set, then the function is the empty function, which is injective.
If the domain of a function has one element (that is, it is a singleton set), then the function is always injective.
The function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = 2x + 1$ is injective.
The function $g : \mathbb{R} \to \mathbb{R}$ defined by $g(x) = x^2$ is not injective, because (for example) $g(1) = 1 = g(-1)$. However, if $g$ is redefined so that its domain is the non-negative real numbers $[0, +\infty)$, then $g$ is injective.
The exponential function $\exp : \mathbb{R} \to \mathbb{R}$ defined by $\exp(x) = e^x$ is injective (but not surjective, as no real value maps to a negative number).
The natural logarithm function $\ln : (0, \infty) \to \mathbb{R}$ defined by $x \mapsto \ln x$ is injective.
The function $g : \mathbb{R} \to \mathbb{R}$ defined by $g(x) = x^n - x$ is not injective, since, for example, $g(0) = g(1) = 0$.
More generally, when $X$ and $Y$ are both the real line $\mathbb{R}$, then an injective function $f : \mathbb{R} \to \mathbb{R}$ is one whose graph is never intersected by any horizontal line more than once. This principle is referred to as the horizontal line test.
Injections can be undone
Functions with left inverses are always injections. That is, given $f : X \to Y$, if there is a function $g : Y \to X$ such that for every $x \in X$, $g(f(x)) = x$, then $f$ is injective. In this case, $g$ is called a retraction of $f$. Conversely, $f$ is called a section of $g$.
Conversely, every injection $f$ with a non-empty domain has a left inverse $g$. It can be defined by choosing an element $a$ in the domain of $f$ and setting $g(y)$ to the unique element of the pre-image $f^{-1}[y]$ (if it is non-empty) or to $a$ (otherwise).
The left inverse $g$ is not necessarily an inverse of $f$, because the composition in the other order, $f \circ g$, may differ from the identity on $Y$. In other words, an injective function can be "reversed" by a left inverse, but is not necessarily invertible, which requires that the function is bijective.
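For finite sets, the left-inverse construction just described can be carried out explicitly; the helper name `left_inverse` and the choice of default element are illustrative, not standard.

```python
# Construct a left inverse g of a finite injection f, as described above:
# each image f(x) is sent back to x, and every other element of the
# codomain is sent to a fixed default element of the (non-empty) domain.

def left_inverse(f, domain):
    x0 = next(iter(domain))              # arbitrary default element
    table = {f(x): x for x in domain}    # well defined since f is injective
    return lambda y: table.get(y, x0)

X = [0, 1, 2, 3]
f = lambda x: 2 * x + 3                  # injective on X
g = left_inverse(f, X)
print(all(g(f(x)) == x for x in X))      # True: g(f(x)) = x for all x in X
print(g(100))                            # 0: 100 is not an image, default used
```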
Injections may be made invertible
In fact, to turn an injective function $f : X \to Y$ into a bijective (hence invertible) function, it suffices to replace its codomain $Y$ by its actual image $J = f(X)$. That is, let $g : X \to J$ such that $g(x) = f(x)$ for all $x \in X$; then $g$ is bijective. Indeed, $f$ can be factored as $\operatorname{In}_{J,Y} \circ g$, where $\operatorname{In}_{J,Y}$ is the inclusion function from $J$ into $Y$.
More generally, injective partial functions are called partial bijections.
Other properties
If $f$ and $g$ are both injective, then $f \circ g$ is injective.
If $g \circ f$ is injective, then $f$ is injective (but $g$ need not be).
$f : X \to Y$ is injective if and only if, given any functions $g, h : W \to X$, whenever $f \circ g = f \circ h$, then $g = h$. In other words, injective functions are precisely the monomorphisms in the category Set of sets.
If $f : X \to Y$ is injective and $A$ is a subset of $X$, then $f^{-1}(f(A)) = A$. Thus, $A$ can be recovered from its image $f(A)$.
If $f : X \to Y$ is injective and $A$ and $B$ are both subsets of $X$, then $f(A \cap B) = f(A) \cap f(B)$.
Every function $h : W \to Y$ can be decomposed as $h = f \circ g$ for a suitable injection $f$ and surjection $g$. This decomposition is unique up to isomorphism, and $f$ may be thought of as the inclusion function of the range $h(W)$ of $h$ as a subset of the codomain $Y$ of $h$.
If $f : X \to Y$ is an injective function, then $Y$ has at least as many elements as $X$, in the sense of cardinal numbers. In particular, if, in addition, there is an injection from $Y$ to $X$, then $X$ and $Y$ have the same cardinal number. (This is known as the Cantor–Bernstein–Schroeder theorem.)
If both $X$ and $Y$ are finite with the same number of elements, then $f : X \to Y$ is injective if and only if $f$ is surjective (in which case $f$ is bijective).
An injective function which is a homomorphism between two algebraic structures is an embedding.
Unlike surjectivity, which is a relation between the graph of a function and its codomain, injectivity is a property of the graph of the function alone; that is, whether a function $f$ is injective can be decided by only considering the graph (and not the codomain) of $f$.
Proving that functions are injective
A proof that a function is injective depends on how the function is presented and what properties the function holds.
For functions that are given by some formula there is a basic idea.
We use the definition of injectivity, namely that if $f(x) = f(y)$, then $x = y$.
Here is an example: $f(x) = 2x + 3$.
Proof: Let $f : X \to Y$. Suppose $f(x) = f(y)$. So $2x + 3 = 2y + 3$, which implies $2x = 2y$, which implies $x = y$. Therefore, it follows from the definition that $f$ is injective.
There are multiple other methods of proving that a function is injective. For example, in calculus, if $f$ is a differentiable function defined on some interval, then it is sufficient to show that the derivative is always positive or always negative on that interval. In linear algebra, if $f$ is a linear transformation, it is sufficient to show that the kernel of $f$ contains only the zero vector. If $f$ is a function with finite domain, it is sufficient to look through the list of images of each domain element and check that no image occurs twice on the list.
A graphical approach for a real-valued function $f$ of a real variable $x$ is the horizontal line test. If every horizontal line intersects the curve of $f(x)$ in at most one point, then $f$ is injective or one-to-one.
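The finite-domain method above (list each image and check for repeats) is directly mechanizable; the function name `is_injective` is illustrative.

```python
# Brute-force injectivity test on a finite domain: compare the number of
# images with the number of distinct images.

def is_injective(f, domain):
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

print(is_injective(lambda x: 2 * x + 3, range(-5, 6)))  # True
print(is_injective(lambda x: x * x, range(-5, 6)))      # False: (-1)^2 == 1^2
```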
Gallery
See also
Notes
References
External links
Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history of Injection and related terms.
Khan Academy – Surjective (onto) and Injective (one-to-one) functions: Introduction to surjective and injective functions
Inverse element

In mathematics, the concept of an inverse element generalises the concepts of opposite ($-x$) and reciprocal ($1/x$) of numbers.
Given an operation denoted here $*$, and an identity element denoted $e$, if $x * y = e$, one says that $x$ is a left inverse of $y$, and that $y$ is a right inverse of $x$. (An identity element is an element such that $x * e = x$ and $e * y = y$ for all $x$ and $y$ for which the left-hand sides are defined.)
When the operation is associative, if an element has both a left inverse and a right inverse, then these two inverses are equal and unique; they are called the inverse element or simply the inverse. Often an adjective is added for specifying the operation, such as in additive inverse, multiplicative inverse, and functional inverse. In this case (associative operation), an invertible element is an element that has an inverse. In a ring, an invertible element, also called a unit, is an element that is invertible under multiplication (this is not ambiguous, as every element is invertible under addition).
Inverses are commonly used in groupswhere every element is invertible, and ringswhere invertible elements are also called units. They are also commonly used for operations that are not defined for all possible operands, such as inverse matrices and inverse functions. This has been generalized to category theory, where, by definition, an isomorphism is an invertible morphism.
The word 'inverse' is derived from Latin inversus, which means 'turned upside down', 'overturned'. This may take its origin from the case of fractions, where the (multiplicative) inverse is obtained by exchanging the numerator and the denominator (the inverse of $\frac{x}{y}$ is $\frac{y}{x}$).
Definitions and basic properties
The concepts of inverse element and invertible element are commonly defined for binary operations that are everywhere defined (that is, the operation is defined for any two elements of its domain). However, these concepts are also commonly used with partial operations, that is operations that are not defined everywhere. Common examples are matrix multiplication, function composition and composition of morphisms in a category. It follows that the common definitions of associativity and identity element must be extended to partial operations; this is the object of the first subsections.
In this section, $X$ is a set (possibly a proper class) on which a partial operation (possibly total) is defined, which is denoted with $*$.
Associativity
A partial operation is associative if
$(x * y) * z = x * (y * z)$
for every $x$, $y$, $z$ in $X$ for which one of the members of the equality is defined; the equality means that the other member of the equality must also be defined.
Examples of non-total associative operations are multiplication of matrices of arbitrary size, and function composition.
Identity elements
Let $*$ be a possibly partial associative operation on a set $X$.
An identity element, or simply an identity, is an element $e$ such that
$x * e = x \quad \text{and} \quad e * y = y$
for every $x$ and $y$ for which the left-hand sides of the equalities are defined.
If $e$ and $f$ are two identity elements such that $e * f$ is defined, then $e = f$. (This results immediately from the definition, by $e = e * f = f$.)
It follows that a total operation has at most one identity element, and if $e$ and $f$ are different identities, then $e * f$ is not defined.
For example, in the case of matrix multiplication, there is one $n \times n$ identity matrix for every positive integer $n$, and two identity matrices of different size cannot be multiplied together.
Similarly, identity functions are identity elements for function composition, and the composition of the identity functions of two different sets are not defined.
Left and right inverses
If
$x * y = e,$
where $e$ is an identity element, one says that $x$ is a left inverse of $y$, and $y$ is a right inverse of $x$.
Left and right inverses do not always exist, even when the operation is total and associative. For example, addition is a total associative operation on nonnegative integers, which has $0$ as additive identity, and $0$ is the only element that has an additive inverse. This lack of inverses is the main motivation for extending the natural numbers into the integers.
An element can have several left inverses and several right inverses, even when the operation is total and associative. For example, consider the functions from the integers to the integers. The doubling function $x \mapsto 2x$ has infinitely many left inverses under function composition, which are the functions that divide by two the even numbers, and give any value to odd numbers. Similarly, every function that maps $n$ to either $2n$ or $2n + 1$ is a right inverse of the function $n \mapsto \lfloor n/2 \rfloor$, the floor function that maps $n$ to $n/2$ or $(n - 1)/2$, depending whether $n$ is even or odd.
More generally, a function has a left inverse for function composition if and only if it is injective, and it has a right inverse if and only if it is surjective.
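The doubling example above is easy to test numerically; `halve` below is one of the infinitely many left inverses of `double`.

```python
# Doubling on the integers has a left inverse (halving) but no two-sided
# inverse: halving after doubling is the identity, but not the other way
# around on odd inputs.

double = lambda n: 2 * n
halve = lambda n: n // 2        # one of infinitely many left inverses

print(all(halve(double(n)) == n for n in range(-10, 11)))  # True
print(double(halve(7)))         # 6, not 7: halve is not a right inverse
```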
In category theory, right inverses are also called sections, and left inverses are called retractions.
Inverses
An element is invertible under an operation if it has a left inverse and a right inverse.
In the common case where the operation is associative, the left and right inverse of an element are equal and unique. Indeed, if $l$ and $r$ are respectively a left inverse and a right inverse of $x$, then
$l = l * (x * r) = (l * x) * r = r.$
The inverse of an invertible element is its unique left or right inverse.
If the operation is denoted as an addition, the inverse, or additive inverse, of an element $x$ is denoted $-x$. Otherwise, the inverse of $x$ is generally denoted $x^{-1}$, or, in the case of a commutative multiplication, $\frac{1}{x}$. When there may be a confusion between several operations, the symbol of the operation may be added before the exponent, such as in $x^{*-1}$. The notation $f^{-1}$ is not commonly used for function composition, since $\frac{1}{f}$ can be used for the multiplicative inverse.
If $x$ and $y$ are invertible, and $x * y$ is defined, then $x * y$ is invertible, and its inverse is $y^{-1} * x^{-1}$.
An invertible homomorphism is called an isomorphism. In category theory, an invertible morphism is also called an isomorphism.
In groups
A group is a set with an associative operation that has an identity element, and for which every element has an inverse.
Thus, the inverse is a function from the group to itself that may also be considered as an operation of arity one. It is also an involution, since the inverse of the inverse of an element is the element itself.
A group may act on a set as transformations of this set. In this case, the inverse $g^{-1}$ of a group element $g$ defines a transformation that is the inverse of the transformation defined by $g$, that is, the transformation that "undoes" the transformation defined by $g$.
For example, the Rubik's cube group represents the finite sequences of elementary moves. The inverse of such a sequence is obtained by applying the inverse of each move in the reverse order.
In monoids
A monoid is a set with an associative operation that has an identity element.
The invertible elements in a monoid form a group under the monoid operation.
A ring is a monoid for ring multiplication. In this case, the invertible elements are also called units and form the group of units of the ring.
If a monoid is not commutative, there may exist non-invertible elements that have a left inverse or a right inverse (not both, as, otherwise, the element would be invertible).
For example, the set of the functions from a set to itself is a monoid under function composition. In this monoid, the invertible elements are the bijective functions; the elements that have left inverses are the injective functions, and those that have right inverses are the surjective functions.
Given a monoid, one may want to extend it by adding inverses to some elements. This is generally impossible for non-commutative monoids, but, in a commutative monoid, it is possible to add inverses to the elements that have the cancellation property (an element $x$ has the cancellation property if $x * y = x * z$ implies $y = z$, and $y * x = z * x$ implies $y = z$). This extension of a monoid is allowed by the Grothendieck group construction. This is the method that is commonly used for constructing integers from natural numbers, rational numbers from integers and, more generally, the field of fractions of an integral domain, and localizations of commutative rings.
In rings
A ring is an algebraic structure with two operations, addition and multiplication, which are denoted as the usual operations on numbers.
Under addition, a ring is an abelian group, which means that addition is commutative and associative; it has an identity, called the additive identity, and denoted $0$; and every element $x$ has an inverse, called its additive inverse and denoted $-x$. Because of commutativity, the concepts of left and right inverses are meaningless since they do not differ from inverses.
Under multiplication, a ring is a monoid; this means that multiplication is associative and has an identity called the multiplicative identity and denoted $1$. An invertible element for multiplication is called a unit. The inverse or multiplicative inverse (for avoiding confusion with additive inverses) of a unit $x$ is denoted $x^{-1}$, or, when the multiplication is commutative, $\frac{1}{x}$.
The additive identity $0$ is never a unit, except when the ring is the zero ring, which has $0$ as its unique element.
If $0$ is the only non-unit, the ring is a field if the multiplication is commutative, or a division ring otherwise.
In a noncommutative ring (that is, a ring whose multiplication is not commutative), a non-invertible element may have one or several left or right inverses. This is, for example, the case of the linear functions from an infinite-dimensional vector space to itself.
A commutative ring (that is, a ring whose multiplication is commutative) may be extended by adding inverses to elements that are not zero divisors (that is, their product with a nonzero element cannot be ). This is the process of localization, which produces, in particular, the field of rational numbers from the ring of integers, and, more generally, the field of fractions of an integral domain. Localization is also used with zero divisors, but, in this case the original ring is not a subring of the localisation; instead, it is mapped non-injectively to the localization.
Matrices
Matrix multiplication is commonly defined for matrices over a field, and straightforwardly extended to matrices over rings, rngs and semirings. However, in this section, only matrices over a commutative ring are considered, because of the use of the concept of rank and determinant.
If is a matrix (that is, a matrix with rows and columns), and is a matrix, the product is defined if , and only in this case. An identity matrix, that is, an identity element for matrix multiplication is a square matrix (same number for rows and columns) whose entries of the main diagonal are all equal to , and all other entries are .
An invertible matrix is an invertible element under matrix multiplication. A matrix over a commutative ring is invertible if and only if its determinant is a unit in (that is, is invertible in . In this case, its inverse matrix can be computed with Cramer's rule.
If is a field, the determinant is invertible if and only if it is not zero. As the case of fields is more common, one see often invertible matrices defined as matrices with a nonzero determinant, but this is incorrect over rings.
In the case of integer matrices (that is, matrices with integer entries), an invertible matrix is a matrix that has an inverse that is also an integer matrix. Such a matrix is called a unimodular matrix for distinguishing it from matrices that are invertible over the real numbers. A square integer matrix is unimodular if and only if its determinant is or , since these two numbers are the only units in the ring of integers.
A matrix has a left inverse if and only if its rank equals its number of columns. This left inverse is not unique except for square matrices, where the left inverse equals the inverse matrix. Similarly, a right inverse exists if and only if the rank equals the number of rows; it is not unique in the case of a rectangular matrix, and equals the inverse matrix in the case of a square matrix.
Functions, homomorphisms and morphisms
Composition is a partial operation that generalizes to homomorphisms of algebraic structures and morphisms of categories into operations that are also called composition, and share many properties with function composition.
In all these cases, composition is associative.
If f : X → Y and g : Y′ → Z, the composition g ∘ f is defined if and only if Y = Y′ or, in the function and homomorphism cases, Y ⊆ Y′. In the function and homomorphism cases, this means that the codomain of f equals or is included in the domain of g. In the morphism case, this means that the codomain of f equals the domain of g.
There is an identity for every object (set, algebraic structure or object), which is also called an identity function in the function case.
A function is invertible if and only if it is a bijection. An invertible homomorphism or morphism is called an isomorphism. A homomorphism of algebraic structures is an isomorphism if and only if it is a bijection. The inverse of a bijection is called an inverse function. In the other cases, one talks of inverse isomorphisms.
A function has a left inverse or a right inverse if and only if it is injective or surjective, respectively. A homomorphism of algebraic structures that has a left inverse or a right inverse is respectively injective or surjective, but the converse is not true in some algebraic structures. For example, the converse is true for vector spaces but not for modules over a ring: a homomorphism of modules that has a left inverse or a right inverse is called respectively a split epimorphism or a split monomorphism. This terminology is also used for morphisms in any category.
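The correspondence between injectivity and left inverses can be illustrated for finite sets. The function below is a hypothetical example; the constructed left inverse is deliberately arbitrary off the image of f, showing that it need not be unique:

```python
def left_inverse(f, domain, codomain):
    """Return g with g(f(x)) = x for all x in the domain,
    or None if f (given as a dict) is not injective."""
    if len(set(f.values())) != len(domain):
        return None                      # not injective: no left inverse exists
    g = {f[x]: x for x in domain}        # invert f on its image
    default = next(iter(domain))
    for y in codomain:
        g.setdefault(y, default)         # arbitrary values off the image
    return g

f = {0: 'a', 1: 'c'}                     # an injective map {0,1} -> {'a','b','c'}
g = left_inverse(f, [0, 1], ['a', 'b', 'c'])
assert all(g[f[x]] == x for x in [0, 1])
```

A non-injective map such as {0: 'a', 1: 'a'} makes the routine return None, matching the "if and only if" in the text.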
Generalizations
In a unital magma
Let S be a unital magma, that is, a set with a binary operation ∗ and an identity element e. If, for a, b ∈ S, we have a ∗ b = e, then a is called a left inverse of b and b is called a right inverse of a. If an element y is both a left inverse and a right inverse of x, then y is called a two-sided inverse, or simply an inverse, of x. An element with a two-sided inverse in S is called invertible in S. An element with an inverse element only on one side is left invertible or right invertible.
Elements of a unital magma may have multiple left, right or two-sided inverses. For example, in the magma given by the Cayley table
the elements 2 and 3 each have two two-sided inverses.
A unital magma in which all elements are invertible need not be a loop. For example, in the magma given by the Cayley table
every element has a unique two-sided inverse (namely itself), but is not a loop because the Cayley table is not a Latin square.
Similarly, a loop need not have two-sided inverses. For example, in the loop given by the Cayley table
the only element with a two-sided inverse is the identity element 1.
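Given any finite Cayley table, the left, right and two-sided inverses of each element can be found by direct search. The magma below is an illustrative choice (not one of the tables discussed above); in it, one element has two distinct two-sided inverses, as is permitted in a non-associative unital magma:

```python
def inverses(table, e):
    """For a magma given by table[a][b] = a*b with identity e,
    return the left, right and two-sided inverses of each element."""
    elems = range(len(table))
    out = {}
    for x in elems:
        left = {y for y in elems if table[y][x] == e}    # y*x = e
        right = {y for y in elems if table[x][y] == e}   # x*y = e
        out[x] = (left, right, left & right)
    return out

# A hypothetical unital magma on {0, 1, 2} with identity 0:
table = [[0, 1, 2],
         [1, 0, 0],
         [2, 0, 0]]
inv = inverses(table, 0)
# Element 1 has both 1 and 2 as two-sided inverses: inverses need not be unique.
assert inv[1][2] == {1, 2}
```

In a monoid the associativity of ∗ would force these sets to contain at most one element; the table above is not associative, which is what allows the multiplicity.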
If the operation is associative, then, if an element has both a left inverse and a right inverse, they are equal. In other words, in a monoid (an associative unital magma) every element has at most one inverse (as defined in this section). In a monoid, the set of invertible elements is a group, called the group of units of S, and denoted by U(S) or H1.
In a semigroup
The definition in the previous section generalizes the notion of inverse in a group relative to the notion of identity. It is also possible, albeit less obvious, to generalize the notion of an inverse by dropping the identity element but keeping associativity; that is, in a semigroup.
In a semigroup S an element x is called (von Neumann) regular if there exists some element z in S such that xzx = x; z is sometimes called a pseudoinverse. An element y is called (simply) an inverse of x if xyx = x and y = yxy. Every regular element has at least one inverse: if x = xzx then it is easy to verify that y = zxz is an inverse of x as defined in this section. Another easy-to-prove fact: if y is an inverse of x then e = xy and f = yx are idempotents, that is, ee = e and ff = f. Thus, every pair of (mutually) inverse elements gives rise to two idempotents, and ex = xf = x, ye = fy = y, and e acts as a left identity on x, while f acts as a right identity, and the left/right roles are reversed for y. This simple observation can be generalized using Green's relations: every idempotent e in an arbitrary semigroup is a left identity for Re and a right identity for Le. An intuitive description of this fact is that every pair of mutually inverse elements produces a local left identity, and respectively, a local right identity.
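These facts can be checked concretely in the full transformation semigroup on a three-element set, an illustrative choice; the particular map x below is hypothetical, and a pseudoinverse z is found by brute force:

```python
from itertools import product

# Full transformation semigroup on {0,1,2}: maps as tuples, with
# composition a*b defined by (a*b)(i) = a(b(i)).
maps = list(product(range(3), repeat=3))
comp = lambda a, b: tuple(a[b[i]] for i in range(3))

x = (0, 0, 1)                       # a non-invertible map
# x is regular: find z with x z x = x, then y = z x z is an inverse of x.
z = next(m for m in maps if comp(comp(x, m), x) == x)
y = comp(comp(z, x), z)
assert comp(comp(x, y), x) == x and comp(comp(y, x), y) == y

# e = xy and f = yx are idempotents acting as local identities:
e, f = comp(x, y), comp(y, x)
assert comp(e, e) == e and comp(f, f) == f
assert comp(e, x) == x and comp(x, f) == x
```

Since x here is not a bijection, it has no inverse in the unital-magma sense, yet it does have an inverse in this weaker semigroup sense, which is exactly the gap the section describes.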
In a monoid, the notion of inverse as defined in the previous section is strictly narrower than the definition given in this section. Only elements in the Green class H1 have an inverse from the unital magma perspective, whereas for any idempotent e, the elements of He have an inverse as defined in this section. Under this more general definition, inverses need not be unique (or exist) in an arbitrary semigroup or monoid. If all elements are regular, then the semigroup (or monoid) is called regular, and every element has at least one inverse. If every element has exactly one inverse as defined in this section, then the semigroup is called an inverse semigroup. Finally, an inverse semigroup with only one idempotent is a group. An inverse semigroup may have an absorbing element 0 because 000 = 0, whereas a group may not.
Outside semigroup theory, a unique inverse as defined in this section is sometimes called a quasi-inverse. This is generally justified because in most applications (for example, all examples in this article) associativity holds, which makes this notion a generalization of the left/right inverse relative to an identity (see Generalized inverse).
U-semigroups
A natural generalization of the inverse semigroup is to define an (arbitrary) unary operation ° such that (a°)° = a for all a in S; this endows S with a type 2,1 algebra. A semigroup endowed with such an operation is called a U-semigroup. Although it may seem that a° will be the inverse of a, this is not necessarily the case. In order to obtain interesting notion(s), the unary operation must somehow interact with the semigroup operation. Two classes of U-semigroups have been studied:
I-semigroups, in which the interaction axiom is aa°a = a
*-semigroups, in which the interaction axiom is (ab)° = b°a°. Such an operation is called an involution, and typically denoted by a*
Clearly a group is both an I-semigroup and a *-semigroup. A class of semigroups important in semigroup theory are completely regular semigroups; these are I-semigroups in which one additionally has aa° = a°a; in other words, every element has a commuting pseudoinverse a°. There are few concrete examples of such semigroups, however; most are completely simple semigroups. In contrast, a subclass of *-semigroups, the *-regular semigroups (in the sense of Drazin), yield one of the best-known examples of a (unique) pseudoinverse, the Moore–Penrose inverse. In this case, however, the involution a* is not the pseudoinverse. Rather, the pseudoinverse of x is the unique element y such that xyx = x, yxy = y, (xy)* = xy, (yx)* = yx. Since *-regular semigroups generalize inverse semigroups, the unique element defined this way in a *-regular semigroup is called the generalized inverse or Moore–Penrose inverse.
Semirings
Examples
All examples in this section involve associative operators.
Galois connections
The lower and upper adjoints in a (monotone) Galois connection, L and G, are quasi-inverses of each other; that is, LGL = L and GLG = G, and one uniquely determines the other. They are not left or right inverses of each other, however.
Generalized inverses of matrices
A square matrix A with entries in a field is invertible (in the set of all square matrices of the same size, under matrix multiplication) if and only if its determinant is different from zero. If the determinant of A is zero, it is impossible for it to have a one-sided inverse; therefore a left inverse or right inverse implies the existence of the other one. See invertible matrix for more.
More generally, a square matrix over a commutative ring R is invertible if and only if its determinant is invertible in R.
Non-square matrices of full rank have several one-sided inverses:
For m > n, we have left inverses; for example, (AᵀA)⁻¹Aᵀ is a left inverse of A.
For m < n, we have right inverses; for example, Aᵀ(AAᵀ)⁻¹ is a right inverse of A.
The left inverse can be used to determine the least-norm solution of Ax = b, which is also the least-squares formula for regression and is given by x = (AᵀA)⁻¹Aᵀb.
No rank-deficient matrix has any (even one-sided) inverse. However, the Moore–Penrose inverse exists for all matrices, and coincides with the left or right (or true) inverse when it exists.
As an example of matrix inverses, consider:
So, as m < n, we have a right inverse, Aᵀ(AAᵀ)⁻¹.
The left inverse does not exist, because AᵀA is a singular matrix and cannot be inverted.
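A pure-Python sketch of these one-sided inverses, using exact Fraction arithmetic; the 2×3 matrix here is an illustrative choice, not the example above:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(M):
    # Exact inverse of a 2x2 matrix via the adjugate.
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[Fraction(M[1][1], d), Fraction(-M[0][1], d)],
            [Fraction(-M[1][0], d), Fraction(M[0][0], d)]]

A = [[1, 0, 1],
     [0, 1, 1]]                          # 2x3, full row rank (m < n)
At = transpose(A)
R = matmul(At, inv2(matmul(A, At)))      # right inverse A^T (A A^T)^-1
assert matmul(A, R) == [[1, 0], [0, 1]]  # A R = identity

# R is also the Moore-Penrose inverse here: the Penrose conditions hold.
assert matmul(matmul(A, R), A) == A
assert matmul(matmul(R, A), R) == R
assert transpose(matmul(R, A)) == matmul(R, A)   # R A is symmetric
```

Because A has full row rank, the right inverse formula and the Moore–Penrose inverse coincide, matching the remark above that the pseudoinverse agrees with a one-sided inverse whenever one exists.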
See also
Division ring
Latin square property
Loop (algebra)
Unit (ring theory)
Notes
References
M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, , p. 15 (def in unital magma) and p. 33 (def in semigroup)
contains all of the semigroup material herein except *-regular semigroups.
Drazin, M.P., Regular semigroups with involution, Proc. Symp. on Regular Semigroups (DeKalb, 1979), 29–46
Miyuki Yamada, P-systems in regular semigroups, Semigroup Forum, 24(1), December 1982, pp. 173–187
Nordahl, T.E., and H.E. Scheiblich, Regular * Semigroups, Semigroup Forum, 16(1978), 369–377.
Algebra
Abstract algebra
Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures.
For instance, rather than take particular groups as the object of study, in universal algebra one takes the class of groups as an object of study.
Basic idea
In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A.
Arity
An n-ary operation on A is a function that takes n elements of A and returns a single element of A. Thus, a 0-ary operation (or nullary operation) can be represented simply as an element of A, or a constant, often denoted by a letter like a. A 1-ary operation (or unary operation) is simply a function from A to A, often denoted by a symbol placed in front of its argument, like ~x. A 2-ary operation (or binary operation) is often denoted by a symbol placed between its arguments (also called infix notation), like x ∗ y. Operations of higher or unspecified arity are usually denoted by function symbols, with the arguments placed in parentheses and separated by commas, like f(x,y,z) or f(x1,...,xn). One way of talking about an algebra, then, is by referring to it as an algebra of a certain type Ω, where Ω is an ordered sequence of natural numbers representing the arity of the operations of the algebra. However, some researchers also allow infinitary operations, such as an infinitary meet ⋀j∈J xj where J is an infinite index set, which is an operation in the algebraic theory of complete lattices.
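A finite algebra in this sense can be sketched as a carrier set together with operations tagged by their arities. The dictionary layout below is an illustrative encoding, not standard notation:

```python
# A finite algebra as a carrier set plus named operations with arities.
algebra = {
    "carrier": {0, 1, 2, 3},
    "ops": {
        "e": (0, lambda: 0),                 # nullary: a constant
        "~": (1, lambda x: (-x) % 4),        # unary
        "*": (2, lambda x, y: (x + y) % 4),  # binary
    },
}

def apply_op(alg, name, *args):
    arity, fn = alg["ops"][name]
    assert len(args) == arity, f"{name} expects {arity} arguments"
    result = fn(*args)
    assert result in alg["carrier"]          # operations are closed on the carrier
    return result

assert apply_op(algebra, "*", 3, 2) == 1
assert apply_op(algebra, "~", 1) == 3
assert apply_op(algebra, "e") == 0
```

This particular algebra has type (2, 1, 0), the signature that will reappear in the equational presentation of groups below.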
Equations
After the operations have been specified, the nature of the algebra is further defined by axioms, which in universal algebra often take the form of identities, or equational laws. An example is the associative axiom for a binary operation, which is given by the equation x ∗ (y ∗ z) = (x ∗ y) ∗ z. The axiom is intended to hold for all elements x, y, and z of the set A.
Varieties
A collection of algebraic structures defined by identities is called a variety or equational class.
Restricting one's study to varieties rules out:
quantification, including universal quantification (∀) except before an equation, and existential quantification (∃)
logical connectives other than conjunction (∧)
relations other than equality, in particular inequalities, both ≠ and order relations
The study of equational classes can be seen as a special branch of model theory, typically dealing with structures having operations only (i.e. the type can have symbols for functions but not for relations other than equality), and in which the language used to talk about these structures uses equations only.
Not all algebraic structures in a wider sense fall into this scope. For example, ordered groups involve an ordering relation, so would not fall within this scope.
The class of fields is not an equational class because there is no type (or "signature") in which all field laws can be written as equations (inverses of elements are defined for all non-zero elements in a field, so inversion cannot be added to the type).
One advantage of this restriction is that the structures studied in universal algebra can be defined in any category that has finite products. For example, a topological group is just a group in the category of topological spaces.
Examples
Most of the usual algebraic systems of mathematics are examples of varieties, but not always in an obvious way, since the usual definitions often involve quantification or inequalities.
Groups
As an example, consider the definition of a group. Usually a group is defined in terms of a single binary operation ∗, subject to the axioms:
Associativity (as in the previous section): x ∗ (y ∗ z) = (x ∗ y) ∗ z; formally: ∀x,y,z. x∗(y∗z)=(x∗y)∗z.
Identity element: There exists an element e such that for each element x, one has e ∗ x = x = x ∗ e; formally: ∃e ∀x. e∗x=x=x∗e.
Inverse element: The identity element is easily seen to be unique, and is usually denoted by e. Then for each x, there exists an element i such that x ∗ i = e = i ∗ x; formally: ∀x ∃i. x∗i=e=i∗x.
(Some authors also use the "closure" axiom that x ∗ y belongs to A whenever x and y do, but here this is already implied by calling ∗ a binary operation.)
This definition of a group does not immediately fit the point of view of universal algebra, because the axioms of the identity element and inversion are not stated purely in terms of equational laws which hold universally "for all ..." elements, but also involve the existential quantifier "there exists ...". The group axioms can be phrased as universally quantified equations by specifying, in addition to the binary operation ∗, a nullary operation e and a unary operation ~, with ~x usually written as x−1. The axioms become:
Associativity: x ∗ (y ∗ z) = (x ∗ y) ∗ z.
Identity element: e ∗ x = x = x ∗ e; formally: ∀x. e∗x = x = x∗e.
Inverse element: x ∗ x−1 = e = x−1 ∗ x; formally: ∀x. x∗x−1 = e = x−1∗x.
To summarize, the usual definition has:
a single binary operation (signature (2))
1 equational law (associativity)
2 quantified laws (identity and inverse)
while the universal algebra definition has:
3 operations: one binary, one unary, and one nullary (signature (2, 1, 0))
3 equational laws (associativity, identity, and inverse)
no quantified laws (except outermost universal quantifiers, which are allowed in varieties)
A key point is that the extra operations do not add information, but follow uniquely from the usual definition of a group. Although the usual definition did not uniquely specify the identity element e, an easy exercise shows that it is unique, as is the inverse of each element.
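Because the universal-algebra presentation uses only universally quantified equations, a finite algebra can be checked against them exhaustively. A sketch for the integers modulo 5 under addition, with negation as the unary operation and 0 as the nullary one:

```python
from itertools import product

# Z_5 presented equationally: binary +, unary negation, nullary 0.
n = 5
op = lambda x, y: (x + y) % n
inv = lambda x: (-x) % n
e = 0
elems = range(n)

# All three laws are universally quantified equations, so for a finite
# carrier an exhaustive check suffices:
assert all(op(x, op(y, z)) == op(op(x, y), z)
           for x, y, z in product(elems, repeat=3))             # associativity
assert all(op(e, x) == x == op(x, e) for x in elems)            # identity
assert all(op(x, inv(x)) == e == op(inv(x), x) for x in elems)  # inverse
```

Note that no existential quantifier is needed anywhere: the identity and inverses are supplied as operations, exactly as in the signature (2, 1, 0) presentation.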
The universal algebra point of view is well adapted to category theory. For example, when defining a group object in category theory, where the object in question may not be a set, one must use equational laws (which make sense in general categories), rather than quantified laws (which refer to individual elements). Further, the inverse and identity are specified as morphisms in the category. For example, in a topological group, the inverse must not only exist element-wise, but must give a continuous mapping (a morphism). Some authors also require the identity map to be a closed inclusion (a cofibration).
Other examples
Most algebraic structures are examples of universal algebras.
Rings, semigroups, quasigroups, groupoids, magmas, loops, and others.
Vector spaces over a fixed field and modules over a fixed ring are universal algebras. These have a binary addition and a family of unary scalar multiplication operators, one for each element of the field or ring.
Examples of relational algebras include semilattices, lattices, and Boolean algebras.
Basic constructions
We assume that the type, , has been fixed. Then there are three basic constructions in universal algebra: homomorphic image, subalgebra, and product.
A homomorphism between two algebras A and B is a function from the set A to the set B such that, for every operation fA of A and corresponding fB of B (of arity, say, n), h(fA(x1, ..., xn)) = fB(h(x1), ..., h(xn)). (Sometimes the subscripts on f are taken off when it is clear from context which algebra the function is from.) For example, if e is a constant (nullary operation), then h(eA) = eB. If ~ is a unary operation, then h(~x) = ~h(x). If ∗ is a binary operation, then h(x ∗ y) = h(x) ∗ h(y). And so on. A few of the things that can be done with homomorphisms, as well as definitions of certain special kinds of homomorphisms, are listed under Homomorphism. In particular, we can take the homomorphic image of an algebra, h(A).
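A minimal sketch of this condition, checking that reduction modulo 3 is a homomorphism from (Z6, +) to (Z3, +); the maps here are an illustrative choice:

```python
# h(x) = x mod 3 from (Z_6, +) to (Z_3, +); check h(x + y) = h(x) + h(y).
add6 = lambda x, y: (x + y) % 6
add3 = lambda x, y: (x + y) % 3
h = lambda x: x % 3

assert all(h(add6(x, y)) == add3(h(x), h(y))
           for x in range(6) for y in range(6))

# The homomorphic image h(Z_6) is all of Z_3 here:
assert {h(x) for x in range(6)} == {0, 1, 2}
```

The same exhaustive check would fail for, say, h(x) = x mod 4, since 4 does not divide 6; the condition must hold for every operation of the type and every tuple of arguments.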
A subalgebra of A is a subset of A that is closed under all the operations of A. A product of some set of algebraic structures is the cartesian product of the sets with the operations defined coordinatewise.
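The subalgebra generated by a subset can be computed by closing it under the operations. The helper below is a hypothetical sketch for finite algebras, with operations given as (arity, function) pairs:

```python
from itertools import product

def generated_subalgebra(seed, ops):
    """Close a subset under a list of (arity, function) operations."""
    closed = set(seed)
    changed = True
    while changed:
        changed = False
        for arity, fn in ops:
            # product() snapshots `closed`, so adding to it below is safe.
            for args in product(closed, repeat=arity):
                v = fn(*args)
                if v not in closed:
                    closed.add(v)
                    changed = True
    return closed

# Subalgebra of (Z_12, +) generated by {4}:
sub = generated_subalgebra({4}, [(2, lambda x, y: (x + y) % 12)])
assert sub == {0, 4, 8}
```

Nullary operations (arity 0) are handled uniformly: they simply force their constant into every subalgebra.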
Some basic theorems
The isomorphism theorems, which encompass the isomorphism theorems of groups, rings, modules, etc.
Birkhoff's HSP Theorem, which states that a class of algebras is a variety if and only if it is closed under homomorphic images, subalgebras, and arbitrary direct products.
Motivations and applications
In addition to its unifying approach, universal algebra also gives deep theorems and important examples and counterexamples. It provides a useful framework for those who intend to start the study of new classes of algebras.
It can enable the use of methods invented for some particular classes of algebras to other classes of algebras, by recasting the methods in terms of universal algebra (if possible), and then interpreting these as applied to other classes. It has also provided conceptual clarification; as J.D.H. Smith puts it, "What looks messy and complicated in a particular framework may turn out to be simple and obvious in the proper general one."
In particular, universal algebra can be applied to the study of monoids, rings, and lattices. Before universal algebra came along, many theorems (most notably the isomorphism theorems) were proved separately in all of these classes, but with universal algebra, they can be proven once and for all for every kind of algebraic system.
The 1956 paper by Higgins referenced below has been well followed up for its framework for a range of particular algebraic systems, while his 1963 paper is notable for its discussion of algebras with operations which are only partially defined, typical examples for this being categories and groupoids. This leads on to the subject of higher-dimensional algebra which can be defined as the study of algebraic theories with partial operations whose domains are defined under geometric conditions. Notable examples of these are various forms of higher-dimensional categories and groupoids.
Constraint satisfaction problem
Universal algebra provides a natural language for the constraint satisfaction problem (CSP). CSP refers to an important class of computational problems where, given a relational algebra A and an existential sentence φ over this algebra, the question is to find out whether φ can be satisfied in A. The algebra A is often fixed, so that CSPA refers to the problem whose instance is only the existential sentence φ.
It is proved that every computational problem can be formulated as CSPA for some algebra A.
For example, the n-coloring problem can be stated as CSP of the algebra ({0, 1, ..., n − 1}, ≠), i.e. an algebra with n elements and a single relation, inequality.
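A brute-force sketch of this formulation, with illustrative graphs: assign each vertex one of n values so that every edge satisfies the single relation, inequality:

```python
from itertools import product

def csp_coloring(n, vertices, edges):
    """Brute-force CSP: assign each vertex a value in {0..n-1} so that
    every edge satisfies the inequality relation."""
    for assignment in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, assignment))
        if all(color[u] != color[v] for u, v in edges):
            return color
    return None

# A 4-cycle is 2-colorable; a triangle is not:
assert csp_coloring(2, "abcd", [("a","b"), ("b","c"), ("c","d"), ("d","a")]) is not None
assert csp_coloring(2, "abc", [("a","b"), ("b","c"), ("c","a")]) is None
assert csp_coloring(3, "abc", [("a","b"), ("b","c"), ("c","a")]) is not None
```

This exhaustive search takes exponential time; the dichotomy theorem mentioned below concerns which fixed algebras A make CSPA solvable in polynomial time.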
The dichotomy conjecture (proved in April 2017) states that if A is a finite algebra, then CSPA is either in P or NP-complete.
Generalizations
Universal algebra has also been studied using the techniques of category theory. In this approach, instead of writing a list of operations and equations obeyed by those operations, one can describe an algebraic structure using categories of a special sort, known as Lawvere theories or more generally algebraic theories. Alternatively, one can describe algebraic structures using monads. The two approaches are closely related, with each having their own advantages.
In particular, every Lawvere theory gives a monad on the category of sets, while any "finitary" monad on the category of sets arises from a Lawvere theory. However, a monad describes algebraic structures within one particular category (for example the category of sets), while algebraic theories describe structure within any of a large class of categories (namely those having finite products).
A more recent development in category theory is operad theory – an operad is a set of operations, similar to a universal algebra, but restricted in that equations are only allowed between expressions with the same variables, with no duplication or omission of variables allowed. Thus, rings can be described as the so-called "algebras" of some operad, but not groups, since the law g ∗ g−1 = 1 duplicates the variable g on the left side and omits it on the right side. At first this may seem to be a troublesome restriction, but the payoff is that operads have certain advantages: for example, one can hybridize the concepts of ring and vector space to obtain the concept of associative algebra, but one cannot form a similar hybrid of the concepts of group and vector space.
Another development is partial algebra where the operators can be partial functions. Certain partial functions can also be handled by a generalization of Lawvere theories known as "essentially algebraic theories".
Another generalization of universal algebra is model theory, which is sometimes described as "universal algebra + logic".
History
In Alfred North Whitehead's book A Treatise on Universal Algebra, published in 1898, the term universal algebra had essentially the same meaning that it has today. Whitehead credits William Rowan Hamilton and Augustus De Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself.
At the time structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand algebraic structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: "The main idea of the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but rather the comparative study of their several structures." At the time George Boole's algebra of logic made a strong counterpoint to ordinary number algebra, so the term "universal" served to calm strained sensibilities.
Whitehead's early work sought to unify quaternions (due to Hamilton), Grassmann's Ausdehnungslehre, and Boole's algebra of logic. Whitehead wrote in his book:
"Such algebras have an intrinsic value for separate detailed study; also they are worthy of comparative study, for the sake of the light thereby thrown on the general theory of symbolic reasoning, and on algebraic symbolism in particular. The comparative study necessarily presupposes some previous separate study, comparison being impossible without knowledge."
Whitehead, however, had no results of a general nature. Work on the subject was minimal until the early 1930s, when Garrett Birkhoff and Øystein Ore began publishing on universal algebras. Developments in metamathematics and category theory in the 1940s and 1950s furthered the field, particularly the work of Abraham Robinson, Alfred Tarski, Andrzej Mostowski, and their students.
In the period between 1935 and 1950, most papers were written along the lines suggested by Birkhoff's papers, dealing with free algebras, congruence and subalgebra lattices, and homomorphism theorems. Although the development of mathematical logic had made applications to algebra possible, they came about slowly; results published by Anatoly Maltsev in the 1940s went unnoticed because of the war. Tarski's lecture at the 1950 International Congress of Mathematicians in Cambridge ushered in a new period in which model-theoretic aspects were developed, mainly by Tarski himself, as well as C.C. Chang, Leon Henkin, Bjarni Jónsson, Roger Lyndon, and others.
In the late 1950s, Edward Marczewski emphasized the importance of free algebras, leading to the publication of more than 50 papers on the algebraic theory of free algebras by Marczewski himself, together with Jan Mycielski, Władysław Narkiewicz, Witold Nitka, J. Płonka, S. Świerczkowski, K. Urbanik, and others.
Starting with William Lawvere's thesis in 1963, techniques from category theory have become important in universal algebra.
See also
Equational logic
Graph algebra
Term algebra
Clone
Universal algebraic geometry
Simple algebra (universal algebra)
Footnotes
References
.
Burris, Stanley N., and H.P. Sankappanavar, 1981. A Course in Universal Algebra Springer-Verlag. Free online edition.
(First published in 1965 by Harper & Row)
Freese, Ralph, and Ralph McKenzie, 1987. Commutator Theory for Congruence Modular Varieties, 1st ed. London Mathematical Society Lecture Note Series, 125. Cambridge Univ. Press. . Free online second edition.
Hobby, David, and Ralph McKenzie, 1988. The Structure of Finite Algebras American Mathematical Society. . Free online edition.
Jipsen, Peter, and Henry Rose, 1992. Varieties of Lattices, Lecture Notes in Mathematics 1533. Springer Verlag. . Free online edition.
Pigozzi, Don. General Theory of Algebras. Free online edition.
(Mainly of historical interest.)
External links
Algebra Universalis—a journal dedicated to Universal Algebra.
A submarine communications cable is a cable laid on the seabed between land-based stations to carry telecommunication signals across stretches of ocean and sea. The first submarine communications cables were laid beginning in the 1850s and carried telegraphy traffic, establishing the first instant telecommunications links between continents, such as the first transatlantic telegraph cable which became operational on 16 August 1858.
Submarine cables first connected all the world's continents (except Antarctica) when Java was connected to Darwin, Northern Territory, Australia, in 1871 in anticipation of the completion of the Australian Overland Telegraph Line in 1872 connecting to Adelaide, South Australia and thence to the rest of Australia.
Subsequent generations of cables carried telephone traffic, then data communications traffic. These early cables used copper wires in their cores, but modern cables use optical fiber technology to carry digital data, which includes telephone, Internet and private data traffic. Modern cables are typically about 25 mm (1 in) in diameter and weigh around 1.4 tonnes per kilometre for the deep-sea sections which comprise the majority of the run, although larger and heavier cables are used for shallow-water sections near shore.
Early history: telegraph and coaxial cables
First successful trials
After William Cooke and Charles Wheatstone had introduced their working telegraph in 1839, the idea of a submarine line across the Atlantic Ocean began to be thought of as a possible triumph of the future. Samuel Morse proclaimed his faith in it as early as 1840, and in 1842, he submerged a wire, insulated with tarred hemp and India rubber, in the water of New York Harbor, and telegraphed through it. The following autumn, Wheatstone performed a similar experiment in Swansea Bay. A good insulator to cover the wire and prevent the electric current from leaking into the water was necessary for the success of a long submarine line. India rubber had been tried by Moritz von Jacobi, the Prussian electrical engineer, as far back as the early 19th century.
Another insulating gum which could be melted by heat and readily applied to wire made its appearance in 1842. Gutta-percha, the adhesive juice of the Palaquium gutta tree, was introduced to Europe by William Montgomerie, a Scottish surgeon in the service of the British East India Company. Twenty years earlier, Montgomerie had seen whips made of gutta-percha in Singapore, and he believed that it would be useful in the fabrication of surgical apparatus. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845, the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. In 1847, William Siemens, then an officer in the army of Prussia, laid the first successful underwater cable using gutta-percha insulation, across the Rhine between Deutz and Cologne. In 1849, Charles Vincent Walker, electrician to the South Eastern Railway, submerged a length of wire coated with gutta-percha off the coast from Folkestone, which was tested successfully.
First commercial cables
In August 1850, having earlier obtained a concession from the French government, John Watkins Brett's English Channel Submarine Telegraph Company laid the first line across the English Channel, using the converted tugboat Goliath. It was simply a copper wire coated with gutta-percha, without any other protection, and was not successful. However, the experiment served to secure renewal of the concession, and in September 1851, a protected core, or true, cable was laid by the reconstituted Submarine Telegraph Company from a government hulk, Blazer, which was towed across the Channel.
In 1853, more successful cables were laid, linking Great Britain with Ireland, Belgium, and the Netherlands, and crossing The Belts in Denmark. The British & Irish Magnetic Telegraph Company completed the first successful Irish link on May 23 between Portpatrick and Donaghadee using the collier William Hutt. The same ship was used for the link from Dover to Ostend in Belgium, by the Submarine Telegraph Company. Meanwhile, the Electric & International Telegraph Company completed two cables across the North Sea, from Orford Ness to Scheveningen, the Netherlands. These cables were laid by Monarch, a paddle steamer which later became the first vessel with permanent cable-laying equipment.
In 1858, the steamship Elba was used to lay a telegraph cable from Jersey to Guernsey, on to Alderney and then to Weymouth, the cable being completed successfully in September of that year. Problems soon developed with eleven breaks occurring by 1860 due to storms, tidal and sand movements, and wear on rocks. A report to the Institution of Civil Engineers in 1860 set out the problems to assist in future cable-laying operations.
Crimean War (1853–1856)
In the Crimean War various forms of telegraphy played a major role; this was a first. At the start of the campaign there was a telegraph link at Bucharest connected to London. In the winter of 1854 the French extended the telegraph link to the Black Sea coast. In April 1855 the British laid an underwater cable from Varna to the Crimean peninsula so that news of the Crimean War could reach London in a handful of hours.
Transatlantic telegraph cable
The first attempt at laying a transatlantic telegraph cable was promoted by Cyrus West Field, who persuaded British industrialists to fund and lay one in 1858. However, the technology of the day was not capable of supporting the project; it was plagued with problems from the outset, and was in operation for only a month. Subsequent attempts in 1865 and 1866 with the world's largest steamship, the SS Great Eastern, used a more advanced technology and produced the first successful transatlantic cable. Great Eastern later went on to lay the first cable reaching to India from Aden, Yemen, in 1870.
British dominance of early cable
From the 1850s until 1911, British submarine cable systems dominated the most important market, the North Atlantic Ocean. The British had both supply side and demand side advantages. In terms of supply, Britain had entrepreneurs willing to put forth enormous amounts of capital necessary to build, lay and maintain these cables. In terms of demand, Britain's vast colonial empire led to business for the cable companies from news agencies, trading and shipping companies, and the British government. Many of Britain's colonies had significant populations of European settlers, making news about them of interest to the general public in the home country.
British officials believed that depending on telegraph lines that passed through non-British territory posed a security risk, as lines could be cut and messages could be interrupted during wartime. They sought the creation of a worldwide network within the empire, which became known as the All Red Line, and conversely prepared strategies to quickly interrupt enemy communications. Britain's very first action after declaring war on Germany in World War I was to have the cable ship Alert (not the CS Telconia as frequently reported) cut the five cables linking Germany with France, Spain and the Azores, and through them, North America. Thereafter, the only way Germany could communicate was by wireless, and that meant that Room 40 could listen in.
The submarine cables were an economic benefit to trading companies, because owners of ships could communicate with captains when they reached their destination and give directions as to where to go next to pick up cargo based on reported pricing and supply information. The British government had obvious uses for the cables in maintaining administrative communications with governors throughout its empire, as well as in engaging other nations diplomatically and communicating with its military units in wartime. The geographic location of British territory was also an advantage as it included both Ireland on the east side of the Atlantic Ocean and Newfoundland in North America on the west side, making for the shortest route across the ocean, which reduced costs significantly.
A few facts put this dominance of the industry in perspective. In 1896, there were 30 cable-laying ships in the world, 24 of which were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent. During World War I, Britain's telegraph communications were almost completely uninterrupted, while it was able to quickly cut Germany's cables worldwide.
Cable to India, Singapore, East Asia and Australia
Throughout the 1860s and 1870s, British cable expanded eastward, into the Mediterranean Sea and the Indian Ocean. An 1863 cable to Bombay (now Mumbai), India, provided a crucial link to Saudi Arabia. In 1870, Bombay was linked to London via submarine cable in a combined operation by four cable companies, at the behest of the British Government. In 1872, these four companies were combined to form the mammoth globe-spanning Eastern Telegraph Company, owned by John Pender. A spin-off from Eastern Telegraph Company was a second sister company, the Eastern Extension, China and Australasia Telegraph Company, commonly known simply as "the Extension." In 1872, Australia was linked by cable to Bombay via Singapore and China, and in 1876 the cable linked the British Empire from London to New Zealand.
Submarine cables across the Pacific, 1902–1991
The first trans-Pacific cables providing telegraph service were completed in 1902 and 1903, linking the US mainland to Hawaii in 1902 and Guam to the Philippines in 1903. Canada, Australia, New Zealand and Fiji were also linked in 1902 with the trans-Pacific segment of the All Red Line. Japan was connected into the system in 1906. Service beyond Midway Atoll was abandoned in 1941 due to World War II, but the remainder stayed in operation until 1951 when the FCC gave permission to cease operations.
The first trans-Pacific telephone cable was laid from Hawaii to Japan in 1964, with an extension from Guam to the Philippines. Also in 1964, the Commonwealth Pacific Cable System (COMPAC), with 80 telephone channel capacity, opened for traffic from Sydney to Vancouver, and in 1967, the South East Asia Commonwealth (SEACOM) system, with 160 telephone channel capacity, opened for traffic. This system used microwave radio from Sydney to Cairns (Queensland), cable running from Cairns to Madang (Papua New Guinea), Guam, Hong Kong, Kota Kinabalu (capital of Sabah, Malaysia), Singapore, then overland by microwave radio to Kuala Lumpur. In 1991, the North Pacific Cable system was the first regenerative system (i.e., with repeaters) to completely cross the Pacific from the US mainland to Japan. The US portion of NPC was manufactured in Portland, Oregon, from 1989 to 1991 at STC Submarine Systems, and later Alcatel Submarine Networks. The system was laid by Cable & Wireless Marine on the CS Cable Venture.
Construction, 19th–20th century
Transatlantic cables of the 19th century consisted of an outer layer of iron and later steel wire, wrapping India rubber, wrapping gutta-percha, which surrounded a multi-stranded copper wire at the core. The portions closest to each shore landing had additional protective armour wires. Gutta-percha, a natural polymer similar to rubber, had nearly ideal properties for insulating submarine cables, with the exception of a rather high dielectric constant which made cable capacitance high. William Thomas Henley had developed a machine in 1837 for covering wires with silk or cotton thread that he developed into a wire wrapping capability for submarine cable with a factory in 1857 that became W.T. Henley's Telegraph Works Co., Ltd. The India Rubber, Gutta Percha and Telegraph Works Company, established by the Silver family and giving that name to a section of London, furnished cores to Henley's as well as eventually making and laying finished cable. In 1870 William Hooper established Hooper's Telegraph Works to manufacture his patented vulcanized rubber core, at first to furnish other makers of finished cable, that began to compete with the gutta-percha cores. The company later expanded into complete cable manufacture and cable laying, including the building of the first cable ship specifically designed to lay transatlantic cables.
Gutta-percha and rubber were not replaced as a cable insulation until polyethylene was introduced in the 1930s. Even then, the material was only available to the military and the first submarine cable using it was not laid until 1945 during World War II across the English Channel. In the 1920s, the American military experimented with rubber-insulated cables as an alternative to gutta-percha, since American interests controlled significant supplies of rubber but did not have easy access to gutta-percha manufacturers. The 1926 development by John T. Blake of deproteinized rubber improved the impermeability of cables to water.
Many early cables suffered from attack by sea life. The insulation could be eaten, for instance, by species of Teredo (shipworm) and Xylophaga. Hemp laid between the steel wire armouring gave pests a route to eat their way in. Damaged armouring, which was not uncommon, also provided an entrance. Cases of sharks biting cables and attacks by sawfish have been recorded. In one case in 1873, a whale damaged the Persian Gulf Cable between Karachi and Gwadar. The whale was apparently attempting to use the cable to clean off barnacles at a point where the cable descended over a steep drop. The unfortunate whale got its tail entangled in loops of cable and drowned. The cable repair ship Amber Witch was only able to winch up the cable with difficulty, weighed down as it was with the dead whale's body.
Bandwidth problems
Early long-distance submarine telegraph cables exhibited formidable electrical problems. Unlike modern cables, the technology of the 19th century did not allow for in-line repeater amplifiers in the cable. Large voltages were used to attempt to overcome the electrical resistance of their tremendous length but the cables' distributed capacitance and inductance combined to distort the telegraph pulses in the line, reducing the cable's bandwidth, severely limiting the data rate for telegraph operation to 10–12 words per minute.
As early as 1816, Francis Ronalds had observed that electric signals were slowed in passing through an insulated wire or core laid underground, and attributed the cause to induction, using the analogy of a long Leyden jar. The same effect was noticed by Latimer Clark (1853) on cores immersed in water, and particularly on the lengthy cable between England and The Hague. Michael Faraday showed that the effect was caused by capacitance between the wire and the earth (or water) surrounding it. Faraday had noticed that when a wire is charged from a battery (for example when pressing a telegraph key), the electric charge in the wire induces an opposite charge in the water as it travels along; as the two charges attract each other, the exciting charge is retarded. The core acts as a capacitor distributed along the length of the cable which, coupled with the resistance and inductance of the cable, limits the speed at which a signal travels through its conductor.
Early cable designs failed to analyse these effects correctly. Famously, E.O.W. Whitehouse had dismissed the problems and insisted that a transatlantic cable was feasible. When he subsequently became chief electrician of the Atlantic Telegraph Company, he became involved in a public dispute with William Thomson. Whitehouse believed that, with enough voltage, any cable could be driven. Thomson believed that his law of squares showed that retardation could not be overcome by a higher voltage; his recommendation was a larger cable. Because of the excessive voltages recommended by Whitehouse, Cyrus West Field's first transatlantic cable never worked reliably, and eventually short-circuited to the ocean when Whitehouse increased the voltage beyond the cable design limit.
Thomson designed a complex electric-field generator that minimized current by resonating the cable, and a sensitive light-beam mirror galvanometer for detecting the faint telegraph signals. Thomson became wealthy on the royalties of these, and several related inventions. Thomson was elevated to Lord Kelvin for his contributions in this area, chiefly an accurate mathematical model of the cable, which permitted design of the equipment for accurate telegraphy. The effects of atmospheric electricity and the geomagnetic field on submarine cables also motivated many of the early polar expeditions.
Thomson had produced a mathematical analysis of propagation of electrical signals into telegraph cables based on their capacitance and resistance, but since long submarine cables operated at slow rates, he did not include the effects of inductance. By the 1890s, Oliver Heaviside had produced the modern general form of the telegrapher's equations, which included the effects of inductance and which were essential to extending the theory of transmission lines to the higher frequencies required for high-speed data and voice.
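The progression from Thomson's analysis to Heaviside's can be written compactly. The following sketch uses standard transmission-line notation (R, L, G and C are resistance, inductance, conductance and capacitance per unit length; the symbols are modern conventions, not taken from the sources above):

```latex
% Telegrapher's equations (Heaviside's general form):
\frac{\partial V}{\partial x} = -\left(R + L\,\frac{\partial}{\partial t}\right) I,
\qquad
\frac{\partial I}{\partial x} = -\left(G + C\,\frac{\partial}{\partial t}\right) V.

% Neglecting L and G, as Thomson did for slow submarine telegraphy,
% these reduce to a diffusion equation:
\frac{\partial V}{\partial t} = \frac{1}{RC}\,\frac{\partial^{2} V}{\partial x^{2}},

% whose characteristic retardation time for a cable of length l scales as
t \sim R C\, l^{2},
% the "law of squares": doubling the length quadruples the retardation,
% which is why Thomson argued for a larger (lower-R, lower-C) cable
% rather than a higher voltage.
```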
Transatlantic telephony
While laying a transatlantic telephone cable was seriously considered from the 1920s, the technology required for economically feasible telecommunications was not developed until the 1940s. A first attempt to lay a "pupinized" telephone cable—one with loading coils added at regular intervals—failed in the early 1930s due to the Great Depression.
TAT-1 (Transatlantic No. 1) was the first transatlantic telephone cable system. Between 1955 and 1956, cable was laid between Gallanach Bay, near Oban, Scotland and Clarenville, Newfoundland and Labrador, in Canada. It was inaugurated on September 25, 1956, initially carrying 36 telephone channels.
In the 1960s, transoceanic cables were coaxial cables that transmitted frequency-multiplexed voiceband signals. A high-voltage direct current on the inner conductor powered repeaters (two-way amplifiers placed at intervals along the cable). The first-generation repeaters remain among the most reliable vacuum tube amplifiers ever designed. Later ones were transistorized. Many of these cables are still usable, but have been abandoned because their capacity is too small to be commercially viable. Some have been used as scientific instruments to measure earthquake waves and other geomagnetic events.
Other uses
In 1942, Siemens Brothers of New Charlton, London, in conjunction with the United Kingdom National Physical Laboratory, adapted submarine communications cable technology to create the world's first submarine oil pipeline in Operation Pluto during World War II.
Active fiber-optic cables may be useful in detecting seismic events which alter cable polarization.
Modern history
Optical telecommunications cables
In the 1980s, fiber-optic cables were developed. The first transatlantic telephone cable to use optical fiber was TAT-8, which went into operation in 1988. A fiber-optic cable comprises multiple pairs of fibers. Each pair has one fiber in each direction. TAT-8 had two operational pairs and one backup pair. Except for very short lines, fiber-optic submarine cables include repeaters at regular intervals.
Modern optical fiber repeaters use a solid-state optical amplifier, usually an erbium-doped fiber amplifier (EDFA). Each repeater contains separate equipment for each fiber. These comprise signal reforming, error measurement and controls. A solid-state laser dispatches the signal into the next length of fiber. The solid-state laser excites a short length of doped fiber that itself acts as a laser amplifier. As the light passes through the fiber, it is amplified. This system also permits wavelength-division multiplexing, which dramatically increases the capacity of the fiber. EDFA amplifiers were first used in submarine cables in 1995.
Repeaters are powered by a constant direct current passed down the conductor near the centre of the cable, so all repeaters in a cable are in series. Power feed equipment (PFE) is installed at the terminal stations. Typically both ends share the current generation, with one end providing a positive voltage and the other a negative voltage; a virtual earth point exists roughly halfway along the cable under normal operation. The amplifiers or repeaters derive their power from the potential difference across them. The voltage passed down the cable is often anywhere from 3,000 to 15,000 V DC at a current of up to 1,100 mA, with the current increasing as the voltage decreases; the current at 10,000 V DC is up to 1,650 mA. Hence the total amount of power sent into the cable is often up to 16.5 kW.
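The series power-feed arrangement above lends itself to simple arithmetic. The sketch below uses the figures quoted in the text (10,000 V DC at 1,650 mA giving 16.5 kW); the repeater count and cable loop resistance are hypothetical round numbers chosen only to illustrate how the shore voltage divides among series repeaters:

```python
def pfe_power_watts(voltage_v: float, current_a: float) -> float:
    """Total power delivered into the cable by the shore-end power feed equipment."""
    return voltage_v * current_a

def volts_per_repeater(total_voltage_v: float, n_repeaters: int,
                       loop_resistance_ohm: float, current_a: float) -> float:
    """Voltage available to each series repeater after ohmic loss in the conductor.

    n_repeaters and loop_resistance_ohm are illustrative assumptions,
    not figures from any actual cable system.
    """
    ohmic_drop = loop_resistance_ohm * current_a   # I*R loss along the whole conductor
    return (total_voltage_v - ohmic_drop) / n_repeaters

total_w = pfe_power_watts(10_000, 1.65)            # ~16.5 kW, matching the figure above
per_rep = volts_per_repeater(10_000, 150, 1_000, 1.65)  # hypothetical 150 repeaters, 1 kOhm loop
print(total_w, round(per_rep, 1))
```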
The optical fiber used in undersea cables is chosen for its exceptional clarity, permitting long runs between repeaters to minimize the number of amplifiers and the distortion they cause. Unrepeated cables are cheaper than repeated cables, but their maximum transmission distance is limited, although this has increased over the years; by 2014 unrepeated cables of record length were in service, though these require unpowered repeaters to be positioned every 100 km.
The rising demand for these fiber-optic cables outpaced the capacity of providers such as AT&T. Having to shift traffic to satellites resulted in lower-quality signals. To address this issue, AT&T had to improve its cable-laying abilities. It invested $100 million in producing two specialized fiber-optic cable laying vessels. These included laboratories in the ships for splicing cable and testing its electrical properties. Such field monitoring is important because the glass of fiber-optic cable is less malleable than the copper cable that had been formerly used. The ships are equipped with thrusters that increase maneuverability. This capability is important because fiber-optic cable must be laid straight from the stern, which was another factor that copper-cable-laying ships did not have to contend with.
Originally, submarine cables were simple point-to-point connections. With the development of submarine branching units (SBUs), more than one destination could be served by a single cable system. Modern cable systems now usually have their fibers arranged in a self-healing ring to increase their redundancy, with the submarine sections following different paths on the ocean floor. One reason for this development was that the capacity of cable systems had become so large that it was not possible to completely back up a cable system with satellite capacity, so it became necessary to provide sufficient terrestrial backup capability. Not all telecommunications organizations wish to take advantage of this capability, so modern cable systems may have dual landing points in some countries (where back-up capability is required) and only single landing points in other countries where back-up capability is either not required, the capacity to the country is small enough to be backed up by other means, or having backup is regarded as too expensive.
A further redundant-path development over and above the self-healing rings approach is the mesh network whereby fast switching equipment is used to transfer services between network paths with little to no effect on higher-level protocols if a path becomes inoperable. As more paths become available to use between two points, it is less likely that one or two simultaneous failures will prevent end-to-end service.
As of 2012, operators had "successfully demonstrated long-term, error-free transmission at 100 Gbps" across Atlantic Ocean routes, meaning a typical cable can move tens of terabits per second overseas. Speeds had improved rapidly over the preceding few years: 40 Gbit/s had been offered on that route only three years earlier, in August 2009.
Switching and all-by-sea routing commonly increase the distance, and thus the round-trip latency, by more than 50%. For example, the round-trip delay (RTD) or latency of the fastest transatlantic connections is under 60 ms, close to the theoretical optimum for an all-sea route. While in theory a great-circle path between London and New York City is shorter, following it would require several land masses (Ireland, Newfoundland, Prince Edward Island and the isthmus connecting New Brunswick to Nova Scotia) to be traversed, as well as the extremely tidal Bay of Fundy and a land route along Massachusetts' north shore from Gloucester to Boston and through fairly built-up areas to Manhattan itself. In theory, using this partial land route could bring round-trip times below 40 ms, approaching the speed-of-light minimum, not counting switching. Along routes with less land in the way, round-trip times can approach speed-of-light minimums in the long term.
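The latency figures above can be sanity-checked with a back-of-envelope calculation: great-circle distance between the two cities, divided by the speed of light in glass. The city coordinates and the group index of 1.468 for silica fibre at 1550 nm are illustrative assumptions, not data about any particular cable:

```python
import math

C_KM_S = 299_792.458      # speed of light in vacuum, km/s
GROUP_INDEX = 1.468       # assumed typical group index of silica fibre at 1550 nm

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance between two points on Earth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

def fibre_rtt_ms(path_km):
    """Ideal round-trip time through fibre of the given path length, in ms."""
    return 2 * path_km / (C_KM_S / GROUP_INDEX) * 1000

d = great_circle_km(51.51, -0.13, 40.71, -74.01)   # London to New York, ~5,570 km
print(round(d), round(fibre_rtt_ms(d), 1))         # ideal RTT in the mid-50s of ms
```

The result lands close to the sub-60 ms figure quoted for the fastest real transatlantic connections, which is why those routes are described as near the theoretical optimum.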
The type of optical fiber used in unrepeated and very long cables is often PCSF (pure silica core) due to its low loss of 0.172 dB per kilometer when carrying 1550 nm laser light. The large chromatic dispersion of PCSF means that its use requires transmission and receiving equipment designed with this in mind; this property can also be used to reduce interference when transmitting multiple channels through a single fiber using wavelength-division multiplexing (WDM), which allows multiple optical carrier channels to be transmitted through a single fiber, each carrying its own information. WDM is limited by the optical bandwidth of the amplifiers used to transmit data through the cable and by the spacing between the frequencies of the optical carriers, which is often no less than 50 GHz (0.4 nm). The use of WDM can reduce the maximum length of the cable, although this can be overcome by designing equipment with it in mind.
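Two of the figures above translate directly into simple budget arithmetic: the 0.172 dB/km PCSF loss sets the attenuation of a span, and the 50 GHz carrier spacing bounds the channel count. The 4 THz usable amplifier bandwidth below is an assumed typical C-band EDFA figure, not from the text:

```python
LOSS_DB_PER_KM = 0.172        # PCSF attenuation at 1550 nm, from the text
CHANNEL_SPACING_GHZ = 50.0    # minimum WDM carrier spacing, from the text
EDFA_BANDWIDTH_GHZ = 4_000.0  # assumed usable C-band amplifier window

def span_loss_db(length_km: float) -> float:
    """Total attenuation of a fibre span at the nominal PCSF loss rate."""
    return LOSS_DB_PER_KM * length_km

def max_wdm_channels(bandwidth_ghz: float = EDFA_BANDWIDTH_GHZ,
                     spacing_ghz: float = CHANNEL_SPACING_GHZ) -> int:
    """Carrier count that fits in the amplifier bandwidth at the given spacing."""
    return int(bandwidth_ghz // spacing_ghz)

print(round(span_loss_db(100), 1))   # 17.2 dB over a 100 km span
print(max_wdm_channels())            # 80 carriers at 50 GHz spacing
```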
Optical post-amplifiers, used to increase the strength of the signal generated by the optical transmitter, often use a diode-pumped erbium-doped fiber laser. The pump diode is often a high-power 980 or 1480 nm laser diode. This setup allows amplification of up to +24 dBm in an affordable manner. Using an erbium-ytterbium-doped fiber instead allows a gain of +33 dBm, although the amount of power that can be fed into the fiber is again limited. In single-carrier configurations the dominant limitation is self-phase modulation induced by the Kerr effect, which limits the amplification to +18 dBm per fiber. In WDM configurations the limitation due to cross-phase modulation becomes predominant instead. Optical pre-amplifiers are often used to negate the thermal noise of the receiver. Pumping the pre-amplifier with a 980 nm laser leads to a noise figure of at most 3.5 dB; a noise figure of 5 dB is usually obtained with a 1480 nm laser. The noise has to be filtered out using optical filters.
Raman amplification can be used to extend the reach or the capacity of an unrepeatered cable by launching two frequencies into a single fiber: one carrying data signals at 1550 nm, and the other pumping them at 1450 nm. Launching pump laser light at a power of just one watt leads to an increase in reach of 45 km or a six-fold increase in capacity.
Another way to increase the reach of a cable is by using unpowered repeaters called remote optical pre-amplifiers (ROPAs); these still make a cable count as unrepeatered since the repeaters do not require electrical power but they do require a pump laser light to be transmitted alongside the data carried by the cable; the pump light and the data are often transmitted in physically separate fibers. The ROPA contains a doped fiber that uses the pump light (often a 1480 nm laser light) to amplify the data signals carried on the rest of the fibers.
WDM, or wavelength-division multiplexing, was first implemented in submarine fiber-optic cables from the 1990s to the 2000s, followed by DWDM (dense wavelength-division multiplexing) around 2007, with each fiber carrying 30 wavelengths at a time. SDM (spatial-division multiplexing) submarine cables have at least 12 fiber pairs, an increase from the maximum of 8 pairs found in conventional submarine cables, and cables with up to 24 fiber pairs have been deployed. The type of modulation employed in a submarine cable can have a major impact on its capacity. SDM is combined with DWDM to improve capacity.
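The multiplexing dimensions above multiply together. The sketch below combines the figures in the text (up to 24 fiber pairs, 30 wavelengths per fiber); the 100 Gbit/s per-wavelength line rate is an assumption for illustration only:

```python
def cable_capacity_tbps(fiber_pairs: int, wavelengths_per_fiber: int,
                        gbps_per_wavelength: float) -> float:
    """Aggregate capacity per direction of an SDM+DWDM cable, in Tbit/s.

    Each pair carries traffic in both directions, so capacity is counted
    over one fiber of each pair (the usual per-direction convention).
    """
    return fiber_pairs * wavelengths_per_fiber * gbps_per_wavelength / 1000

# 24 pairs x 30 wavelengths x an assumed 100 Gbit/s per wavelength:
print(cable_capacity_tbps(24, 30, 100))   # 72.0 Tbit/s per direction
```

This is why moving from 8 to 24 fiber pairs matters so much: the pair count is a straight multiplier on everything WDM already provides.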
Transponders are used to send data through the cable. The open-cable concept allows a submarine cable to be designed independently of the transponders that will transmit data through it. SLTE (submarine line terminal equipment) comprises transponders and a ROADM (reconfigurable optical add-drop multiplexer) used for handling the signals in the cable via software control. The ROADM improves the reliability of the cable by allowing it to operate even when it has faults. This equipment is located inside a cable landing station (CLS). C-OTDR (coherent optical time-domain reflectometry) is used in submarine cables to detect the location of cable faults. The wet plant of a submarine cable comprises the cable itself, branching units, repeaters and possibly OADMs (optical add-drop multiplexers).
Investment and finances
A typical multi-terabit, transoceanic submarine cable system costs several hundred million dollars to construct. Almost all fiber-optic cables from TAT-8 in 1988 until approximately 1997 were constructed by consortia of operators. For example, TAT-8 counted 35 participants including most major international carriers at the time such as AT&T Corporation. Two privately financed, non-consortium cables were constructed in the late 1990s, which preceded a massive, speculative rush to construct privately financed cables that peaked in more than $22 billion worth of investment between 1999 and 2001. This was followed by the bankruptcy and reorganization of cable operators such as Global Crossing, 360networks, FLAG, Worldcom, and Asia Global Crossing. Tata Communications' Global Network (TGN) is the only wholly owned fiber network circling the planet.
Most cables in the 20th century crossed the Atlantic Ocean, to connect the United States and Europe. However, capacity in the Pacific Ocean was much expanded starting in the 1990s. For example, between 1998 and 2003, approximately 70% of undersea fiber-optic cable was laid in the Pacific. This is in part a response to the emerging significance of Asian markets in the global economy.
After decades of heavy investment in already developed markets such as the transatlantic and transpacific routes, efforts increased in the 21st century to expand the submarine cable network to serve the developing world. For instance, in July 2009, an underwater fiber-optic cable line plugged East Africa into the broader Internet. The company that provided this new cable was SEACOM, which is 75% owned by East African and South African investors. The project was delayed by a month due to increased piracy along the coast.
Investments in cables present a commercial risk because cables cover 6,200 km of ocean floor and cross submarine mountain ranges and rifts. Because of this, most companies only purchase capacity after the cable is finished.
Antarctica
Antarctica is the only continent not yet reached by a submarine telecommunications cable. Phone, video, and e-mail traffic must be relayed to the rest of the world via satellite links that have limited availability and capacity. Bases on the continent itself are able to communicate with one another via radio, but this is only a local network. To be a viable alternative, a fiber-optic cable would have to withstand extreme temperatures as well as massive strain from flowing ice. Thus, plugging into the larger Internet backbone with the high bandwidth afforded by fiber-optic cable remains an as-yet infeasible economic and technical challenge in the Antarctic.
Arctic
The climate-change-induced melting of Arctic ice has provided the opportunity to lay new cable networks, linking continents and remote regions. Several projects are underway in the Arctic, including the 12,650 km "Polar Express" and the 14,500 km Far North Fiber. However, scholars have raised environmental concerns about the laying of submarine cables in the region and the general lack of a nuanced regulatory framework. Environmental concerns pertain both to ice-related hazards damaging the cables, and to cable installation disturbing the seabed or the electromagnetic fields and thermal radiation of the cables impacting sensitive organisms.
Importance of submarine cables
Submarine cables, while often perceived as insignificant parts of communication infrastructure because they lie hidden in the seabed, are an essential infrastructure of the digital era, carrying 99% of data traffic across the oceans. This data includes all internet traffic, military transmissions, and financial transactions.
The total carrying capacity of a submarine cable is measured in terabits per second, while a satellite typically offers only 1 gigabit per second, a ratio of more than 1,000 to 1. Satellites handle less than 5% (by some estimates as little as 0.5%) of global data transmission, and are less efficient, slower, and more expensive. Satellites are therefore usually considered only for remote areas where conditions make laying submarine cables challenging. Submarine cables are thus the essential technical infrastructure for all internet communication.
National security
As a result of these cables' cost and usefulness, they are highly valued not only by the corporations building and operating them for profit, but also by national governments. For instance, the Australian government considers its submarine cable systems to be "vital to the national economy". Accordingly, the Australian Communications and Media Authority (ACMA) has created protection zones that restrict activities that could potentially damage cables linking Australia to the rest of the world. The ACMA also regulates all projects to install new submarine cables.
Due to their critical role, disruptions to these cables can lead to communication blackouts and, thus, extensive economic losses. The impact of such disruptions is often exemplified by the 2022 Tonga volcanic eruption, which severed the island's only submarine cable and thus its connectivity to the rest of the world for several days. The cable break was declared a "national crisis," and repairs took several weeks, leaving Tonga largely isolated during a crucial period for disaster response.
Submarine cable infrastructure may even have additional technical advantages, such as carrying SMART environmental sensors supporting national disaster early warning systems. Furthermore, the cables are predicted to become even more critical with growing demands from 5G networks, the ‘Internet of Things’ (IoT), and artificial intelligence on large data transfers.
International security
Submarine communication cables are a critical infrastructure within the context of international security. Transmitting massive amounts of sensitive data every day, they are essential for both state operations and private enterprises. One of the catalysts for the amount and sensitivity of data flowing through these cables has been the global rise of cloud computing.
The U.S. military, for example, uses the submarine cable network to transfer data from conflict zones to command staff in the United States. Interruption of the cable network during intense operations could have direct consequences for the military on the ground.
The criticality of cable services makes their geopolitical influence profound. Scholars argue that state dominance in cable networks can exert political pressure, or shape global internet governance.
An example of such state dominance in the global cable infrastructure is China's "Digital Silk Road" strategy, which funds the expansion of Chinese cable networks; the Chinese company HMN Technologies, often criticised for providing networks for other states, holds up to 10% of the global market share. Some critics argue that Chinese investment in critical cable infrastructure (involvement in approximately 25% of global submarine cables, such as the PEACE cable linking East Africa and Europe) may enable China to reroute data traffic through its own networks and thus apply political pressure. The strategy is countered by the U.S., which supports alternative projects.
Vulnerabilities of submarine cables to organized crime
Submarine cables are exposed to a variety of potential threats. Many of these threats are accidental, such as by fishing trawlers, ship anchors, earthquakes, turbidity currents, and even shark bites.
Based on surveying breaks in the Atlantic Ocean and the Caribbean Sea, it was found that between 1959 and 1996, fewer than 9% were due to natural events. In response to this threat to the communications network, the practice of cable burial has developed. The average incidence of cable faults was 3.7 per 1,000 km per year from 1959 to 1979. That rate was reduced to 0.44 faults per 1,000 km per year after 1985, due to widespread burial of cable starting in 1980.
Still, cable breaks are by no means a thing of the past, with more than 50 repairs a year in the Atlantic Ocean alone, and significant breaks in 2006, 2008, 2009 and 2011.
Several vulnerabilities of submarine communication cables make them attractive targets for organized crime. The following section explores these vulnerabilities and currently proposed counter measures to organized crime from different perspectives.
Technical perspective
Technical vulnerabilities
The remoteness of these cables in international waters poses significant challenges for continuous monitoring and increases their attractiveness as targets of physical tampering, data theft, and service disruption.
The cables' vulnerability is further compounded by technological advancements, such as the development of Unmanned Underwater Vehicles (UUVs), which enable covert cable damage while avoiding detection. However, even low-tech attacks can impact the cable's security significantly, as demonstrated in 2013, when three divers were arrested for severing the main cable linking Egypt with Europe, drastically lowering Egypt's internet speed.
Even in shallow waters, cables remain exposed to risks, as illustrated in the context of the Korea Strait. Such sea passages are often marked as ‘maritime choke points’ where several nations have conflicting interests, increasing the risk of harm from shipping activities and disputes.
Further, most cable locations are publicly available, making the cables easy targets for criminal acts such as disrupting services or stealing cable materials, which can lead to substantial communication blackouts. Theft of submarine cable has been reported in Vietnam, where more than 11 km of cables went missing in 2007 and were later presumed to have been recovered by fishing boats whose crews, according to media reports, intended to sell the material.
Technical countermeasures
Typically, cables are buried in waters shallower than 2,000 meters, but increasingly they are buried in deeper seabed as protection against high-seas fishing and bottom trawling. This burial may also guard against physical attacks by organized crime.
Further technical solutions include advanced protective casings and monitoring with, e.g., UUVs. Such solutions, however, can be challenging to implement and are of limited use in remote areas of the high seas. Other proposed solutions include spatial modelling through protective or safety zones and penalties, increased resources for surveillance, and a more collaborative approach between states and the private sector. How to implement and enforce these solutions, however, remains to be determined. The cables' remoteness thus complicates both physical attacks and their protection.
Cable repair
Shore stations can locate a break in a cable by electrical measurements, such as spread-spectrum time-domain reflectometry (SSTDR), a type of time-domain reflectometry that can be used in live environments. Presently, SSTDR can collect a complete data set in 20 ms. Spread-spectrum signals are sent down the wire, and the reflected signal is observed. It is then correlated with a copy of the sent signal, and algorithms are applied to the shape and timing of the signals to locate the break.
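The correlation step can be sketched in a few lines of Python. The parameters below (sample rate, code length, propagation speed, fault distance) are illustrative assumptions, not the figures of a real SSTDR instrument:

```python
import random

random.seed(0)
fs = 10e6                    # sample rate, 10 MHz (illustrative)
v = 2.0e8                    # propagation speed in the cable, ~2/3 c (m/s)
break_distance = 12_500.0    # metres to the simulated fault

# pseudo-random +/-1 code standing in for the spread-spectrum probe signal
probe = [random.choice((-1.0, 1.0)) for _ in range(256)]
delay = round(2 * break_distance / v * fs)   # round-trip delay in samples

# simulate the line response: a weak, delayed echo of the probe plus noise
line = [0.0] * (delay + len(probe))
for i, p in enumerate(probe):
    line[delay + i] += 0.3 * p
line = [s + random.gauss(0.0, 0.05) for s in line]

# correlate the received signal with a copy of the transmitted code and
# take the lag of the correlation peak as the round-trip delay
best_lag = max(
    range(len(line) - len(probe) + 1),
    key=lambda lag: sum(line[lag + i] * probe[i] for i in range(len(probe))),
)
est_distance = best_lag * v / (2 * fs)
print(f"estimated break at {est_distance:.0f} m")   # ~ 12500 m
```

The range resolution of this sketch is v/(2·fs) = 10 m per sample; real instruments trade code length and sample rate against resolution and measurement time.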
A cable repair ship will be sent to the location to drop a marker buoy near the break. Several types of grapples are used depending on the situation. If the sea bed in question is sandy, a grapple with rigid prongs is used to plough under the surface and catch the cable. If the cable is on a rocky sea surface, the grapple is more flexible, with hooks along its length so that it can adjust to the changing surface. In especially deep water, the cable may not be strong enough to lift as a single unit, so a special grapple that cuts the cable soon after it has been hooked is used and only one length of cable is brought to the surface at a time, whereupon a new section is spliced in. The repaired cable is longer than the original, so the excess is deliberately laid in a "U" shape on the seabed. A submersible can be used to repair cables that lie in shallower waters.
A number of ports near important cable routes became homes to specialized cable repair ships. Halifax, Nova Scotia, was home to a half dozen such vessels for most of the 20th century including long-lived vessels such as the CS Cyrus West Field, CS Minia and CS Mackay-Bennett. The latter two were contracted to recover victims from the sinking of the RMS Titanic. The crews of these vessels developed many new techniques and devices to repair and improve cable laying, such as the "plough".
Cybersecurity perspective
Cyber vulnerabilities
Increasingly, sophisticated cyber-attacks threaten the data traffic on the cables, with incentives ranging from financial gain and espionage to extortion, by either state or non-state actors. Further, hybrid warfare tactics can interfere with or even weaponize the data transferred by the cables. For example, low-intensity cyber-attacks can be employed for ransomware, data manipulation, and theft, opening up new opportunities for the use of cybercrime and grey-zone tactics in interstate disputes.
The lack of binding international cybersecurity standards may create a gap in dealing with cyber-enabled sabotage that organized crime can exploit. However, attributing an incident to a specific actor, or establishing that actor's motivation, can be challenging, especially in cyberspace.
Cyber espionage and Intelligence-gathering
The rising sophistication of cyberattacks underscores the vulnerability of submarine cables to cyberespionage, ultimately complicating their security. Techniques like cable tapping, hacking into network management systems, and targeting cable landing stations enable covert data access by intelligence agencies, with Russia, the U.S., and the United Kingdom (U.K.) noted as primary players.
These activities are driven by both strategic and economic motives, with advancements in technology making interception and data manipulation more effective and difficult to detect. Recent technological advancements increasing the vulnerability include the use of remote access portals and remote network management systems centralizing control over components, enabling attackers to monitor traffic and potentially disrupt data flows.
Intelligence-gathering techniques have been deployed since the late 19th century. Frequently at the beginning of wars, nations have cut the cables of the other sides to redirect the information flow into cables that were being monitored. The most ambitious efforts occurred in World War I, when British and German forces systematically attempted to destroy the others' worldwide communications systems by cutting their cables with surface ships or submarines.
During the Cold War, the United States Navy and National Security Agency (NSA) succeeded in placing wire taps on Soviet underwater communication lines in Operation Ivy Bells.
These historical intelligence-gathering techniques were eventually countered with technological advancements like the widespread use of end-to-end encryption minimizing the threat of wire tapping.
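The countermeasure works because a passive tap on an encrypted link captures only ciphertext. A minimal sketch of the idea, using a one-time-pad XOR cipher for simplicity (production systems use AES/TLS-class ciphers; the message below is invented):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; encryption and decryption are the same operation."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"network management command"   # hypothetical traffic on the cable
key = secrets.token_bytes(len(message))   # shared only by the two endpoints

ciphertext = xor_cipher(message, key)     # what a wiretap would capture
assert ciphertext != message                    # the tap sees no plaintext
assert xor_cipher(ciphertext, key) == message   # endpoints recover the message
```

Note that encryption protects the content of the traffic but not its availability: cutting the fibre still causes an outage, which is why physical protection remains necessary.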
Cybersecurity countermeasures
Cybersecurity strategies for submarine cables, such as encryption, access controls, and continuous monitoring, primarily focus on preventing unauthorized data access but do not adequately address the physical protection of cables in vulnerable, remote, high-sea areas as stated above.
As a result, while cybersecurity protocols are effective near coastal landing points, their enforcement across vast stretches of the open ocean becomes a challenge. To address these limitations, experts suggest a broader, multi-layered approach that integrates physical security measures with international cooperation and legal frameworks, especially given the jurisdictional ambiguities in international waters.
Multilateral agreements to establish cybersecurity standards specific to submarine cables are highlighted as critical. These agreements can help bridge the jurisdictional ambiguities and often resulting enforcement gaps in international waters, which ultimately hinder effective protection and are frequently exploited by organized crime.
Some scholars advocate for heightened European Union (E.U.) coordination, recommending improvements in surveillance and response capabilities across various agencies, such as the Coast Guard and specific telecommunication regulators. Given the central role of private companies in cable ownership, some experts also underscore the need for stronger collaboration between governments and tech firms to pool resources and develop more innovative security measures tailored to this critical infrastructure.
Geopolitical perspective
Geopolitical vulnerabilities
Fishing vessels are the leading cause of accidental damage to submarine communication cables. However, academic discussion and recent incidents point to geopolitical tactics influencing cable security more than previously expected. These tactics exploit the ease with which fishing vessels can blend into regular maritime traffic while carrying out attacks.
The propensity for fishing trawler nets to cause cable faults may well have been exploited during the Cold War. For example, in February 1959, a series of 12 breaks occurred in five American trans-Atlantic communications cables. In response, a U.S. naval vessel, the USS Roy O. Hale, detained and investigated the Soviet trawler Novorosiysk. A review of the ship's log indicated it had been in the region of each of the cables when they broke. Broken sections of cable were also found on the deck of the Novorosiysk. It appeared that the cables had been dragged along by the ship's nets, and then cut once they were pulled up onto the deck to release the nets. The Soviet Union's stance on the investigation was that it was unjustified, but the U.S. cited the Convention for the Protection of Submarine Telegraph Cables of 1884 to which Russia had signed (prior to the formation of the Soviet Union) as evidence of violation of international protocol.
Several media outlets and organizations indicate that Russian fishing vessels, particularly in 2022, passed over a damaged submarine cable up to 20 times, suggesting potential political motives and the possibility of hybrid warfare tactics on Russia's part. Russian naval activities near submarine cables are often linked to broader hybrid warfare strategies, in which sabotage is argued to serve as a tool to disrupt communication networks during conflict and destabilise adversaries.
These tactics elevate cable security to a significant geopolitical issue. Criminal actors may further target cables as a means of economic warfare, aiming to destabilize economies or convey political messages. The disruption of submarine communication cables in highly politicised maritime areas thus has a significant political component that is receiving increased attention.
After two cable breaks in the Baltic Sea in November 2024, one between Lithuania and Sweden and the other between Finland and Germany, Defence Minister Boris Pistorius argued:
“No one believes that these cables were cut accidentally. I also don't want to believe in versions that these were ship anchors that accidentally caused the damage. Therefore, we have to state, without knowing specifically who it came from, that it is a 'hybrid' action. And we also have to assume, without knowing it yet, that it is sabotage."
This statement underlines the current discourse recognizing cable disruptions as threats to national security, which ultimately leads to their securitization in the international context.
Geopolitical risks and countermeasures
Submarine cables are inherently vulnerable to transnational threats like organized crime. International collaboration to address these threats tends to fall to existing organizations with a cable-specific focus, such as the International Cable Protection Committee (ICPC), which represent key industry stakeholders and play a vital role in promoting cooperation and information sharing. Such organizations are argued to be crucial for developing and implementing a comprehensive, coordinated global strategy for cable security.
As of 2025, a tense U.S.-China relationship complicates this task, especially in the South China Sea, where there are territorial disputes. China has increasing control and influence over global cable networks, while both it and the U.S. financially support allied-owned cable projects and exert diplomatic pressure and regulatory action, e.g. against Vietnam.
In light of the sabotage of the Nord Stream pipelines in the Baltic Sea, where subsea infrastructure vital to Germany and Russia was physically destroyed, and of other incidents there, NATO has increased patrols and monitoring operations.
Legal perspective
Legal vulnerabilities
Submarine cables are internationally regulated within the framework of the United Nations Convention on the Law of the Sea (UNCLOS), in particular through the provisions of Articles 112 to 115, which guarantee the freedom to lay cables in international waters and beyond the continental shelf and provide measures to protect against shipping accidents.
However, submarine cables face significant legal challenges: UNCLOS lacks specific protections for cables and enforcement mechanisms against emerging threats, particularly in international waters. This is further complicated by the non-ratification of the treaty by key states such as the U.S. and Turkey. Many countries lack explicit legal provisions to criminalize the destruction or theft of undersea cables, creating jurisdictional ambiguities that organized crime can exploit. Other legal frameworks, such as the 1884 Convention for the Protection of Submarine Telegraph Cables, are outdated and fail to address modern threats like cyberattacks and hybrid warfare tactics. The unclear jurisdiction and weak enforcement mechanisms demonstrate the difficulty of protecting submarine cables from organized crime.
The Arctic Ocean in particular exemplifies the challenges associated with surveillance and enforcement in vast and remote areas, leaving a legal vacuum that criminals may exploit. In the Arctic, the absence of a central international authority to oversee submarine cable protection and the reliance on military organizations like NATO hinder coordinated global responses.
Organizations such as the ICPC thus highlight the need for updated and more comprehensive legal frameworks to ensure the security of submarine cables.
Legal countermeasures
The legal challenges of protecting submarine cables from organized crime have resulted in recommendations ranging from treaty amendments to domestic law reforms and multi-level governance models.
Some scholars argue that UNCLOS should be updated to protect cables extensively, including cooperative monitoring and enforcement protocols. Additionally, principles from the law of the sea, state responsibility, and the laws on the use of force could be creatively applied to strengthen protections for cables. Enforcement issues could be tackled by aligning domestic laws with UNCLOS, implementing national response protocols, and creating streamlined points of contact for cable incidents. Given the increased involvement of organizations like NATO, others recommend clarifying the roles of military and non-military actors in cable security and enhancing multi-level governance models.
While these proposed legal solutions seem promising, their practical implementation remains a challenge due to the complexity of international treaties, the need for international cooperation, the lack of domestic criminalization of cable damage, and the evolving nature of technological threats. Additionally, while UNCLOS's ambiguous jurisdiction in international waters hinders effective enforcement, limited political interest seems to hamper treaty development.
Environmental impact
The presence of cables in the oceans can be a danger to marine life. With the proliferation of cable installations and today's increasing demand for inter-connectivity, the environmental impact is growing.
Submarine cables can impact marine life in a number of ways.
Alteration of the seabed
Seabed ecosystems can be disturbed by the installation and maintenance of cables. The effects of cable installation are generally limited to specific areas. The intensity of disturbance depends on the installation method.
Cables are often laid in the so-called benthic zone of the seabed. The benthic zone is the ecological region at the bottom of the sea, where benthos such as clams and crabs live, and where surface sediments, deposits of matter and particles that provide a habitat for marine species, are located.
Sediment can be disturbed by cable installation through trenching with water jets or ploughing. This can lead to reworking of the sediments, altering the substrate of which they are composed.
According to several studies, the biota of the benthic zone is only slightly affected by the presence of cables. However, cables can trigger behavioral disturbances in living organisms. The main observation is that cables provide a hard substrate for anemone attachment. These organisms are found in large numbers around cables that run through soft sediments, which are not normally suitable for them. This is also the case for flatfish. Although little observed, the presence of cables can also change the water temperature and therefore disturb the surrounding natural habitat.
However, these disturbances are not very persistent over time, and can stabilize within a few days. Cable operators are trying to implement measures to route cables in such a way as to avoid areas with sensitive and vulnerable ecosystems.
Entanglement
Entanglement of marine animals in cables is one of the main causes of cable damage. Whales and sperm whales are the main animals that entangle themselves in cables and damage them. The encounter between these animals and cables can cause injury and sometimes death. Studies carried out between 1877 and 1955 reported 16 cable ruptures caused by whale entanglement, 13 of them by sperm whales. Between 1907 and 2006, 39 such events were recorded. Cable burial techniques are gradually being introduced to prevent such incidents.
The risk of fishing
Although submarine cables are located on the seabed, fishing activity can damage them. Techniques that scrape the seabed, or that drag equipment such as trawls or cages, can damage the cables, resulting in leakage of the fluids and chemical or toxic materials of which the cables are made.
Areas with a high density of submarine cables have the advantage of being partly closed to fishing. Thanks to these limitations and bans, marine fauna is better protected in such regions, albeit at the expense of the benthic and sedimentary zones. Studies have shown a positive effect on the fauna surrounding cable installation zones.
Pollution
Submarine cables are made of copper or optical fibers, surrounded by several protective layers of plastic, wire or synthetic materials. Cables can also be composed of dielectric fluids or hydrocarbon fluids, which act as electrical insulators. These substances can be harmful to marine life.
Fishing, aging cables and marine species that collide with or become entangled in cables can damage cables and spread toxic and harmful substances into the sea. However, the impact of submarine cables is limited compared with other sources of ocean pollution.
There is also a risk of releasing pollutants buried in sediments. When sediments are re-suspended due to the installation of cables, toxic substances such as hydrocarbons may be released.
Preliminary analyses can assess the level of sediment toxicity and select a cable route that avoids the remobilization and dispersion of sediment pollutants. Newer techniques will also make it possible to use less polluting materials for cable construction.
Sound waves and electromagnetic waves
The installation and maintenance of cables requires machinery and equipment that can generate sound or electromagnetic waves, which may disturb animals that rely on such waves to orient themselves in space or to communicate. Underwater sound levels depend on the equipment used, the characteristics of the seabed where the cables are located, and the relief of the area.
Underwater noise and waves can modify the behavior of certain underwater species, such as migratory behavior, disrupting communication or reproduction. Available information suggests that underwater noise generated by submarine cable engineering operations has a limited acoustic footprint and limited duration.
See also
Bathometer
Cable layer
Cable landing point
List of domestic submarine communications cables
List of international submarine communications cables
Loaded submarine cable
Submarine power cable
Transatlantic communications cable
References
Further reading
External links
The International Cable Protection Committee – includes a register of submarine cables worldwide
Timeline of Submarine Communications Cables, 1850–2010
Kingfisher Information Service – Cable Awareness; UK Fisherman's Submarine Cable Awareness site
Orange's Fishermen's/Submarine Cable Information
Oregon Fisherman's Cable Committee
Articles
History of the Atlantic Cable & Submarine Telegraphy – Wire Rope and the Submarine Cable Industry
Mother Earth Mother Board – Wired article by Neal Stephenson about submarine cables
Winkler, Jonathan Reed. Nexus: Strategic Communications and American Security in World War I. (Cambridge, MA: Harvard University Press, 2008) Account of how U.S. government discovered strategic significance of communications lines, including submarine cables, during World War I.
Animations from Alcatel showing how submarine cables are installed and repaired
Work begins to repair severed net
Flexibility in Undersea Networks – Ocean News & Technology magazine Dec. 2014
Maps
Submarine Cable Map by TeleGeography
Map gallery of submarine cable maps by TeleGeography, showing evolution since 2000. 2008 map in the Guardian; 2014 map on CNN.
Map and Satellite views of US landing sites for transatlantic cables
Map and Satellite views of US landing sites for transpacific cables
Positions and Route information of Submarine Cables in the Seas Around the UK
Coastal construction
Telecommunications equipment
History of telecommunications
Communications satellite
A communications satellite is an artificial satellite that relays and amplifies radio telecommunication signals via a transponder; it creates a communication channel between a source transmitter and a receiver at different locations on Earth. Communications satellites are used for television, telephone, radio, internet, and military applications. Many communications satellites are in geostationary orbit above the equator, so that the satellite appears stationary at the same point in the sky; therefore the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track the satellite. Others form satellite constellations in low Earth orbit, where antennas on the ground have to follow the position of the satellites and switch between satellites frequently.
The radio waves used for telecommunications links travel by line of sight and so are obstructed by the curve of the Earth. The purpose of communications satellites is to relay the signal around the curve of the Earth allowing communication between widely separated geographical points. Communications satellites use a wide range of radio and microwave frequencies. To avoid signal interference, international organizations have regulations for which frequency ranges or "bands" certain organizations are allowed to use. This allocation of bands minimizes the risk of signal interference.
History
Origins
In October 1945, Arthur C. Clarke published an article titled "Extraterrestrial Relays" in the British magazine Wireless World. The article described the fundamentals behind the deployment of artificial satellites in geostationary orbits to relay radio signals. Because of this, Arthur C. Clarke is often quoted as being the inventor of the concept of the communications satellite, and the term 'Clarke Belt' is employed as a description of the orbit.
The first artificial Earth satellite was Sputnik 1, which was put into orbit by the Soviet Union on 4 October 1957. It was developed by Mikhail Tikhonravov and Sergey Korolev, building on work by Konstantin Tsiolkovsky. Sputnik 1 was equipped with an on-board radio transmitter that worked on two frequencies of 20.005 and 40.002 MHz, or 7 and 15 meters wavelength. The satellite was not placed in orbit to send data from one point on Earth to another, but the radio transmitter was meant to study the properties of radio wave distribution throughout the ionosphere. The launch of Sputnik 1 was a major step in the exploration of space and rocket development, and marks the beginning of the Space Age.
Early active and passive satellite experiments
There are two major classes of communications satellites, passive and active. Passive satellites only reflect the signal coming from the source, toward the direction of the receiver. With passive satellites, the reflected signal is not amplified at the satellite, and only a small amount of the transmitted energy actually reaches the receiver. Since the satellite is so far above Earth, the radio signal is attenuated due to free-space path loss, so the signal received on Earth is very weak. Active satellites, on the other hand, amplify the received signal before retransmitting it to the receiver on the ground. Passive satellites were the first communications satellites, but are little used now.
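The scale of this attenuation can be made concrete with the standard free-space path loss formula, FSPL = (4πdf/c)². The altitude and frequency below are rough illustrative values for an Echo-style passive relay, not measured figures:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in decibels: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

one_leg = fspl_db(1.6e6, 2.4e9)   # ~1,600 km slant range at ~2.4 GHz (assumed)
print(f"one leg: {one_leg:.0f} dB")
# A passive reflector adds no gain, so uplink and downlink losses compound
# (on top of scattering loss at the reflector itself); an active satellite
# re-amplifies the signal between the two legs instead.
print(f"both legs, ideal mirror: {2 * one_leg:.0f} dB")
```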
Work that was begun in the field of electrical intelligence gathering at the United States Naval Research Laboratory in 1951 led to a project named Communication Moon Relay. Military planners had long shown considerable interest in secure and reliable communications lines as a tactical necessity, and the ultimate goal of this project was the creation of the longest communications circuit in human history, with the Moon, Earth's natural satellite, acting as a passive relay. After achieving the first transoceanic communication between Washington, D.C., and Hawaii on 23 January 1956, this system was publicly inaugurated and put into formal production in January 1960.
The first satellite purpose-built to actively relay communications was Project SCORE, led by Advanced Research Projects Agency (ARPA) and launched on 18 December 1958, which used a tape recorder to carry a stored voice message, as well as to receive, store, and retransmit messages. It was used to send a Christmas greeting to the world from U.S. President Dwight D. Eisenhower. The satellite also executed several realtime transmissions before the non-rechargeable batteries failed on 30 December 1958 after eight hours of actual operation.
The direct successor to SCORE was another ARPA-led project called Courier. Courier 1B was launched on 4 October 1960 to explore whether it would be possible to establish a global military communications network by using "delayed repeater" satellites, which receive and store information until commanded to rebroadcast them. After 17 days, a command system failure ended communications from the satellite.
NASA's satellite applications program launched the first artificial satellite used for passive relay communications in Echo 1 on 12 August 1960. Echo 1 was an aluminized balloon satellite acting as a passive reflector of microwave signals. Communication signals were bounced off the satellite from one point on Earth to another. This experiment sought to establish the feasibility of worldwide broadcasts of telephone, radio, and television signals.
More firsts and further experiments
Telstar was the first active, direct relay communications commercial satellite and marked the first transatlantic transmission of television signals. Belonging to AT&T as part of a multi-national agreement between AT&T, Bell Telephone Laboratories, NASA, the British General Post Office, and the French National PTT (Post Office) to develop satellite communications, it was launched by NASA from Cape Canaveral on 10 July 1962, in the first privately sponsored space launch.
Another passive relay experiment primarily intended for military communications purposes was Project West Ford, which was led by Massachusetts Institute of Technology's Lincoln Laboratory. After an initial failure in 1961, a launch on 9 May 1963 dispersed 350 million copper needle dipoles to create a passive reflecting belt. Even though only about half of the dipoles properly separated from each other, the project was able to successfully experiment and communicate using frequencies in the SHF X band spectrum.
An immediate antecedent of the geostationary satellites was the Hughes Aircraft Company's Syncom 2, launched on 26 July 1963. Syncom 2 was the first communications satellite in a geosynchronous orbit. It revolved around the Earth once per day at constant speed, but because it still had north–south motion, special equipment was needed to track it. Its successor, Syncom 3, launched on 19 July 1964, was the first geostationary communications satellite. Syncom 3 obtained a geosynchronous orbit, without a north–south motion, making it appear from the ground as a stationary object in the sky.
A direct extension of the passive experiments of Project West Ford was the Lincoln Experimental Satellite program, also conducted by the Lincoln Laboratory on behalf of the United States Department of Defense. The LES-1 active communications satellite was launched on 11 February 1965 to explore the feasibility of active solid-state X band long-range military communications. A total of nine satellites were launched between 1965 and 1976 as part of this series.
International commercial satellite projects
In the United States, 1962 saw the creation of the Communications Satellite Corporation (COMSAT) private corporation, which was subject to instruction by the US Government on matters of national policy. Over the next two years, international negotiations led to the Intelsat Agreements, which in turn led to the launch of Intelsat 1, also known as Early Bird, on 6 April 1965, and which was the first commercial communications satellite to be placed in geosynchronous orbit. Subsequent Intelsat launches in the 1960s provided multi-destination service and video, audio, and data service to ships at sea (Intelsat 2 in 1966–67), and the completion of a fully global network with Intelsat 3 in 1969–70. By the 1980s, with significant expansions in commercial satellite capacity, Intelsat was on its way to become part of the competitive private telecommunications industry, and had started to get competition from the likes of PanAmSat in the United States, which, ironically, was then bought by its archrival in 2005.
When Intelsat was launched, the United States was the only launch source outside of the Soviet Union, who did not participate in the Intelsat agreements. The Soviet Union launched its first communications satellite on 23 April 1965 as part of the Molniya program. This program was also unique at the time for its use of what then became known as the Molniya orbit, which describes a highly elliptical orbit, with two high apogees daily over the northern hemisphere. This orbit provides a long dwell time over Russian territory as well as over Canada at higher latitudes than geostationary orbits over the equator.
In the 2020s, the popularity of low Earth orbit satellite internet constellations providing relatively low-cost internet services led to reducing demand for new geostationary orbit communications satellites.
Satellite orbits
Communications satellites usually have one of three primary types of orbit, while other orbital classifications are used to further specify orbital details. MEO and LEO are non-geostationary orbits (NGSO).
Geostationary satellites have a geostationary orbit (GEO), approximately 35,786 km (22,236 mi) above Earth's surface. This orbit has the special characteristic that the apparent position of the satellite in the sky when viewed by a ground observer does not change: the satellite appears to "stand still" in the sky. This is because the satellite's orbital period is the same as the rotation rate of the Earth. The advantage of this orbit is that ground antennas do not have to track the satellite across the sky; they can be fixed to point at the location in the sky where the satellite appears.
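The geostationary altitude follows directly from Kepler's third law: setting the orbital period equal to one sidereal day (one rotation of the Earth) and solving for the semi-major axis gives the familiar ~35,786 km figure:

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86_164.1  # seconds in one sidereal day
R_EARTH = 6_378_137.0    # Earth's equatorial radius, m

# Kepler's third law solved for the semi-major axis: a = (mu*T^2 / (4*pi^2))^(1/3)
a = (MU * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000
print(f"{altitude_km:.0f} km")   # ~ 35,786 km
```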
Medium Earth orbit (MEO) satellites are closer to Earth. Orbital altitudes range from about 2,000 km (1,200 mi) up to just below geostationary altitude at 35,786 km (22,236 mi) above Earth.
The region below medium orbits is referred to as low Earth orbit (LEO), at roughly 160 to 2,000 km (100 to 1,240 mi) above Earth.
As satellites in MEO and LEO orbit the Earth faster, they do not remain visible in the sky to a fixed point on Earth continually like a geostationary satellite, but appear to a ground observer to cross the sky and "set" when they go behind the Earth beyond the visible horizon. Therefore, to provide continuous communications capability with these lower orbits requires a larger number of satellites, so that one of these satellites will always be visible in the sky for transmission of communication signals. However, due to their closer distance to the Earth, LEO or MEO satellites can communicate to ground with reduced latency and at lower power than would be required from a geosynchronous orbit.
Low Earth orbit (LEO)
A low Earth orbit (LEO) typically is a circular orbit about 160 to 2,000 km (100 to 1,240 mi) above the Earth's surface and, correspondingly, has a period (time to revolve around the Earth) of about 90 minutes.
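The ~90-minute figure follows from the same period–altitude relation run in the other direction. A short sketch (the 550 km test altitude is an arbitrary illustrative value, not one from the text):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0      # Earth's equatorial radius, m

def orbital_period_minutes(altitude_km: float) -> float:
    """Period of a circular orbit at a given altitude: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km * 1000.0
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

print(round(orbital_period_minutes(550), 1))    # ~95.6 min (a typical LEO)
print(round(orbital_period_minutes(35786), 1))  # ~1436 min, i.e. one sidereal day (GEO)
```

The same function reproduces the geostationary case as a sanity check: at 35,786 km the period is one sidereal day.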
Because of their low altitude, these satellites are only visible from within a radius of roughly 1,000 km (620 mi) from the sub-satellite point. In addition, satellites in low Earth orbit change their position relative to the ground position quickly. So even for local applications, many satellites are needed if the mission requires uninterrupted connectivity.
Low-Earth-orbiting satellites are less expensive to launch into orbit than geostationary satellites and, due to proximity to the ground, do not require as high signal strength (signal strength falls off as the square of the distance from the source, so the effect is considerable). Thus there is a trade off between the number of satellites and their cost.
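The inverse-square falloff and the latency advantage can be made concrete with free-space path loss (FSPL) and one-way propagation delay. The 550 km LEO altitude and 12 GHz downlink frequency below are illustrative assumptions, not figures from the text:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def one_way_delay_ms(distance_m: float) -> float:
    """One-way propagation delay over a straight-line path, in milliseconds."""
    return distance_m / C * 1000.0

LEO, GEO = 550e3, 35_786e3  # slant range assumed equal to altitude (satellite overhead)
F = 12e9                    # Ku-band downlink, illustrative

print(round(fspl_db(GEO, F) - fspl_db(LEO, F), 1))  # ~36.3 dB extra loss at GEO
print(round(one_way_delay_ms(LEO), 1), round(one_way_delay_ms(GEO), 1))  # ~1.8 vs ~119.4 ms
```

The ~36 dB gap is exactly 20·log10(35,786/550): path loss depends only on the distance ratio, which is why LEO links can run at much lower power.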
In addition, there are important differences in the onboard and ground equipment needed to support the two types of missions.
Satellite constellation
A group of satellites working in concert is known as a satellite constellation. Two such constellations, intended to provide satellite phone and low-speed data services, primarily to remote areas, are the Iridium and Globalstar systems. The Iridium system has 66 satellites, whose 86.4° orbital inclination and inter-satellite links provide service availability over the entire surface of Earth. Starlink is a satellite internet constellation operated by SpaceX that aims for global satellite Internet access coverage.
It is also possible to offer discontinuous coverage using a low-Earth-orbit satellite capable of storing data received while passing over one part of Earth and transmitting it later while passing over another part. This will be the case with the CASCADE system of Canada's CASSIOPE communications satellite. Another system using this store and forward method is Orbcomm.
Medium Earth orbit (MEO)
A medium Earth orbit satellite is one in orbit somewhere between about 2,000 and 35,786 km (1,200 and 22,236 mi) above the Earth's surface. MEO satellites are similar to LEO satellites in functionality. MEO satellites are visible for much longer periods of time than LEO satellites, usually between 2 and 8 hours. MEO satellites have a larger coverage area than LEO satellites. A MEO satellite's longer duration of visibility and wider footprint means fewer satellites are needed in a MEO network than a LEO network. One disadvantage is that a MEO satellite's distance gives it a longer time delay and weaker signal than a LEO satellite, although these limitations are not as severe as those of a GEO satellite.
Like LEOs, these satellites do not maintain a stationary distance from the Earth. This is in contrast to the geostationary orbit, where satellites are always 35,786 km (22,236 mi) above Earth's surface.
Typically the orbit of a medium Earth orbit satellite is about 16,000 km (10,000 mi) above Earth. In various patterns, these satellites make the trip around Earth in anywhere from 2 to 8 hours.
Examples of MEO
In 1962, the communications satellite, Telstar, was launched. It was a medium Earth orbit satellite designed to help facilitate high-speed telephone signals. Although it was the first practical way to transmit signals over the horizon, its major drawback was soon realised. Because its orbital period of about 2.5 hours did not match the Earth's rotational period of 24 hours, continuous coverage was impossible. It was apparent that multiple MEOs needed to be used in order to provide continuous coverage.
In 2013, the first four of a constellation of 20 MEO satellites were launched. The O3b satellites provide broadband internet services, in particular to remote locations and maritime and in-flight use, and orbit at an altitude of 8,063 km (5,010 mi).
Geostationary orbit (GEO)
To an observer on Earth, a satellite in a geostationary orbit appears motionless, in a fixed position in the sky. This is because it revolves around the Earth at Earth's own angular velocity (one revolution per sidereal day, in an equatorial orbit).
A geostationary orbit is useful for communications because ground antennas can be aimed at the satellite without their having to track the satellite's motion. This is relatively inexpensive.
In applications that require many ground antennas, such as DirecTV distribution, the savings in ground equipment can more than outweigh the cost and complexity of placing a satellite into orbit.
Examples of GEO
The first geostationary satellite was Syncom 3, launched on 19 August 1964, and used for communication across the Pacific starting with television coverage of the 1964 Summer Olympics. Shortly after Syncom 3, Intelsat I, aka Early Bird, was launched on 6 April 1965 and placed in orbit at 28° west longitude. It was the first geostationary satellite for telecommunications over the Atlantic Ocean.
On 9 November 1972, Canada's first geostationary satellite serving the continent, Anik A1, was launched by Telesat Canada, with the United States following suit with the launch of Westar 1 by Western Union on 13 April 1974.
On 30 May 1974, the first geostationary communications satellite in the world to be three-axis stabilized was launched: the experimental satellite ATS-6 built for NASA.
After the launches of the Telstar through Westar 1 satellites, RCA Americom (later GE Americom, now SES) launched Satcom 1 in 1975. It was Satcom 1 that was instrumental in helping early cable TV channels such as WTBS (now TBS), HBO, CBN (now Freeform) and The Weather Channel become successful, because these channels distributed their programming to all of the local cable TV headends using the satellite. Additionally, it was the first satellite used by broadcast television networks in the United States, like ABC, NBC, and CBS, to distribute programming to their local affiliate stations. Satcom 1 was widely used because it had twice the communications capacity of the competing Westar 1 in America (24 transponders as opposed to the 12 of Westar 1), resulting in lower transponder-usage costs. Satellites in later decades tended to have even higher transponder numbers.
By 2000, Hughes Space and Communications (now Boeing Satellite Development Center) had built nearly 40 percent of the more than one hundred satellites in service worldwide. Other major satellite manufacturers include Space Systems/Loral, Orbital Sciences Corporation with the Star Bus series, Indian Space Research Organisation, Lockheed Martin (owns the former RCA Astro Electronics/GE Astro Space business), Northrop Grumman, Alcatel Space, now Thales Alenia Space, with the Spacebus series, and Astrium.
Molniya orbit
Geostationary satellites must operate above the equator and therefore appear lower on the horizon as the receiver gets farther from the equator. This will cause problems for extreme northerly latitudes, affecting connectivity and causing multipath interference (caused by signals reflecting off the ground and into the ground antenna).
Thus, for areas close to the North (and South) Pole, a geostationary satellite may appear below the horizon. Therefore, Molniya orbit satellites have been launched, mainly in Russia, to alleviate this problem.
Molniya orbits can be an appealing alternative in such cases. The Molniya orbit is highly inclined, guaranteeing good elevation over selected positions during the northern portion of the orbit. (Elevation is the extent of the satellite's position above the horizon. Thus, a satellite at the horizon has zero elevation and a satellite directly overhead has elevation of 90 degrees.)
The Molniya orbit is designed so that the satellite spends the great majority of its time over the far northern latitudes, during which its ground footprint moves only slightly. Its period is one half day, so that the satellite is available for operation over the targeted region for six to nine hours every second revolution. In this way a constellation of three Molniya satellites (plus in-orbit spares) can provide uninterrupted coverage.
The first satellite of the Molniya series was launched on 23 April 1965 and was used for experimental transmission of TV signals from a Moscow uplink station to downlink stations located in Siberia and the Russian Far East, in Norilsk, Khabarovsk, Magadan and Vladivostok. In November 1967 Soviet engineers created a unique system of national TV network of satellite television, called Orbita, that was based on Molniya satellites.
Polar orbit
In the United States, the National Polar-orbiting Operational Environmental Satellite System (NPOESS) was established in 1994 to consolidate the polar satellite operations of
NASA (National Aeronautics and Space Administration)
NOAA (National Oceanic and Atmospheric Administration). NPOESS manages a number of satellites for various purposes; for example, METSAT for meteorological satellite, EUMETSAT for the European branch of the program, and METOP for meteorological operations.
These orbits are Sun synchronous, meaning that they cross the equator at the same local time each day. For example, the satellites in the NPOESS (civilian) orbit will cross the equator, going from south to north, at times 1:30 P.M., 5:30 P.M., and 9:30 P.M.
Beyond geostationary orbit
There are plans and initiatives to bring dedicated communications satellites beyond geostationary orbit.
NASA proposed LunaNet as a data network aiming to provide a "Lunar Internet" for cis-lunar spacecraft and installations.
The Moonlight Initiative is an equivalent ESA project, stated to be compatible with LunaNet and to provide navigation services for the lunar surface. Both programmes are satellite constellations of several satellites in various orbits around the Moon.
Other orbits are also planned to be used. Positions at the Earth–Moon libration points have been proposed for communications satellites covering the Moon, much as communications satellites in geosynchronous orbit cover the Earth. Dedicated communications satellites in orbits around Mars, supporting missions on the surface and in other orbits, are also being considered, such as the Mars Telecommunications Orbiter.
Structure
Communications satellites are usually composed of the following subsystems:
Communication Payload, normally composed of transponders, antennas, amplifiers and switching systems
Engines used to bring the satellite to its desired orbit
A station keeping tracking and stabilization subsystem used to keep the satellite in the right orbit, with its antennas pointed in the right direction, and its power system pointed towards the Sun
Power subsystem, used to power the satellite systems, normally composed of solar cells, and batteries that maintain power during solar eclipse
Command and Control subsystem, which maintains communications with ground control stations. The ground control Earth stations monitor the satellite performance and control its functionality during various phases of its life-cycle.
The bandwidth available from a satellite depends upon the number of transponders provided by the satellite. Each service (TV, Voice, Internet, radio) requires a different amount of bandwidth for transmission. This is typically known as link budgeting and a network simulator can be used to arrive at the exact value.
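A link budget of the kind mentioned sums gains and losses in decibels to estimate received carrier power and margin. The figures below (EIRP, antenna gain, path loss, required threshold) are made-up illustrative values, not parameters of any real satellite:

```python
def received_power_dbw(eirp_dbw: float, fspl_db: float, rx_gain_dbi: float,
                       misc_losses_db: float = 0.0) -> float:
    """Simplified downlink budget: Pr = EIRP - FSPL + Grx - losses (all in dB)."""
    return eirp_dbw - fspl_db + rx_gain_dbi - misc_losses_db

def link_margin_db(pr_dbw: float, required_dbw: float) -> float:
    """Margin above the minimum power the receiver needs to close the link."""
    return pr_dbw - required_dbw

# Hypothetical Ku-band downlink: 52 dBW EIRP, 205 dB path loss,
# 38 dBi receive dish, 2 dB miscellaneous losses.
pr = received_power_dbw(52.0, 205.0, 38.0, 2.0)
print(pr, link_margin_db(pr, required_dbw=-120.0))  # -117.0 dBW, 3.0 dB margin
```

Real link budgets add many more terms (atmospheric attenuation, pointing loss, noise temperature), which is why the text notes that a network simulator is typically used to arrive at exact values.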
Frequency allocation for satellite systems
Allocating frequencies to satellite services is a complicated process which requires international coordination and planning. This is carried out under the auspices of the International Telecommunication Union (ITU).
To facilitate frequency planning, the world is divided into three regions:
Region 1: Europe, Africa, the Middle East, what was formerly the Soviet Union, and Mongolia
Region 2: North and South America and Greenland
Region 3: Asia (excluding region 1 areas), Australia, and the southwest Pacific
Within these regions, frequency bands are allocated to various satellite services, although a given service may be allocated different frequency bands in different regions. Some of the services provided by satellites are:
Fixed satellite service (FSS)
Broadcasting satellite service (BSS)
Mobile-satellite service
Radionavigation-satellite service
Meteorological-satellite service
Applications
Telephony
The first and historically most important application for communication satellites was in intercontinental long distance telephony. The fixed Public Switched Telephone Network relays telephone calls from land line telephones to an Earth station, where they are then transmitted to a geostationary satellite. The downlink follows an analogous path. Improvements in submarine communications cables through the use of fiber-optics caused some decline in the use of satellites for fixed telephony in the late 20th century.
Satellite communications are still used in many applications today. Remote islands such as Ascension Island, Saint Helena, Diego Garcia, and Easter Island, where no submarine cables are in service, need satellite telephones. There are also regions of some continents and countries where landline telecommunications are rare to non-existent, for example large regions of South America, Africa, Canada, China, Russia, and Australia. Satellite communications also provide connection to the edges of Antarctica and Greenland. Other uses for satellite phones include rigs at sea, a backup for hospitals, military, and recreation. Ships at sea, as well as planes, often use satellite phones.
Satellite phone systems can be implemented in a number of ways. On a large scale, often there will be a local telephone system in an isolated area with a link to the telephone system in a mainland area. There are also services that will patch a radio signal to a telephone system. In this example, almost any type of satellite can be used. Satellite phones connect directly to a constellation of either geostationary or low-Earth-orbit satellites. Calls are then forwarded to a satellite teleport connected to the Public Switched Telephone Network.
Television
As television became the main market, its demand for simultaneous delivery of relatively few signals of large bandwidth to many receivers proved a more precise match for the capabilities of geosynchronous comsats. Two satellite types are used for North American television and radio: Direct broadcast satellite (DBS), and Fixed Service Satellite (FSS).
The definitions of FSS and DBS satellites outside of North America, especially in Europe, are a bit more ambiguous. Most satellites used for direct-to-home television in Europe have the same high power output as DBS-class satellites in North America, but use the same linear polarization as FSS-class satellites. Examples of these are the Astra, Eutelsat, and Hotbird spacecraft in orbit over the European continent. Because of this, the terms FSS and DBS are used mostly in North America and are uncommon in Europe.
Fixed Service Satellites use the C band, and the lower portions of the Ku band. They are normally used for broadcast feeds to and from television networks and local affiliate stations (such as program feeds for network and syndicated programming, live shots, and backhauls), as well as being used for distance learning by schools and universities, business television (BTV), Videoconferencing, and general commercial telecommunications. FSS satellites are also used to distribute national cable channels to cable television headends.
Free-to-air satellite TV channels are also usually distributed on FSS satellites in the Ku band. The Intelsat Americas 5, Galaxy 10R and AMC 3 satellites over North America provide a large number of FTA channels on their Ku band transponders.
The American Dish Network DBS service has also recently used FSS technology as well for their programming packages requiring their SuperDish antenna, due to Dish Network needing more capacity to carry local television stations per the FCC's "must-carry" regulations, and for more bandwidth to carry HDTV channels.
A direct broadcast satellite is a communications satellite that transmits to small DBS satellite dishes (usually 18 to 24 inches or 45 to 60 cm in diameter). Direct broadcast satellites generally operate in the upper portion of the microwave Ku band. DBS technology is used for DTH-oriented (Direct-To-Home) satellite TV services, such as DirecTV, DISH Network and Orby TV in the United States, Bell Satellite TV and Shaw Direct in Canada, Freesat and Sky in the UK, Ireland, and New Zealand and DSTV in South Africa.
Operating at lower frequency and lower power than DBS, FSS satellites require a much larger dish for reception (3 to 8 feet (1 to 2.5 m) in diameter for Ku band, and 12 feet (3.6 m) or larger for C band). They use linear polarization for each of the transponders' RF input and output (as opposed to circular polarization used by DBS satellites), but this is a minor technical difference that users do not notice. FSS satellite technology was also originally used for DTH satellite TV from the late 1970s to the early 1990s in the United States in the form of TVRO (Television Receive Only) receivers and dishes. It was also used in its Ku band form for the now-defunct Primestar satellite TV service.
Some satellites have been launched that have transponders in the Ka band, such as DirecTV's SPACEWAY-1 satellite, and Anik F2. NASA and ISRO have also launched experimental satellites carrying Ka band beacons recently.
Some manufacturers have also introduced special antennas for mobile reception of DBS television. Using Global Positioning System (GPS) technology as a reference, these antennas automatically re-aim to the satellite no matter where or how the vehicle (on which the antenna is mounted) is situated. These mobile satellite antennas are popular with some recreational vehicle owners. Such mobile DBS antennas are also used by JetBlue Airways for DirecTV (supplied by LiveTV, a subsidiary of JetBlue), which passengers can view on-board on LCD screens mounted in the seats.
Radio broadcasting
Satellite radio offers audio broadcast services in some countries, notably the United States. Mobile services allow listeners to roam a continent, listening to the same audio programming anywhere.
A satellite radio or subscription radio (SR) is a digital radio signal that is broadcast by a communications satellite, which covers a much wider geographical range than terrestrial radio signals.
Amateur radio
Amateur radio operators have access to amateur satellites, which have been designed specifically to carry amateur radio traffic. Most such satellites operate as spaceborne repeaters, and are generally accessed by amateurs equipped with UHF or VHF radio equipment and highly directional antennas such as Yagis or dish antennas. Due to launch costs, most current amateur satellites are launched into fairly low Earth orbits, and are designed to deal with only a limited number of brief contacts at any given time. Some satellites also provide data-forwarding services using the X.25 or similar protocols.
Internet access
Since the 1990s, satellite communication technology has been used as a means to connect to the Internet via broadband data connections. This can be very useful for users who are located in remote areas, and cannot access a broadband connection, or require high availability of services.
Military
Communications satellites are used for military communications applications, such as Global Command and Control Systems. Examples of military systems that use communication satellites are the MILSTAR, the DSCS, and the FLTSATCOM of the United States, NATO satellites, United Kingdom satellites (for instance Skynet), and satellites of the former Soviet Union. India has launched its first military communication satellite, GSAT-7; its transponders operate in the UHF, S, C, and Ku bands. Typically military satellites operate in the UHF, SHF (also known as X-band) or EHF (also known as Ka band) frequency bands.
Data collection
Near-ground in situ environmental monitoring equipment (such as tide gauges, weather stations, weather buoys, and radiosondes), may use satellites for one-way data transmission or two-way telemetry and telecontrol. It may be based on a secondary payload of a weather satellite (as in the case of GOES and METEOSAT and others in the Argos system) or in dedicated satellites (such as SCD). The data rate is typically much lower than in satellite Internet access.
See also
Commercialization of space
High-altitude platform station
History of telecommunication
Inter-satellite communications satellite
List of communication satellite companies
List of communications satellite firsts
NewSpace
Reconnaissance satellite
Relay (disambiguation)
Satcom On The Move
Satellite data unit
Satellite delay
Satellite modem
Satellite space segment
Space pollution
Traveling-wave tube
References
Notes
Citations
Further reading
Slotten, Hugh R. Beyond Sputnik and the Space Race: The Origins of Global Satellite Communications (Johns Hopkins University Press, 2022); online review
External links
Satellite Industry Association
Communications satellites short history by David J. Whalen
Beyond The Ionosphere: Fifty Years of Satellite Communication (NASA SP-4217, 1997)
Satellite broadcasting
Satellites by type
Telecommunications-related introductions in 1962
Wireless communication systems | Communications satellite | [
"Technology",
"Engineering"
] | 6,247 | [
"Telecommunications engineering",
"Wireless communication systems",
"Satellite broadcasting"
] |
45,239 | https://en.wikipedia.org/wiki/Subalgebra | In mathematics, a subalgebra is a subset of an algebra, closed under all its operations, and carrying the induced operations.
"Algebra", when referring to a structure, often means a vector space or module equipped with an additional bilinear operation. Algebras in universal algebra are far more general: they are a common generalisation of all algebraic structures. "Subalgebra" can refer to either case.
Subalgebras for algebras over a ring or field
A subalgebra of an algebra over a commutative ring or field is a vector subspace which is closed under the multiplication of vectors. The restriction of the algebra multiplication makes it an algebra over the same ring or field. This notion also applies to most specializations, where the multiplication must satisfy additional properties, e.g. to associative algebras or to Lie algebras. Only for unital algebras is there a stronger notion, of unital subalgebra, for which it is also required that the unit of the subalgebra be the unit of the bigger algebra.
Example
The 2×2-matrices over the reals R form a four-dimensional unital algebra M(2,R) in the obvious way. The 2×2-matrices for which all entries are zero, except for the first one on the diagonal, form a subalgebra. It is also unital, but it is not a unital subalgebra.
The identity element of M(2,R) is the identity matrix I, so the unital subalgebras contain the line of diagonal matrices {x I : x in R}. For two-dimensional subalgebras, consider a matrix E satisfying E² = pI for a real parameter p, for instance E = [[0, 1], [p, 0]] (first row (0, 1), second row (p, 0)).
When p = 0, then E is nilpotent and the subalgebra {x I + y E : x, y in R} is a copy of the dual number plane. When p is negative, take q = 1/√−p, so that (qE)² = −I, and the subalgebra {x I + y (qE) : x, y in R} is a copy of the complex plane. Finally, when p is positive, take q = 1/√p, so that (qE)² = I, and the subalgebra {x I + y (qE) : x, y in R} is a copy of the plane of split-complex numbers.
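These three cases can be checked numerically. The concrete matrix E = [[0, 1], [p, 0]] used below is an illustrative choice satisfying E² = pI, consistent with the three cases in the text:

```python
# Verify the three two-dimensional subalgebra cases inside M(2, R),
# using E = [[0, 1], [p, 0]], for which E @ E = p * I.

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(c, A):
    """Scalar multiple of a 2x2 matrix."""
    return [[c * x for x in row] for row in A]

def E(p):
    return [[0.0, 1.0], [p, 0.0]]

I = [[1.0, 0.0], [0.0, 1.0]]

# p = 0: E is nilpotent, giving the dual-number plane.
assert matmul(E(0), E(0)) == [[0.0, 0.0], [0.0, 0.0]]
# p = -1 (q = 1/sqrt(-p) = 1): (qE)^2 = -I, giving the complex plane.
assert matmul(E(-1), E(-1)) == scale(-1.0, I)
# p = 1 (q = 1/sqrt(p) = 1): (qE)^2 = I, giving the split-complex plane.
assert matmul(E(1), E(1)) == I
print("all three cases check out")
```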
Subalgebras in universal algebra
In universal algebra, a subalgebra of an algebra A is a subset S of A that also has the structure of an algebra of the same type when the algebraic operations are restricted to S. If the axioms of a kind of algebraic structure is described by equational laws, as is typically the case in universal algebra, then the only thing that needs to be checked is that S is closed under the operations.
Some authors consider algebras with partial functions. There are various ways of defining subalgebras for these. Another generalization of algebras is to allow relations. These more general algebras are usually called structures, and they are studied in model theory and in theoretical computer science. For structures with relations there are notions of weak and of induced substructures.
Example
For example, the standard signature for groups in universal algebra is (×, −1, 1). (Inversion and unit are needed to get the right notions of homomorphism and so that the group laws can be expressed as equations.) Therefore, a subgroup of a group G is a subset S of G such that:
the identity e of G belongs to S (so that S is closed under the identity constant operation);
whenever x belongs to S, so does x−1 (so that S is closed under the inverse operation);
whenever x and y belong to S, so does x × y (so that S is closed under the group's multiplication operation).
References
Universal algebra | Subalgebra | [
"Mathematics"
] | 802 | [
"Fields of abstract algebra",
"Universal algebra"
] |
45,240 | https://en.wikipedia.org/wiki/Kernel%20%28algebra%29 | In algebra, the kernel of a homomorphism (function that preserves the structure) is generally the inverse image of 0 (except for groups whose operation is denoted multiplicatively, where the kernel is the inverse image of 1). An important special case is the kernel of a linear map. The kernel of a matrix, also called the null space, is the kernel of the linear map defined by the matrix.
The kernel of a homomorphism is reduced to 0 (or 1) if and only if the homomorphism is injective, that is if the inverse image of every element consists of a single element. This means that the kernel can be viewed as a measure of the degree to which the homomorphism fails to be injective.
For some types of structure, such as abelian groups and vector spaces, the possible kernels are exactly the substructures of the same type. This is not always the case, and, sometimes, the possible kernels have received a special name, such as normal subgroup for groups and two-sided ideals for rings.
Kernels allow defining quotient objects (also called quotient algebras in universal algebra, and cokernels in category theory). For many types of algebraic structure, the fundamental theorem on homomorphisms (or first isomorphism theorem) states that image of a homomorphism is isomorphic to the quotient by the kernel.
The concept of a kernel has been extended to structures such that the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a congruence relation.
This article is a survey for some important types of kernels in algebraic structures.
Survey of examples
Linear maps
Let V and W be vector spaces over a field (or more generally, modules over a ring) and let T be a linear map from V to W. If 0W is the zero vector of W, then the kernel of T is the preimage of the zero subspace {0W}; that is, the subset of V consisting of all those elements of V that are mapped by T to the element 0W. The kernel is usually denoted ker T, or some variation thereof:
ker T = {v in V : T(v) = 0W}
Since a linear map preserves zero vectors, the zero vector 0V of V must belong to the kernel. The transformation T is injective if and only if its kernel is reduced to the zero subspace.
The kernel ker T is always a linear subspace of V. Thus, it makes sense to speak of the quotient space V / ker T. The first isomorphism theorem for vector spaces states that this quotient space is naturally isomorphic to the image of T (which is a subspace of W). As a consequence, the dimension of V equals the dimension of the kernel plus the dimension of the image.
If V and W are finite-dimensional and bases have been chosen, then T can be described by a matrix M, and the kernel can be computed by solving the homogeneous system of linear equations M v = 0. In this case, the kernel of T may be identified to the kernel of the matrix M, also called "null space" of M. The dimension of the null space, called the nullity of M, is given by the number of columns of M minus the rank of M, as a consequence of the rank–nullity theorem.
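As a minimal sketch, take a hypothetical 2×3 matrix (an illustration, not from the text), verify that a kernel vector maps to zero, and confirm that the dimensions add up per rank–nullity:

```python
from fractions import Fraction

# M represents T: R^3 -> R^2, T(x, y, z) = (x + z, y + z).
M = [[Fraction(1), Fraction(0), Fraction(1)],
     [Fraction(0), Fraction(1), Fraction(1)]]

def apply(M, v):
    """Matrix-vector product, done exactly with rationals."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# The kernel (null space) is spanned by (-1, -1, 1): both components vanish.
k = [Fraction(-1), Fraction(-1), Fraction(1)]
assert apply(M, k) == [0, 0]

# Rank-nullity: dim V = rank + nullity. Here rank(M) = 2 (the first two
# columns form the identity), nullity = 1, and indeed 3 = 2 + 1.
dim_V, rank, nullity = 3, 2, 1
assert dim_V == rank + nullity
print("kernel vector verified; rank-nullity holds")
```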
Solving homogeneous differential equations often amounts to computing the kernel of certain differential operators.
For instance, in order to find all twice-differentiable functions f from the real line to itself such that
x f''(x) = f(x),
let V be the space of all twice differentiable functions, let W be the space of all functions, and define a linear operator T from V to W by
(T f)(x) = x f''(x) − f(x)
for f in V and x an arbitrary real number.
Then all solutions to the differential equation are in ker T.
One can define kernels for homomorphisms between modules over a ring in an analogous manner. This includes kernels for homomorphisms between abelian groups as a special case. This example captures the essence of kernels in general abelian categories; see Kernel (category theory).
Group homomorphisms
Let G and H be groups and let f be a group homomorphism from G to H. If eH is the identity element of H, then the kernel of f is the preimage of the singleton set {eH}; that is, the subset of G consisting of all those elements of G that are mapped by f to the element eH.
The kernel is usually denoted ker f (or a variation). In symbols:
ker f = {g in G : f(g) = eH}
Since a group homomorphism preserves identity elements, the identity element eG of G must belong to the kernel.
The homomorphism f is injective if and only if its kernel is only the singleton set {eG}. If f were not injective, then the non-injective elements can form a distinct element of its kernel: there would exist a, b in G such that f(a) = f(b) and a ≠ b. Thus a b−1 ≠ eG. f is a group homomorphism, so inverses and group operations are preserved, giving f(a b−1) = f(a) f(b)−1 = eH; in other words, a b−1 would be in ker f, and ker f would not be the singleton. Conversely, distinct elements of the kernel violate injectivity directly: if there would exist an element g ≠ eG in ker f, then f(g) = eH = f(eG), thus f would not be injective.
ker f is a subgroup of G and further it is a normal subgroup. Thus, there is a corresponding quotient group G / ker f. This is isomorphic to f(G), the image of G under f (which is a subgroup of H also), by the first isomorphism theorem for groups.
In the special case of abelian groups, there is no deviation from the previous section.
Example
Let G be the cyclic group on 6 elements {0, 1, 2, 3, 4, 5} with modular addition, H be the cyclic group on 2 elements {0, 1} with modular addition, and f the homomorphism that maps each element g in G to the element g modulo 2 in H. Then ker f = {0, 2, 4}, since all these elements are mapped to 0H. The quotient group G / ker f has two elements: {0, 2, 4} and {1, 3, 5}. It is indeed isomorphic to H.
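This example is small enough to check mechanically; a short Python sketch, representing the cyclic group of order 6 simply as the integers 0–5 under addition mod 6:

```python
G = range(6)          # cyclic group Z6 under addition mod 6
f = lambda g: g % 2   # homomorphism onto Z2

# The kernel: elements mapped to the identity 0 of Z2.
kernel = [g for g in G if f(g) == 0]
assert kernel == [0, 2, 4]

# The quotient G / ker f: cosets g + ker f, computed mod 6.
cosets = {frozenset((g + k) % 6 for k in kernel) for g in G}
assert cosets == {frozenset({0, 2, 4}), frozenset({1, 3, 5})}

print(kernel, len(cosets))  # [0, 2, 4] 2
```

The two cosets match the two elements of H, illustrating the isomorphism G / ker f ≅ H.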
Ring homomorphisms
Let R and S be rings (assumed unital) and let f be a ring homomorphism from R to S.
If 0S is the zero element of S, then the kernel of f is its kernel as a linear map over the integers, or, equivalently, as additive groups. It is the preimage of the zero ideal {0S}; that is, the subset of R consisting of all those elements of R that are mapped by f to the element 0S.
The kernel is usually denoted ker f (or a variation).
In symbols:
ker f = {r in R : f(r) = 0S}
Since a ring homomorphism preserves zero elements, the zero element 0R of R must belong to the kernel.
The homomorphism f is injective if and only if its kernel is only the singleton set {0R}.
This is always the case if R is a field, and S is not the zero ring.
Since ker f contains the multiplicative identity only when S is the zero ring, it turns out that the kernel is generally not a subring of R. The kernel is a subrng, and, more precisely, a two-sided ideal of R.
Thus, it makes sense to speak of the quotient ring R / (ker f).
The first isomorphism theorem for rings states that this quotient ring is naturally isomorphic to the image of f (which is a subring of S). (Note that rings need not be unital for the kernel definition).
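As a concrete sketch (illustrative, not from the article): reduction modulo 3 from the ring Z6 to Z3 is a ring homomorphism (since 3 divides 6), and brute force confirms that its kernel is a two-sided ideal whose cosets match the image one-to-one.

```python
# Kernel of the ring homomorphism f: Z6 -> Z3, f(x) = x mod 3.

R = range(6)
f = lambda x: x % 3

kernel = [x for x in R if f(x) == 0]
print(kernel)  # [0, 3]

# Ideal check: the kernel absorbs multiplication by any ring element.
assert all((r * k) % 6 in kernel for r in R for k in kernel)

# Each coset x + ker f maps to a single element f(x); distinct cosets map
# to distinct elements, realising the isomorphism R/(ker f) ≅ f(R).
cosets = {tuple(sorted((x + k) % 6 for k in kernel)): f(x) for x in R}
print(cosets)  # {(0, 3): 0, (1, 4): 1, (2, 5): 2}
```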
To some extent, this can be thought of as a special case of the situation for modules, since these are all bimodules over a ring R:
R itself;
any two-sided ideal of R (such as ker f);
any quotient ring of R (such as R / (ker f)); and
the codomain of any ring homomorphism whose domain is R (such as S, the codomain of f).
However, the isomorphism theorem gives a stronger result, because ring isomorphisms preserve multiplication while module isomorphisms (even between rings) in general do not.
This example captures the essence of kernels in general Mal'cev algebras.
Monoid homomorphisms
Let M and N be monoids and let f be a monoid homomorphism from M to N. Then the kernel of f is the subset of the direct product M × M consisting of all those ordered pairs of elements of M whose components are both mapped by f to the same element in N. The kernel is usually denoted ker f (or a variation thereof). In symbols:

ker f = {(m, m′) ∈ M × M : f(m) = f(m′)}.
Since f is a function, the elements of the form (m, m) must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the diagonal set {(m, m) : m ∈ M}.
It turns out that ker f is an equivalence relation on M, and in fact a congruence relation. Thus, it makes sense to speak of the quotient monoid M / (ker f). The first isomorphism theorem for monoids states that this quotient monoid is naturally isomorphic to the image of f (which is a submonoid of N), with ker f as the congruence relation.
This is very different in flavour from the above examples. In particular, the preimage of the identity element of N is not enough to determine the kernel of f.
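A quick brute-force sketch of this point (illustrative, not from the article): take M = {0, …, 5} under max (identity 0) and map it onto {0, 1, 2} under max via f(m) = min(m, 2), which preserves both max and the identity. The preimage of the identity is trivial, yet f is not injective, so the kernel-as-pairs carries information the preimage cannot.

```python
# Kernel of a monoid homomorphism as a set of pairs.
# M = {0,...,5} under max (identity 0), N = {0,1,2} under max.

M = range(6)
f = lambda m: min(m, 2)   # monotone, so it commutes with max

kernel = {(a, b) for a in M for b in M if f(a) == f(b)}

# The preimage of the identity of N is just {0}, yet f is far from
# injective -- the kernel congruence carries the missing information.
print(sorted(a for a in M if f(a) == 0))   # [0]
print((3, 4) in kernel, (1, 2) in kernel)  # True False

# ker f is a congruence: it respects the monoid operation max.
assert all(
    (max(a, c), max(b, d)) in kernel
    for (a, b) in kernel for (c, d) in kernel
)
```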
Universal algebra
All the above cases may be unified and generalized in universal algebra.
General case
Let A and B be algebraic structures of a given type and let f be a homomorphism of that type from A to B.
Then the kernel of f is the subset of the direct product A × A consisting of all those ordered pairs of elements of A whose components are both mapped by f to the same element in B.
The kernel is usually denoted ker f (or a variation).
In symbols:

ker f = {(a, a′) ∈ A × A : f(a) = f(a′)}.

Since f is a function, the elements of the form (a, a) must belong to the kernel.
The homomorphism f is injective if and only if its kernel is exactly the diagonal set {(a, a) : a ∈ A}.
It is easy to see that ker f is an equivalence relation on A, and in fact a congruence relation.
Thus, it makes sense to speak of the quotient algebra A / (ker f).
The first isomorphism theorem in general universal algebra states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).
Note that the definition of kernel here (as in the monoid example) doesn't depend on the algebraic structure; it is a purely set-theoretic concept.
For more on this general concept, outside of abstract algebra, see kernel of a function.
Malcev algebras
In the case of Malcev algebras, this construction can be simplified. Every Malcev algebra has a special neutral element (the zero vector in the case of vector spaces, the identity element in the case of commutative groups, and the zero element in the case of rings or modules). The characteristic feature of a Malcev algebra is that we can recover the entire equivalence relation ker f from the equivalence class of the neutral element.
To be specific, let A and B be Malcev algebraic structures of a given type and let f be a homomorphism of that type from A to B. If eB is the neutral element of B, then the kernel of f is the preimage of the singleton set {eB}; that is, the subset of A consisting of all those elements of A that are mapped by f to the element eB.
The kernel is usually denoted ker f (or a variation). In symbols:

ker f = {a ∈ A : f(a) = eB}.

Since a Malcev algebra homomorphism preserves neutral elements, the identity element eA of A must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set {eA}.
The notion of ideal generalises to any Malcev algebra (as linear subspace in the case of vector spaces, normal subgroup in the case of groups, two-sided ideals in the case of rings, and submodule in the case of modules).
It turns out that ker f is not a subalgebra of A, but it is an ideal.
Then it makes sense to speak of the quotient algebra A / (ker f).
The first isomorphism theorem for Malcev algebras states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).
The connection between this and the congruence relation for more general types of algebras is as follows.
First, the kernel-as-an-ideal is the equivalence class of the neutral element eA under the kernel-as-a-congruence. For the converse direction, we need the notion of quotient in the Mal'cev algebra (which is division on either side for groups and subtraction for vector spaces, modules, and rings).
Using this, elements a and b of A are equivalent under the kernel-as-a-congruence if and only if their quotient a/b is an element of the kernel-as-an-ideal.
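The two descriptions can be checked against each other in the group example from earlier (a sketch, not from the article): in Z6 with f(g) = g mod 2, the Malcev quotient operation is subtraction mod 6, and a ~ b under the kernel-as-a-congruence exactly when a − b lies in the kernel-as-an-ideal.

```python
# Kernel-as-congruence vs kernel-as-ideal for the group Z6, f(g) = g mod 2.

G = range(6)
f = lambda g: g % 2

congruence = {(a, b) for a in G for b in G if f(a) == f(b)}
ideal = {g for g in G if f(g) == 0}   # equivalence class of the neutral element

# a ~ b under the congruence  <=>  the quotient a - b lies in the ideal
assert all(
    ((a, b) in congruence) == ((a - b) % 6 in ideal)
    for a in G for b in G
)
print(sorted(ideal))  # [0, 2, 4]
```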
Algebras with nonalgebraic structure
Sometimes algebras are equipped with a nonalgebraic structure in addition to their algebraic operations.
For example, one may consider topological groups or topological vector spaces, which are equipped with a topology.
In this case, we would expect the homomorphism f to preserve this additional structure; in the topological examples, we would want f to be a continuous map.
The process may run into a snag with the quotient algebras, which may not be well-behaved.
In the topological examples, we can avoid problems by requiring that topological algebraic structures be Hausdorff (as is usually done); then the kernel (however it is constructed) will be a closed set and the quotient space will work fine (and also be Hausdorff).
Kernels in category theory
The notion of kernel in category theory is a generalisation of the kernels of abelian algebras; see Kernel (category theory).
The categorical generalisation of the kernel as a congruence relation is the kernel pair.
(There is also the notion of difference kernel, or binary equaliser.)
See also
Kernel (linear algebra)
Zero set
Notes
References
Algebra
Isomorphism theorems
Broad-concept articles
Chacmool (https://en.wikipedia.org/wiki/Chacmool)

A chacmool (also spelled chac-mool or Chac Mool) is a form of pre-Columbian Mesoamerican sculpture depicting a reclining figure with its head facing 90 degrees from the front, supporting itself on its elbows and supporting a bowl or a disk upon its stomach. These figures possibly symbolised slain warriors carrying offerings to the gods; the bowl upon the chest was used to hold sacrificial offerings, including pulque, tamales, tortillas, tobacco, turkeys, feathers, and incense. In Aztec examples, the receptacle is a cuauhxicalli (a stone bowl to receive sacrificed human hearts). Chacmools were often associated with sacrificial stones or thrones. The chacmool form of sculpture first appeared around the 9th century AD in the Valley of Mexico and the northern Yucatán Peninsula.
Aztec chacmools bore water imagery and were associated with Tlaloc, the rain god. Their symbolism placed them on the frontier between the physical and supernatural realms, as intermediaries with the gods.
Form
The chacmool is a distinctive form of Mesoamerican sculpture representing a reclining figure with its head facing 90 degrees from the front, leaning on its elbows and supporting a bowl or a disk upon its chest. There is great variation among individual chacmools, with some possessing heads that are right-facing and others left-facing, and some with the heads facing upwards; some examples have movable heads. The figure may be lying on its back or on its side and the abdomen can be sunken below the level of the chest and knees or at the same level. Some chacmools were raised upon rectangular bases. Some of the figures are richly attired whilst others are almost naked.
The chacmools of Chichen Itza and Tula depict young men with warrior attributes, while the chacmools of Michoacán depict elderly men with wrinkled faces and erect penises. A chacmool from Guácimo, Costa Rica, combines human and jaguar features and grips a bowl. The face of the figure looks upwards and the bowl was apparently used to grind foodstuffs.
A wide variety of materials were used to sculpt chacmools, including limestone and hard metamorphic and igneous rock types. Other materials employed include ceramic and cement.
Discovery and naming
The ancient name for this type of sculpture is unknown. The term chacmool is derived from the name "Chaacmol," which Augustus Le Plongeon gave to a sculpture that he and his wife Alice Dixon Le Plongeon excavated within the Temple of the Eagles and Jaguars at Chichén Itzá in 1875; he translated Chaacmol from Yucatecan Mayan as the "paw swift like thunder." Le Plongeon believed the statue, which he had found buried beneath the Platform of the Eagles and the Jaguars, depicted a former ruler of Chichen Itza. Le Plongeon's sponsor, Stephen Salisbury of Worcester, Massachusetts, published Le Plongeon's find, but revised the spelling to "Chac-Mool."
Le Plongeon sought permission from Mexico's president to display the statue at the Centennial Exhibition in Philadelphia in 1876, a request that was denied. In 1877 the Yucatecan government seized the statue and brought it to Mérida. Weeks later Yucatán turned over the statue to the federal government, which brought it to Mexico City to the National Museum of Anthropology. Museum worker Jesús Sanchez realised that the Chichen Itza sculpture was stylistically similar to two sculptures from central Mexico and the wide occurrence of the form within Mesoamerica was first recognised. The 19th century discovery of chacmools in both central Mexico and the Yucatán Peninsula helped to promote the idea of a Toltec empire although the chacmool sculptures may have originated in the Maya region.
Although the name chacmool was inappropriately applied, it has become a useful label to link stylistically similar sculptures from different regions and periods without imposing a unified interpretation. Besides these sites, the sculpture was also found in Michoacán, where it is called Uaxanoti (The Seated One) in the Purépecha language.
Distribution
Examples of chacmool sculptures have been found widely across Mesoamerica from Michoacán in Mexico down to El Salvador. The earliest examples date from the Terminal Classic period of Mesoamerican chronology (c. AD 800–900). Examples are known from the Postclassic Aztec capital of Tenochtitlan, from the central Mexican city of Tula and from the Maya city of Chichen Itza in the Yucatán Peninsula. Fourteen chacmools are known from Chichen Itza and twelve from Tula. The chacmool from the palace at Tula is dated to the Early Postclassic (c. AD 900–1200). Further examples are known from Acolman, Cempoala, Michoacán, Querétaro and Tlaxcala.
In Chichen Itza, only five of the fourteen chacmools were securely confirmed in architectural contexts, those in the Castillo, the Chacmool Temple, the North Colonnade, the Temple of the Little Tables and the Temple of the Warriors. The rest were found interred in or near important structures. The five that were found in secure architectural contexts were all placed within entrance areas near a ritual seat or throne. The chacmools in Tula also had an association with thrones or raised seating platforms, either in front of the throne or at the entrance to a chamber containing a throne.
Two chacmools have been recovered that were associated with the Great Temple of Tenochtitlan, the Aztec capital. The first was discovered in 1943, on the junction of Venustiano Carranza and Pino Suarez, about two blocks south of the temple itself. The second chacmool was excavated in the sacred precinct. This is the only fully polychrome chacmool that has been recovered anywhere; it had an open mouth and exposed teeth and stood in front of the temple of Tlaloc, the Aztec rain god; its sculpted bowl probably received heart and blood sacrifices. This latter sculpture is by far the older of the two.
Chacmools have been reported as far south as the Maya city of Quiriguá, near the Guatemalan border with Honduras. The Quiriguá chacmool most likely dated to the Postclassic period and is stylistically similar to those of Tula rather than Chichen Itza. Two chacmools were reported from Tazumal, a Maya site in western El Salvador. A chacmool was excavated at Las Mercedes in Guácimo, Costa Rica.
Dating and origin
The oldest chacmools discovered date to the Terminal Classic period. The form was unknown in such important Mesoamerican cities as Teotihuacan and Tikal. After its first appearance, the form was rapidly disseminated throughout Mesoamerica, spreading as far south as Costa Rica. Although a central Mexican origin is generally assumed, there are no antecedents pre-dating the Toltecs and the form is not present in central Mexican codices.
The positioning and context of the chacmool form do have antecedents in Classic Maya art and art historian Mary Ellen Miller has argued that the chacmool developed out of Classic period Maya imagery. No central Mexican chacmool has been found that clearly predates the Chichen Itza examples. However, Tula and Chichen Itza may have developed simultaneously with rapid communication of the chacmool form from one city to the other.
The wider variety of chacmool forms at Chichen Itza has also been used to support the development of the form there; no two possess identical form, dress and proportions. At Tula the chacmools have a standardised form with little variation in position or proportions. Miller has proposed that the chacmool developed out of Classic Maya iconography and underwent a transition into three dimensional sculpture at Chichen Itza, perhaps spurred by the influence of central Mexican sculptural forms. A chacmool from Costa Rica was dated by the excavators to approximately AD 1000.
Aztec Chacmool
During the 1930 excavation of Templo Mayor, the only fully polychrome chacmool to be found at that site was in its original context on the top level of the Tlaloc side (the rain god) of the temple. The position of this chacmool statue mirrored the position of the sacrificial stone on the Huitzilopochtli (the Aztecs' patron deity, associated with war) side of the temple. Archaeologist Eduardo Matos Moctezuma posits that this mirroring confirms his interpretation that the chacmool acted as an "intermediary between the priest and the god, a divine messenger," in the same way the sacrificial stone on the Huitzilopochtli side does.
The pigment that remained on this chacmool sculpture was crucial to its identification, as it does not contain any sculpted iconography or symbols associated with the rain god Tlaloc. Archaeologists were able to create a reconstruction of the sculpture's original colors, which they then compared to pictographic representations of Tlaloc. This comparison confirmed that the polychrome chacmool discovered at Tlaloc's side of the Templo Mayor was a representation of the deity itself. Characteristics such as the "chia circles on the cheeks, the circular gold pectoral medallion, and the color combination of the petticoat, as well as the black skin, the red hands and feet, and the white headdress and bangles" echo the iconography of other depictions of Tlaloc. Art historians Leonardo Lopez Lujan and Giacomo Chiari argue that this "confirms that there is symbolic continuity between the early and late Mexica [Aztec] chacmool," due to early Aztec chacmools containing iconographic nods to Tlaloc.
A second chacmool discovery from the Templo Mayor, dating to a later period, displays iconographic features which are distinct from the larger corpus of chacmool figures but consistent with other sculptures (Tlaloc ritual vessels and bench reliefs) found in a similar context at the Templo Mayor. Whereas Tlaloc's eyes are generally represented with a round goggle-like frame, the later chacmool, the vessels, and the bench relief feature a rectangular eye frame within which almond eyes are engraved. All three sculptures also include large fangs at the corners of the god's mouth. The ornaments worn by the later chacmool and included in the vessels and the bench relief also differ from other representations of Tlaloc. The later chacmool, vessels, and bench relief sport oversized circular earspools, rather than the characteristic earspools with a square plug and central dangle; they are also adorned with a multistrand, beaded collar in which one strand has larger beads that have been interpreted to be hanging bells. The chacmool holds onto a cuauhxicalli vessel that is engraved with the face of Tlaloc, including the same rectangular eye and mouth features.
In 1942, archaeologists recovered another chacmool example located a few blocks away from the Templo Mayor. This chacmool has overt iconographic associations with Tlaloc, wearing his mask and holding a cuauhxicalli vessel whose top is carved with the face of Tlaloc (rather than being concave and able to hold something). He is wearing several strands of beaded necklaces, with the outermost ring containing oliva shells, which were a characteristic of Maya costuming. Another Maya influence can be seen in his headdress, which scholars Mary Miller and Marco Samayoa compare to a headdress worn by Maya king Shield Jaguar (also known as Itzamnaaj Bahlam III) of Yaxchilan. Perhaps the most interesting iconographic feature of this chacmool, however, is the large necklace pendant he wears, which Miller and Samayoa argue is a representation of an actual heirloom pendant. They suggest that the pendant was looted from a Maya site, probably "from a stone vessel interred behind a chacmool" and that "its subject is probably the enthroned, resurrected Maize God." This association between chacmools and maize deities is rooted in Maya examples (from which the Aztecs were clearly drawing inspiration, as this example's headdress and shell necklace demonstrate), but does not necessarily mean that the Aztecs would have associated their chacmools with maize deities. In all likelihood, the Aztecs conceived of chacmools as being connected to Tlaloc, as it is his image and associated iconographic characteristics that cover the majority of discovered Aztec chacmools. This chacmool, for instance, features a carving of Tlaloc on its underside, the symbolic meaning of which Miller explores: "With their undersides carved with aquatic symbols, these sculptures seem to float on water. This suspension suggests the liminal qualities of the messenger, the link between earth and supernaturals."
Interpretations
The meaning of the chacmool figures varied across time depending upon the geographical and cultural context. Chacmools do not appear to have been worshipped, since they are never found within the inner sanctuaries of temples or shrines; the chacmool appears rather to have been a piece of religious paraphernalia used by the priesthood in the course of their duties. Three uses have generally been attributed to chacmools.
The first interpretation is that the chacmool is an offering table (or tlamanalco) to receive gifts such as pulque, tamales, tortillas, tobacco, turkeys, feathers and incense. The second is that the chacmool was a cuauhxicalli to receive blood and human hearts; this use is particularly relevant to the Aztecs, who used a cuauhxicalli bowl in place of the usual disc-altar. These bowls may have accepted these blood offerings directly or may have been holders for portable cuauhxicalli bowls that were placed within them. A chacmool from Tlaxcala has a bloodied heart sculpted on the underside, supporting this interpretation.
It has also been suggested that chacmools were used as a techcatl, or sacrificial stone over which victims were stretched so their hearts could be cut from their chests. The Crónica Mexicayotl describes such a sacrificial stone as sculpted in the form of a person with a twisted head. Techcatl were not just used for human sacrifice, they were also used in the yacaxapotlaliztli ceremony, where the nose of a future ruler was pierced. Such rituals may also have been executed upon chacmools, and the presence of small nose jewels sculpted onto various chacmools at Chichen Itza and one at Tula has been used to support this idea.
The backward reclining figure of the chacmool presents a defenceless, passive appearance and has been likened by Miller to the positioning of captives in Classic period Maya sculpture and painting. Bent elbows and knees are common in depictions of Maya captives; the full-frontal view of the face is rare in Maya art except among representations of captives. The form of the Chichen Itza chacmools lacks the typical traits of Maya deities and most scholars have assumed that the iconography of Maya chacmools is equivalent to that of the central Mexican examples. Eduard Seler commented in the early 1960s that chacmools in Chichen Itza tended to be located in temple antechambers, where the bowl or disc gripped by the figure served to receive pulque as an offering.
The chacmools at Chichen Itza were found in a combination of chacmool, throne and serpent column; this chacmool-throne-serpent complex was associated with rulership during the Early Postclassic period. The original chacmool described by Le Plongeon in the 19th century included small images of the central Mexican deity Tlaloc on its ear ornaments. Among the Classic period Maya, such Tlaloc imagery was associated with war and human sacrifice. Associations between the rain god, war and human sacrifice may have continued into the Postclassic period as demonstrated by the chacmool within the Castillo at Chichen Itza, which bears small images of the Maya rain god Chaac on its ear ornaments. The chacmools at Tula, with contextual similarity to those at Chichen Itza, probably also represent war captives.
The lack of the representation of chacmools in Central Mexican codices has led to them being associated with a great variety of deities by scholars, including Cinteotl, Tezcatzoncatl and Tlaloc. Both of the chacmools from the Great Temple of Tenochtitlan were clearly associated with Tlaloc. The chacmool found two blocks south of the temple was sculpted with three images of the deity. These included an elaborate relief image of Tlaloc amongst aquatic symbols on the underside, one on the bowl that the figure grips and the last is the Tlaloc mask with characteristic goggles and fangs that is worn by the chacmool.
The fully polychrome chacmool found in situ in the Great Temple was associated with Tlaloc by its placement on the Tlaloc half of the double pyramid. A further Aztec chacmool was described in the 19th century; it is of uncertain origin but stylistically it is typical of Tenochtitlan. It is sculpted on the underside with aquatic imagery and the figure wears a goggle-and-fangs Tlaloc mask. Spanish observers reported the great quantity of human sacrifices during important ceremonies at the Great Temple and the chacmool was probably used during these rituals to symbolise the sacrificed captives as well as receive their blood.
The discs gripped by some chacmools may represent a mirror. Chacmools were placed in entrances in order to receive sacrificial offerings, including human blood and hearts. The aquatic imagery carved onto the underside of some of the figures symbolised that they were floating on water, on the frontier between the physical world and the supernatural realm. This suggests that chacmools acted as messengers between the mortal realm and that of the gods.
Costa Rican chacmools gripped sculpted bowls; these chacmools also served ceremonial purposes although the bowl was used to grind foodstuffs.
In contemporary culture
The short story "Chac Mool" by Mexican novelist Carlos Fuentes is found in his book Los días enmascarados (The Masked Days), published in 1954. A man named Filiberto buys a chacmool for his art collection, and discovers that the stone is slowly becoming flesh. The idol eventually becomes fully human, dominating his life, causing flooding and other disasters. Filiberto dies by drowning. His story is found in a diary describing the terrors brought on by the idol, and his plans to escape. According to the author, this short story was inspired by news reports from 1952 when the lending of a representation of the Maya rain deity to a Mexican exhibition in Europe had coincided with wet weather there. The short story was included in the 2008 anthology Sun, Stone, and Shadows.
In Henry Moore's early examples of monumental reclining figures, the artist relied on the cast of a chacmool sculpture he saw in Paris. Commenting on the major impact chacmool sculpted figures had on his early career, Moore stated that "Its stillness and alertness, a sense of readiness – and the whole presence of it, and the legs coming down like columns" were characteristics that inspired his creations.
References
General references
Desmond, Lawrence G. "Chacmool." In Davíd Carrasco (ed.), The Oxford Encyclopedia of Mesoamerican Cultures. Oxford University Press, 2001.
Further reading
External links
"Chacmool," by Lawrence G. Desmond, Peabody Museum, Harvard University
Maya civilization
Religious objects
Stone sculptures
Indigenous sculpture of the Americas
Aztec artifacts
Mesoamerican stone sculptures
User interface (https://en.wikipedia.org/wiki/User%20interface)

In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.
Generally, the goal of user interface design is to produce a user interface that makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result (i.e. maximum usability). This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the user.
User interfaces are composed of one or more layers, including a human–machine interface (HMI) that typically interfaces machines with physical input hardware (such as keyboards, mice, or game pads) and output hardware (such as computer monitors, speakers, and printers). A device that implements an HMI is called a human interface device (HID). User interfaces that dispense with the physical movement of body parts as an intermediary step between the brain and the machine use no input or output devices except electrodes alone; they are called brain–computer interfaces (BCIs) or brain–machine interfaces (BMIs).
Other terms for human–machine interfaces are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. Additional UI layers may interact with one or more human senses, including: tactile UI (touch), visual UI (sight), auditory UI (sound), olfactory UI (smell), equilibria UI (balance), and gustatory UI (taste).
Composite user interfaces (CUIs) are UIs that interact with two or more senses. The most common CUI is a graphical user interface (GUI), which is composed of a tactile UI and a visual UI capable of displaying graphics. When sound is added to a GUI, it becomes a multimedia user interface (MUI). There are three broad categories of CUI: standard, virtual and augmented. Standard CUI use standard human interface devices like keyboards, mice, and computer monitors. When the CUI blocks out the real world to create a virtual reality, the CUI is virtual and uses a virtual reality interface. When the CUI does not block out the real world and creates augmented reality, the CUI is augmented and uses an augmented reality interface. When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia. CUI may also be classified by how many senses they interact with as either an X-sense virtual reality interface or X-sense augmented reality interface, where X is the number of senses interfaced with. For example, a Smell-O-Vision is a 3-sense (3S) Standard CUI with visual display, sound and smells; when virtual reality interfaces interface with smells and touch it is said to be a 4-sense (4S) virtual reality interface; and when augmented reality interfaces interface with smells and touch it is said to be a 4-sense (4S) augmented reality interface.
Overview
The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical part of the Human Machine Interface which we can see and touch.
In complex systems, the human–machine interface is typically computerized. The term human–computer interface refers to this kind of system. In the context of computing, the term typically extends as well to the software dedicated to control the physical elements used for human–computer interaction.
The engineering of human–machine interfaces is enhanced by considering ergonomics (human factors). The corresponding disciplines are human factors engineering (HFE) and usability engineering (UE) which is part of systems engineering.
Tools used for incorporating human factors in the interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, programming languages. Nowadays, we use the expression graphical user interface for human–machine interface on computers, as nearly all of them are now using graphics.
Multimodal interfaces allow users to interact using more than one modality of user input.
Terminology
There is a difference between a user interface and an operator interface or a human–machine interface (HMI).
The term "user interface" is often used in the context of (personal) computer systems and electronic devices, as well as where a network of equipment or computers is interlinked through an MES (manufacturing execution system) or host to display information.
A human–machine interface (HMI) is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment/machine. An operator interface is the interface method by which multiple pieces of equipment, linked by a host control system, are accessed or controlled.
The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons (limited set of functions, optimized for ease of use) and the other for library personnel (wide set of functions, optimized for efficiency).
The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface (HMI). HMI is a modification of the original term MMI (man–machine interface). In practice, the abbreviation MMI is still frequently used although some may claim that MMI stands for something different now. Another abbreviation is HCI, but it is more commonly used for human–computer interaction. Other terms used are operator interface console (OIC) and operator interface terminal (OIT). However it is abbreviated, the terms refer to the 'layer' that separates a human that is operating a machine from the machine itself. Without a clean and usable interface, humans would not be able to interact with information systems.
In science fiction, HMI is sometimes used to refer to what is better described as a direct neural interface. However, this latter usage is seeing increasing application in the real-life use of (medical) prostheses—the artificial extension that replaces a missing body part (e.g., cochlear implants).
In some circumstances, computers might observe the user and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces.
History
The history of user interfaces can be divided into the following phases according to the dominant type of user interface:
1945–1968: Batch interface
In the batch era, computing power was extremely scarce and expensive. User interfaces were rudimentary. Users had to accommodate computers rather than the other way around; user interfaces were considered overhead, and software was designed to keep the processor at maximum utilization with as little overhead as possible.
The input side of the user interfaces for batch machines was mainly punched cards or equivalent media like paper tape. The output side added line printers to these media. With the limited exception of the system operator's console, human beings did not interact with batch machines in real time at all.
Submitting a job to a batch machine involved first preparing a deck of punched cards that described a program and its dataset. The program cards were not punched on the computer itself but on keypunches, specialized, typewriter-like machines that were notoriously bulky, unforgiving, and prone to mechanical failure. The software interface was similarly unforgiving, with very strict syntaxes designed to be parsed by the smallest possible compilers and interpreters.
Once the cards were punched, one would drop them in a job queue and wait. Eventually, operators would feed the deck to the computer, perhaps mounting magnetic tapes to supply another dataset or helper software. The job would generate a printout, containing final results or an abort notice with an attached error log. Successful runs might also write a result on magnetic tape or generate some data cards to be used in a later computation.
The turnaround time for a single job often spanned entire days. If one was very lucky, it might be hours; there was no real-time response. But there were worse fates than the card queue; some computers required an even more tedious and error-prone process of toggling in programs in binary code using console switches. The very earliest machines had to be partly rewired to incorporate program logic into themselves, using devices known as plugboards.
Early batch systems gave the currently running job the entire computer; program decks and tapes had to include what we would now think of as operating system code to talk to I/O devices and do whatever other housekeeping was needed. Midway through the batch period, after 1957, various groups began to experiment with so-called "load-and-go" systems. These used a monitor program which was always resident on the computer. Programs could call the monitor for services. Another function of the monitor was to do better error checking on submitted jobs, catching errors earlier and more intelligently and generating more useful feedback to the users. Thus, monitors represented the first step towards both operating systems and explicitly designed user interfaces.
1969–present: Command-line user interface
Command-line interfaces (CLIs) evolved from batch monitors connected to the system console. Their interaction model was a series of request-response transactions, with requests expressed as textual commands in a specialized vocabulary. Latency was far lower than for batch systems, dropping from days or hours to seconds. Accordingly, command-line systems allowed the user to change their mind about later stages of the transaction in response to real-time or near-real-time feedback on earlier results. Software could be exploratory and interactive in ways not possible before. But these interfaces still placed a relatively heavy mnemonic load on the user, requiring a serious investment of effort and learning time to master.
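As a minimal sketch of that interaction model — a loop of textual request-response transactions against a specialized vocabulary — consider the following; the command names and handlers are illustrative, not taken from any particular historical system:

```python
# Minimal sketch of the command-line interaction model: each typed request
# is parsed against a small command vocabulary and answered immediately.
def run_cli(commands, read_line, write_line):
    """Run request-response transactions until the user quits."""
    while True:
        line = read_line("> ")
        if line is None or line.strip() == "quit":
            break
        verb, _, arg = line.strip().partition(" ")
        handler = commands.get(verb)
        if handler is None:
            # Near-real-time feedback lets the user correct course at once,
            # something batch systems could not offer.
            write_line(f"unknown command: {verb}")
        else:
            write_line(handler(arg))

# Illustrative vocabulary; a real shell would dispatch to programs instead.
COMMANDS = {"echo": lambda arg: arg, "upper": lambda arg: arg.upper()}
```

Hooking `read_line` to `input` and `write_line` to `print` yields an interactive session; the same loop can also be driven by scripted input.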
The earliest command-line systems combined teleprinters with computers, adapting a mature technology that had proven effective for mediating the transfer of information over wires between human beings. Teleprinters had originally been invented as devices for automatic telegraph transmission and reception; they had a history going back to 1902 and had already become well-established in newsrooms and elsewhere by 1920. In reusing them, economy was certainly a consideration, but psychology and the rule of least surprise mattered as well; teleprinters provided a point of interface with the system that was familiar to many engineers and users.
The widespread adoption of video-display terminals (VDTs) in the mid-1970s ushered in the second phase of command-line systems. These cut latency further, because characters could be thrown on the phosphor dots of a screen more quickly than a printer head or carriage could move. They helped quell conservative resistance to interactive programming by cutting ink and paper consumables out of the cost picture, and were to the first TV generation of the late 1950s and 60s even more iconic and comfortable than teleprinters had been to the computer pioneers of the 1940s.
Just as importantly, the existence of an accessible screen—a two-dimensional display of text that could be rapidly and reversibly modified—made it economical for software designers to deploy interfaces that could be described as visual rather than textual. The pioneering applications of this kind were computer games and text editors; close descendants of some of the earliest specimens, such as rogue(6), and vi(1), are still a live part of Unix tradition.
1985: SAA user interface or text-based user interface
In 1985, with the beginning of Microsoft Windows and other graphical user interfaces, IBM created what is called the Systems Application Architecture (SAA) standard, which includes the Common User Access (CUA) derivative. CUA successfully created what we know and use today in Windows, and most of the more recent DOS or Windows console applications use that standard as well.
This defined that a pulldown menu system should be at the top of the screen, a status bar at the bottom, and shortcut keys should stay the same for all common functionality (F2 to Open, for example, would work in all applications that followed the SAA standard). This greatly helped the speed at which users could learn an application, so it caught on quickly and became an industry standard.
1968–present: Graphical user interface
1968 – Douglas Engelbart demonstrated NLS, a system which uses a mouse, pointers, hypertext, and multiple windows.
1970 – Researchers at Xerox Palo Alto Research Center (many from SRI) develop WIMP paradigm (Windows, Icons, Menus, Pointers)
1973 – Xerox Alto: commercial failure due to expense, poor user interface, and lack of programs
1979 – Steve Jobs and other Apple engineers visit Xerox PARC. Though Pirates of Silicon Valley dramatizes the events, Apple had already been working on developing a GUI, such as the Macintosh and Lisa projects, before the visit.
1981 – Xerox Star: focus on WYSIWYG. Commercial failure (25K sold) due to cost ($16K each), performance (minutes to save a file, couple of hours to recover from crash), and poor marketing
1982 – Rob Pike and others at Bell Labs designed Blit, which was released in 1984 by AT&T and Teletype as DMD 5620 terminal.
1984 – Apple Macintosh popularizes the GUI. Super Bowl commercial shown twice, was the most expensive commercial ever made at that time
1984 – MIT's X Window System: hardware-independent platform and networking protocol for developing GUIs on UNIX-like systems
1985 – Windows 1.0 – provided GUI interface to MS-DOS. No overlapping windows (tiled instead).
1985 – Microsoft and IBM start work on OS/2 meant to eventually replace MS-DOS and Windows
1986 – Apple threatens to sue Digital Research because their GUI desktop looked too much like Apple's Mac.
1987 – Windows 2.0 – Overlapping and resizable windows, keyboard and mouse enhancements
1987 – Macintosh II: first full-color Mac
1988 – OS/2 1.10 Standard Edition (SE) has GUI written by Microsoft, looks a lot like Windows 2
Interface design
Primary methods used in the interface design include prototyping and simulation.
Typical human–machine interface design consists of the following stages: interaction specification, interface software specification and prototyping:
Common practices for interaction specification include user-centered design, persona, activity-oriented design, scenario-based design, and resiliency design.
Common practices for interface software specification include use cases and constraint enforcement by interaction protocols (intended to avoid use errors).
Common practices for prototyping are based on libraries of interface elements (controls, decoration, etc.).
Principles of quality
In broad terms, interfaces generally regarded as user friendly, efficient, intuitive, etc. are typified by one or more particular qualities. For the purpose of example, a non-exhaustive list of such characteristics follows:
Clarity: The interface avoids ambiguity by making everything clear through language, flow, hierarchy and metaphors for visual elements.
Concision: Clarity can, ironically, be overdone. Labelling the majority of items displayed on screen, whether or not the user actually needs a visual indicator to identify a given item, clutters the interface and ends up obscuring the very information it is meant to convey; a good interface is clear without being verbose.
Familiarity: Even if someone uses an interface for the first time, certain elements can still be familiar. Real-life metaphors can be used to communicate meaning.
Responsiveness: A good interface should not feel sluggish. This means that the interface should provide good feedback to the user about what's happening and whether the user's input is being successfully processed.
Consistency: Keeping your interface consistent across your application is important because it allows users to recognize usage patterns.
Aesthetics: While you do not need to make an interface attractive for it to do its job, making something look good will make the time your users spend using your application more enjoyable; and happier users can only be a good thing.
Efficiency: Time is money, and a great interface should make the user more productive through shortcuts and good design.
Forgiveness: A good interface should not punish users for their mistakes but should instead provide the means to remedy them.
Principle of least astonishment
The principle of least astonishment (POLA) is a general principle in the design of all kinds of interfaces. It is based on the idea that human beings can only pay full attention to one thing at one time, leading to the conclusion that novelty should be minimized.
Principle of habit formation
If an interface is used persistently, the user will unavoidably develop habits for using the interface. The designer's role can thus be characterized as ensuring the user forms good habits. If the designer is experienced with other interfaces, they will similarly develop habits, and often make unconscious assumptions regarding how the user will interact with the interface.
A model of design criteria: User Experience Honeycomb
Peter Morville designed the User Experience Honeycomb framework in 2004 to guide user interface design, and it served as a guideline for many web development students for a decade.
Usable: Is the design of the system easy and simple to use? The application should feel familiar, and it should be easy to use.
Useful: Does the application fulfill a need? A business's product or service needs to be useful.
Desirable: Is the design of the application sleek and to the point? The aesthetics of the system should be attractive and uncluttered.
Findable: Are users able to quickly find the information they are looking for? Information needs to be findable and simple to navigate. A user should never have to hunt for your product or information.
Accessible: Does the application support enlarged text without breaking the framework? An application should be accessible to those with disabilities.
Credible: Does the application exhibit trustworthy security and company details? An application should be transparent, secure, and honest.
Valuable: Does the end-user think it's valuable? If all 6 criteria are met, the end-user will find value and trust in the application.
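The dependency stated in the last facet — value follows when the other six criteria are met — can be sketched as a simple checklist. The facet names follow the list above; the evaluation logic is an illustrative reading, not part of Morville's framework:

```python
# The six honeycomb facets other than "valuable", per the list above.
FACETS = ("usable", "useful", "desirable", "findable", "accessible", "credible")

def is_valuable(assessment):
    """Illustrative reading: a design is valuable when every facet holds.

    `assessment` maps facet names to True/False; a missing facet counts
    as unmet.
    """
    return all(assessment.get(facet, False) for facet in FACETS)
```

A review that satisfies all six facets is judged valuable; failing any single facet — or omitting one from the assessment — withholds that judgment.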
Types
Attentive user interfaces manage the user's attention, deciding when to interrupt the user, the kind of warnings, and the level of detail of the messages presented to the user.
Batch interfaces are non-interactive user interfaces, where the user specifies all the details of the batch job in advance of batch processing, and receives the output when all the processing is done. The computer does not prompt for further input after the processing has started.
Command line interfaces (CLIs) prompt the user to provide input by typing a command string with the computer keyboard and respond by outputting text to the computer monitor. Used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users.
Conversational interfaces enable users to command the computer with plain text English (e.g., via text messages, or chatbots) or voice commands, instead of graphic elements. These interfaces often emulate human-to-human conversations.
Conversational interface agents attempt to personify the computer interface in the form of an animated person, robot, or other character (such as Microsoft's Clippy the paperclip), and present interactions in a conversational form.
Crossing-based interfaces are graphical user interfaces in which the primary task consists in crossing boundaries instead of pointing.
Direct manipulation interface is a general class of user interfaces that allow users to manipulate objects presented to them, using actions that correspond to the physical world, at least loosely.
Gesture interfaces are graphical user interfaces which accept input in a form of hand gestures, or mouse gestures sketched with a computer mouse or a stylus.
Graphical user interfaces (GUI) accept input via devices such as a computer keyboard and mouse and provide articulated graphical output on the computer monitor. There are at least two different principles widely used in GUI design: Object-oriented user interfaces (OOUIs) and application-oriented interfaces.
Hardware interfaces are the physical, spatial interfaces found on products in the real world from toasters, to car dashboards, to airplane cockpits. They are generally a mixture of knobs, buttons, sliders, switches, and touchscreens.
Holographic user interfaces provide input to electronic or electro-mechanical devices by passing a finger through reproduced holographic images of what would otherwise be tactile controls of those devices, floating freely in the air, detected by a wave source and without tactile interaction.
Intelligent user interfaces are human–machine interfaces that aim to improve the efficiency, effectiveness, and naturalness of human–machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture).
Motion tracking interfaces monitor the user's body motions and translate them into commands, currently being developed by Apple.
Multi-screen interfaces employ multiple displays to provide more flexible interaction. This is often employed in computer game interaction, both in commercial arcades and, more recently, in the handheld market.
Natural-language interfaces are used for search engines and on webpages. The user types in a question and waits for a response.
Non-command user interfaces, which observe the user to infer their needs and intentions, without requiring that they formulate explicit commands.
Object-oriented user interfaces (OOUI) are based on object-oriented programming metaphors, allowing users to manipulate simulated objects and their properties.
Permission-driven user interfaces show or conceal menu options or functions depending on the user's level of permissions. The system is intended to improve the user experience by removing items that are unavailable to the user. A user who sees functions that are unavailable for use may become frustrated. It also provides an enhancement to security by hiding functional items from unauthorized persons.
Reflexive user interfaces where the users control and redefine the entire system via the user interface alone, for instance to change its command verbs. Typically, this is only possible with very rich graphic user interfaces.
Search interface is how the search box of a site is displayed, as well as the visual representation of the search results.
Tangible user interfaces, which place a greater emphasis on touch and the physical environment or its elements.
Task-focused interfaces are user interfaces which address the information overload problem of the desktop metaphor by making tasks, not files, the primary unit of interaction.
Text-based user interfaces (TUIs) are user interfaces which interact via text. TUIs include command-line interfaces and text-based WIMP environments.
Touchscreens are displays that accept input by touch of fingers or a stylus. Used in a growing number of mobile devices and many types of point of sale, industrial processes and machines, self-service machines, etc.
Touch user interfaces are graphical user interfaces using a touchpad or touchscreen display as a combined input and output device. They supplement or replace other forms of output with haptic feedback methods. Used in computerized simulators, etc.
Voice user interfaces, which accept input and provide output by generating voice prompts. The user input is made by pressing keys or buttons, or responding verbally to the interface.
Web-based user interfaces or web user interfaces (WUI) that accept input and provide output by generating web pages viewed by the user using a web browser program.
Zero-input interfaces get inputs from a set of sensors instead of querying the user with input dialogs.
Zooming user interfaces are graphical user interfaces in which information objects are represented at different levels of scale and detail, and where the user can change the scale of the viewed area in order to show more detail.
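To make one of these types concrete, the menu filtering behind a permission-driven user interface can be sketched in a few lines. The menu entries and permission names below are hypothetical:

```python
# Hypothetical menu: (label, permission required to see the entry).
MENU = [
    ("View records", "read"),
    ("Edit records", "write"),
    ("Delete records", "admin"),
]

def visible_menu(user_permissions):
    """Show only the entries the user may use, concealing the rest."""
    return [label for label, needed in MENU if needed in user_permissions]
```

A user holding only the `read` permission would see just "View records"; the unavailable functions are concealed rather than shown disabled, which both avoids frustration and hides functionality from unauthorized users.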
See also
Adaptive user interfaces
Brain–computer interface
Computer user satisfaction
Direct voice input
Distinguishable interfaces
Ergonomics and human factors – the study of designing objects to be better adapted to the shape of the human body
Flat design
History of the GUI
Icon design
Information architecture – organizing, naming, and labelling information structures
Information visualization – the use of sensory representations of abstract data to reinforce cognition
Interaction design
Interaction technique
Kinetic user interface
Knowledge visualization – the use of visual representations to transfer knowledge
Multiple frames interface
Natural user interfaces
Organic user interface
Post-WIMP
Tangible user interface
Unified Code for Units of Measure
Usability links
User assistance
User experience
User experience design
User interface design
User interface specification
Useware
Virtual artifact
Virtual user interface
References
External links
Conference series – covering a wide area of user interface publications
Chapter 2. History: A brief history of user interfaces
User interface techniques
Virtual reality
Human communication
Human–machine interaction

Biosafety is the prevention of large-scale loss of biological integrity, focusing both on ecology and human health.
These prevention mechanisms include the conduction of regular reviews of biosafety in laboratory settings, as well as strict guidelines to follow. Biosafety is used to protect from harmful incidents. Many laboratories handling biohazards employ an ongoing risk management assessment and enforcement process for biosafety. Failures to follow such protocols can lead to increased risk of exposure to biohazards or pathogens. Human error and poor technique contribute to unnecessary exposure and compromise the best safeguards set into place for protection.
The international Cartagena Protocol on Biosafety deals primarily with the agricultural definition but many advocacy groups seek to expand it to include post-genetic threats: new molecules, artificial life forms, and even robots which may compete directly in the natural food chain.
Biosafety in agriculture, chemistry, medicine, exobiology and beyond will likely require the application of the precautionary principle, and a new definition focused on the biological nature of the threatened organism rather than the nature of the threat.
When biological warfare or new, currently hypothetical, threats (i.e., robots, new artificial bacteria) are considered, biosafety precautions are generally not sufficient. The new field of biosecurity addresses these complex threats.
Biosafety level refers to the stringency of biocontainment precautions deemed necessary by the Centers for Disease Control and Prevention (CDC) for laboratory work with infectious materials.
Typically, institutions that experiment with or create potentially harmful biological material will have a committee or board of supervisors that is in charge of the institution's biosafety. They create and monitor the biosafety standards that must be met by labs in order to prevent the accidental release of potentially destructive biological material. (Note that in the US, several groups are involved, and efforts are being made to improve processes for government-run labs, but there is no unifying regulatory authority for all labs.)
Biosafety is related to several fields:
In ecology (referring to imported life forms from beyond ecoregion borders),
In agriculture (reducing the risk of alien viral or transgenic genes, genetic engineering or prions such as BSE/"MadCow", reducing the risk of food bacterial contamination)
In medicine (referring to organs or tissues from biological origin, or genetic therapy products, virus; levels of lab containment protocols measured as 1, 2, 3, 4 in rising order of danger),
In chemistry (i.e., nitrates in water, PCB levels affecting fertility)
In exobiology (i.e., NASA's policy for containing alien microbes that may exist on space samples. See planetary protection and interplanetary contamination), and
In synthetic biology (referring to the risks associated with this type of lab practice)
Hazards
Chemical hazards typically found in laboratory settings include carcinogens, toxins, irritants, corrosives, and sensitizers. Biological hazards include viruses, bacteria, fungi, prions, and biologically derived toxins, which may be present in body fluids and tissue, cell culture specimens, and laboratory animals. Routes of exposure for chemical and biological hazards include inhalation, ingestion, skin contact, and eye contact.
Physical hazards include ergonomic hazards, ionizing and non-ionizing radiation, and noise hazards. Additional safety hazards include burns and cuts from autoclaves, injuries from centrifuges, compressed gas leaks, cold burns from cryogens, electrical hazards, fires, injuries from machinery, and falls.
In synthetic biology
A complete understanding of the experimental risks associated with synthetic biology helps reinforce the knowledge and effectiveness of biosafety.
With the potential future creation of man-made unicellular organisms, some are beginning to consider the effect that these organisms will have on biomass already present. Scientists estimate that within the next few decades, organism design will be sophisticated enough to accomplish tasks such as creating biofuels and lowering the levels of harmful substances in the atmosphere. Scientists who favor the development of synthetic biology claim that the use of biosafety mechanisms such as suicide genes and nutrient dependencies will ensure the organisms cannot survive outside of the lab setting in which they were originally created. Organizations like the ETC Group argue that regulations should control the creation of organisms that could potentially harm existing life. They also argue that the development of these organisms will simply shift the consumption of petroleum to the utilization of biomass in order to create energy. These organisms can harm existing life by affecting the prey/predator food chain, reproduction between species, as well as competition against other species (species at risk, or acting as an invasive species).
Synthetic vaccines are now being produced in the lab. These have caused a lot of excitement in the pharmaceutical industry, as they are cheaper to produce, allow quicker production, and enhance the knowledge of virology and immunology.
In medicine, healthcare settings and laboratories
Biosafety, in medicine and health care settings, specifically refers to proper handling of organs or tissues from biological origin, or genetic therapy products, viruses with respect to the environment, to ensure the safety of health care workers, researchers, lab staff, patients, and the general public. Laboratories are assigned a biosafety level numbered 1 through 4 based on their potential biohazard risk level. The employing authority, through the laboratory director, is responsible for ensuring that there is adequate surveillance of the health of laboratory personnel. The objective of such surveillance is to monitor for occupationally acquired diseases. The World Health Organization attributes human error and poor technique as the primary cause of mishandling of biohazardous materials.
Biosafety is also becoming a global concern and requires multilevel resources and international collaboration to monitor, prevent and correct accidents from unintended and malicious release, and also to prevent bioterrorists from getting their hands on biological samples to create biological weapons of mass destruction. Even people outside the health sector need to be involved: in the case of the Ebola outbreak, the impact on business and travel led private sectors and international banks to pledge more than $2 billion to combat the epidemic. The Bureau of International Security and Nonproliferation (ISN) is responsible for managing a broad range of U.S. nonproliferation policies, programs, agreements, and initiatives, and biological weapons are one of its concerns.
Biosafety has its risks and benefits. All stakeholders must try to find a balance between cost-effectiveness of safety measures and use evidence-based safety practices and recommendations, measure the outcomes and consistently reevaluate the potential benefits that biosafety represents for human health.
Biosafety level designations are based on a composite of the design features, construction, containment facilities, equipment, practices and operational procedures required for working with agents from the various risk groups.
Classification of biohazardous materials is subjective and the risk assessment is determined by the individuals most familiar with the specific characteristics of the organism. There are several factors taken into account when assessing an organism and the classification process.
Risk Group 1: (no or low individual and community risk) A microorganism that is unlikely to cause human or animal disease.
Risk Group 2: (moderate individual risk, low community risk) A pathogen that can cause human or animal disease but is unlikely to be a serious hazard to laboratory workers, the community, livestock or the environment. Laboratory exposures may cause serious infection, but effective treatment and preventive measures are available and the risk of spread of infection is limited.
Risk Group 3: (high individual risk, low community risk) A pathogen that usually causes serious human or animal disease but does not ordinarily spread from one infected individual to another. Effective treatment and preventive measures are available.
Risk Group 4: (high individual and community risk) A pathogen that usually causes serious human or animal disease and that can be readily transmitted from one individual to another, directly or indirectly. Effective treatment and preventive measures are not usually available.
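The four risk groups can be encoded as a small lookup table, for example in a lab inventory tool. The field names and the agent assignment below are illustrative, not an official schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskGroup:
    # Illustrative fields summarizing the risk-group definitions above.
    individual_risk: str
    community_risk: str
    treatment_available: bool

RISK_GROUPS = {
    1: RiskGroup("none/low", "none/low", True),
    2: RiskGroup("moderate", "low", True),
    3: RiskGroup("high", "low", True),
    4: RiskGroup("high", "high", False),
}

def risk_profile(agent, assignments):
    """Look up an agent's risk group; `assignments` maps agent -> group number."""
    return RISK_GROUPS[assignments[agent]]
```

The assignment of a specific organism to a group remains a judgment call by those most familiar with it, as noted above; a table like this only records the outcome of that assessment.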
See the World Health Organization Biosafety Laboratory Guidelines (4th edition, 2020).
Investigations have shown that there are hundreds of unreported biosafety accidents, with laboratories self-policing the handling of biohazardous materials and lack of reporting. Poor record keeping, improper disposal, and mishandling biohazardous materials result in increased risks of biochemical contamination for both the public and environment.
Along with the precautions taken during the handling process of biohazardous materials, the World Health Organization recommends:
Staff training should always include information on safe methods for highly hazardous procedures that are commonly encountered by all laboratory personnel, and which involve:
Inhalation risks (i.e. aerosol production) when using loops, streaking agar plates, pipetting, making smears, opening cultures, taking blood/serum samples, centrifuging, etc.
Ingestion risks when handling specimens, smears and cultures
Risks of percutaneous exposures when using syringes and needles
Bites and scratches when handling animals
Handling of blood and other potentially hazardous pathological materials
Decontamination and disposal of infectious material.
Biosafety management in laboratory
First, the laboratory director, who holds immediate responsibility for the laboratory, is tasked with ensuring the development and adoption of a biosafety management plan as well as a safety or operations manual. Second, the laboratory supervisor, who reports to the laboratory director, is responsible for organizing regular training sessions on laboratory safety.
Third, the personnel must be informed about any special hazards and be required to review the safety or operations manual and adhere to established practices and procedures. The laboratory supervisor is responsible for ensuring that all personnel have a clear understanding of these guidelines, and a copy of the safety or operations manual should be readily available within the laboratory. Finally, adequate medical assessment, monitoring, and treatment must be made available to all personnel when needed, and comprehensive medical records should be maintained.
Policy and practice in the United States
Legal information
In June 2009, the Trans-Federal Task Force on Optimizing Biosafety and Biocontainment Oversight recommended the formation of an agency to coordinate high safety risk level labs (3 and 4), and voluntary, non-punitive measures for incident reporting. However, it is unclear as to what changes may or may not have been implemented following their recommendations.
United States Code of Federal Regulations
The United States Code of Federal Regulations is the codification (law), or collection of laws, specific to a jurisdiction that represents broad areas subject to federal regulation. Title 42 of the Code of Federal Regulations addresses laws concerning public health issues, including biosafety, which can be found under the citations 42 CFR 73 to 42 CFR 73.21 on the US Code of Federal Regulations (CFR) website.
Title 42 Section 73 of the CFR addresses specific aspects of biosafety, including occupational safety and health, transportation of biohazardous materials, and safety plans for laboratories using potential biohazards. While biocontainment is defined in the Biosafety in Microbiological and Biomedical Laboratories and Primary Containment for Biohazards: Selection, Installation and Use of Biosafety Cabinets manuals, available at the Centers for Disease Control and Prevention website, much of the design, implementation and monitoring of protocols is left up to state and local authorities.
The United States CFR states "An individual or entity required to register [as a user of biological agents] must develop and implement a written biosafety plan that is commensurate with the risk of the select agent or toxin" which is followed by three recommended sources for laboratory reference:
The CDC/NIH publication, "Biosafety in Microbiological and Biomedical Laboratories."
The Occupational Safety and Health Administration (OSHA) regulations in 29 CFR parts 1910.1200 and 1910.1450.
The "NIH Guidelines for Research Involving Recombinant DNA Molecules" (NIH Guidelines).
While clearly the needs of biocontainment and biosafety measures vary across government, academic and private industry laboratories, biological agents pose similar risks independent of their locale. Laws relating to biosafety are not easily accessible and there are few federal regulations that are readily available for a potential trainee to reference outside of the publications recommended in 42 CFR 73.12. Therefore, training is the responsibility of lab employers and is not consistent across various laboratory types, thereby increasing the risk of accidental release of biological hazards that pose serious health threats to humans, animals and the ecosystem as a whole.
Agency guidance
Many government agencies have made guidelines and recommendations in an effort to increase biosafety measures across laboratories in the United States. Agencies involved in producing policies surrounding biosafety within a hospital, pharmacy or clinical research laboratory include: the CDC, FDA, USDA, DHHS, DoT, EPA and potentially other local organizations including public health departments. The federal government does set some standards and recommendations for states to meet, most of which fall under the Occupational Safety and Health Act of 1970, but currently there is no single federal regulating agency directly responsible for ensuring the safety of biohazardous handling, storage, identification, clean-up and disposal. In addition to the CDC, the Environmental Protection Agency has some of the most accessible information on ecological impacts of biohazards, how to handle spills, reporting guidelines and proper disposal of agents dangerous to the environment. Many of these agencies have their own manuals and guidance documents relating to training and certain aspects of biosafety directly tied to their agency's scope, including transportation, storage and handling of blood borne pathogens (OSHA, IATA). The American Biological Safety Association (ABSA) has a list of such agencies and links to their websites, along with links to publications and guidance documents to assist in risk assessment, lab design and adherence to laboratory exposure control plans. Many of these agencies were members of the 2009 Task Force on BioSafety. There was also a formation of a Blue Ribbon Study Panel on Biodefense, but this is more concerned with national defense programs and biosecurity.
Ultimately states and local governments, as well as private industry labs, are left to make the final determinations for their own biosafety programs, which vary widely in scope and enforcement across the United States. Not all state programs address biosafety from all necessary perspectives, which should not just include personal safety, but also emphasize a full understanding among laboratory personnel of quality control and assurance, exposure potential, impacts on the environment, and general public safety.
Toby Ord puts into question whether the current international conventions regarding biotechnology research and development regulation, and self-regulation by biotechnology companies and the scientific community are adequate.
State occupational safety plans are often focused on transportation, disposal, and risk assessment, allowing caveats for safety audits, but ultimately leave the training in the hands of the employer. 22 states have Occupational Safety plans approved by OSHA that are audited annually for effectiveness. These plans apply to private and public sector workers, and not necessarily state/government workers, and not all specifically have a comprehensive program for all aspects of biohazard management from start to finish. Sometimes biohazard management plans are limited only to workers in transportation-specific job titles. The enforcement and training on such regulations can vary from lab to lab based on the state's plans for occupational health and safety. With the exception of DoD lab personnel, CDC lab personnel, first responders, and DoT employees, enforcement of training is inconsistent, and while training is required to be done, specifics on the breadth and frequency of refresher training do not seem consistent from state to state; penalties may never be assessed without larger regulating bodies being aware of non-compliance, and enforcement is limited.
Medical waste management in the United States
Medical waste management was identified as an issue in the 1980s, with the Medical Waste Tracking Act of 1988 becoming the new standard in biohazard waste disposal.
Although the Federal Government, EPA and DOT provide some oversight of regulated medical waste storage, transportation, and disposal, the majority of biohazard medical waste is regulated at the state level. Each state is responsible for the regulation and management of its own biohazardous waste, with each state varying in its regulatory process. Record keeping of biohazardous waste also varies between states.
Medical healthcare centers, hospitals, veterinary clinics, clinical laboratories and other facilities generate over one million tons of waste each year. Although the majority of this waste is as harmless as common household waste, as much as 15 percent of this waste poses a potential infection hazard, according to the Environmental Protection Agency (EPA). Medical waste is required to be rendered non-infectious before it can be disposed of. There are several different methods to treat and dispose of biohazardous waste. In the United States, the primary methods for treatment and disposal of biohazard, medical and sharps waste may include:
Incineration
Microwave
Autoclaves
Mechanical/Chemical Disinfection
Irradiation
Different forms of biohazardous waste require different treatments for proper waste management. This is determined largely by each state's regulations.
Incidents of non-compliance and reform efforts
The United States Government has made it clear that biosafety is to be taken very seriously. In 2014, incidents with anthrax and Ebola pathogens in CDC laboratories prompted the CDC director Tom Frieden to issue a moratorium for research with these types of select agents. An investigation concluded that there was a lack of adherence to safety protocols and "inadequate safeguards" in place. This indicated a lack of proper training, or reinforcement of training and supervision on a regular basis, for lab personnel.
Following these incidents, the CDC established an External Laboratory Safety Workgroup (ELSW), and suggestions have been made to reform effectiveness of the Federal Select Agent Program. The White House issued a report on national biosafety priorities in 2015, outlining next steps for a national biosafety and security program, and addressed biological safety needs for health research, national defense, and public safety.
In 2016, the Association of Public Health Laboratories (APHL) had a presentation at their annual meeting focused on improving biosafety culture. This same year, The UPMC Center for Health Security issued a case study report including reviews of ten different nations' current biosafety regulations, including the United States. Their goal was to "provide a foundation for identifying national-level biosafety norms and enable initial assessment of biosafety priorities necessary for developing effective national biosafety regulation and oversight."
See also
Biological hazard
Cartagena Protocol on Biosafety
Centers for Disease Control
European BioSafety Association
Interplanetary contamination
Quarantine
References
External links
WHO Biosafety Manual
CDC Biosafety pages
International Centre for Genetic Engineering and Biotechnology (ICGEB): Biosafety pages
Greenpeace safe trade campaign
American Biological Safety Association
Biosafety in Microbiological and Biomedical Laboratories
Genetic engineering
Bioethics
Safety
Biological hazards | Biosafety | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 3,913 | [
"Bioethics",
"Biological engineering",
"Genetic engineering",
"Ethics of science and technology",
"Molecular biology"
] |
45,262 | https://en.wikipedia.org/wiki/Biosecurity | Biosecurity refers to measures aimed at preventing the introduction or spread of harmful organisms (e.g. viruses, bacteria, plants, animals etc.) intentionally or unintentionally outside their native range or within new environments. In agriculture, these measures are aimed at protecting food crops and livestock from pests, invasive species, and other organisms not conducive to the welfare of the human population. The term includes biological threats to people, including those from pandemic diseases and bioterrorism. The definition has sometimes been broadened to embrace other concepts, and it is used for different purposes in different contexts.
The COVID-19 pandemic is a recent example of a threat for which biosecurity measures have been needed in all countries of the world.
Background and terminology
The term "biosecurity" has been defined differently by various disciplines. The term was first used by the agricultural and environmental communities to describe preventative measures against threats from naturally occurring diseases and pests, later expanded to introduced species. Australia and New Zealand, among other countries, had incorporated this definition within their legislation by 2010. New Zealand was the earliest adopter of a comprehensive approach with its Biosecurity Act 1993. In 2001, the US National Association of State Departments of Agriculture (NASDA) defined biosecurity as "the sum of risk management practices in defense against biological threats", and its main goal as "protect[ing] against the risk posed by disease and organisms".
In 2010, the World Health Organization (WHO) provided an information note describing biosecurity as a strategic and integrated approach to analysing and managing relevant risks to human, animal and plant life and health and associated risks for the environment. In another document, it describes the aim of biosecurity being "to enhance the ability to protect human health, agricultural production systems, and the people and industries that depend on them", with the overarching goal being "to prevent, control and/or manage risks to life and health as appropriate to the particular biosecurity sector".
Measures taken to counter biosecurity risks typically include compulsory terms of quarantine, and are put in place to minimise the risk of invasive pests or diseases arriving at a specific location that could damage crops and livestock as well as the wider environment.
In general, the term is today taken to include managing biological threats to people, industries or environment. These may be from foreign or endemic organisms, but they can also extend to pandemic diseases and the threat of bioterrorism, both of which pose threats to public health.
Laboratory biosafety and intentional harm
The definition has sometimes been broadened to embrace other concepts, and it is used for different purposes in different contexts. It can be defined as the "successful minimising of the risks that the biological sciences will be deliberately or accidentally misused in a way which causes harm for humans, animals, plants or the environment, including through awareness and understanding of the risks".
From the late 1990s, in response to the threat of biological terrorism, the term started to include the prevention of the theft of biological materials from research laboratories, called "laboratory biosecurity" by WHO. The term laboratory biosafety refers to the measures taken "to reduce the risk of accidental release of or exposure to infectious disease agents", whereas laboratory biosecurity is usually taken to mean "a set of systems and practices employed in legitimate bioscience facilities to reduce the risk that dangerous biological agents will be stolen and used maliciously". Joseph Kanabrocki (2017) elaborates: "Biosafety focuses on protection of the researcher, their contacts and the environment via accidental release of a pathogen from containment, whether by direct release into the environment or by a laboratory-acquired infection. Conversely, biosecurity focuses on controlling access to pathogens of consequence and on the reliability of the scientists granted this access (thereby reducing the threat of an intentional release of a pathogen) and/or access to sensitive information related to a pathogen's virulence, host-range, transmissibility, resistance to medical countermeasures, and environmental stability, among other things".
In the US, the National Science Advisory Board on Biosecurity was created in 2004 to provide biosecurity oversight of "dual-use research", defined as "biological research with legitimate scientific purpose that may be misused to pose a biological threat to public health and/or national security". In 2006, the National Academy of Sciences defined biosecurity as "security against the inadvertent, inappropriate, or intentional malicious or malevolent use of potentially dangerous biological agents or biotechnology, including the development, production, stockpiling, or use of biological weapons as well as outbreaks of newly emergent and epidemic disease".
A number of nations have developed biological weapons for military use, and many civilian research projects in medicine have the potential to be used in military applications (dual-use research), so biosecurity protocols are used to prevent dangerous biological materials from falling into the hands of malevolent parties.
Laboratory program
Components of a laboratory biosecurity program include:
Physical security
Personnel security
Material control and accountability
Transport security
Information security
Program management
Biological Security
Animals and plants
Threats to animals and plants, in particular food crops, which may in turn threaten human health, are typically overseen by a government department of agriculture.
Animal biosecurity encompasses different means of prevention and containment of disease agents in a specific area. A critical element in animal biosecurity is biocontainment – the control of disease agents already present in a particular area and work to prevent transmission. Animal biosecurity may protect organisms from infectious agents or noninfectious agents such as toxins or pollutants, and can be executed in areas as large as a nation or as small as a local farm.
Animal biosecurity takes into account the epidemiological triad for disease occurrence: the individual host, the disease, and the environment in contributing to disease susceptibility. It aims to improve nonspecific immunity of the host to resist the introduction of an agent, or limit the risk that an agent will be sustained in an environment at adequate levels. Biocontainment works to improve specific immunity towards already present pathogens.
The aquaculture industry is also vulnerable to pathogenic organisms, including fungal, bacterial, or viral infections which can affect fish at different stages of their life cycle.
Human health
Direct threats to human health may come in the form of epidemics or pandemics, such as the 1918 Spanish flu pandemic and other influenza epidemics, MERS, SARS, or the COVID-19 pandemic, or they may be deliberate attacks (bioterrorism). The country/federal and/or state health departments are usually responsible for managing the control of outbreaks and transmission and the supply of information to the public.
Medical countermeasures
Medical countermeasures (MCMs) are products such as biologics and pharmaceutical drugs that can protect from or treat the effects of a chemical, biological, radiological, or nuclear (CBRN) attack or in the case of public health emergencies. MCMs can also be used for prevention and diagnosis of symptoms associated with CBRN attacks or threats.
In the US, the Food and Drug Administration (FDA) runs a program called the "FDA Medical Countermeasures Initiative" (MCMi), with programs funded by the federal government. It helps support "partner" agencies and organisations prepare for public health emergencies that could require MCMs.
International agreements and guidelines
Agricultural biosecurity and human health
Various international organisations, international bodies and legal instruments and agreements make up a worldwide governance framework for biosecurity.
Standard-setting organisations such as the Codex Alimentarius Commission (CAC), the World Organisation for Animal Health (OIE) and the Commission on Phytosanitary Measures (CPM) develop standards pertinent to their focuses, which then become international reference points through the World Trade Organization (WTO)'s Agreement on the Application of Sanitary and Phytosanitary Measures (SPS Agreement), created in 1995. This agreement requires all members of the WTO to consider all import requests concerning agricultural products from other countries. Broadly, the measures covered by the agreement are those aimed at the protection of human, animal or plant life or health from certain risks.
Other important global and regional agreements include the International Health Regulations (IHR, 2005), the International Plant Protection Convention (IPPC), the Cartagena Protocol on Biosafety, the Codex Alimentarius, the Convention on Biological Diversity (CBD) and the General Agreement on Tariffs and Trade (GATT, 1947).
The UN Food and Agriculture Organization (FAO), the International Maritime Organization (IMO), the Organisation for Economic Co-operation and Development (OECD) and WHO are the most important organisations associated with biosecurity.
The IHR is a legally binding agreement on 196 nations, including all member states of WHO. Its purpose and scope is "to prevent, protect against, control, and provide a public health response to the international spread of disease in ways that are commensurate with and restricted to public health risks and that avoid unnecessary interference with international traffic and trade", "to help the international community prevent and respond to acute public health risks that have the potential to cross borders and threaten people worldwide".
Biological weapons
The Biological Weapons Convention was the first multilateral disarmament treaty banning the production of an entire category of weapons, being biological weapons.
UN Resolution 1540 (2004) "affirms that the proliferation of nuclear, chemical and biological weapons and their means of delivery constitutes a threat to international peace and security. The resolution obliges States, inter alia, to refrain from supporting by any means non-State actors from developing, acquiring, manufacturing, possessing, transporting, transferring or using nuclear, chemical or biological weapons and their means of delivery". Resolution 2325, reaffirming 1540, was adopted unanimously on 15 December 2016.
Laboratory safety
OECD Best Practice Guidelines for Biological Resource Centres, a consensus report created in 2001 after experts from OECD countries came together, calls upon "national governments to undertake actions to bring the BRC concept into being in concert with the international scientific community". BRCs are "repositories and providers of high-quality biological materials and information".
As international security issue
For a long time, health security or biosecurity issues were not considered an international security issue, especially in the traditional view of international relations. However, some changes in trend have contributed to the inclusion of biosecurity (health security) in discussions of security. As time progressed, there was a movement towards securitisation. Non-traditional security issues such as climate change, organised crime, terrorism, and landmines came to be included in the definition of international security. There was a general realisation that the actors in the international system not only involved nation-states but also included international organisations, institutions, and individuals, so ensuring the security of the various actors within each nation became an important agenda item. Biosecurity is one of the issues to be securitised under this trend. On 10 January 2000, the UN Security Council convened to discuss HIV/AIDS as a security issue in Africa and designated it a threat in the following month. The UNDP Millennium Development Goals also recognise health issues as an international security issue.
Several instances of epidemics, such as SARS, increased awareness of health security (biosecurity). Several factors have rendered biosecurity issues more severe: the continuing advancement of biotechnology, which increases the possibility for malevolent use; the evolution of infectious diseases; and globalisation, which is making the world more interdependent and more susceptible to the spread of epidemics.
Controversial experiments in synthetic biology, including the synthesis of poliovirus from its genetic sequence, and the modification of flu type H5N1 for airborne transmission in mammals, led to calls for tighter controls on the materials and information used to perform similar feats. Ideas include better enforcement by national governments and private entities concerning shipments and downloads of such materials, and registration or background check requirements for anyone handling such materials.
Challenges
Diseases caused by emerging viruses are a major threat to global public health. The proliferation of high biosafety level laboratories around the world has resulted in concern about the availability of targets for those that might be interested in stealing dangerous pathogens. The growth in containment laboratories is often in response to emerging diseases, and many new containment labs' main focus is to find ways to control these diseases. By strengthening national disease surveillance, prevention, control and response systems, the labs have improved international public health.
One of the major challenges of biosecurity is that harmful technology has become more available and accessible. Biomedical advances and the globalisation of scientific and technical expertise have made it possible to greatly improve public health; however, there is also the risk that these advances can make it easier for terrorists to produce biological weapons.
Communication between the citizen and law enforcement officials is important. Indicators of agro-terrorism at a food processing plant may include persons taking notes or photos of a business, theft of employee uniforms, employees changing working hours, or persons attempting to gain information about security measures and personnel. Unusual activity is best handled if reported to law enforcement personnel promptly. Communication between policymakers and life sciences scientists is also important.
The MENA (Middle East and North Africa) region, with its socio-political unrest, diverse cultures and societies, and recent biological weapons programs, faces particular challenges.
Future
Biosecurity requires the cooperation of scientists, technicians, policy makers, security engineers, and law enforcement officials.
The emerging nature of newer biosecurity threats means that small-scale risks can blow up rapidly, which makes the development of an effective policy challenging owing to the limitations on time and resources available for analysing threats and estimating the likelihood of their occurrence. It is likely that further synergies with other disciplines, such as virology or the detection of chemical contaminants, will develop over time.
Some uncertainties about the policy implementation for biosecurity remain for future. In order to carefully plan out preventive policies, policy makers need to be able to somewhat predict the probability and assess the risks; however, as the uncertain nature of the biosecurity issue goes it is largely difficult to predict and also involves a complex process as it requires a multidisciplinary approach. The policy choices they make to address an immediate threat could pose another threat in the future, facing an unintended trade-off.
Philosopher Toby Ord, in his 2020 book The Precipice: Existential Risk and the Future of Humanity, puts into question whether the current international conventions regarding biotechnology research and development regulation, and self-regulation by biotechnology companies and the scientific community are adequate.
American scientists have proposed various policy-based measures to reduce the large risks from life sciences research – such as pandemics through accident or misapplication. Risk management measures may include novel international guidelines, effective oversight, improvement of US policies to influence policies globally, and identification of gaps in biosecurity policies along with potential approaches to address them.
Researchers have also warned in 2024 of potential risks from mirror life, a hypothetical form of life whose molecular building blocks have inverted chirality. If mirror bacteria were synthesized, they may be able to evade immune systems and spread in the environment without natural predators. They noted that the technology to create mirror bacteria was still probably more than a decade away, but called for a ban on research aiming to create them.
Role of education
The advance of the life sciences and biotechnology has the potential to bring great benefits to humankind through responding to societal challenges. However, it is also possible that such advances could be exploited for hostile purposes, something evidenced in a small number of incidents of bioterrorism, particularly by the series of large-scale offensive biological warfare programs carried out by major states in the last century. Dealing with this challenge, which has been labelled the "dual-use dilemma", requires a number of different activities. However, one way of ensuring that the life sciences continue to generate significant benefits and do not become subject to misuse for hostile purposes is a process of engagement between scientists and the security community, and the development of strong ethical and normative frameworks to complement legal and regulatory measures that are developed by states.
See also
Biodefence
Biological Weapons Convention
Biorisk
Biosecurity in Australia
Biosecurity in New Zealand
Biosecurity in the United States
Biowar
Cyberbiosecurity
Food safety
Global health
Global Health Security Initiative (GHSI)
Good Agricultural Practices
Human security
International Health Regulations
Interplanetary contamination
Public health
Quarantine
Select agent
References
Further reading
General
Biosecurity Commons, a Wiki Database
– A peer-reviewed, open access electronic journal for cross-disciplinary research in all aspects of human or animal epidemics, pandemics, biosecurity, bioterrorism and CBRN, including prevention, governance, detection, mitigation and response.
Articles and books
Chen, Lincoln, Jennifer Leaning, and Vasant Narasimhan, eds. (2003). Global Health Challenges for Human Security Harvard University Press.
Hoyt, Kendall and Brooks, Stephen G. (2003). "A Double-Edged Sword: Globalization and Biosecurity". International Affairs, Vol. 23, No. 3.
Koblentz, Gregory D. (2012). "From biodefence to biosecurity: the Obama administration's strategy for countering biological threats". International Affairs, Vol. 88, Issue 1.
Lakoff, Andrew, and Sorensen, Georg. (October 2008). Biosecurity Interventions: Global Health and Security in Question, Columbia University Press, . (Details here.)
Paris, Roland. (2001). "Human Security: Paradigm Shift or Hot Air?". International Affairs, Vol. 26, No. 2.
Tadjbakhsh, Shahrbanou. and Chenoy, Anuradha. (2007). Human Security: Concepts and Implications. New York, Routledge. p. 42. (Also 2005 article here)
External links
Biosecurity at the FAO
Canadian Food Inspection Agency
OIE Biological Threat Reduction Strategy (World Organisation for Animal Health) | Biosecurity | [
"Environmental_science"
] | 3,802 | [
"Toxicology",
"Biosecurity"
] |
45,265 | https://en.wikipedia.org/wiki/De%20Bruijn%E2%80%93Newman%20constant | The de Bruijn–Newman constant, denoted by and named after Nicolaas Govert de Bruijn and Charles Michael Newman, is a mathematical constant defined via the zeros of a certain function , where is a real parameter and is a complex variable. More precisely,
,
where is the super-exponentially decaying function
and is the unique real number with the property that has only real zeros if and only if .
The constant is closely connected with the Riemann hypothesis. Indeed, the Riemann hypothesis is equivalent to the conjecture that $\Lambda \leq 0$. Brad Rodgers and Terence Tao proved that $\Lambda \geq 0$, so the Riemann hypothesis is equivalent to $\Lambda = 0$. A simplified proof of the Rodgers–Tao result was later given by Alexander Dobner.
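The super-exponentially decaying function in the definition, $\Phi(u) = \sum_{n \geq 1} (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}) e^{-\pi n^2 e^{4u}}$, is easy to explore numerically, since the factor $e^{-\pi n^2 e^{4u}}$ makes the tail of the series negligible. The following sketch (an illustration, not part of the article; the truncation depth and sample points are arbitrary choices) evaluates a truncation of the series:

```python
import math

def phi(u, terms=10):
    """Truncation of Phi(u) = sum over n >= 1 of
    (2*pi^2*n^4*e^(9u) - 3*pi*n^2*e^(5u)) * exp(-pi*n^2*e^(4u)).
    The summand dies off like exp(-pi*n^2*e^(4u)), so for u >= 0
    a handful of terms already gives full double precision."""
    return sum(
        (2 * math.pi**2 * n**4 * math.exp(9 * u)
         - 3 * math.pi * n**2 * math.exp(5 * u))
        * math.exp(-math.pi * n**2 * math.exp(4 * u))
        for n in range(1, terms + 1)
    )

# Phi is positive and collapses extremely fast as u grows:
for u in (0.0, 0.25, 0.5, 0.75):
    print(f"Phi({u}) = {phi(u):.3e}")
```

Running this shows $\Phi(0) \approx 0.447$ while $\Phi(0.75)$ is already below $10^{-23}$, consistent with the claimed super-exponential decay.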
History
De Bruijn showed in 1950 that $H(\lambda, z)$ has only real zeros if $\lambda \geq 1/2$, and moreover, that if $H(\lambda, z)$ has only real zeros for some $\lambda$, then $H(\lambda, z)$ also has only real zeros if $\lambda$ is replaced by any larger value. Newman proved in 1976 the existence of a constant $\Lambda$ for which the "if and only if" claim holds; and this then implies that $\Lambda$ is unique. Newman also conjectured that $\Lambda \geq 0$, which was proven forty years later, by Brad Rodgers and Terence Tao in 2018.
Upper bounds
De Bruijn's upper bound of $\Lambda \leq 1/2$ was not improved until 2008, when Ki, Kim and Lee proved $\Lambda < 1/2$, making the inequality strict.
In December 2018, the 15th Polymath project improved the bound to $\Lambda \leq 0.22$. A manuscript of the Polymath work was submitted to arXiv in late April 2019, and was published in the journal Research In the Mathematical Sciences in August 2019.
This bound was further slightly improved in April 2020 by Platt and Trudgian to $\Lambda \leq 0.2$.
Historical bounds
References
External links
Mathematical constants
Analytic number theory | De Bruijn–Newman constant | [
"Mathematics"
] | 347 | [
"Analytic number theory",
"Mathematical objects",
"nan",
"Mathematical constants",
"Numbers",
"Number theory"
] |
45,270 | https://en.wikipedia.org/wiki/Measure%20space | A measure space is a basic object of measure theory, a branch of mathematics that studies generalized notions of volumes. It contains an underlying set, the subsets of this set that are feasible for measuring (the -algebra) and the method that is used for measuring (the measure). One important example of a measure space is a probability space.
A measurable space consists of the first two components without a specific measure.
Definition
A measure space is a triple $(X, \mathcal{A}, \mu)$, where
$X$ is a set
$\mathcal{A}$ is a σ-algebra on the set $X$
$\mu$ is a measure on $(X, \mathcal{A})$
In other words, a measure space consists of a measurable space $(X, \mathcal{A})$ together with a measure $\mu$ on it.
Example
Set X = {0, 1}. The σ-algebra on finite sets such as the one above is usually the power set, which is the set of all subsets (of a given set) and is denoted by 𝒫(X). Sticking with this convention, we set 𝒜 = 𝒫(X).
In this simple case, the power set can be written down explicitly:
𝒫(X) = {∅, {0}, {1}, {0, 1}}.
As the measure, define μ by
μ({0}) = μ({1}) = 1/2,
so μ({0, 1}) = 1 (by additivity of measures) and μ(∅) = 0 (by definition of measures).
This leads to the measure space (X, 𝒫(X), μ). It is a probability space, since μ(X) = 1. The measure μ corresponds to the Bernoulli distribution with p = 1/2, which is for example used to model a fair coin flip.
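The fair-coin measure space can be checked mechanically. The sketch below (a plain-Python illustration, with subsets encoded as frozensets and exact arithmetic via fractions) verifies additivity and that the total mass is 1:

```python
from fractions import Fraction

# Sample space X = {0, 1}; the sigma-algebra is the full power set.
X = frozenset({0, 1})
power_set = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def mu(A):
    """Bernoulli(1/2) measure: each singleton carries mass 1/2."""
    return Fraction(len(A), 2)

# Additivity: mu({0, 1}) = mu({0}) + mu({1}); by definition mu(empty set) = 0.
assert mu(frozenset({0, 1})) == mu(frozenset({0})) + mu(frozenset({1}))
assert mu(frozenset()) == 0
# Probability space: the total mass is 1.
assert mu(X) == 1
```

Monotonicity and additivity over the whole (tiny) power set can be checked the same way by iterating over `power_set`.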
Important classes of measure spaces
The most important classes of measure spaces are defined by the properties of their associated measures. These include, in order of increasing generality:
Probability spaces, where the measure is a probability measure
Finite measure spaces, where the measure is a finite measure
σ-finite measure spaces, where the measure is a σ-finite measure
Another class of measure spaces are the complete measure spaces.
References
Measure theory
Space (mathematics)
Infinite loop
In computer programming, an infinite loop (or endless loop) is a sequence of instructions that, as written, will continue endlessly unless an external intervention occurs, such as turning off power via a switch or pulling a plug. It may be intentional.
There is no general algorithm to determine whether a computer program contains an infinite loop or not; this is the halting problem.
Overview
An infinite loop differs from "a type of computer program that runs the same instructions continuously until it is either stopped or interrupted". Consider the following pseudocode:
how_many = 0
while is_there_more_data() do
how_many = how_many + 1
end
display "the number of items counted = " how_many
The same instructions were run continuously until it was stopped or interrupted . . . by the FALSE returned at some point by the function is_there_more_data.
By contrast, the following loop will not end by itself:
birds = 1
fish = 2
while birds + fish > 1 do
birds = 3 - birds
fish = 3 - fish
end
birds will alternate being 1 or 2, while fish will alternate being 2 or 1. The loop will not stop unless an external intervention occurs ("pull the plug").
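A bounded simulation of the loop above (a Python sketch, capped at a few iterations so it actually terminates) makes the invariant visible: birds + fish is always 3, so the condition birds + fish > 1 can never fail:

```python
birds, fish = 1, 2
history = []
for _ in range(6):           # cap the iterations; the real loop has no such cap
    assert birds + fish > 1  # the loop condition holds on every pass
    history.append((birds, fish))
    birds = 3 - birds        # alternates 1 <-> 2
    fish = 3 - fish          # alternates 2 <-> 1

assert history[0] == (1, 2) and history[1] == (2, 1)
assert all(b + f == 3 for b, f in history)   # the invariant that keeps it looping
```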
Details
An infinite loop is a sequence of instructions in a computer program which loops endlessly, either due to the loop having no terminating condition, having one that can never be met, or one that causes the loop to start over. In older operating systems with cooperative multitasking, infinite loops normally caused the entire system to become unresponsive. With the now-prevalent preemptive multitasking model, infinite loops usually cause the program to consume all available processor time, but can usually be terminated by a user. Busy wait loops are also sometimes called "infinite loops". Infinite loops are one possible cause for a computer hanging or freezing; others include thrashing, deadlock, and access violations.
Intended vs unintended looping
Looping is repeating a set of instructions until a specific condition is met. An infinite loop occurs when the condition will never be met due to some inherent characteristic of the loop.
Intentional looping
There are a few situations when this is desired behavior. For example, the games on cartridge-based game consoles typically have no exit condition in their main loop, as there is no operating system for the program to exit to; the loop runs until the console is powered off.
Modern interactive computers require that the computer constantly be monitoring for user input or device activity, so at some fundamental level there is an infinite processing idle loop that must continue until the device is turned off or reset. In the Apollo Guidance Computer, for example, this outer loop was contained in the Exec program, and if the computer had absolutely no other work to do, it would loop on a dummy job that would simply turn off the "computer activity" indicator light.
Modern computers also typically do not halt the processor or motherboard circuit-driving clocks when they crash. Instead they fall back to an error condition displaying messages to the operator (such as the blue screen of death), and enter an infinite loop waiting for the user to either respond to a prompt to continue, or reset the device.
Spinlocks
Spinlocks are low-level synchronization mechanisms used in concurrent programming to protect shared resources. Unlike traditional locks that put a thread to sleep when it cannot acquire the lock, spinlocks repeatedly "spin" in an infinite loop until the lock becomes available. This intentional infinite looping is a deliberate design choice aimed at minimizing the time a thread spends waiting for the lock and avoiding the overhead of higher-level synchronization mechanisms such as mutexes.
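The spinning idea can be sketched in Python (a toy illustration only: a `threading.Lock` polled with a non-blocking acquire stands in for the atomic test-and-set a real spinlock uses, and CPython's GIL makes true busy-waiting pointless in practice):

```python
import threading

flag = threading.Lock()   # stands in for the atomic flag a spinlock tests
counter = 0

def worker():
    global counter
    for _ in range(1000):
        while not flag.acquire(blocking=False):
            pass              # spin: busy-wait until the flag is free
        counter += 1          # critical section protected by the "spinlock"
        flag.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 4000        # no increments lost despite the contention
```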
Multi-threading
In multi-threaded programs some threads can be executing inside infinite loops without causing the entire program to be stuck in an infinite loop. If the main thread exits, all threads of the process are forcefully stopped, thus all execution ends and the process/program terminates. The threads inside the infinite loops can perform "housekeeping" tasks or they can be in a blocked state waiting for input (from socket/queue) and resume execution every time input is received.
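In Python, for instance, a daemon thread can sit in an intentional infinite loop doing "housekeeping" while the main thread runs and exits normally (a sketch; the queue and the task it processes are illustrative):

```python
import queue
import threading

tasks = queue.Queue()
done = []

def housekeeper():
    while True:                  # intentional infinite loop
        item = tasks.get()       # blocked state: waits for input from the queue
        done.append(item)        # resume and do the housekeeping work
        tasks.task_done()

t = threading.Thread(target=housekeeper, daemon=True)
t.start()                        # daemon: will not keep the process alive

for i in range(3):
    tasks.put(i)
tasks.join()                     # wait until the housekeeper has caught up

assert done == [0, 1, 2]
# The main thread may now exit; the still-looping daemon thread is
# forcefully stopped when the process terminates.
```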
Unintentional looping
Most often, the term is used for those situations when this is not the intended result; that is, when this is a bug. Such errors are most commonly made by novice programmers, but experienced programmers can make them too, because their causes can be quite subtle.
One common cause, for example, is that a programmer intends to iterate over a sequence of nodes in a data structure such as a linked list or tree, executing the loop code once for each node. Improperly formed links can create a reference loop in the data structure, where one node links to another that occurs earlier in the sequence. This makes part of the data structure into a ring, causing naive code to loop forever.
While most infinite loops can be found by close inspection of the code, there is no general method to determine whether a given program will ever halt or will run forever; this is the undecidability of the halting problem.
Interruption
As long as the system is responsive, infinite loops can often be interrupted by sending a signal to the process (such as SIGINT in Unix), or an interrupt to the processor, causing the current process to be aborted. This can be done in a task manager, in a terminal with the Control-C command, or by using the kill command or system call. However, this does not always work, as the process may not be responding to signals or the processor may be in an uninterruptible state, such as in the Cyrix coma bug (caused by overlapping uninterruptible instructions in an instruction pipeline). In some cases other signals such as SIGKILL can work, as they do not require the process to be responsive, while in other cases the loop cannot be terminated short of system shutdown.
Language support
Infinite loops can be implemented using various control flow constructs. Most commonly, in unstructured programming this is a jump back up (goto), while in structured programming this is an indefinite loop (while loop) set to never end, either by omitting the condition or by explicitly setting it to true, as in while (true) ....
Some languages have special constructs for infinite loops, typically by omitting the condition from an indefinite loop. Examples include Ada (loop ... end loop), Fortran (DO ... END DO), Go (for { ... }), Ruby (loop do ... end), and Rust (loop { ... }).
Examples of intentional infinite loops
A simple example (in C):
#include <stdio.h>
int main()
{
for (;;) // or equivalently, while (1)
printf("Infinite Loop\n");
return 0;
}
The form for (;;) for an infinite loop is traditional, appearing in the standard reference The C Programming Language, and is often punningly pronounced "forever".
This is a loop that will print "Infinite Loop" without halting.
A similar example in 1980s-era BASIC:
10 PRINT "INFINITE LOOP"
20 GOTO 10
A similar example in DOS batch files:
:A
echo Infinite Loop
goto :A
Here the loop is quite obvious, as the last line unconditionally sends execution back to the first.
An example in Java:
while (true) {
System.out.println("Infinite Loop");
}
The while loop never terminates because its condition is always true.
An example in Bourne Again Shell:
for ((;;)); do
echo "Infinite Loop"
done
An example in Rust:
loop {
println!("Infinite loop");
}
Examples of unintentional infinite loops
Mathematical errors
Here is one example of an infinite loop in Visual Basic:
dim x as integer
do while x < 5
x = 1
x = x + 1
loop
This creates a situation where x never reaches 5: at the start of each pass through the loop, x is assigned the value 1 (regardless of any previous value) before being incremented to 2. Thus every pass ends with x = 2 and the loop never exits. This could be fixed by moving the x = 1 instruction outside the loop so that its initial value is set only once.
In some languages, programmer confusion about mathematical symbols may lead to an unintentional infinite loop. For example, here is a snippet in C:
#include <stdio.h>
int main(void)
{
int a = 0;
while (a < 10) {
printf("%d\n", a);
if (a = 5)
printf("a equals 5!\n");
a++;
}
return 0;
}
The expected output is the numbers 0 through 9, with an interjected "a equals 5!" between 5 and 6. However, in the line "if (a = 5)" above, the = (assignment) operator was confused with the == (equality test) operator. Instead, this will assign the value of 5 to a at this point in the program. Thus, a will never be able to advance to 10, and this loop cannot terminate.
Rounding errors
Unexpected behavior in evaluating the terminating condition can also cause this problem. Here is an example in C:
float x = 0.1;
while (x != 1.1) {
printf("x = %22.20f\n", x);
x += 0.1;
}
On some systems, this loop will execute ten times as expected, but on other systems it will never terminate. The problem is that the loop terminating condition (x != 1.1) tests for exact equality of two floating point values, and the way floating point values are represented in many computers will make this test fail, because they cannot represent the value 0.1 exactly, thus introducing rounding errors on each increment (cf. box).
The same can happen in Python:
x = 0.1
while x != 1:
print(x)
x += 0.1
Because of the likelihood of tests for equality or not-equality failing unexpectedly, it is safer to use greater-than or less-than tests when dealing with floating-point values. For example, instead of testing whether x equals 1.1, one might test whether (x <= 1.0), or (x < 1.1), either of which would be certain to exit after a finite number of iterations. Another way to fix this particular example would be to use an integer as a loop index, counting the number of iterations that have been performed.
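Both safer patterns can be demonstrated directly (a Python sketch): repeated addition of 0.1 drifts away from the exact value, so an inequality test, or an integer loop counter, terminates where an equality test would not:

```python
# Ten additions of 0.1 do not sum to exactly 1.0 in binary floating point.
x = 0.0
for _ in range(10):
    x += 0.1
assert x != 1.0               # an equality test would fail to fire here
assert abs(x - 1.0) < 1e-9    # yet x is within any reasonable tolerance of 1.0

# Safe variant: a less-than test is guaranteed to exit.
x, steps = 0.1, 0
while x < 1.05:               # instead of the unsafe `while x != 1.1`
    x += 0.1
    steps += 1
assert steps == 10            # the loop runs the expected number of times
```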
A similar problem occurs frequently in numerical analysis: in order to compute a certain result, an iteration is intended to be carried out until the error is smaller than a chosen tolerance. However, because of rounding errors during the iteration, the specified tolerance can never be reached, resulting in an infinite loop.
Multi-party loops
An infinite loop may be caused by several entities interacting. Consider a server that always replies with an error message if it does not understand the request. Even if there is no possibility for an infinite loop within the server itself, a system comprising two of them (A and B) may loop endlessly: if A receives a message of unknown type from B, then A replies with an error message to B; if B does not understand the error message, it replies to A with its own error message; if A does not understand the error message from B, it sends yet another error message, and so on.
One common example of such situation is an email loop. An example of an email loop is if someone receives mail from a no reply inbox, but their auto-response is on. They will reply to the no reply inbox, triggering the "this is a no reply inbox" response. This will be sent to the user, who then sends an auto reply to the no-reply inbox, and so on and so forth.
Pseudo-infinite loops
A pseudo-infinite loop is a loop that appears infinite but is really just a very long loop.
Very large numbers
An example in bash:
for x in $(seq 1000000000); do
#loop code
done
Impossible termination condition
An example for loop in C:
unsigned int i;
for (i = 1; i != 0; i++) {
/* loop code */
}
It appears that this will go on indefinitely, but in fact the value of i will eventually reach the maximum value storable in an unsigned int, and adding 1 to that number will wrap around to 0, breaking the loop. The actual limit of i depends on the details of the system and compiler used. With arbitrary-precision arithmetic, this loop would continue until the computer's memory could no longer hold i. If i were a signed integer, rather than an unsigned integer, overflow would be undefined. In this case, the compiler could optimize the code into an infinite loop.
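The same wrap-around can be emulated in Python (which otherwise has arbitrary-precision integers) by masking the counter to a fixed width; a 16-bit counter keeps the run short:

```python
MASK = 0xFFFF                 # emulate a 16-bit unsigned integer
i, iterations = 1, 0
while i != 0:
    i = (i + 1) & MASK        # wraps around to 0 after reaching 0xFFFF
    iterations += 1

assert iterations == 0xFFFF   # 65,535 passes, then the loop exits
```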
Infinite recursion
Infinite recursion is a special case of an infinite loop that is caused by recursion.
The following example in Visual Basic for Applications (VBA) returns a stack overflow error:
Sub Test1()
Call Test1
End Sub
Break statement
A "while (true)" loop looks infinite at first glance, but there may be a way to escape the loop through a break statement or return statement.
Example in PHP:
while (true) {
if ($foo->bar()) {
return;
}
}
Alderson loop
An Alderson loop is a rare slang or jargon term for an infinite loop where an exit condition is available but inaccessible in an implementation of the code, typically due to a programmer error. These are most common and visible while debugging user interface code.
A C-like pseudocode example of an Alderson loop, where the program is supposed to sum numbers given by the user until zero is given, but where the wrong operator is used:
int sum = 0;
int i;
while (true) {
printf("Input a number to add to the sum or 0 to quit");
i = getUserInput();
if (i * 0) { // if i times 0 is true, add i to the sum. Note: ZERO means FALSE, Non-Zero means TRUE. "i * 0" is ZERO (FALSE)!
sum += i; // sum never changes because (i * 0) is 0 for any i; it would change if we had != in the condition instead of *
}
if (sum > 100) {
break; // terminate the loop; exit condition exists but is never reached because sum is never added to
}
}
The term allegedly received its name from a programmer (whose last name is Alderson) who in 1996 had coded a modal dialog box in Microsoft Access without either an OK or Cancel button, thereby disabling the entire program whenever the box came up.
See also
Cycle detection
Divergence (computer science)
Fork bomb (an infinite loop is one of two key components)
Infinite regress
References
External links
Make an infinite loop in several languages, on programming-idioms.org.
Control flow
Iteration in programming
Programming language comparisons
Recursion
Software bugs
Atmospheric entry
Atmospheric entry (sometimes listed as Vimpact or Ventry) is the movement of an object from outer space into and through the gases of an atmosphere of a planet, dwarf planet, or natural satellite. Atmospheric entry may be uncontrolled entry, as in the entry of astronomical objects, space debris, or bolides. It may be controlled entry (or reentry) of a spacecraft that can be navigated or follow a predetermined course. Methods for controlled atmospheric entry, descent, and landing of spacecraft are collectively termed EDL.
Objects entering an atmosphere experience atmospheric drag, which puts mechanical stress on the object, and aerodynamic heating—caused mostly by compression of the air in front of the object, but also by drag. These forces can cause loss of mass (ablation) or even complete disintegration of smaller objects, and objects with lower compressive strength can explode.
Objects have reentered with speeds ranging from 7.8 km/s for low Earth orbit to around 12.5 km/s for the Stardust probe. They have high kinetic energies, and atmospheric dissipation is the only way of expending this, as it is highly impractical to use retrorockets for the entire reentry procedure. Crewed space vehicles must be slowed to subsonic speeds before parachutes or air brakes may be deployed.
Ballistic warheads and expendable vehicles do not require slowing at reentry, and in fact, are made streamlined so as to maintain their speed. Furthermore, slow-speed returns to Earth from near-space such as high-altitude parachute jumps from balloons do not require heat shielding because the gravitational acceleration of an object starting at relative rest from within the atmosphere itself (or not far above it) cannot create enough velocity to cause significant atmospheric heating.
For Earth, atmospheric entry occurs by convention at the Kármán line at an altitude of 100 km (62 mi) above the surface, while at Venus atmospheric entry occurs at 250 km (160 mi) and at Mars atmospheric entry occurs at about 80 km (50 mi). Uncontrolled objects reach high velocities while accelerating through space toward the Earth under the influence of Earth's gravity, and are slowed by friction upon encountering Earth's atmosphere. Meteors are also often travelling quite fast relative to the Earth simply because their own orbital path is different from that of the Earth before they encounter Earth's gravity well. Most objects enter at hypersonic speeds due to their sub-orbital (e.g., intercontinental ballistic missile reentry vehicles), orbital (e.g., the Soyuz), or unbounded (e.g., meteors) trajectories. Various advanced technologies have been developed to enable atmospheric reentry and flight at extreme velocities. An alternative method of controlled atmospheric entry is buoyancy, which is suitable for planetary entry where thick atmospheres, strong gravity, or both factors complicate high-velocity hyperbolic entry, such as the atmospheres of Venus, Titan and the giant planets.
History
The concept of the ablative heat shield was described as early as 1920 by Robert Goddard: "In the case of meteors, which enter the atmosphere with speeds as high as 30 miles (48 km) per second, the interior of the meteors remains cold, and the erosion is due, to a large extent, to chipping or cracking of the suddenly heated surface. For this reason, if the outer surface of the apparatus were to consist of layers of a very infusible hard substance with layers of a poor heat conductor between, the surface would not be eroded to any considerable extent, especially as the velocity of the apparatus would not be nearly so great as that of the average meteor."
Practical development of reentry systems began as the range, and reentry velocity, of ballistic missiles increased. For early short-range missiles, like the V-2, stabilization and aerodynamic stress were important issues (many V-2s broke apart during reentry), but heating was not a serious problem. Medium-range missiles like the Soviet R-5, with a 1,200 km (650 nmi) range, required ceramic composite heat shielding on separable reentry vehicles (it was no longer possible for the entire rocket structure to survive reentry). The first ICBMs, with ranges of 5,500 to 12,000 km (3,000 to 6,500 nmi), were only possible with the development of modern ablative heat shields and blunt-shaped vehicles.
In the United States, this technology was pioneered by H. Julian Allen and A. J. Eggers Jr. of the National Advisory Committee for Aeronautics (NACA) at Ames Research Center. In 1951, they made the counterintuitive discovery that a blunt shape (high drag) made the most effective heat shield. From simple engineering principles, Allen and Eggers showed that the heat load experienced by an entry vehicle was inversely proportional to the drag coefficient; i.e., the greater the drag, the less the heat load. If the reentry vehicle is made blunt, air cannot "get out of the way" quickly enough, and acts as an air cushion to push the shock wave and heated shock layer forward (away from the vehicle). Since most of the hot gases are no longer in direct contact with the vehicle, the heat energy would stay in the shocked gas and simply move around the vehicle to later dissipate into the atmosphere.
The Allen and Eggers discovery, though initially treated as a military secret, was eventually published in 1958.
Terminology, definitions and jargon
When atmospheric entry is part of a spacecraft landing or recovery, particularly on a planetary body other than Earth, entry is part of a phase referred to as entry, descent, and landing, or EDL. When the atmospheric entry returns to the same body that the vehicle had launched from, the event is referred to as reentry (almost always referring to Earth entry).
The fundamental design objective in atmospheric entry of a spacecraft is to dissipate the energy of a spacecraft that is traveling at hypersonic speed as it enters an atmosphere such that equipment, cargo, and any passengers are slowed and land near a specific destination on the surface at zero velocity while keeping stresses on the spacecraft and any passengers within acceptable limits. This may be accomplished by propulsive or aerodynamic (vehicle characteristics or parachute) means, or by some combination.
Entry vehicle shapes
There are several basic shapes used in designing entry vehicles:
Sphere or spherical section
The simplest axisymmetric shape is the sphere or spherical section. This can either be a complete sphere or a spherical section forebody with a converging conical afterbody. The aerodynamics of a sphere or spherical section are easy to model analytically using Newtonian impact theory. Likewise, the spherical section's heat flux can be accurately modeled with the Fay–Riddell equation. The static stability of a spherical section is assured if the vehicle's center of mass is upstream from the center of curvature (dynamic stability is more problematic). Pure spheres have no lift. However, by flying at an angle of attack, a spherical section has modest aerodynamic lift thus providing some cross-range capability and widening its entry corridor. In the late 1950s and early 1960s, high-speed computers were not yet available and computational fluid dynamics was still embryonic. Because the spherical section was amenable to closed-form analysis, that geometry became the default for conservative design. Consequently, crewed capsules of that era were based upon the spherical section.
Pure spherical entry vehicles were used in the early Soviet Vostok and Voskhod capsules and in Soviet Mars and Venera descent vehicles. The Apollo command module used a spherical section forebody heat shield with a converging conical afterbody. It flew a lifting entry with a hypersonic trim angle of attack of −27° (0° is blunt-end first) to yield an average L/D (lift-to-drag ratio) of 0.368. The resultant lift achieved a measure of cross-range control by offsetting the vehicle's center of mass from its axis of symmetry, allowing the lift force to be directed left or right by rolling the capsule on its longitudinal axis. Other examples of the spherical section geometry in crewed capsules are Soyuz/Zond, Gemini, and Mercury. Even these small amounts of lift allow trajectories that have very significant effects on peak g-force, reducing it from 8–9 g for a purely ballistic (slowed only by drag) trajectory to 4–5 g, as well as greatly reducing the peak reentry heat.
Sphere-cone
The sphere-cone is a spherical section with a frustum or blunted cone attached. The sphere-cone's dynamic stability is typically better than that of a spherical section. The vehicle enters sphere-first. With a sufficiently small half-angle and properly placed center of mass, a sphere-cone can provide aerodynamic stability from Keplerian entry to surface impact. (The half-angle is the angle between the cone's axis of rotational symmetry and its outer surface, and thus half the angle made by the cone's surface edges.)
The original American sphere-cone aeroshell was the Mk-2 RV (reentry vehicle), which was developed in 1955 by the General Electric Corp. The Mk-2's design was derived from blunt-body theory and used a radiatively cooled thermal protection system (TPS) based upon a metallic heat shield (the different TPS types are later described in this article). The Mk-2 had significant defects as a weapon delivery system, i.e., it loitered too long in the upper atmosphere due to its lower ballistic coefficient and also trailed a stream of vaporized metal making it very visible to radar. These defects made the Mk-2 overly susceptible to anti-ballistic missile (ABM) systems. Consequently, an alternative sphere-cone RV to the Mk-2 was developed by General Electric.
This new RV was the Mk-6 which used a non-metallic ablative TPS, a nylon phenolic. This new TPS was so effective as a reentry heat shield that significantly reduced bluntness was possible. However, the Mk-6 was a huge RV with an entry mass of 3,360 kg, a length of 3.1 m and a half-angle of 12.5°. Subsequent advances in nuclear weapon and ablative TPS design allowed RVs to become significantly smaller with a further reduced bluntness ratio compared to the Mk-6. Since the 1960s, the sphere-cone has become the preferred geometry for modern ICBM RVs with typical half-angles being between 10° and 11°.
Reconnaissance satellite RVs (recovery vehicles) also used a sphere-cone shape and were the first American example of a non-munition entry vehicle (Discoverer-I, launched on 28 February 1959). The sphere-cone was later used for space exploration missions to other celestial bodies or for return from open space; e.g., Stardust probe. Unlike with military RVs, the advantage of the blunt body's lower TPS mass remained with space exploration entry vehicles like the Galileo Probe with a half-angle of 45° or the Viking aeroshell with a half-angle of 70°. Space exploration sphere-cone entry vehicles have landed on the surface or entered the atmospheres of Mars, Venus, Jupiter, and Titan.
Biconic
The biconic is a sphere-cone with an additional frustum attached. The biconic offers a significantly improved L/D ratio. A biconic designed for Mars aerocapture typically has an L/D of approximately 1.0 compared to an L/D of 0.368 for the Apollo-CM. The higher L/D makes a biconic shape better suited for transporting people to Mars due to the lower peak deceleration. Arguably, the most significant biconic ever flown was the Advanced Maneuverable Reentry Vehicle (AMaRV). Four AMaRVs were made by the McDonnell Douglas Corp. and represented a significant leap in RV sophistication. Three AMaRVs were launched by Minuteman-1 ICBMs on 20 December 1979, 8 October 1980 and 4 October 1981. AMaRV had an entry mass of approximately 470 kg, a nose radius of 2.34 cm, a forward-frustum half-angle of 10.4°, an inter-frustum radius of 14.6 cm, aft-frustum half-angle of 6°, and an axial length of 2.079 meters. No accurate diagram or picture of AMaRV has ever appeared in the open literature. However, a schematic sketch of an AMaRV-like vehicle along with trajectory plots showing hairpin turns has been published.
AMaRV's attitude was controlled through a split body flap (also called a split-windward flap) along with two yaw flaps mounted on the vehicle's sides. Hydraulic actuation was used for controlling the flaps. AMaRV was guided by a fully autonomous navigation system designed for evading anti-ballistic missile (ABM) interception. The McDonnell Douglas DC-X (also a biconic) was essentially a scaled-up version of AMaRV. AMaRV and the DC-X also served as the basis for an unsuccessful proposal for what eventually became the Lockheed Martin X-33.
Non-axisymmetric shapes
Non-axisymmetric shapes have been used for crewed entry vehicles. One example is the winged orbit vehicle that uses a delta wing for maneuvering during descent much like a conventional glider. This approach has been used by the American Space Shuttle, the Soviet Buran and the in-development Starship. The lifting body is another entry vehicle geometry and was used with the X-23 PRIME (Precision Recovery Including Maneuvering Entry) vehicle.
Entry heating
Objects entering an atmosphere from space at high velocities relative to the atmosphere will cause very high levels of heating. Atmospheric entry heating comes principally from two sources:
convection of hot gas flow past the surface of the body and catalytic chemical recombination reactions between the surface and atmospheric gases; and
radiation from the energetic shock layer that forms in the front and sides of the body
As velocity increases, both convective and radiative heating increase, but at different rates. At very high speeds, radiative heating will dominate the convective heat fluxes, as radiative heating is proportional to the eighth power of velocity, while convective heating is proportional to the third power of velocity. Radiative heating thus predominates early in atmospheric entry, while convection predominates in the later phases.
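The different growth rates imply a crossover: because radiative heating grows like the eighth power of velocity and convective heating like the third, doubling the entry velocity multiplies the radiative term by 256 but the convective term by only 8. A sketch of the scaling (the proportionality constants are arbitrary placeholders, chosen only to illustrate the exponents, not physical values):

```python
# Scaling sketch: q_rad ~ v**8, q_conv ~ v**3 (constants are illustrative only).
def q_conv(v, k=1.0):
    return k * v**3       # convective heating: third power of velocity

def q_rad(v, k=1.0):
    return k * v**8       # radiative heating: eighth power of velocity

# Doubling the velocity: radiative heating x256, convective heating x8.
assert q_rad(2.0) / q_rad(1.0) == 256.0
assert q_conv(2.0) / q_conv(1.0) == 8.0

# The ratio q_rad / q_conv grows like v**5, so radiation dominates
# at the highest speeds early in entry, convection later as the vehicle slows.
assert (q_rad(4.0) / q_conv(4.0)) > (q_rad(2.0) / q_conv(2.0))
```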
During certain intensity of ionization, a radio-blackout with the spacecraft is produced.
While NASA's Earth entry interface is at 122 km (400,000 ft), the main heating during controlled entry takes place at altitudes of 65 to 35 km (40 to 22 mi), peaking at 58 km (36 mi).
Shock layer gas physics
At typical reentry temperatures, the air in the shock layer is both ionized and dissociated. This chemical dissociation necessitates various physical models to describe the shock layer's thermal and chemical properties. There are four basic physical models of a gas that are important to aeronautical engineers who design heat shields:
Perfect gas model
Almost all aeronautical engineers are taught the perfect (ideal) gas model during their undergraduate education. Most of the important perfect gas equations along with their corresponding tables and graphs are shown in NACA Report 1135. Excerpts from NACA Report 1135 often appear in the appendices of thermodynamics textbooks and are familiar to most aeronautical engineers who design supersonic aircraft.
The perfect gas theory is elegant and extremely useful for designing aircraft but assumes that the gas is chemically inert. From the standpoint of aircraft design, air can be assumed to be inert for temperatures less than 550 K at one atmosphere pressure. The perfect gas theory begins to break down at 550 K and is not usable at temperatures greater than 2,000 K. For temperatures greater than 2,000 K, a heat shield designer must use a real gas model.
Real (equilibrium) gas model
An entry vehicle's pitching moment can be significantly influenced by real-gas effects. Both the Apollo command module and the Space Shuttle were designed using incorrect pitching moments determined through inaccurate real-gas modelling. The Apollo-CM's trim angle of attack was higher than originally estimated, resulting in a narrower lunar return entry corridor. The actual aerodynamic center of the Columbia was upstream from the calculated value due to real-gas effects. On Columbia's maiden flight (STS-1), astronauts John Young and Robert Crippen had some anxious moments during reentry when there was concern about losing control of the vehicle.
An equilibrium real-gas model assumes that a gas is chemically reactive, but also assumes all chemical reactions have had time to complete and all components of the gas have the same temperature (this is called thermodynamic equilibrium). When air is processed by a shock wave, it is superheated by compression and chemically dissociates through many different reactions. Direct friction upon the reentry object is not the main cause of shock-layer heating. It is caused mainly from isentropic heating of the air molecules within the compression wave. Friction based entropy increases of the molecules within the wave also account for some heating. The distance from the shock wave to the stagnation point on the entry vehicle's leading edge is called shock wave stand off. An approximate rule of thumb for shock wave standoff distance is 0.14 times the nose radius. One can estimate the time of travel for a gas molecule from the shock wave to the stagnation point by assuming a free stream velocity of 7.8 km/s and a nose radius of 1 meter, i.e., time of travel is about 18 microseconds. This is roughly the time required for shock-wave-initiated chemical dissociation to approach chemical equilibrium in a shock layer for a 7.8 km/s entry into air during peak heat flux. Consequently, as air approaches the entry vehicle's stagnation point, the air effectively reaches chemical equilibrium thus enabling an equilibrium model to be usable. For this case, most of the shock layer between the shock wave and leading edge of an entry vehicle is chemically reacting and not in a state of equilibrium. The Fay–Riddell equation, which is of extreme importance towards modeling heat flux, owes its validity to the stagnation point being in chemical equilibrium. The time required for the shock layer gas to reach equilibrium is strongly dependent upon the shock layer's pressure. 
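The roughly 18-microsecond figure quoted above follows directly from the rule of thumb; a quick check (Python, using the stated 1 m nose radius and 7.8 km/s free stream velocity):

```python
nose_radius = 1.0                   # meters (value assumed in the text)
v_freestream = 7800.0               # m/s, low-Earth-orbit entry speed

standoff = 0.14 * nose_radius       # rule of thumb: 0.14 x nose radius
transit_time = standoff / v_freestream

# Shock-wave-to-stagnation-point transit is about 18 microseconds.
assert abs(transit_time * 1e6 - 18) < 1
print(f"standoff = {standoff:.2f} m, transit time ~ {transit_time * 1e6:.1f} us")
```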
For example, in the case of the Galileo probe's entry into Jupiter's atmosphere, the shock layer was mostly in equilibrium during peak heat flux due to the very high pressures experienced (this is counterintuitive given the free stream velocity was 39 km/s during peak heat flux).
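The standoff rule of thumb and the travel-time estimate above can be reproduced in a few lines. This is only a sketch: the 0.14 factor is the quoted rule of thumb, and assuming the gas crosses the gap at the free stream velocity is the text's simplification, not a flow solution.

```python
def standoff_distance(nose_radius_m: float) -> float:
    """Approximate shock wave standoff distance (rule of thumb: 0.14 * Rn)."""
    return 0.14 * nose_radius_m

def travel_time(nose_radius_m: float, freestream_velocity_ms: float) -> float:
    """Crude shock-to-stagnation-point travel time, taking the gas speed
    to be the free stream velocity (the text's simplification)."""
    return standoff_distance(nose_radius_m) / freestream_velocity_ms

# Worked example from the text: 7.8 km/s entry, 1 m nose radius.
t_us = travel_time(1.0, 7800.0) * 1e6
print(f"standoff ~ {standoff_distance(1.0):.2f} m, travel time ~ {t_us:.0f} us")
# ~0.14 m and ~18 microseconds, matching the figures quoted above
```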
Determining the thermodynamic state of the stagnation point is more difficult under an equilibrium gas model than a perfect gas model. Under a perfect gas model, the ratio of specific heats (also called isentropic exponent, adiabatic index, gamma, or kappa) is assumed to be constant along with the gas constant. For a real gas, the ratio of specific heats varies significantly with temperature. Under a perfect gas model there is an elegant set of equations for determining thermodynamic state along a constant-entropy streamline called the isentropic chain. For a real gas, the isentropic chain is unusable and a Mollier diagram would be used instead for manual calculation. However, graphical solution with a Mollier diagram is now considered obsolete, with modern heat shield designers using computer programs based upon a digital lookup table (another form of Mollier diagram) or a chemistry-based thermodynamics program. The chemical composition of a gas in equilibrium with fixed pressure and temperature can be determined through the Gibbs free energy method. Gibbs free energy is simply the total enthalpy of the gas minus its total entropy times temperature. A chemical equilibrium program normally does not require chemical formulas or reaction-rate equations. The program works by preserving the original elemental abundances specified for the gas and varying the different molecular combinations of the elements through numerical iteration until the lowest possible Gibbs free energy is calculated (a Newton–Raphson method is the usual numerical scheme). The database for a Gibbs free energy program comes from spectroscopic data used in defining partition functions. Among the best equilibrium codes in existence is the program Chemical Equilibrium with Applications (CEA), which was written by Bonnie J. McBride and Sanford Gordon at NASA Lewis (now renamed "NASA Glenn Research Center"). Other names for CEA are the "Gordon and McBride Code" and the "Lewis Code".
CEA is quite accurate up to 10,000 K for planetary atmospheric gases, but unusable beyond 20,000 K (double ionization is not modelled). CEA can be downloaded from the Internet along with full documentation and will compile on Linux under the G77 Fortran compiler.
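For contrast with the real-gas machinery above, the perfect-gas isentropic chain reduces to a few closed-form ratios. The sketch below assumes a calorically perfect gas with constant gamma, which is precisely the assumption that fails for equilibrium air:

```python
def isentropic_ratios(mach: float, gamma: float = 1.4):
    """Stagnation-to-static temperature, pressure, and density ratios along
    a constant-entropy streamline for a calorically perfect gas."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    rho_ratio = t_ratio ** (1.0 / (gamma - 1.0))
    return t_ratio, p_ratio, rho_ratio

t0_t, p0_p, rho0_rho = isentropic_ratios(2.0)
print(f"M=2: T0/T = {t0_t:.3f}, p0/p = {p0_p:.3f}, rho0/rho = {rho0_rho:.3f}")
# T0/T = 1.800, p0/p = 7.824, rho0/rho = 4.347
```

For a real gas these ratios become temperature-dependent, which is why a Mollier diagram or digital lookup table replaces the closed forms.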
Real (non-equilibrium) gas model
A non-equilibrium real gas model is the most accurate model of a shock layer's gas physics, but is more difficult to solve than an equilibrium model. The simplest non-equilibrium model is the Lighthill–Freeman model developed in 1958. The Lighthill–Freeman model initially assumes a gas made up of a single diatomic species subject to only one chemical reaction and its reverse; e.g., N2 = N + N and N + N = N2 (dissociation and recombination). Because of its simplicity, the Lighthill–Freeman model is a useful pedagogical tool, but is too simple for modelling non-equilibrium air. Air is typically assumed to have a mole fraction composition of 0.7812 molecular nitrogen, 0.2095 molecular oxygen and 0.0093 argon. The simplest real gas model for air is the five-species model, which is based upon N2, O2, NO, N, and O. The five-species model assumes no ionization and ignores trace species like carbon dioxide.
When running a Gibbs free energy equilibrium program, the iterative process from the originally specified molecular composition to the final calculated equilibrium composition is essentially random and not time accurate. With a non-equilibrium program, the computation process is time accurate and follows a solution path dictated by chemical and reaction rate formulas. The five-species model has 17 chemical reactions (34 when counting the reverse reactions). The Lighthill–Freeman model is based upon a single ordinary differential equation and one algebraic equation. The five-species model is based upon 5 ordinary differential equations and 17 algebraic equations. Because the 5 ordinary differential equations are tightly coupled, the system is numerically "stiff" and difficult to solve. The five-species model is only usable for entry from low Earth orbit, where entry velocity is approximately 7.8 km/s. For lunar return entry of 11 km/s, the shock layer contains a significant amount of ionized nitrogen and oxygen. The five-species model is no longer accurate and a twelve-species model must be used instead.
Atmospheric entry interface velocities on a Mars–Earth trajectory are on the order of 12 km/s.
Modeling high-speed Mars atmospheric entry—which involves a carbon dioxide, nitrogen and argon atmosphere—is even more complex requiring a 19-species model.
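The contrast between the equilibrium and time-accurate approaches can be illustrated with the Lighthill–Freeman model described above. In the sketch below, the characteristic temperature and density are Lighthill's published values for nitrogen, but the rate constant and the flow conditions are invented round numbers, so only the qualitative behavior (time-accurate relaxation toward the equilibrium atom fraction) is meaningful:

```python
import math

# Lighthill-Freeman ideal-dissociating-gas sketch for N2 = N + N.
THETA_D = 113200.0   # characteristic dissociation temperature for N2, K
RHO_D = 1.3e5        # characteristic density (~130 g/cm^3), kg/m^3
K_RATE = 1.0e12      # assumed overall rate scale, 1/s (illustrative only)

def alpha_rate(alpha: float, temp_k: float, rho: float) -> float:
    """d(alpha)/dt for atom mass fraction alpha:
    dissociation (forward) minus recombination (reverse)."""
    return K_RATE * ((1.0 - alpha) * math.exp(-THETA_D / temp_k)
                     - (rho / RHO_D) * alpha ** 2)

# Time-accurate forward-Euler march at fixed (assumed) temperature and density.
temp_k, rho, alpha, dt = 6000.0, 0.01, 0.0, 1.0e-6
for _ in range(200000):
    alpha += dt * alpha_rate(alpha, temp_k, rho)

# Closed-form equilibrium value (where the rate is zero), for comparison.
e = math.exp(-THETA_D / temp_k)
r = rho / RHO_D
alpha_eq = (-e + math.sqrt(e * e + 4.0 * r * e)) / (2.0 * r)
print(f"integrated alpha = {alpha:.4f}, equilibrium alpha = {alpha_eq:.4f}")
```

A real five-species solver couples equations like this one for every reaction, which is what makes the system stiff.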
An important aspect of modelling non-equilibrium real gas effects is radiative heat flux. If a vehicle is entering an atmosphere at very high speed (hyperbolic trajectory, lunar return) and has a large nose radius, then radiative heat flux can dominate TPS heating. Radiative heat flux during entry into an air or carbon dioxide atmosphere typically comes from asymmetric diatomic molecules; e.g., cyanogen (CN), carbon monoxide, nitric oxide (NO), singly ionized molecular nitrogen, etc. These molecules are formed by the shock wave dissociating ambient atmospheric gas followed by recombination within the shock layer into new molecular species. The newly formed diatomic molecules initially have a very high vibrational temperature that efficiently transforms the vibrational energy into radiant energy; i.e., radiative heat flux. The whole process takes place in less than a millisecond, which makes modelling a challenge. The experimental measurement of radiative heat flux (typically done with shock tubes) along with theoretical calculation through the unsteady Schrödinger equation are among the more esoteric aspects of aerospace engineering. Most of the aerospace research work related to understanding radiative heat flux was done in the 1960s, but was largely discontinued after conclusion of the Apollo Program. Radiative heat flux in air was just sufficiently understood to ensure Apollo's success. However, radiative heat flux in carbon dioxide (Mars entry) is still barely understood and will require major research.
Frozen gas model
The frozen gas model describes a special case of a gas that is not in equilibrium. The name "frozen gas" can be misleading. A frozen gas is not "frozen" like ice is frozen water. Rather, a frozen gas is "frozen" in time (all chemical reactions are assumed to have stopped). Chemical reactions are normally driven by collisions between molecules. If gas pressure is slowly reduced such that chemical reactions can continue, then the gas can remain in equilibrium. However, it is possible for gas pressure to be so suddenly reduced that almost all chemical reactions stop. For that situation the gas is considered frozen.
The distinction between equilibrium and frozen is important because it is possible for a gas such as air to have significantly different properties (speed of sound, viscosity, etc.) for the same thermodynamic state; e.g., pressure and temperature. Frozen gas can be a significant issue in the wake behind an entry vehicle. During reentry, free stream air is compressed to high temperature and pressure by the entry vehicle's shock wave. Non-equilibrium air in the shock layer is then transported past the entry vehicle's leading side into a region of rapidly expanding flow that causes freezing. The frozen air can then be entrained into a trailing vortex behind the entry vehicle. Correctly modelling the flow in the wake of an entry vehicle is very difficult. Thermal protection system (TPS) heating in the vehicle's afterbody is usually not very high, but the geometry and unsteadiness of the vehicle's wake can significantly influence aerodynamics (pitching moment) and particularly dynamic stability.
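The practical consequence (two gases at the same temperature with different properties) can be seen in the speed of sound, a = sqrt(gamma × R_specific × T). In the sketch below, the frozen values (undissociated air) are standard; the "reacting" effective gamma and molar mass are assumed round numbers for illustration, not tabulated equilibrium-air data:

```python
import math

R_UNIVERSAL = 8.314  # J/(mol*K)

def speed_of_sound(gamma: float, molar_mass_kg_mol: float, temp_k: float) -> float:
    """a = sqrt(gamma * R_specific * T) for a perfect gas."""
    return math.sqrt(gamma * (R_UNIVERSAL / molar_mass_kg_mol) * temp_k)

T = 5000.0  # same temperature for both states
# Frozen: composition locked at undissociated air (gamma 1.4, M = 28.9 g/mol).
a_frozen = speed_of_sound(1.4, 0.0289, T)
# Reacting: partially dissociated air; effective gamma and molar mass assumed.
a_reacting = speed_of_sound(1.2, 0.0240, T)
print(f"frozen: {a_frozen:.0f} m/s, reacting: {a_reacting:.0f} m/s")
```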
Thermal protection systems
A thermal protection system, or TPS, is the barrier that protects a spacecraft during the searing heat of atmospheric reentry. Multiple approaches for the thermal protection of spacecraft are in use, among them ablative heat shields, passive cooling, and active cooling of spacecraft surfaces. In general they can be divided into two categories: ablative TPS and reusable TPS.
Ablative TPS is required when a spacecraft reaches relatively low altitude before slowing down. Spacecraft like the Space Shuttle are designed to slow down at high altitude so that they can use a reusable TPS (see Space Shuttle thermal protection system).
Thermal protection systems are tested in high-enthalpy ground-test facilities or plasma wind tunnels that reproduce the combination of high enthalpy and high stagnation pressure using induction plasma or DC plasma.
Ablative
The ablative heat shield functions by lifting the hot shock layer gas away from the heat shield's outer wall (creating a cooler boundary layer). The boundary layer comes from blowing of gaseous reaction products from the heat shield material and provides protection against all forms of heat flux. The overall process of reducing the heat flux experienced by the heat shield's outer wall by way of a boundary layer is called blockage. Ablation occurs at two levels in an ablative TPS: the outer surface of the TPS material chars, melts, and sublimes, while the bulk of the TPS material undergoes pyrolysis and expels product gases. The gas produced by pyrolysis is what drives blowing and causes blockage of convective and catalytic heat flux. Pyrolysis can be measured in real time using thermogravimetric analysis, so that the ablative performance can be evaluated. Ablation can also provide blockage against radiative heat flux by introducing carbon into the shock layer thus making it optically opaque. Radiative heat flux blockage was the primary thermal protection mechanism of the Galileo Probe TPS material (carbon phenolic).
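A common first-order way to size such an ablator is the "effective heat of ablation" Q*, which lumps pyrolysis, blowing, and blockage into one empirical number: the mass-loss rate is roughly the incident heat flux divided by Q*. The values below are assumed round numbers for illustration, not data for carbon phenolic or any other real material:

```python
# First-order ablation sizing sketch using an effective heat of ablation Q*.
# All three inputs are assumed round numbers, purely for illustration.
q_wall = 1.0e7     # net heat flux into the ablator, W/m^2 (= 1 kW/cm^2)
q_star = 3.0e7     # assumed effective heat of ablation, J/kg
density = 1400.0   # assumed ablator density, kg/m^3

mdot = q_wall / q_star       # mass loss per unit area, kg/(m^2*s)
recession = mdot / density   # surface recession rate, m/s
print(f"mass loss ~ {mdot:.2f} kg/m^2/s, recession ~ {recession * 1e3:.2f} mm/s")
```

Integrated over the heat pulse, the recession plus an insulation margin gives a first estimate of the required TPS thickness.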
Early research on ablation technology in the USA was centered at NASA's Ames Research Center located at Moffett Field, California. Ames Research Center was ideal, since it had numerous wind tunnels capable of generating varying wind velocities. Initial experiments typically mounted a mock-up of the ablative material to be analyzed within a hypersonic wind tunnel. Testing of ablative materials occurs at the Ames Arc Jet Complex. Many spacecraft thermal protection systems have been tested in this facility, including the Apollo, space shuttle, and Orion heat shield materials.
Carbon phenolic
Carbon phenolic was originally developed as a rocket nozzle throat material (used in the Space Shuttle Solid Rocket Booster) and for reentry-vehicle nose tips.
The thermal conductivity of a particular TPS material is usually proportional to the material's density. Carbon phenolic is a very effective ablative material, but also has high density which is undesirable.
The NASA Galileo Probe used carbon phenolic for its TPS material.
If the heat flux experienced by an entry vehicle is insufficient to cause pyrolysis then the TPS material's conductivity could allow heat flux conduction into the TPS bondline material thus leading to TPS failure. Consequently, for entry trajectories causing lower heat flux, carbon phenolic is sometimes inappropriate and lower-density TPS materials such as the following examples can be better design choices:
Super light-weight ablator
SLA in SLA-561V stands for super light-weight ablator. SLA-561V is a proprietary ablative made by Lockheed Martin that has been used as the primary TPS material on all of the 70° sphere-cone entry vehicles sent by NASA to Mars other than the Mars Science Laboratory (MSL). SLA-561V begins significant ablation at a heat flux of approximately 110 W/cm2, but will fail for heat fluxes greater than 300 W/cm2. The MSL aeroshell TPS is currently designed to withstand a peak heat flux of 234 W/cm2. The peak heat flux experienced by the Viking 1 aeroshell which landed on Mars was 21 W/cm2. For Viking 1, the TPS acted as a charred thermal insulator and never experienced significant ablation. Viking 1 was the first Mars lander and based upon a very conservative design. The Viking aeroshell had a base diameter of 3.54 meters (the largest used on Mars until Mars Science Laboratory). SLA-561V is applied by packing the ablative material into a honeycomb core that is pre-bonded to the aeroshell's structure thus enabling construction of a large heat shield.
Phenolic-impregnated carbon ablator
Phenolic-impregnated carbon ablator (PICA), a carbon fiber preform impregnated in phenolic resin, is a modern TPS material and has the advantages of low density (much lighter than carbon phenolic) coupled with efficient ablative ability at high heat flux. It is a good choice for ablative applications such as high-peak-heating conditions found on sample-return missions or lunar-return missions. PICA's thermal conductivity is lower than other high-heat-flux-ablative materials, such as conventional carbon phenolics.
PICA was patented by NASA Ames Research Center in the 1990s and was the primary TPS material for the Stardust aeroshell. The Stardust sample-return capsule was the fastest man-made object ever to reenter Earth's atmosphere, at 28,000 mph (ca. 12.5 km/s) at 135 km altitude. This was faster than the Apollo mission capsules and 70% faster than the Shuttle. PICA was critical for the viability of the Stardust mission, which returned to Earth in 2006. Stardust's heat shield (0.81 m base diameter) was made of one monolithic piece sized to withstand a nominal peak heating rate of 1.2 kW/cm2. A PICA heat shield was also used for the Mars Science Laboratory entry into the Martian atmosphere.
PICA-X
An improved and easier to produce version called PICA-X was developed by SpaceX in 2006–2010 for the Dragon space capsule. The first reentry test of a PICA-X heat shield was on the Dragon C1 mission on 8 December 2010. The PICA-X heat shield was designed, developed and fully qualified by a small team of a dozen engineers and technicians in less than four years.<ref name="N+SX_picaX">
{{cite web
|last=Chambers
|first=Andrew
|title=NASA + SpaceX Work Together
|url=http://www.nasa.gov/offices/oce/appel/ask/issues/40/40s_space-x_prt.htm
|publisher=NASA
|access-date=2011-02-16
|author2=Dan Rasky
|date=2010-11-14
|quote=SpaceX undertook the design and manufacture of the reentry heat shield; it brought speed and efficiency that allowed the heat shield to be designed, developed, and qualified in less than four years.
|url-status=dead
|archive-url=https://web.archive.org/web/20110416170908/http://www.nasa.gov/offices/oce/appel/ask/issues/40/40s_space-x_prt.htm
|archive-date=2011-04-16
}}</ref>
PICA-X is ten times less expensive to manufacture than the NASA PICA heat shield material.
PICA-3
A second enhanced version of PICA—called PICA-3—was developed by SpaceX during the mid-2010s. It was first flight tested on the Crew Dragon spacecraft during its 2019 flight demonstration mission and put into regular service on that spacecraft in 2020.
HARLEM
PICA and most other ablative TPS materials are either proprietary or classified, with formulations and manufacturing processes not disclosed in the open literature. This limits the ability of researchers to study these materials and hinders the development of thermal protection systems. Thus, the High Enthalpy Flow Diagnostics Group (HEFDiG) at the University of Stuttgart has developed an open carbon-phenolic ablative material, called the HEFDiG Ablation-Research Laboratory Experiment Material (HARLEM), from commercially available materials. HARLEM is prepared by impregnating a preform of a carbon fiber porous monolith (such as Calcarb rigid carbon insulation) with a solution of resole phenolic resin and polyvinylpyrrolidone in ethylene glycol, heating to polymerize the resin and then removing the solvent under vacuum. The resulting material is cured and machined to the desired shape.
SIRCA
Silicone-impregnated reusable ceramic ablator (SIRCA) was also developed at NASA Ames Research Center and was used on the Backshell Interface Plate (BIP) of the Mars Pathfinder and Mars Exploration Rover (MER) aeroshells. The BIP was at the attachment points between the aeroshell's backshell (also called the afterbody or aft cover) and the cruise ring (also called the cruise stage). SIRCA was also the primary TPS material for the unsuccessful Deep Space 2 (DS/2) Mars impactor probes with their aeroshells. SIRCA is a monolithic, insulating material that can provide thermal protection through ablation. It is the only TPS material that can be machined to custom shapes and then applied directly to the spacecraft. There is no post-processing, heat treating, or additional coating required (unlike Space Shuttle tiles). Since SIRCA can be machined to precise shapes, it can be applied as tiles, leading edge sections, full nose caps, or in any number of custom shapes or sizes. SIRCA has been demonstrated in backshell interface applications, but not yet as a forebody TPS material.
AVCOAT
AVCOAT is a NASA-specified ablative heat shield, a glass-filled epoxy–novolac system.
NASA originally used it for the Apollo command module in the 1960s, and then utilized the material for its next-generation, beyond-low-Earth-orbit Orion crew module, which first flew in a December 2014 test and then operationally in November 2022. The Avcoat to be used on Orion has been reformulated to meet environmental legislation passed since the end of Apollo.
Thermal soak
Thermal soak is a part of almost all TPS schemes. For example, an ablative heat shield loses most of its thermal protection effectiveness when the outer wall temperature drops below the minimum necessary for pyrolysis. From that time to the end of the heat pulse, heat from the shock layer convects into the heat shield's outer wall and would eventually conduct to the payload. This outcome can be prevented by ejecting the heat shield (with its heat soak) prior to the heat conducting to the inner wall.
Refractory insulation
Refractory insulation keeps the heat in the outermost layer of the spacecraft surface, where it is conducted away by the air. The temperature of the surface rises to incandescent levels, so the material must have a very high melting point, and the material must also exhibit very low thermal conductivity. Materials with these properties tend to be brittle, delicate, and difficult to fabricate in large sizes, so they are generally fabricated as relatively small tiles that are then attached to the structural skin of the spacecraft. There is a tradeoff between toughness and thermal conductivity: less conductive materials are generally more brittle. The space shuttle used multiple types of tiles. Tiles are also used on the Boeing X-37, Dream Chaser, and Starship's upper stage.
Because insulation cannot be perfect, some heat energy is stored in the insulation and in the underlying material ("thermal soaking") and must be dissipated after the spacecraft exits the high-temperature flight regime. Some of this heat will re-radiate through the surface or will be carried off the surface by convection, but some will heat the spacecraft structure and interior, which may require active cooling after landing.
Typical Space Shuttle TPS tiles (LI-900) have remarkable thermal protection properties. An LI-900 tile exposed to a temperature of 1,000 K on one side will remain merely warm to the touch on the other side. However, they are relatively brittle and break easily, and cannot survive in-flight rain.
Passively cooled
In some early ballistic missile RVs (e.g., the Mk-2 and the sub-orbital Mercury spacecraft), radiatively cooled TPS were used to initially absorb heat flux during the heat pulse, and, then, after the heat pulse, radiate and convect the stored heat back into the atmosphere. However, the earlier version of this technique required a considerable quantity of metal TPS (e.g., titanium, beryllium, copper, etc.). Modern designers prefer to avoid this added mass by using ablative and thermal-soak TPS instead.
Thermal protection systems relying on emissivity use high emissivity coatings (HECs) to facilitate radiative cooling, while an underlying porous ceramic layer serves to protect the structure from high surface temperatures. High thermally stable emissivity values coupled with low thermal conductivity are key to the functionality of such systems.
Radiatively cooled TPS can be found on modern entry vehicles, but reinforced carbon–carbon (RCC) (also called carbon–carbon) is normally used instead of metal. RCC was the TPS material on the Space Shuttle's nose cone and wing leading edges, and was also proposed as the leading-edge material for the X-33. Carbon is the most refractory material known, with a one-atmosphere sublimation temperature of roughly 3,900 K for graphite. This high temperature made carbon an obvious choice as a radiatively cooled TPS material. Disadvantages of RCC are that it is currently expensive to manufacture, is heavy, and lacks robust impact resistance.
Some high-velocity aircraft, such as the SR-71 Blackbird and Concorde, deal with heating similar to that experienced by spacecraft, but at much lower intensity, and for hours at a time. Studies of the SR-71's titanium skin revealed that the metal structure was restored to its original strength through annealing due to aerodynamic heating. In the case of Concorde, the aluminium nose was permitted to reach a maximum operating temperature of 127 °C (approximately 180 °C warmer than the normally sub-zero ambient air); the metallurgical implications (loss of temper) that would be associated with a higher peak temperature were the most significant factors determining the top speed of the aircraft.
A radiatively cooled TPS for an entry vehicle is often called a hot-metal TPS. Early TPS designs for the Space Shuttle called for a hot-metal TPS based upon the nickel superalloy René 41 and titanium shingles. This Shuttle TPS concept was rejected, because it was believed a silica tile-based TPS would involve lower development and manufacturing costs. A nickel superalloy-shingle TPS was again proposed for the unsuccessful X-33 single-stage-to-orbit (SSTO) prototype.
Recently, newer radiatively cooled TPS materials have been developed that could be superior to RCC. Known as ultra-high-temperature ceramics (UHTCs), they were developed for the prototype vehicle Slender Hypervelocity Aerothermodynamic Research Probe (SHARP). These TPS materials are based on zirconium diboride and hafnium diboride. SHARP TPS have suggested performance improvements allowing for sustained Mach 7 flight at sea level, Mach 11 flight at high altitudes, and significant improvements for vehicles designed for continuous hypersonic flight. SHARP TPS materials enable sharp leading edges and nose cones to greatly reduce drag for airbreathing combined-cycle-propelled spaceplanes and lifting bodies. SHARP materials have exhibited effective TPS characteristics at temperatures above 2,000 °C, with melting points over 3,000 °C. They are structurally stronger than RCC, and, thus, do not require structural reinforcement with materials such as Inconel. SHARP materials are extremely efficient at reradiating absorbed heat, thus eliminating the need for additional TPS behind and between the SHARP materials and conventional vehicle structure. NASA initially funded (and discontinued) a multi-phase R&D program through the University of Montana in 2001 to test SHARP materials on test vehicles.
Actively cooled
Various advanced reusable spacecraft and hypersonic aircraft designs have been proposed to employ heat shields made from temperature-resistant metal alloys that incorporate a refrigerant or cryogenic fuel circulating through them.
Such a TPS concept was proposed for the X-30 National Aerospace Plane (NASP) in the mid-1980s. The NASP was supposed to have been a scramjet-powered hypersonic aircraft, but it failed in development.
In 2005 and 2012, two unmanned lifting body craft with actively cooled hulls were launched as a part of the German Sharp Edge Flight Experiment (SHEFEX).
In early 2019, SpaceX was developing an actively cooled heat shield for its Starship spacecraft, in which part of the thermal protection system would be a transpirationally cooled outer-skin design for the reentering spaceship.<ref>SpaceX CEO Elon Musk explains Starship's "transpiring" steel heat shield in Q&A, Eric Ralph, Teslarati News, 23 January 2019, accessed 23 March 2019.</ref> However, SpaceX abandoned this approach in favor of a modern version of heat shield tiles later in 2019.
The Stoke Space Nova second stage, announced in October 2023 and not yet flying, uses a regeneratively cooled (by liquid hydrogen) heat shield.
In the early 1960s various TPS systems were proposed to use water or other cooling liquid sprayed into the shock layer, or passed through channels in the heat shield. Advantages included the possibility of more all-metal designs which would be cheaper to develop, be more rugged, and eliminate the need for classified and unknown technology. The disadvantages are increased weight and complexity, and lower reliability. The concept has never been flown, but a similar technology (the plug nozzle) did undergo extensive ground testing.
Propulsive entry
Fuel permitting, nothing prevents a vehicle from entering the atmosphere with a retrograde engine burn, which has the double effect of slowing the vehicle down much faster than atmospheric drag alone would, and forcing the compressed hot air away from the vehicle's body. During reentry, the first stage of the SpaceX Falcon 9 performs an entry burn to rapidly decelerate from its initial hypersonic speed.
High-drag suborbital entry
In 2004, aircraft designer Burt Rutan demonstrated the feasibility of a shape-changing airfoil for reentry with the sub-orbital SpaceShipOne. The wings on this craft rotate upward into the feathered configuration that provides a shuttlecock effect. Thus SpaceShipOne achieves much more aerodynamic drag on reentry while not experiencing significant thermal loads.
The configuration increases drag: the craft is less streamlined, and more atmospheric gas particles hit the spacecraft at higher altitudes than would otherwise be the case. The aircraft thus slows down more in the higher atmospheric layers, which is the key to efficient reentry. Secondly, the aircraft automatically orients itself in this state to a high-drag attitude.
However, the velocity attained by SpaceShipOne prior to reentry is much lower than that of an orbital spacecraft, and engineers, including Rutan, recognize that a feathered reentry technique is not suitable for return from orbit.
On 4 May 2011, the first test of the feathering mechanism on SpaceShipTwo was made during a glide flight after release from White Knight Two. Premature deployment of the feathering system was responsible for the 2014 VSS Enterprise crash, in which the aircraft disintegrated, killing the co-pilot.
The feathered reentry was first described by Dean Chapman of NACA in 1958. In the section of his report on Composite Entry, Chapman described a solution to the problem using a high-drag device.
Inflatable heat shield entry
Deceleration for atmospheric reentry, especially for higher-speed Mars-return missions, benefits from maximizing "the drag area of the entry system. The larger the diameter of the aeroshell, the bigger the payload can be." An inflatable aeroshell provides one alternative for enlarging the drag area with a low-mass design.
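The payoff of a larger drag area shows up directly in the ballistic coefficient, beta = m / (Cd × A): lowering beta moves deceleration to higher, thinner atmosphere. The mass, drag coefficient, and diameters below are assumed round numbers for illustration, not figures for any flown vehicle:

```python
import math

def ballistic_coefficient(mass_kg: float, cd: float, diameter_m: float) -> float:
    """beta = m / (Cd * A) for a circular aeroshell of the given diameter."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return mass_kg / (cd * area)

rigid = ballistic_coefficient(1000.0, 1.5, 2.5)      # conventional aeroshell
inflated = ballistic_coefficient(1000.0, 1.5, 10.0)  # inflated to 10 m
print(f"rigid: {rigid:.0f} kg/m^2, inflated: {inflated:.0f} kg/m^2")
# quadrupling the diameter cuts beta by a factor of 16
```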
Russia
Such an inflatable shield/aerobrake was designed for the penetrators of the Mars 96 mission. Since that mission failed due to a launcher malfunction, NPO Lavochkin and DASA/ESA designed a follow-on mission for Earth orbit. The Inflatable Reentry and Descent Technology (IRDT) demonstrator was launched on Soyuz-Fregat on 8 February 2000. The inflatable shield was designed as a cone with two stages of inflation. Although the second stage of the shield failed to inflate, the demonstrator survived the orbital reentry and was recovered.<ref>Inflatable Reentry and Descent Technology (IRDT) Factsheet, ESA, September 2005.</ref> The subsequent missions flown on the Volna rocket failed due to launcher failure.
NASA IRVE
NASA launched an inflatable heat shield experimental spacecraft on 17 August 2009 with the successful first test flight of the Inflatable Re-entry Vehicle Experiment (IRVE). The heat shield had been vacuum-packed into a payload shroud and launched on a Black Brant 9 sounding rocket from NASA's Wallops Flight Facility on Wallops Island, Virginia. "Nitrogen inflated the heat shield, made of several layers of silicone-coated [Kevlar] fabric, to a mushroom shape in space several minutes after liftoff." After the rocket reached apogee, the vehicle began its descent to supersonic speed. Less than a minute later the shield was released from its cover to inflate. The inflation of the shield took less than 90 seconds.
NASA HIAD
Following the success of the initial IRVE experiments, NASA developed the concept into the more ambitious Hypersonic Inflatable Aerodynamic Decelerator (HIAD). The current design is shaped like a shallow cone, with the structure built up as a stack of circular inflated tubes of gradually increasing major diameter. The forward (convex) face of the cone is covered with a flexible thermal protection system robust enough to withstand the stresses of atmospheric entry (or reentry).
In 2012, a HIAD was tested as Inflatable Reentry Vehicle Experiment 3 (IRVE-3) using a sub-orbital sounding rocket, and worked.
See also Low-Density Supersonic Decelerator, a NASA project with tests in 2014 and 2015 of a 6 m diameter SIAD-R.
LOFTID
A 6-meter inflatable reentry vehicle, Low-Earth Orbit Flight Test of an Inflatable Decelerator (LOFTID), was launched in November 2022, inflated in orbit, reentered faster than Mach 25, and was successfully recovered on November 10.
Entry vehicle design considerations
There are four critical parameters considered when designing a vehicle for atmospheric entry:
Peak heat flux
Heat load
Peak deceleration
Peak dynamic pressure
Peak heat flux and dynamic pressure select the TPS material. Heat load selects the thickness of the TPS material stack. Peak deceleration is of major importance for crewed missions. The upper limit for crewed return to Earth from low Earth orbit (LEO) or lunar return is 10g. For Martian atmospheric entry after long exposure to zero gravity, the upper limit is 4g. Peak dynamic pressure can also influence the selection of the outermost TPS material if spallation is an issue. The reentry vehicle's design parameters may be assessed through numerical simulation, including simplifications of the vehicle's dynamics, such as the planar reentry equations and heat flux correlations.
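As an example of the heat flux correlations mentioned above, the Sutton–Graves stagnation-point estimate is q = k × sqrt(rho / rn) × v^3. The Earth-air constant k ≈ 1.7415e-4 (SI inputs, q in W/m^2) is the commonly quoted value, but the trajectory point below (density, velocity, nose radius) is an assumed round-number example, not any mission's data:

```python
import math

K_EARTH = 1.7415e-4  # commonly quoted Sutton-Graves constant for Earth air

def sutton_graves(rho_kg_m3: float, v_ms: float, nose_radius_m: float) -> float:
    """Approximate stagnation-point convective heat flux in W/m^2."""
    return K_EARTH * math.sqrt(rho_kg_m3 / nose_radius_m) * v_ms ** 3

# Assumed trajectory point: ~60 km altitude density, LEO-entry speed, 1 m nose.
q = sutton_graves(3.0e-4, 7000.0, 1.0)
print(f"stagnation-point heat flux ~ {q / 1.0e4:.0f} W/cm^2")
# on the order of 100 W/cm^2
```

Sweeping such a correlation along a candidate trajectory gives the peak heat flux and, integrated in time, the heat load.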
Starting from the principle of conservative design, the engineer typically considers two worst-case trajectories, the undershoot and overshoot trajectories. The overshoot trajectory is typically defined as the shallowest-allowable entry velocity angle prior to atmospheric skip-off. The overshoot trajectory has the highest heat load and sets the TPS thickness. The undershoot trajectory is defined by the steepest allowable trajectory. For crewed missions the steepest entry angle is limited by the peak deceleration. The undershoot trajectory also has the highest peak heat flux and dynamic pressure. Consequently, the undershoot trajectory is the basis for selecting the TPS material. There is no "one size fits all" TPS material. A TPS material that is ideal for high heat flux may be too conductive (too dense) for a long duration heat load. A low-density TPS material might lack the tensile strength to resist spallation if the dynamic pressure is too high. A TPS material can perform well for a specific peak heat flux, but fail catastrophically for the same peak heat flux if the wall pressure is significantly increased (this happened with NASA's R-4 test spacecraft). Older TPS materials tend to be more labor-intensive and expensive to manufacture compared to modern materials. However, modern TPS materials often lack the flight history of the older materials (an important consideration for a risk-averse designer).
Based upon Allen and Eggers' discovery, maximum aeroshell bluntness (maximum drag) yields minimum TPS mass. Maximum bluntness (minimum ballistic coefficient) also yields a minimal terminal velocity at maximum altitude (very important for Mars EDL, but detrimental for military RVs). However, there is an upper limit to bluntness imposed by aerodynamic stability considerations based upon shock wave detachment. A shock wave will remain attached to the tip of a sharp cone if the cone's half-angle is below a critical value. This critical half-angle can be estimated using perfect gas theory (this specific aerodynamic instability occurs below hypersonic speeds). For a nitrogen atmosphere (Earth or Titan), the maximum allowed half-angle is approximately 60°. For a carbon dioxide atmosphere (Mars or Venus), the maximum allowed half-angle is approximately 70°. After shock wave detachment, an entry vehicle must carry significantly more shock-layer gas around the leading-edge stagnation point (the subsonic cap). Consequently, the aerodynamic center moves upstream, causing aerodynamic instability. It is incorrect to reapply an aeroshell design intended for Titan entry (Huygens probe in a nitrogen atmosphere) for Mars entry (Beagle 2 in a carbon dioxide atmosphere). Prior to being abandoned, the Soviet Mars lander program achieved one successful landing (Mars 3), on the second of three entry attempts (the others were Mars 2 and Mars 6). The Soviet Mars landers were based upon a 60° half-angle aeroshell design.
A 45° half-angle sphere-cone is typically used for atmospheric probes (surface landing not intended) even though TPS mass is not minimized. The rationale for a 45° half-angle is to have either aerodynamic stability from entry-to-impact (the heat shield is not jettisoned) or a short-and-sharp heat pulse followed by prompt heat shield jettison. A 45° sphere-cone design was used with the DS/2 Mars impactor and Pioneer Venus probes.
Atmospheric entry accidents
Not all atmospheric reentries have been completely successful:
Voskhod 2 – The service module failed to detach for some time, but the crew survived.
Soyuz 5 – The service module failed to detach, but the crew survived.
Apollo 15 – One of the three ringsail parachutes failed during the ocean landing, likely damaged as the spacecraft vented excess control fuel. The spacecraft was designed to land safely with only two parachutes, and the crew were uninjured.
Mars Polar Lander – Failed during EDL. The failure was believed to be the consequence of a software error. The precise cause is unknown for lack of real-time telemetry.
Space Shuttle Columbia STS-1 – A combination of launch damage, protruding gap filler, and tile installation error resulted in serious damage to the orbiter, of which the crew was only partly aware. Had the crew known the extent of the damage before attempting reentry, they would have flown the shuttle to a safe altitude and then bailed out. Nevertheless, reentry was successful, and the orbiter proceeded to a normal landing.
Space Shuttle Atlantis STS-27 – Insulation from the starboard solid rocket booster nose cap struck the orbiter during launch, causing significant tile damage. One tile, located over an aluminum mounting plate for a TACAN antenna, was dislodged completely. The antenna sustained extreme heat damage but prevented the hot gas from penetrating the vehicle body.
Genesis – The parachute failed to deploy due to a G-switch having been installed backwards (a similar error delayed parachute deployment for the Galileo Probe). Consequently, the Genesis entry vehicle crashed into the desert floor. The payload was damaged, but most scientific data were recoverable.
Soyuz TMA-11 – The Soyuz propulsion module failed to separate properly; a fallback ballistic reentry was executed that subjected the crew to severe accelerations. The crew survived.
Starship IFT-3 – The SpaceX Starship's third integrated test flight was supposed to end with a hard splashdown in the Indian Ocean. However, approximately 48.5 minutes after launch, at an altitude of 65 km, contact with the spacecraft was lost, indicating that it burned up on reentry. This was caused by excessive vehicle rolling due to clogged vents on the vehicle.
Some reentries have resulted in significant disasters:
Soyuz 1 – The attitude control system failed while still in orbit and later parachutes got entangled during the emergency landing sequence (entry, descent, and landing (EDL) failure). Lone cosmonaut Vladimir Mikhailovich Komarov died.
Soyuz 11 – During tri-module separation, a valve seal was opened by the shock, depressurizing the descent module; the crew of three asphyxiated in space minutes before reentry.
Space Shuttle Columbia STS-107 – The failure of a reinforced carbon–carbon panel on a wing leading edge caused by debris impact at launch led to breakup of the orbiter on reentry resulting in the deaths of all seven crew members.
Uncontrolled and unprotected entries
Of satellites that reenter, approximately 10–40% of an object's mass may reach the surface of the Earth. On average, about one catalogued object reenters per day.
Because the Earth's surface is predominantly water, most objects that survive reentry land in one of the world's oceans. The estimated chance that a given person would get hit and injured during their lifetime is around 1 in a trillion.
On January 24, 1978, the Soviet Kosmos 954 reentered and crashed near Great Slave Lake in the Northwest Territories of Canada. The satellite was nuclear-powered and left radioactive debris near its impact site.
On July 11, 1979, the US Skylab space station reentered and spread debris across the Australian Outback. The reentry was a major media event, largely due to the Cosmos 954 incident, but was not viewed as much of a potential disaster since it did not carry toxic nuclear or hydrazine fuel. NASA had originally hoped to use a Space Shuttle mission to either extend its life or enable a controlled reentry, but delays in the Shuttle program, plus unexpectedly high solar activity, made this impossible.
On February 7, 1991, the Soviet Salyut 7 space station, with the Kosmos 1686 module attached, reentered and scattered debris over the town of Capitán Bermúdez, Argentina. The station had been boosted to a higher orbit in August 1986 in an attempt to keep it up until 1994, but in a scenario similar to Skylab, the planned Buran shuttle was cancelled and high solar activity caused it to come down sooner than expected.
On September 7, 2011, NASA announced the impending uncontrolled reentry of the Upper Atmosphere Research Satellite and noted that there was a small risk to the public. The decommissioned satellite reentered the atmosphere on September 24, 2011, and some pieces are presumed to have crashed into the South Pacific Ocean over an extended debris field.
On April 1, 2018, the Chinese Tiangong-1 space station reentered over the Pacific Ocean, halfway between Australia and South America. The China Manned Space Engineering Office had intended to control the reentry, but lost telemetry and control in March 2017.
On May 11, 2020, the core stage of a Chinese Long March 5B (COSPAR ID 2020-027C) made an uncontrolled reentry over the Atlantic Ocean, near the West African coast. A few pieces of rocket debris reportedly survived reentry and fell over at least two villages in Ivory Coast.
On May 8, 2021, the core stage of a Chinese Long March 5B (COSPAR ID 2021-0035B) made an uncontrolled reentry just west of the Maldives in the Indian Ocean (approximately 72.47°E longitude and 2.65°N latitude). Witnesses reported rocket debris as far away as the Arabian Peninsula.
Deorbit disposal
Salyut 1, the world's first space station, was deliberately de-orbited into the Pacific Ocean in 1971 following the Soyuz 11 accident. Its successor, Salyut 6, was de-orbited in a controlled manner as well.
On June 4, 2000, the Compton Gamma Ray Observatory was deliberately de-orbited after one of its gyroscopes failed. The debris that did not burn up fell harmlessly into the Pacific Ocean. The observatory was still operational, but the failure of another gyroscope would have made de-orbiting much more difficult and dangerous. With some controversy, NASA decided in the interest of public safety that a controlled crash was preferable to letting the craft come down at random.
In 2001, the Russian Mir space station was deliberately de-orbited, and broke apart in the fashion expected by the command center during atmospheric reentry. Mir entered the Earth's atmosphere on March 23, 2001, near Nadi, Fiji, and fell into the South Pacific Ocean.
On February 21, 2008, a disabled U.S. spy satellite, USA-193, was hit by an SM-3 missile fired from a U.S. Navy cruiser off the coast of Hawaii. The satellite was inoperative, having failed to reach its intended orbit when it was launched in 2006. Due to its rapidly deteriorating orbit, it was destined for uncontrolled reentry within a month. The U.S. Department of Defense expressed concern that the fuel tank containing highly toxic hydrazine might survive reentry to reach the Earth's surface intact. Several governments, including those of Russia, China, and Belarus, protested the action as a thinly veiled demonstration of US anti-satellite capabilities. China had previously caused an international incident when it tested an anti-satellite missile in 2007.
Environmental impact
Atmospheric entry has a measurable impact on Earth's atmosphere, particularly the stratosphere.
Atmospheric entries by spacecraft accounted for 3% of all atmospheric entries as of 2021, but in a scenario in which the number of satellites launched since 2019 doubles, artificial entries would make up 40% of all entries, which would cause atmospheric aerosols to be 94% artificial. The impact of spacecraft burning up during artificial atmospheric entry differs from that of meteors due to spacecraft's generally larger size and different composition. The atmospheric pollutants produced by artificial atmospheric burn-up have been traced in the atmosphere and identified as reacting with, and possibly negatively impacting, the composition of the atmosphere and particularly the ozone layer.
As of 2022, consideration of space sustainability with regard to the atmospheric impact of reentry was only beginning to develop, and in 2024 the field was identified as suffering from "atmosphere-blindness", causing global environmental injustice. This is identified as a result of current end-of-life spacecraft management, which favors the practice of controlled reentry, mainly to prevent the dangers from uncontrolled atmospheric entries and space debris.
Suggested alternatives include the use of less polluting materials, in-orbit servicing, and potentially in-space recycling.
External links
Aerocapture Mission Analysis Tool (AMAT) provides preliminary mission analysis and simulation capability for atmospheric entry vehicles at various Solar System destinations.
Center for Orbital and Reentry Debris Studies (The Aerospace Corporation)
Apollo Atmospheric Entry Phase, 1968, NASA Mission Planning and Analysis Division, Project Apollo. video (25:14).
Buran's heat shield
Encyclopedia Astronautica article on the history of space rescue crafts, including some reentry craft designs.
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. In many cases, a thread is a component of a process.
The multiple threads of a given process may be executed concurrently (via multithreading capabilities), sharing resources such as memory, while different processes do not share these resources. In particular, the threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time.
The implementation of threads and processes differs between operating systems.
History
Threads made an early appearance under the name of "tasks" in IBM's batch processing operating system, OS/360, in 1967. OS/360 offered users three configurations of its control program, of which Multiprogramming with a Variable Number of Tasks (MVT) was one. Saltzer (1966) credits Victor A. Vyssotsky with the term "thread".
The use of threads in software applications became more common in the early 2000s as CPUs began to utilize multiple cores. Applications wishing to take advantage of multiple cores for performance were required to employ concurrency.
Related concepts
Scheduling can be done at the kernel level or user level, and multitasking can be done preemptively or cooperatively. This yields a variety of related concepts.
Processes
At the kernel level, a process contains one or more kernel threads, which share the process's resources, such as memory and file handles – a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel scheduling is typically uniformly done preemptively or, less commonly, cooperatively. At the user level a process such as a runtime system can itself schedule multiple threads of execution. If these do not share data, as in Erlang, they are usually analogously called processes, while if they share data they are usually called (user) threads, particularly if preemptively scheduled. Cooperatively scheduled user threads are known as fibers; different processes may schedule user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-one, many-to-many). The term "light-weight process" variously refers to user threads or to kernel mechanisms for scheduling user threads onto kernel threads.
A process is a "heavyweight" unit of kernel scheduling, as creating, destroying, and switching processes is relatively expensive. Processes own resources allocated by the operating system. Resources include memory (for both code and data), file handles, sockets, device handles, windows, and a process control block. Processes are isolated by process isolation, and do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way – see interprocess communication. Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typically preemptively multitasked, and process switching is relatively expensive, beyond basic cost of context switching, due to issues such as cache flushing (in particular, process switching changes virtual memory addressing, causing invalidation and thus flushing of an untagged translation lookaside buffer (TLB), notably on x86).
Kernel threads
A kernel thread is a "lightweight" unit of kernel scheduling. At least one kernel thread exists within each process. If multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads are preemptively multitasked if the operating system's process scheduler is preemptive. Kernel threads do not own resources except for a stack, a copy of the registers including the program counter, and thread-local storage (if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving the TLB valid). The kernel can assign one or more software threads to each core in a CPU (a core supporting hardware multithreading can execute multiple software threads concurrently), and can swap out threads that get blocked. However, kernel threads take much longer than user threads to be swapped.
User threads
Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit from multi-processor machines (M:N model). User threads as implemented by virtual machines are also called green threads.
As user thread implementations are typically entirely in userspace, context switching between user threads within the same process is extremely efficient because it does not require any interaction with the kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload.
However, the use of blocking system calls in user threads (as opposed to kernel threads) can be problematic. If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable to run until the system call returns. A typical example of this problem is when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation is initiated, a system call is made, and does not return until the I/O operation has been completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run, which starves other user threads and fibers in the same process from executing.
A common solution to this problem (used, in particular, by many green threads implementations) is providing an I/O API that implements an interface that blocks the calling thread, rather than the entire process, by using non-blocking I/O internally, and scheduling another user thread or fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls (in particular, using non-blocking I/O, including lambda continuations and/or async/await primitives).
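The "block the calling task, not the whole process" idea can be sketched with Python's async/await primitives: each coroutine suspends at an `await`, letting the scheduler run other coroutines while its (here simulated) I/O is in progress. The function names are illustrative:

```python
import asyncio
import time

# Each coroutine suspends at `await`, so its simulated I/O overlaps
# with the other's instead of blocking the whole program.
async def fetch(name, delay):
    await asyncio.sleep(delay)   # stand-in for a non-blocking I/O wait
    return name

async def main():
    start = time.monotonic()
    # Two 0.2 s "I/O" waits run concurrently on one scheduler.
    results = await asyncio.gather(fetch("a", 0.2), fetch("b", 0.2))
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)           # ['a', 'b']
print(elapsed < 0.35)    # overlapped (~0.2 s), not 0.4 s sequential
```

Had `fetch` instead called a blocking sleep, the two waits would have run back to back, which is exactly the starvation problem described above.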
Fibers
Fibers are an even lighter unit of scheduling which are cooperatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run, which makes their implementation much easier than kernel or user threads. A fiber can be scheduled to run in any thread in the same process. This permits applications to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Some research implementations of the OpenMP parallel programming model implement their tasks through fibers. Closely related to fibers are coroutines, with the distinction being that coroutines are a language-level construct, while fibers are a system-level construct.
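A minimal sketch of cooperative scheduling can be built from Python generators, which act as coroutine-like "fibers": each one runs until it explicitly yields, and a tiny round-robin scheduler decides who runs next. Nothing here is preempted; the names are illustrative:

```python
from collections import deque

# Each "fiber" is a generator: it runs until it explicitly yields,
# mirroring how a fiber must yield to let another fiber run.
def fiber(name, steps, log):
    for i in range(steps):
        log.append(f"{name}{i}")
        yield  # explicit yield point: hand control back to the scheduler

def run(fibers):
    ready = deque(fibers)          # round-robin ready queue
    while ready:
        f = ready.popleft()
        try:
            next(f)                # resume the fiber until its next yield
            ready.append(f)        # still alive: requeue it
        except StopIteration:
            pass                   # fiber finished: drop it

log = []
run([fiber("a", 2, log), fiber("b", 2, log)])
print(log)  # ['a0', 'b0', 'a1', 'b1'] — interleaved only at yield points
```

A fiber that never yields would monopolize this scheduler forever, which is the starvation risk of cooperative multitasking noted later in this article.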
Threads vs processes
Threads differ from traditional multitasking operating-system processes in several ways:
processes are typically independent, while threads exist as subsets of a process
processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources
processes have separate address spaces, whereas threads share their address space
processes interact only through system-provided inter-process communication mechanisms
context switching between threads in the same process typically occurs faster than context switching between processes
Systems such as Windows NT and OS/2 are said to have cheap threads and expensive processes; in other operating systems there is not so great a difference except in the cost of an address-space switch, which on some architectures (notably x86) results in a translation lookaside buffer (TLB) flush.
Advantages and disadvantages of threads vs processes include:
Lower resource consumption of threads: using threads, an application can operate using fewer resources than it would need when using multiple processes.
Simplified sharing and communication of threads: unlike processes, which require a message passing or shared memory mechanism to perform inter-process communication (IPC), threads can communicate through data, code and files they already share.
Thread crashes a process: due to threads sharing the same address space, an illegal operation performed by a thread can crash the entire process; therefore, one misbehaving thread can disrupt the processing of all the other threads in the application.
Scheduling
Preemptive vs cooperative scheduling
Operating systems schedule threads either preemptively or cooperatively. Multi-user operating systems generally favor preemptive multithreading for its finer-grained control over execution time via context switching. However, preemptive scheduling may context-switch threads at moments unanticipated by programmers, thus causing lock convoy, priority inversion, or other side-effects. In contrast, cooperative multithreading relies on threads to relinquish control of execution, thus ensuring that threads run to completion. This can cause problems if a cooperatively multitasked thread blocks by waiting on a resource or if it starves other threads by not yielding control of execution during intensive computation.
Single- vs multi-processor systems
Until the early 2000s, most desktop computers had only one single-core CPU, with no support for hardware threads, although threads were still used on such computers because switching between threads was generally still quicker than full-process context switches. In 2002, Intel added support for simultaneous multithreading to the Pentium 4 processor, under the name hyper-threading; in 2005, they introduced the dual-core Pentium D processor and AMD introduced the dual-core Athlon 64 X2 processor.
Systems with a single processor generally implement multithreading by time slicing: the central processing unit (CPU) switches between different software threads. This context switching usually occurs frequently enough that users perceive the threads or tasks as running in parallel (for popular server/desktop operating systems, the maximum time slice of a thread, when other threads are waiting, is often limited to 100–200 ms). On a multiprocessor or multi-core system, multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on a processor or core with hardware threads, separate software threads can also be executed concurrently by separate hardware threads.
Threading models
1:1 (kernel-level threading)
Threads created by the user in a 1:1 correspondence with schedulable entities in the kernel are the simplest possible threading implementation. OS/2 and Win32 used this approach from the start, while on Linux the GNU C Library implements this approach (via the NPTL or older LinuxThreads). This approach is also used by Solaris, NetBSD, FreeBSD, macOS, and iOS.
M:1 (user-level threading)
An M:1 model implies that all application-level threads map to one kernel-level scheduled entity; the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration on multithreaded processors or multi-processor computers: there is never more than one thread being scheduled at the same time. For example: if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be used. GNU Portable Threads uses user-level threading, as does State Threads.
M:N (hybrid threading)
M:N maps some number of application threads onto some number of kernel entities, or "virtual processors." This is a compromise between kernel-level ("1:1") and user-level ("N:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.
Hybrid implementation examples
Scheduler activations used by older versions of the NetBSD native POSIX threads library implementation (an M:N model as opposed to a 1:1 kernel or userspace implementation model)
Light-weight processes used by older versions of the Solaris operating system
Marcel from the PM2 project.
The OS for the Tera-Cray MTA-2
The Glasgow Haskell Compiler (GHC) for the language Haskell uses lightweight threads which are scheduled on operating system threads.
History of threading models in Unix systems
SunOS 4.x implemented light-weight processes or LWPs. NetBSD 2.x+ and DragonFly BSD implement LWPs as kernel threads (1:1 model). SunOS 5.2 through SunOS 5.8, as well as NetBSD 2 to NetBSD 4, implemented a two-level model, multiplexing one or more user-level threads on each kernel thread (M:N model). SunOS 5.9 and later, as well as NetBSD 5, eliminated user threads support, returning to a 1:1 model. FreeBSD 5 implemented the M:N model. FreeBSD 6 supported both 1:1 and M:N; users could choose which one should be used with a given program using /etc/libmap.conf. Starting with FreeBSD 7, 1:1 became the default. FreeBSD 8 no longer supports the M:N model.
Single-threaded vs multithreaded programs
In computer programming, single-threading is the processing of one instruction at a time. In the formal analysis of the variables' semantics and process state, the term single threading can be used differently to mean "backtracking within a single thread", which is common in the functional programming community.
Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multithreading can also be applied to one process to enable parallel execution on a multiprocessing system.
Multithreading libraries tend to provide a function call to create a new thread, which takes a function as a parameter. A concurrent thread is then created which starts running the passed function and ends when the function returns. The thread libraries also offer data synchronization functions.
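The typical library shape described above — a thread-creation call taking a function, with the thread ending when that function returns — looks like this in Python's `threading` module (the `worker` function and `results` dictionary are illustrative):

```python
import threading

# The thread constructor takes a function; each thread runs it and
# terminates when the function returns.
results = {}

def worker(n):
    results[n] = n * n   # each thread writes its own, distinct key

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()            # begin concurrent execution
for t in threads:
    t.join()             # wait for each thread's function to return
print(sorted(results.items()))  # [(0, 0), (1, 1), (2, 4), (3, 9)]
```

Because each thread writes a distinct key, no synchronization is needed here; the next section covers what happens when threads share mutable data.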
Threads and data synchronization
Threads in the same process share the same address space. This allows concurrently running code to couple tightly and conveniently exchange data without the overhead or complexity of an IPC. When shared between threads, however, even simple data structures become prone to race conditions if they require more than one CPU instruction to update: two threads may end up attempting to update the data structure at the same time and find it unexpectedly changing underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate.
To prevent this, threading application programming interfaces (APIs) offer synchronization primitives such as mutexes to lock data structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger a context switch. On multi-processor systems, the thread may instead poll the mutex in a spinlock. Both of these may sap performance and force processors in symmetric multiprocessing (SMP) systems to contend for the memory bus, especially if the granularity of the locking is too fine.
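The classic lost-update race — two threads interleaving a read-modify-write — and its mutex fix can be sketched in Python (the function names are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:          # mutex: the read-modify-write is now atomic
            counter += 1    # without the lock, increments can be lost

threads = [threading.Thread(target=add, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 — guaranteed only because of the lock
```

`counter += 1` compiles to a load, an add, and a store; without the lock, two threads can both load the same old value and one increment is silently lost — exactly the kind of bug that is hard to reproduce because it depends on scheduling.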
Other synchronization APIs include condition variables, critical sections, semaphores, and monitors.
Thread pools
A popular programming pattern involving threads is that of thread pools where a set number of threads are created at startup that then wait for a task to be assigned. When a new task arrives, it wakes up, completes the task and goes back to waiting. This avoids the relatively expensive thread creation and destruction functions for every task performed and takes thread management out of the application developer's hand and leaves it to a library or the operating system that is better suited to optimize thread management.
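The thread-pool pattern is available directly in Python's standard library; a fixed set of workers is created once and reused across tasks (the `task` function is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# A fixed pool of worker threads is created once and reused for every
# task, avoiding per-task thread creation and destruction.
def task(n):
    return n * n

with ThreadPoolExecutor(max_workers=3) as pool:
    # map() distributes tasks to idle workers and preserves input order.
    squares = list(pool.map(task, range(6)))
print(squares)  # [0, 1, 4, 9, 16, 25]
```

Here three workers service six tasks; sizing the pool is the library's (or tuner's) job rather than the application logic's, as the paragraph above notes.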
Multithreaded programs vs single-threaded programs pros and cons
Multithreaded applications have the following advantages vs single-threaded ones:
Responsiveness: multithreading can allow an application to remain responsive to input. In a one-thread program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. By moving such long-running tasks to a worker thread that runs concurrently with the main execution thread, it is possible for the application to remain responsive to user input while executing tasks in the background. On the other hand, in most cases multithreading is not the only way to keep a program responsive, with non-blocking I/O and/or Unix signals being available for obtaining similar results.
Parallelization: applications looking to use multicore or multi-CPU systems can use multithreading to split data and tasks into parallel subtasks and let the underlying architecture manage how the threads run, either concurrently on one core or in parallel on multiple cores. GPU computing environments like CUDA and OpenCL use the multithreading model where dozens to hundreds of threads run in parallel across data on a large number of cores. This, in turn, enables better system utilization, and (provided that synchronization costs don't eat the benefits up), can provide faster program execution.
Multithreaded applications have the following drawbacks:
Synchronization complexity and related bugs: when using shared resources typical for threaded programs, the programmer must be careful to avoid race conditions and other non-intuitive behaviors. In order for data to be correctly manipulated, threads will often need to rendezvous in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using mutexes) to prevent common data from being read or overwritten in one thread while being modified by another. Careless use of such primitives can lead to deadlocks, livelocks or races over resources. As Edward A. Lee has written: "Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism. Threads, as a model of computation, are wildly non-deterministic, and the job of the programmer becomes one of pruning that nondeterminism."
Being untestable. In general, multithreaded programs are non-deterministic, and as a result, are untestable. In other words, a multithreaded program can easily have bugs which never manifest on a test system, manifesting only in production. This can be alleviated by restricting inter-thread communications to certain well-defined patterns (such as message-passing).
Synchronization costs. As a thread context switch on modern CPUs can cost up to 1 million CPU cycles, writing efficient multithreaded programs is difficult. In particular, special attention has to be paid to avoid inter-thread synchronization that is too frequent.
Programming language support
Many programming languages support threading in some capacity.
IBM PL/I(F) included support for multithreading (called multitasking) as early as the late 1960s, and this was continued in the Optimizing Compiler and later versions. The IBM Enterprise PL/I compiler introduced a new model "thread" API. Neither version was part of the PL/I standard.
Many implementations of C and C++ support threading, and provide access to the native threading APIs of the operating system. A standardized interface for thread implementation is POSIX Threads (Pthreads), which is a set of C-function library calls. OS vendors are free to implement the interface as desired, but the application developer should be able to use the same interface across multiple platforms. Most Unix platforms, including Linux, support Pthreads. Microsoft Windows has its own set of thread functions in the process.h interface for multithreading, like beginthread.
Some higher level (and usually cross-platform) programming languages, such as Java, Python, and .NET Framework languages, expose threading to developers while abstracting the platform specific differences in threading implementations in the runtime. Several other programming languages and language extensions also try to abstract the concept of concurrency and threading from the developer fully (Cilk, OpenMP, Message Passing Interface (MPI)). Some languages are designed for sequential parallelism instead (especially using GPUs), without requiring concurrency or threads (Ateji PX, CUDA).
A few interpreted programming languages have implementations (e.g., Ruby MRI for Ruby, CPython for Python) which support threading and concurrency but not parallel execution of threads, due to a global interpreter lock (GIL). The GIL is a mutual exclusion lock held by the interpreter that prevents it from interpreting the application's code on two or more threads at once. This effectively limits the parallelism on multiple-core systems. It also limits performance for processor-bound threads (which require the processor), but doesn't affect I/O-bound or network-bound ones as much. Other implementations of interpreted programming languages, such as Tcl using the Thread extension, avoid the GIL limit by using an apartment model where data and code must be explicitly "shared" between threads. In Tcl each thread has one or more interpreters.
In programming models such as CUDA designed for data parallel computation, an array of threads run the same code in parallel using only its ID to find its data in memory. In essence, the application must be designed so that each thread performs the same operation on different segments of memory so that they can operate in parallel and use the GPU architecture.
Hardware description languages such as Verilog have a different threading model that supports extremely large numbers of threads (for modeling hardware).
See also
Clone (Linux system call)
Communicating sequential processes
Computer multitasking
Multi-core (computing)
Multithreading (computer hardware)
Non-blocking algorithm
Priority inversion
Protothreads
Simultaneous multithreading
Thread pool pattern
Thread safety
Win32 Thread Information Block
References
Further reading
David R. Butenhof: Programming with POSIX Threads, Addison-Wesley.
Bradford Nichols, Dick Buttlar, Jacqueline Proulx Farell: Pthreads Programming, O'Reilly & Associates.
Paul Hyde: Java Thread Programming, Sams.
Jim Beveridge, Robert Wiener: Multithreading Applications in Win32, Addison-Wesley.
Uresh Vahalia: Unix Internals: the New Frontiers, Prentice Hall.
Clifford algebra
In mathematics, a Clifford algebra is an algebra generated by a vector space with a quadratic form, and is a unital associative algebra with the additional structure of a distinguished subspace. As K-algebras, they generalize the real numbers, complex numbers, quaternions and several other hypercomplex number systems. The theory of Clifford algebras is intimately connected with the theory of quadratic forms and orthogonal transformations. Clifford algebras have important applications in a variety of fields including geometry, theoretical physics and digital image processing. They are named after the English mathematician William Kingdon Clifford (1845–1879).
The most familiar Clifford algebras, the orthogonal Clifford algebras, are also referred to as (pseudo-)Riemannian Clifford algebras, as distinct from symplectic Clifford algebras.
Introduction and basic properties
A Clifford algebra is a unital associative algebra that contains and is generated by a vector space V over a field K, where V is equipped with a quadratic form Q : V → K. The Clifford algebra Cl(V, Q) is the "freest" unital associative algebra generated by V subject to the condition
v^2 = Q(v)1 for all v in V,
where the product on the left is that of the algebra, and the 1 on the right is the algebra's multiplicative identity (not to be confused with the multiplicative identity of the field K). The idea of being the "freest" or "most general" algebra subject to this identity can be formally expressed through the notion of a universal property, as done below.
When V is a finite-dimensional real vector space and Q is nondegenerate, Cl(V, Q) may be identified by the label Cl_{p,q}(R), indicating that V has an orthogonal basis with p elements satisfying e_i^2 = +1 and q elements satisfying e_i^2 = −1, and where R indicates that this is a Clifford algebra over the reals; i.e. coefficients of elements of the algebra are real numbers. This basis may be found by orthogonal diagonalization.
The free algebra generated by V may be written as the tensor algebra T(V), that is, the direct sum of the tensor products of n copies of V over all n. Therefore one obtains a Clifford algebra as the quotient of this tensor algebra by the two-sided ideal generated by elements of the form v ⊗ v − Q(v)1 for all elements v of V. The product induced by the tensor product in the quotient algebra is written using juxtaposition (e.g. uv). Its associativity follows from the associativity of the tensor product.
The Clifford algebra has a distinguished subspace V, the image of the embedding map. Such a subspace cannot in general be uniquely determined given only a K-algebra that is isomorphic to the Clifford algebra.
If 2 is invertible in the ground field K, then one can rewrite the fundamental identity above in the form
uv + vu = 2⟨u, v⟩1 for all u, v in V,
where
⟨u, v⟩ = (Q(u + v) − Q(u) − Q(v))/2
is the symmetric bilinear form associated with Q, via the polarization identity.
Quadratic forms and Clifford algebras in characteristic 2 form an exceptional case in this respect. In particular, in characteristic 2 it is not true that a quadratic form necessarily or uniquely determines a symmetric bilinear form that satisfies Q(v) = ⟨v, v⟩. Many of the statements in this article include the condition that the characteristic is not 2, and are false if this condition is removed.
As a quantization of the exterior algebra
Clifford algebras are closely related to exterior algebras. Indeed, if Q = 0 then the Clifford algebra Cl(V, Q) is just the exterior algebra Λ(V). Whenever 2 is invertible in the ground field, there exists a canonical linear isomorphism between Λ(V) and Cl(V, Q). That is, they are naturally isomorphic as vector spaces, but with different multiplications (in the case of characteristic two, they are still isomorphic as vector spaces, just not naturally). Clifford multiplication together with the distinguished subspace is strictly richer than the exterior product since it makes use of the extra information provided by Q.
The Clifford algebra is a filtered algebra; the associated graded algebra is the exterior algebra.
More precisely, Clifford algebras may be thought of as quantizations (cf. quantum group) of the exterior algebra, in the same way that the Weyl algebra is a quantization of the symmetric algebra.
Weyl algebras and Clifford algebras admit a further structure of a *-algebra, and can be unified as even and odd terms of a superalgebra, as discussed in CCR and CAR algebras.
Universal property and construction
Let V be a vector space over a field K, and let Q : V → K be a quadratic form on V. In most cases of interest the field K is either the field of real numbers R, or the field of complex numbers C, or a finite field.
A Clifford algebra Cl(V, Q) is a pair (A, i), where A is a unital associative algebra over K and i : V → A is a linear map that satisfies i(v)^2 = Q(v)1_A for all v in V, defined by the following universal property: given any unital associative algebra B over K and any linear map j : V → B such that
j(v)^2 = Q(v)1_B for all v in V
(where 1_B denotes the multiplicative identity of B), there is a unique algebra homomorphism f : A → B such that the following diagram commutes (i.e. such that f ∘ i = j):
The quadratic form Q may be replaced by a (not necessarily symmetric) bilinear form ⟨·,·⟩ that has the property ⟨v, v⟩ = Q(v) for all v in V, in which case an equivalent requirement on j is
When the characteristic of the field is not 2, this may be replaced by what is then an equivalent requirement,
where the bilinear form may additionally be restricted to being symmetric without loss of generality.
A Clifford algebra as described above always exists and can be constructed as follows: start with the most general algebra that contains V, namely the tensor algebra T(V), and then enforce the fundamental identity by taking a suitable quotient. In our case we want to take the two-sided ideal I_Q in T(V) generated by all elements of the form
v ⊗ v − Q(v)1 for all v in V,
and define Cl(V, Q) as the quotient algebra
Cl(V, Q) = T(V)/I_Q.
The ring product inherited by this quotient is sometimes referred to as the Clifford product to distinguish it from the exterior product and the scalar product.
It is then straightforward to show that Cl(V, Q) contains V and satisfies the above universal property, so that Cl is unique up to a unique isomorphism; thus one speaks of "the" Clifford algebra Cl(V, Q). It also follows from this construction that the embedding i of V into Cl(V, Q) is injective. One usually drops the i and considers V as a linear subspace of Cl(V, Q).
The universal characterization of the Clifford algebra shows that the construction of Cl(V, Q) is functorial in nature. Namely, Cl can be considered as a functor from the category of vector spaces with quadratic forms (whose morphisms are linear maps that preserve the quadratic form) to the category of associative algebras. The universal property guarantees that linear maps between vector spaces (that preserve the quadratic form) extend uniquely to algebra homomorphisms between the associated Clifford algebras.
Basis and dimension
Since V comes equipped with a quadratic form Q, in characteristic not equal to 2 there exist bases for V that are orthogonal. An orthogonal basis is one such that for the associated symmetric bilinear form
⟨e_i, e_j⟩ = 0 for i ≠ j, and ⟨e_i, e_i⟩ = Q(e_i).
The fundamental Clifford identity implies that for an orthogonal basis
e_i e_j = −e_j e_i for i ≠ j, and e_i^2 = Q(e_i).
This makes manipulation of orthogonal basis vectors quite simple. Given a product of distinct orthogonal basis vectors of , one can put them into a standard order while including an overall sign determined by the number of pairwise swaps needed to do so (i.e. the signature of the ordering permutation).
If the dimension of V over K is n and {e_1, ..., e_n} is an orthogonal basis of (V, Q), then Cl(V, Q) is free over K with a basis
{ e_{i_1} e_{i_2} ⋯ e_{i_k} : 1 ≤ i_1 < i_2 < ⋯ < i_k ≤ n and 0 ≤ k ≤ n }.
The empty product (k = 0) is defined as being the multiplicative identity element. For each value of k there are n choose k basis elements, so the total dimension of the Clifford algebra is
dim Cl(V, Q) = sum over k of (n choose k) = 2^n.
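The basis count can be checked directly: enumerating one blade e_{i_1}⋯e_{i_k} per increasing index subset gives (n choose k) blades of each grade k and 2^n in total. A small Python sketch (the helper name `clifford_basis` is ours):

```python
from itertools import combinations
from math import comb

def clifford_basis(n):
    """One basis blade e_{i1}...e_{ik} per subset {i1 < ... < ik} of {1..n}."""
    return [blade for k in range(n + 1)
            for blade in combinations(range(1, n + 1), k)]

basis = clifford_basis(3)
# grade k contributes C(n, k) blades ...
assert all(sum(1 for b in basis if len(b) == k) == comb(3, k) for k in range(4))
# ... and the total dimension is 2**n
assert len(basis) == 2 ** 3
```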
Examples: real and complex Clifford algebras
The most important Clifford algebras are those over real and complex vector spaces equipped with nondegenerate quadratic forms.
Each of the algebras Cl_{p,q}(R) and Cl_n(C) is isomorphic to A or A ⊕ A, where A is a full matrix ring with entries from R, C, or H. For a complete classification of these algebras see Classification of Clifford algebras.
Real numbers
Clifford algebras are also sometimes referred to as geometric algebras, most often over the real numbers.
Every nondegenerate quadratic form on a finite-dimensional real vector space is equivalent to the standard diagonal form:
Q(v) = v_1^2 + ⋯ + v_p^2 − v_{p+1}^2 − ⋯ − v_{p+q}^2,
where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the signature of the quadratic form. The real vector space with this quadratic form is often denoted R^{p,q}. The Clifford algebra on R^{p,q} is denoted Cl_{p,q}(R). The symbol Cl_n(R) means either Cl_{n,0}(R) or Cl_{0,n}(R), depending on whether the author prefers positive-definite or negative-definite spaces.
A standard basis {e_1, ..., e_n} for R^{p,q} consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1. Of such a basis, the algebra Cl_{p,q}(R) will therefore have p vectors that square to +1 and q vectors that square to −1.
A few low-dimensional cases are:
Cl_{0,0}(R) is naturally isomorphic to R since there are no nonzero vectors.
Cl_{0,1}(R) is a two-dimensional algebra generated by e_1, which squares to −1, and is algebra-isomorphic to C, the field of complex numbers.
Cl_{0,2}(R) is a four-dimensional algebra spanned by {1, e_1, e_2, e_1e_2}. The latter three elements all square to −1 and anticommute, and so the algebra is isomorphic to the quaternions H.
Cl_{0,3}(R) is an 8-dimensional algebra isomorphic to the direct sum H ⊕ H, the split-biquaternions.
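The isomorphism Cl_{0,1}(R) ≅ C in the list above can be made concrete: writing an element as a pair (a, b) for a + b e_1 with e_1^2 = −1, the Clifford product reduces to complex multiplication. A minimal sketch (the helper name `cl01_mul` is ours):

```python
def cl01_mul(x, y):
    """Product in Cl_{0,1}(R): (a1 + b1*e1)(a2 + b2*e1) with e1**2 = -1."""
    a1, b1 = x
    a2, b2 = y
    # a1*a2 + b1*b2*e1**2 + (a1*b2 + b1*a2)*e1 = (a1*a2 - b1*b2) + (a1*b2 + b1*a2)*e1
    return (a1 * a2 - b1 * b2, a1 * b2 + b1 * a2)

x, y = (3.0, 4.0), (-2.0, 5.0)
z = complex(*x) * complex(*y)          # the same product computed in C
assert cl01_mul(x, y) == (z.real, z.imag)
```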
Complex numbers
One can also study Clifford algebras on complex vector spaces. Every nondegenerate quadratic form on a complex vector space of dimension n is equivalent to the standard diagonal form
Q(z) = z_1^2 + z_2^2 + ⋯ + z_n^2.
Thus, for each dimension n, up to isomorphism there is only one Clifford algebra of a complex vector space with a nondegenerate quadratic form. We will denote the Clifford algebra on C^n with the standard quadratic form by Cl_n(C).
For the first few cases one finds that
Cl_0(C) ≅ C, the complex numbers
Cl_1(C) ≅ C ⊕ C, the bicomplex numbers
Cl_2(C) ≅ M_2(C), the biquaternions
where M_2(C) denotes the algebra of 2 × 2 matrices over C.
Examples: constructing quaternions and dual quaternions
Quaternions
In this section, Hamilton's quaternions are constructed as the even subalgebra of the Clifford algebra Cl_{3,0}(R).
Let the vector space V be real three-dimensional space R^3, and the quadratic form be the usual quadratic form. Then, for u, v in R^3 we have the bilinear form (or scalar product)
⟨u, v⟩ = u · v.
Now introduce the Clifford product of vectors u and v, determined by
uv + vu = 2⟨u, v⟩.
Denote a set of orthogonal unit vectors of R^3 as {e_1, e_2, e_3}; then the Clifford product yields the relations
e_i e_j = −e_j e_i for i ≠ j, and e_1^2 = e_2^2 = e_3^2 = 1.
The general element of the Clifford algebra Cl_{3,0}(R) is given by
A = a_0 + a_1 e_1 + a_2 e_2 + a_3 e_3 + a_4 e_2e_3 + a_5 e_3e_1 + a_6 e_1e_2 + a_7 e_1e_2e_3.
The linear combination of the even degree elements of Cl_{3,0}(R) defines the even subalgebra Cl^0_{3,0}(R) with the general element
q = q_0 + q_1 e_2e_3 + q_2 e_3e_1 + q_3 e_1e_2.
The bivector basis elements e_2e_3, e_3e_1, e_1e_2 can be identified (up to sign conventions) with the quaternion basis elements i, j, k, which shows that the even subalgebra Cl^0_{3,0}(R) is Hamilton's real quaternion algebra.
To see this, compute the squares of the bivector basis elements, for example (e_2e_3)^2 = e_2e_3e_2e_3 = −e_2e_2e_3e_3 = −1, and likewise (e_3e_1)^2 = (e_1e_2)^2 = −1. Finally, the pairwise products of distinct bivectors anticommute and reproduce Hamilton's multiplication rules up to the chosen sign convention.
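The verification can also be done mechanically with a tiny blade-multiplication routine. Everything below (the helper `blade_mul` and the particular identification of i, j, k with bivectors) is our own illustrative choice; texts differ in sign conventions.

```python
def blade_mul(a, b, sig):
    """Multiply basis blades (sorted index tuples) over an orthogonal basis.
    sig[i] is the square of e_i.  Returns (sign, blade)."""
    sign, out = 1, list(a)
    for i in b:
        pos = len(out)
        while pos > 0 and out[pos - 1] > i:   # move e_i left past larger indices
            pos -= 1
            sign = -sign                      # each swap of distinct e's flips the sign
        if pos > 0 and out[pos - 1] == i:
            sign *= sig[i]                    # e_i * e_i = Q(e_i)
            out.pop(pos - 1)
        else:
            out.insert(pos, i)
    return sign, tuple(out)

sig = {1: 1, 2: 1, 3: 1}                      # Cl(3,0): each e_i squares to +1
i, j, k = (2, 3), (1, 3), (1, 2)              # one consistent bivector choice

assert blade_mul(i, i, sig) == (-1, ())       # i**2 = -1 (likewise j and k)
assert blade_mul(i, j, sig) == (1, k)         # ij = k
assert blade_mul(j, k, sig) == (1, i)         # jk = i
assert blade_mul(k, i, sig) == (1, j)         # ki = j
assert blade_mul(j, i, sig) == (-1, k)        # ji = -k: anticommutation
```

These are Hamilton's relations, confirming that the even-grade blades close into the quaternion algebra.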
Dual quaternions
In this section, dual quaternions are constructed as the even subalgebra of a Clifford algebra of real four-dimensional space with a degenerate quadratic form.
Let the vector space V be real four-dimensional space R^4, and let the quadratic form be a degenerate form derived from the Euclidean metric on R^3. For u, v in R^4 introduce the degenerate bilinear form
d(u, v) = u_1 v_1 + u_2 v_2 + u_3 v_3.
This degenerate scalar product projects distance measurements in R^4 onto the R^3 hyperplane.
The Clifford product of vectors u and v is given by
uv + vu = −2 d(u, v).
Note the negative sign is introduced to simplify the correspondence with quaternions.
Denote a set of mutually orthogonal unit vectors of R^4 as {e_1, e_2, e_3, e_4}; then the Clifford product yields the relations
e_i e_j = −e_j e_i for i ≠ j, e_1^2 = e_2^2 = e_3^2 = −1, and e_4^2 = 0.
The general element of the Clifford algebra has 16 components. The linear combination of the even degree elements defines the even subalgebra with the general element
The basis elements can be identified with the quaternion basis elements and the dual unit as
This provides the correspondence of the even subalgebra with the dual quaternion algebra.
To see this, compute the squares and pairwise products of the even-degree basis elements using the relations above. The exchanges of e_4 with the other basis vectors alternate signs an even number of times, and show that the dual unit commutes with the quaternion basis elements i, j, k.
Examples: in small dimension
Let K be any field of characteristic not 2.
Dimension 1
For dim V = 1, if Q has diagonalization diag(a), that is, there is a nonzero vector x such that Q(x) = a, then Cl(V, Q) is algebra-isomorphic to a K-algebra generated by an element x that satisfies x^2 = a, the quadratic algebra K[x]/(x^2 − a).
In particular, if a = 0 (that is, Q is the zero quadratic form) then Cl(V, Q) is algebra-isomorphic to the dual numbers algebra over K.
If a is a non-zero square in K, then Cl(V, Q) ≅ K ⊕ K.
Otherwise, Cl(V, Q) is isomorphic to the quadratic field extension K(√a) of K.
Dimension 2
For dim V = 2, if Q has diagonalization diag(a, b) with non-zero a and b (which always exists if Q is non-degenerate), then Cl(V, Q) is isomorphic to a K-algebra generated by elements x and y that satisfy x^2 = a, y^2 = b and xy = −yx.
Thus Cl(V, Q) is isomorphic to the (generalized) quaternion algebra (a, b)_K. We retrieve Hamilton's quaternions when a = b = −1, since H = (−1, −1)_R.
As a special case, if some x in V satisfies Q(x) = 1, then Cl(V, Q) ≅ M_2(K), the algebra of 2 × 2 matrices over K.
Properties
Relation to the exterior algebra
Given a vector space V, one can construct the exterior algebra Λ(V), whose definition is independent of any quadratic form on V. It turns out that if K does not have characteristic 2 then there is a natural isomorphism between Λ(V) and Cl(V, Q) considered as vector spaces (and there exists an isomorphism in characteristic two, which may not be natural). This is an algebra isomorphism if and only if Q = 0. One can thus consider the Clifford algebra Cl(V, Q) as an enrichment (or more precisely, a quantization, cf. the Introduction) of the exterior algebra on V with a multiplication that depends on Q (one can still define the exterior product independently of Q).
The easiest way to establish the isomorphism is to choose an orthogonal basis {e_i} for V and extend it to a basis for Cl(V, Q) as described above. The map Cl(V, Q) → Λ(V) is determined by
e_{i_1} e_{i_2} ⋯ e_{i_k} ↦ e_{i_1} ∧ e_{i_2} ∧ ⋯ ∧ e_{i_k}.
Note that this works only if the basis is orthogonal. One can show that this map is independent of the choice of orthogonal basis and so gives a natural isomorphism.
If the characteristic of K is 0, one can also establish the isomorphism by antisymmetrizing. Define functions f_k : V × ⋯ × V → Cl(V, Q) by
f_k(v_1, ..., v_k) = (1/k!) Σ_σ sgn(σ) v_{σ(1)} v_{σ(2)} ⋯ v_{σ(k)},
where the sum is taken over the symmetric group on k elements, S_k. Since f_k is alternating, it induces a unique linear map Λ^k(V) → Cl(V, Q). The direct sum of these maps gives a linear map between Λ(V) and Cl(V, Q). This map can be shown to be a linear isomorphism, and it is natural.
A more sophisticated way to view the relationship is to construct a filtration on Cl(V, Q). Recall that the tensor algebra T(V) has a natural filtration: F^0 ⊆ F^1 ⊆ F^2 ⊆ ⋯, where F^k contains sums of tensors with order at most k. Projecting this down to the Clifford algebra gives a filtration on Cl(V, Q). The associated graded algebra
Gr_F Cl(V, Q) = ⊕_k F^k/F^{k−1}
is naturally isomorphic to the exterior algebra Λ(V). Since the associated graded algebra of a filtered algebra is always isomorphic to the filtered algebra as filtered vector spaces (by choosing complements of F^k in F^{k+1} for all k), this provides an isomorphism (although not a natural one) in any characteristic, even two.
Grading
In the following, assume that the characteristic is not 2.
Clifford algebras are Z_2-graded algebras (also known as superalgebras). Indeed, the linear map on V defined by v ↦ −v (reflection through the origin) preserves the quadratic form Q and so by the universal property of Clifford algebras extends to an algebra automorphism
α : Cl(V, Q) → Cl(V, Q).
Since α is an involution (i.e. it squares to the identity) one can decompose Cl(V, Q) into positive and negative eigenspaces of α:
Cl(V, Q) = Cl^0(V, Q) ⊕ Cl^1(V, Q),
where
Cl^i(V, Q) = { x in Cl(V, Q) : α(x) = (−1)^i x }.
Since α is an automorphism it follows that:
Cl^i(V, Q) Cl^j(V, Q) = Cl^{i+j}(V, Q),
where the bracketed superscripts are read modulo 2. This gives Cl(V, Q) the structure of a Z_2-graded algebra. The subspace Cl^0(V, Q) forms a subalgebra of Cl(V, Q), called the even subalgebra. The subspace Cl^1(V, Q) is called the odd part of Cl(V, Q) (it is not a subalgebra). This Z_2-grading plays an important role in the analysis and application of Clifford algebras. The automorphism α is called the main involution or grade involution. Elements that are pure in this Z_2-grading are simply said to be even or odd.
Remark. The Clifford algebra is not a Z-graded algebra, but is Z-filtered, where the k-th filtered piece is the subspace spanned by all products of at most k elements of V.
The degree of a Clifford number usually refers to the degree in the -grading.
The even subalgebra Cl^0(V, Q) of a Clifford algebra is itself isomorphic to a Clifford algebra. If V is the orthogonal direct sum of a vector a of nonzero norm Q(a) and a subspace U, then Cl^0(V, Q) is isomorphic to Cl(U, −Q(a)Q|_U), where Q|_U is the form Q restricted to U. In particular over the reals this implies that:
Cl^0_{p,q}(R) ≅ Cl_{p,q−1}(R) for q > 0, and
Cl^0_{p,q}(R) ≅ Cl_{q,p−1}(R) for p > 0.
In the negative-definite case this gives an inclusion Cl_{0,n−1}(R) ⊂ Cl_{0,n}(R), which extends the sequence
R ⊂ C ⊂ H ⊂ H ⊕ H ⊂ ⋯
Likewise, in the complex case, one can show that the even subalgebra of Cl_n(C) is isomorphic to Cl_{n−1}(C).
Antiautomorphisms
In addition to the automorphism α, there are two antiautomorphisms that play an important role in the analysis of Clifford algebras. Recall that the tensor algebra T(V) comes with an antiautomorphism that reverses the order in all products of vectors:
v_1 ⊗ v_2 ⊗ ⋯ ⊗ v_k ↦ v_k ⊗ ⋯ ⊗ v_2 ⊗ v_1.
Since the ideal I_Q is invariant under this reversal, this operation descends to an antiautomorphism of Cl(V, Q) called the transpose or reversal operation, denoted by x^t. The transpose is an antiautomorphism: (xy)^t = y^t x^t. The transpose operation makes no use of the Z_2-grading, so we define a second antiautomorphism by composing α and the transpose. We call this operation Clifford conjugation, denoted x̄ = α(x^t) = α(x)^t.
Of the two antiautomorphisms, the transpose is the more fundamental.
Note that all of these operations are involutions. One can show that they act as ±1 on elements that are pure in the Z_2-grading. In fact, all three operations depend on only the degree modulo 4. That is, if x is pure with degree k then
α(x) = ±x, x^t = ±x, x̄ = ±x,
where the signs are given by the following table:
{| class=wikitable
! k mod 4
| 0 || 1 || 2 || 3 || …
|-
! α(x)
| + || − || + || − || (−1)^k
|-
! x^t
| + || + || − || − || (−1)^{k(k−1)/2}
|-
! x̄
| + || − || − || + || (−1)^{k(k+1)/2}
|}
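The sign patterns of the three operations follow from counting transpositions: α negates each of the k vector factors, and reversing k anticommuting factors costs k(k−1)/2 swaps. A quick numeric check (helper names are ours):

```python
def grade_involution(k):
    return (-1) ** k                      # alpha: one factor of -1 per vector

def reversal(k):
    return (-1) ** (k * (k - 1) // 2)     # reversing k factors = k(k-1)/2 swaps

def conjugation(k):
    return grade_involution(k) * reversal(k)   # composition of the two

# the signs depend only on the degree modulo 4
assert [grade_involution(k) for k in range(4)] == [1, -1, 1, -1]
assert [reversal(k) for k in range(4)] == [1, 1, -1, -1]
assert [conjugation(k) for k in range(4)] == [1, -1, -1, 1]
assert all(reversal(k) == reversal(k + 4) for k in range(8))
```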
Clifford scalar product
When the characteristic is not 2, the quadratic form Q on V can be extended to a quadratic form on all of Cl(V, Q) (which we also denote by Q). A basis-independent definition of one such extension is
Q(x) = ⟨x^t x⟩_0,
where ⟨a⟩_0 denotes the scalar part of a (the degree-0 part in the Z-grading). One can show that
Q(v_1 v_2 ⋯ v_k) = Q(v_1) Q(v_2) ⋯ Q(v_k),
where the v_i are elements of V – this identity is not true for arbitrary elements of Cl(V, Q).
The associated symmetric bilinear form on Cl(V, Q) is given by
⟨x, y⟩ = ⟨x^t y⟩_0.
One can check that this reduces to the original bilinear form when restricted to V. The bilinear form on all of Cl(V, Q) is nondegenerate if and only if it is nondegenerate on V.
The operator of left (respectively right) Clifford multiplication by the transpose a^t of an element a is the adjoint of left (respectively right) Clifford multiplication by a with respect to this inner product. That is,
⟨ax, y⟩ = ⟨x, a^t y⟩,
and
⟨xa, y⟩ = ⟨x, y a^t⟩.
Structure of Clifford algebras
In this section we assume that the characteristic is not 2, the vector space V is finite-dimensional and that the associated symmetric bilinear form of Q is nondegenerate.
A central simple algebra over K is a matrix algebra over a (finite-dimensional) division algebra with center K. For example, the central simple algebras over the reals are matrix algebras over either the reals or the quaternions.
If V has even dimension then Cl(V, Q) is a central simple algebra over K.
If V has even dimension then the even subalgebra Cl^0(V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K.
If V has odd dimension then Cl(V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K.
If V has odd dimension then the even subalgebra Cl^0(V, Q) is a central simple algebra over K.
The structure of Clifford algebras can be worked out explicitly using the following result. Suppose that U has even dimension and a non-singular bilinear form with discriminant d, and suppose that V is another vector space with a quadratic form. The Clifford algebra of U + V is isomorphic to the tensor product of the Clifford algebras of U and of V with its quadratic form multiplied by (−1)^{dim(U)/2} d. Over the reals, this implies in particular that
Cl_{p+2,q}(R) ≅ M_2(R) ⊗ Cl_{q,p}(R),
Cl_{p+1,q+1}(R) ≅ M_2(R) ⊗ Cl_{p,q}(R),
Cl_{p,q+2}(R) ≅ H ⊗ Cl_{q,p}(R).
These formulas can be used to find the structure of all real Clifford algebras and all complex Clifford algebras; see the classification of Clifford algebras.
Notably, the Morita equivalence class of a Clifford algebra (its representation theory: the equivalence class of the category of modules over it) depends on only the signature (p − q) mod 8. This is an algebraic form of Bott periodicity.
Lipschitz group
The class of Lipschitz groups (also known as Clifford groups or Clifford–Lipschitz groups) was discovered by Rudolf Lipschitz.
In this section we assume that V is finite-dimensional and the quadratic form Q is nondegenerate.
An action on the elements of a Clifford algebra by its group of units may be defined in terms of a twisted conjugation: twisted conjugation by x maps y ↦ α(x) y x^{−1}, where α is the main involution defined above.
The Lipschitz group Γ is defined to be the set of invertible elements x that stabilize the set of vectors under this action, meaning that for all v in V we have:
α(x) v x^{−1} in V.
This formula also defines an action of the Lipschitz group on the vector space V that preserves the quadratic form Q, and so gives a homomorphism from the Lipschitz group to the orthogonal group. The Lipschitz group contains all elements r of V for which Q(r) is invertible in K, and these act on V by the corresponding reflections that take v to v − 2⟨r, v⟩r/Q(r). (In characteristic 2 these are called orthogonal transvections rather than reflections.)
If V is a finite-dimensional real vector space with a non-degenerate quadratic form then the Lipschitz group maps onto the orthogonal group of V with respect to the form (by the Cartan–Dieudonné theorem) and the kernel consists of the nonzero elements of the field K. This leads to exact sequences
1 → K^× → Γ → O(V, Q) → 1,
1 → K^× → Γ^0 → SO(V, Q) → 1.
Over other fields or with indefinite forms, the map is not in general onto, and the failure is captured by the spinor norm.
Spinor norm
In arbitrary characteristic, the spinor norm N is defined on the Lipschitz group by
N(x) = x^t x.
It is a homomorphism from the Lipschitz group to the group K^× of non-zero elements of K. It coincides with the quadratic form Q of V when V is identified with a subspace of the Clifford algebra. Several authors define the spinor norm slightly differently, so that it differs from the one here by a factor of −1, 2, or −2 on Γ^1. The difference is not very important in characteristic other than 2.
The nonzero elements of K have spinor norm in the group (K^×)^2 of squares of nonzero elements of the field K. So when V is finite-dimensional and non-singular we get an induced map from the orthogonal group of V to the group K^×/(K^×)^2, also called the spinor norm. The spinor norm of the reflection about r^⊥, for any vector r, has image Q(r) in K^×/(K^×)^2, and this property uniquely defines it on the orthogonal group. This gives exact sequences:
1 → {±1} → Pin_V(K) → O_V(K) → K^×/(K^×)^2,
1 → {±1} → Spin_V(K) → SO_V(K) → K^×/(K^×)^2.
Note that in characteristic 2 the group {±1} has just one element.
From the point of view of Galois cohomology of algebraic groups, the spinor norm is a connecting homomorphism on cohomology. Writing μ_2 for the algebraic group of square roots of 1 (over a field of characteristic not 2 it is roughly the same as a two-element group with trivial Galois action), the short exact sequence
yields a long exact sequence on cohomology, which begins
The 0th Galois cohomology group of an algebraic group with coefficients in is just the group of -valued points: , and , which recovers the previous sequence
where the spinor norm is the connecting homomorphism .
Spin and pin groups
In this section we assume that V is finite-dimensional and its bilinear form is non-singular.
The pin group Pin_V(K) is the subgroup of the Lipschitz group Γ of elements of spinor norm 1, and similarly the spin group Spin_V(K) is the subgroup of elements of Dickson invariant 0 in Pin_V(K). When the characteristic is not 2, these are the elements of determinant 1. The spin group usually has index 2 in the pin group.
Recall from the previous section that there is a homomorphism from the Lipschitz group onto the orthogonal group. We define the special orthogonal group to be the image of Γ^0. If K does not have characteristic 2 this is just the group of elements of the orthogonal group of determinant 1. If K does have characteristic 2, then all elements of the orthogonal group have determinant 1, and the special orthogonal group is the set of elements of Dickson invariant 0.
There is a homomorphism from the pin group to the orthogonal group. The image consists of the elements of spinor norm 1 in K^×/(K^×)^2. The kernel consists of the elements +1 and −1, and has order 2 unless K has characteristic 2. Similarly there is a homomorphism from the spin group to the special orthogonal group of V.
In the common case when V is a positive or negative definite space over the reals, the spin group maps onto the special orthogonal group, and is simply connected when V has dimension at least 3. Further the kernel of this homomorphism consists of +1 and −1. So in this case the spin group, Spin(n), is a double cover of SO(n). Note, however, that the simple connectedness of the spin group is not true in general: if V is R^{p,q} for p and q both at least 2 then the spin group is not simply connected. In this case the algebraic group Spin_{p,q} is simply connected as an algebraic group, even though its group of real valued points Spin_{p,q}(R) is not simply connected. This is a rather subtle point, which completely confused the authors of at least one standard book about spin groups.
Spinors
Clifford algebras Cl_n(C), with n = 2k even, are matrix algebras that have a complex representation of dimension 2^k. By restricting to the pin group we get a complex representation of the pin group of the same dimension, called the spin representation. If we restrict this to the spin group then it splits as the sum of two half spin representations (or Weyl representations) of dimension 2^{k−1}.
If n = 2k + 1 is odd then the Clifford algebra Cl_n(C) is a sum of two matrix algebras, each of which has a representation of dimension 2^k, and these are also both representations of the pin group. On restriction to the spin group these become isomorphic, so the spin group has a complex spinor representation of dimension 2^k.
More generally, spinor groups and pin groups over any field have similar representations whose exact structure depends on the structure of the corresponding Clifford algebras: whenever a Clifford algebra has a factor that is a matrix algebra over some division algebra, we get a corresponding representation of the pin and spin groups over that division algebra.
For examples over the reals see the article on spinors.
Real spinors
To describe the real spin representations, one must know how the spin group sits inside its Clifford algebra. The pin group, Pin_{p,q}, is the set of invertible elements in Cl_{p,q} that can be written as a product of unit vectors:
Pin_{p,q} = { v_1 v_2 ⋯ v_r : ‖v_i‖ = ±1 }.
Comparing with the above concrete realizations of the Clifford algebras, the pin group corresponds to the products of arbitrarily many reflections: it is a cover of the full orthogonal group O(p, q). The spin group consists of those elements of Pin_{p,q} that are products of an even number of unit vectors. Thus by the Cartan–Dieudonné theorem Spin is a cover of the group of proper rotations SO(p, q).
Let α : Cl → Cl be the automorphism that is given by the mapping v ↦ −v acting on pure vectors. Then in particular, Spin_{p,q} is the subgroup of Pin_{p,q} whose elements are fixed by α. Let
Cl^0_{p,q} = { x in Cl_{p,q} : α(x) = x }.
(These are precisely the elements of even degree in Cl_{p,q}.) Then the spin group lies within Cl^0_{p,q}.
The irreducible representations of Cl_{p,q} restrict to give representations of the pin group. Conversely, since the pin group is generated by unit vectors, all of its irreducible representations are induced in this manner. Thus the two representations coincide. For the same reasons, the irreducible representations of the spin group coincide with the irreducible representations of Cl^0_{p,q}.
To classify the pin representations, one need only appeal to the classification of Clifford algebras. To find the spin representations (which are representations of the even subalgebra), one can first make use of either of the isomorphisms (see above)
Cl^0_{p,q} ≅ Cl_{p,q−1}, for q > 0,
Cl^0_{p,q} ≅ Cl_{q,p−1}, for p > 0,
and realize a spin representation in signature (p, q) as a pin representation in either signature (p, q − 1) or (q, p − 1).
Applications
Differential geometry
One of the principal applications of the exterior algebra is in differential geometry where it is used to define the bundle of differential forms on a smooth manifold. In the case of a (pseudo-)Riemannian manifold, the tangent spaces come equipped with a natural quadratic form induced by the metric. Thus, one can define a Clifford bundle in analogy with the exterior bundle. This has a number of important applications in Riemannian geometry. Perhaps more important is the link to a spin manifold, its associated spinor bundle and spin^c manifolds.
Physics
Clifford algebras have numerous important applications in physics. Physicists usually consider a Clifford algebra to be an algebra with a basis generated by the matrices γ_0, ..., γ_3, called Dirac matrices, which have the property that
γ_i γ_j + γ_j γ_i = 2η_{ij},
where η is the matrix of a quadratic form of signature (1, 3) (or (3, 1), corresponding to the two equivalent choices of metric signature). These are exactly the defining relations for the Clifford algebra Cl_{1,3}(R), whose complexification is Cl_{1,3}(R)_C, which, by the classification of Clifford algebras, is isomorphic to the algebra of 4 × 4 complex matrices Cl_4(C) ≅ M_4(C). However, it is best to retain the notation Cl_{1,3}(R)_C, since any transformation that takes the bilinear form to the canonical form is not a Lorentz transformation of the underlying spacetime.
The Clifford algebra of spacetime used in physics thus has more structure than the bare matrix algebra M_4(C). It has in addition a set of preferred transformations – Lorentz transformations. Whether complexification is necessary to begin with depends in part on conventions used and in part on how much one wants to incorporate straightforwardly, but complexification is most often necessary in quantum mechanics where the spin representation of the Lie algebra of the Lorentz group sitting inside the Clifford algebra conventionally requires a complex Clifford algebra. For reference, the spin Lie algebra is given by

σ^{μν} = −(i/4) [γ^μ, γ^ν].

This is in the (3, 1) convention, and hence fits in Cl_{3,1}(C).
The Dirac matrices were first written down by Paul Dirac when he was trying to write a relativistic first-order wave equation for the electron, and give an explicit isomorphism from the complexified Clifford algebra to the algebra of 4 × 4 complex matrices. The result was used to define the Dirac equation and introduce the Dirac operator. The entire Clifford algebra shows up in quantum field theory in the form of Dirac field bilinears.
The use of Clifford algebras to describe quantum theory has been advanced among others by Mario Schönberg, by David Hestenes in terms of geometric calculus, by David Bohm and Basil Hiley and co-workers in form of a hierarchy of Clifford algebras, and by Elio Conte et al.
Computer vision
Clifford algebras have been applied in the problem of action recognition and classification in computer vision. Rodriguez et al. propose a Clifford embedding to generalize traditional MACH filters to video (3D spatiotemporal volume), and vector-valued data such as optical flow. Vector-valued data is analyzed using the Clifford Fourier transform. Based on these vectors, action filters are synthesized in the Clifford Fourier domain and recognition of actions is performed using Clifford correlation. The authors demonstrate the effectiveness of the Clifford embedding by recognizing actions typically performed in classic feature films and sports broadcast television.
Generalizations
While this article focuses on a Clifford algebra of a vector space over a field, the definition extends without change to a module over any unital, associative, commutative ring.
Clifford algebras may be generalized to a form of degree higher than quadratic over a vector space.
See also
Algebra of physical space
Cayley–Dickson construction
Classification of Clifford algebras
Clifford analysis
Clifford module
Complex spin structure
Dirac operator
Exterior algebra
Fierz identity
Gamma matrices
Generalized Clifford algebra
Geometric algebra
Higher-dimensional gamma matrices
Hypercomplex number
Octonion
Paravector
Quaternion
Spin group
Spin structure
Spinor
Spinor bundle
Notes
Citations
References
Further reading
External links
Planetmath entry on Clifford algebras
A history of Clifford algebras (unverified)
John Baez on Clifford algebras
Clifford Algebra: A Visual Introduction
Clifford Algebra Explorer : A Pedagogical Tool
Ring theory
Quadratic forms
Atomic electron transition

In atomic physics and chemistry, an atomic electron transition (also called an atomic transition, quantum jump, or quantum leap) is an electron changing from one energy level to another within an atom or artificial atom. The time scale of a quantum jump has not been measured experimentally. However, the Franck–Condon principle binds the upper limit of this parameter to the order of attoseconds.
Electrons jumping to energy levels of smaller n emit electromagnetic radiation in the form of a photon. Electrons can also absorb passing photons, which drives a quantum jump to a level of higher n. The larger the energy separation between the electron's initial and final state, the shorter the photons' wavelength.
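The inverse relation between energy separation and wavelength can be made concrete with a short calculation. The sketch below is illustrative, not part of the article: it uses the Bohr-model hydrogen levels E_n = −13.6 eV / n² (function names are ours) to find the photon emitted in the n = 3 → n = 2 jump, the red Balmer H-alpha line:

```python
# Photon wavelength for a hydrogen electron dropping between levels.
# Larger energy gaps give shorter wavelengths.

RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.841984   # h*c, in eV*nm

def level_energy(n):
    """Bohr-model energy of hydrogen level n, in eV."""
    return -RYDBERG_EV / n**2

def photon_wavelength_nm(n_hi, n_lo):
    """Wavelength of the photon emitted when the electron drops n_hi -> n_lo."""
    delta_e = level_energy(n_hi) - level_energy(n_lo)  # positive for n_hi > n_lo
    return HC_EV_NM / delta_e

h_alpha = photon_wavelength_nm(3, 2)  # about 656 nm, red light
h_beta = photon_wavelength_nm(4, 2)   # about 486 nm: bigger gap, shorter wavelength
```

The 4 → 2 jump spans a larger energy separation than 3 → 2, so its photon comes out at a shorter wavelength, exactly as the paragraph above states.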
History
Danish physicist Niels Bohr first theorized that electrons can perform quantum jumps in 1913. Soon after, James Franck and Gustav Ludwig Hertz proved experimentally that atoms have quantized energy states.
The observability of quantum jumps was predicted by Hans Dehmelt in 1975, and they were first observed using trapped ions of barium at the University of Hamburg and of mercury at NIST in 1986.
Theory
An atom interacts with the oscillating electric field:

E(r, t) = E_0 ε̂ cos(ωt − k·r)

with amplitude E_0, angular frequency ω, and polarization vector ε̂. Note that the actual phase is ωt − k·r. However, in many cases, the variation of k·r is small over the atom (or equivalently, the radiation wavelength is much greater than the size of an atom) and this term can be ignored. This is called the dipole approximation. The atom can also interact with the oscillating magnetic field produced by the radiation, although much more weakly.
The Hamiltonian for this interaction, analogous to the energy of a classical dipole in an electric field, is H = −d·E, where d is the electric dipole operator. The stimulated transition rate can be calculated using time-dependent perturbation theory; however, the result can be summarized using Fermi's golden rule:

Γ_{i→f} = (2π/ħ) |⟨f|H|i⟩|² ρ(E_f)

The dipole matrix element can be decomposed into the product of a radial integral and an angular integral. The angular integral is zero unless the selection rules for the atomic transition are satisfied.
Recent discoveries
In 2019, it was demonstrated in an experiment with a superconducting artificial atom consisting of two strongly-hybridized transmon qubits placed inside a readout resonator cavity at 15 mK, that the evolution of some jumps is continuous, coherent, deterministic, and reversible. On the other hand, other quantum jumps are inherently unpredictable.
See also
Burst noise
Ensemble interpretation
Fluorescence
Glowing pickle demonstration
Molecular electronic transition, for molecules
Phosphorescence
Quantum jump
Spontaneous emission
Stimulated emission
References
External links
"There are no quantum jumps, nor are there particles!" by H. D. Zeh, Physics Letters A172, 189 (1993).
"Surface plasmon at a metal-dielectric interface with an epsilon-near-zero transition layer" by Kevin Roccapriore et al., Physical Review B 103, L161404 (2021).
Atomic physics
Electron states
Flange

A flange is a protruded ridge, lip or rim, either external or internal, that serves to increase strength (as the flange of a steel beam such as an I-beam or a T-beam); for easy attachment/transfer of contact force with another object (as the flange on the end of a pipe, steam cylinder, etc., or on the lens mount of a camera); or for stabilizing and guiding the movements of a machine or its parts (as the inside flange of a rail car or tram wheel, which keep the wheels from running off the rails). Flanges are often attached using bolts in the pattern of a bolt circle.
Flanges play a pivotal role in piping systems by allowing easy access for maintenance, inspection, and modification. They provide a means to connect or disconnect pipes and equipment without the need for welding, which simplifies installation and reduces downtime during repairs or upgrades. Additionally, flanges facilitate the alignment of pipes, ensuring a proper fit and minimizing stress on the system.
Plumbing or piping
A flange can also be a plate or ring to form a rim at the end of a pipe when fastened to the pipe (for example, a closet flange). A blind flange is a plate for covering or closing the end of a pipe. A flange joint is a connection of pipes, where the connecting pieces have flanges by which the parts are bolted together.
Although the word 'flange' generally refers to the actual raised rim or lip of a fitting, many flanged plumbing fittings are themselves known as flanges.
Common flanges used in plumbing are the Surrey flange or Danzey flange, York flange, Sussex flange and Essex flange.
Surrey and York flanges fit to the top of the hot water tank allowing all the water to be taken without disturbance to the tank. They are often used to ensure an even flow of water to showers. An Essex flange requires a hole to be drilled in the side of the tank.
There is also a Warix flange which is the same as a York flange but the shower output is on the top of the flange and the vent on the side. The York and Warix flange have female adapters so that they fit onto a male tank, whereas the Surrey flange connects to a female tank.
A closet flange provides the mount for a toilet.
Pipe flanges
Piping components can be bolted together between flanges. Flanges are used to connect pipes with each other, to valves, to fittings, and to specialty items such as strainers and pressure vessels. A cover plate can be connected to create a "blind flange". Flanges are joined by bolting, and sealing is often completed with the use of gaskets or other methods. Mechanical means to mitigate effects of leaks, like spray guards or specific spray flanges, may be included. Industries where flammable, volatile, toxic or corrosive substances are being processed have greater need of special protection at flanged connections.
Flange guards can provide that added level of protection to ensure safety.
There are many different flange standards to be found worldwide. To allow easy functionality and interchangeability, these are designed to have standardised dimensions. Common world standards include ASA/ASME (USA), PN/DIN (European), BS10 (British/Australian), and JIS/KS (Japanese/Korean). In the USA, the standard is ASME B16.5 (ANSI stopped publishing B16.5 in 1996). ASME B16.5 covers flanges up to 24 inches size and up to pressure rating of Class 2500. Flanges larger than 24 inches are covered in ASME B16.47.
In most cases, standards are interchangeable, as most local standards have been aligned to ISO standards; however, some local standards still differ. For example, an ASME flange will not mate against an ISO flange. Further, many of the flanges in each standard are divided into "pressure classes", allowing flanges to be capable of taking different pressure ratings. Again these are not generally interchangeable (e.g. an ASME 150 will not mate with an ASME 300).
These pressure classes also have differing pressure and temperature ratings for different materials. Unique pressure classes for piping can also be developed for a process plant or power generating station; these may be specific to the corporation, engineering procurement and construction (EPC) contractor, or the process plant owner. The ASME pressure classes for flat-face flanges are Class 125 and Class 250. The classes for ring-joint, tongue and groove, and raised-face flanges are Class 150, Class 300, Class 400 (unusual), Class 600, Class 900, Class 1500, and Class 2500.
The flange faces are also made to standardized dimensions and are typically "flat face", "raised face", "tongue and groove", or "ring joint" styles, although other obscure styles are possible.
Flange designs are available as "weld neck", "slip-on", "lap joint", "socket weld", "threaded", and also "blind".
Types of flanges
Flanges come in various types, each designed to meet specific requirements based on factors such as pressure, temperature, and application. Some common types include:
Weld Neck Flanges: Weld neck flanges feature a long tapered hub that provides reinforcement to the connection, making them suitable for high-pressure and high-temperature applications.
Slip-On Flanges: Slip-on flanges have a slightly larger diameter than the pipe they connect to and are slipped over the pipe before welding. They are commonly used in low-pressure and non-critical applications.
Socket Weld Flanges: Socket weld flanges have a recessed area (socket) into which the pipe end fits, allowing for fillet welding. They are suitable for small-bore piping systems and applications with moderate pressure and temperature requirements.
Blind Flanges: Blind flanges are solid discs used to close the end of a piping system or vessel. They are often used for pressure testing or as a permanent seal when a pipe end needs to be closed off.
ASME standards (U.S.)
Pipe flanges are made to standards called out by ASME B16.5 or ASME B16.47, and MSS SP-44. They are typically made from forged materials and have machined surfaces. ASME B16.5 refers to nominal pipe sizes (NPS) from 1/2" to 24". B16.47 covers NPS from 26" to 60". Each specification further delineates flanges into pressure classes: 150, 300, 400, 600, 900, 1500 and 2500 for B16.5, while B16.47 delineates its flanges into pressure classes 75, 150, 300, 400, 600, 900. However, these classes do not correspond to maximum pressures in psi. Instead, the maximum pressure depends on the material of the flange and the temperature. For example, the maximum pressure for a Class 150 flange is 285 psi, and for a Class 300 flange it is 740 psi (both for ASTM A105 carbon steel at temperatures below 100 °F).
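The class/rating distinction can be sketched as a lookup. The snippet below is illustrative only: the table and helper names (`MAX_PRESSURE_PSI`, `rating`) are ours, and it contains just the two data points quoted in the paragraph above (ASTM A105 carbon steel below 100 °F); a real table would come from the ASME B16.5 pressure–temperature rating tables, which key on class, material group, and temperature:

```python
# Pressure class numbers are labels, not psi values: Class 150 here is
# rated 285 psi, not 150 psi. Ratings depend on class, material, and
# temperature, so the lookup key carries all three.

MAX_PRESSURE_PSI = {
    # (pressure class, material, max temp in deg F) -> max working pressure, psi
    (150, "A105", 100): 285,
    (300, "A105", 100): 740,
}

def rating(flange_class, material="A105", temp_f=100):
    """Look up the maximum working pressure for a flange, if known."""
    try:
        return MAX_PRESSURE_PSI[(flange_class, material, temp_f)]
    except KeyError:
        raise LookupError("not in this sketch; consult the ASME B16.5 tables")

class_150_psi = rating(150)  # 285 -- nearly double the class number
```

Any combination outside the two quoted data points deliberately raises, rather than guessing a rating.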
The gasket type and bolt type are generally specified by the standard(s); however, sometimes the standards refer to the ASME Boiler and Pressure Vessel Code (B&PVC) for details (see ASME Code Section VIII Division 1 – Appendix 2). These flanges are recognized by ASME Pipe Codes such as ASME B31.1 Power Piping, and ASME B31.3 Process Piping.
Materials for flanges are usually under ASME designation: SA-105 (Specification for Carbon Steel Forgings for Piping Applications), SA-266 (Specification for Carbon Steel Forgings for Pressure Vessel Components), or SA-182 (Specification for Forged or Rolled Alloy-Steel Pipe Flanges, Forged Fittings, and Valves and Parts for High-Temperature Service). In addition, there are many "industry standard" flanges that in some circumstance may be used on ASME work.
The product range includes SORF, SOFF, BLRF, BLFF, WNRF (XS, XXS, STD and Schedule 20, 40, 80), WNFF (XS, XXS, STD and Schedule 20, 40, 80), SWRF (XS and STD), SWFF (XS and STD), Threaded RF, Threaded FF and LJ, with sizes from 1/2" to 16". The bolting material used for flange connections is stud bolts mated with two nuts (and washers when required). In petrochemical industries, ASTM A193 B7 and ASTM A193 B16 stud bolts are used, as these have high tensile strength.
European dimensions (EN / DIN) Hygienic Flange STC DIN11853-2
Most countries in Europe mainly install flanges according to standard DIN EN 1092-1 (forged stainless or steel flanges). Similar to the ASME flange standard, the EN 1092-1 standard has the basic flange forms, such as weld neck flange, blind flange, lapped flange, threaded flange (thread ISO7-1 instead of NPT), weld on collar, pressed collars, and adapter flange such as flange coupling GD press fittings. The different forms of flanges within the EN 1092-1 (European Norm/Euronorm) is indicated within the flange name through the type.
Similar to ASME flanges, EN 1092-1 steel and stainless steel flanges have several different versions of raised or non-raised faces. Within the European standard, the sealing face is indicated by a form type.
Furthermore, for sanitary applications such as in the food and beverage and pharmaceutical industries, sanitary flanges according to DIN 11853-2 STC are utilized. The primary distinction between sanitary flanges according to DIN 11853-2 and DIN/EN flanges lies in the restricted dead space and the interior polishing according to hygienic levels H1 to H4. Flange traders that stock standard DIN EN 1092-1 flanges, such as Hage Fittings, usually do not stock sanitary flanges, as the storage requirements are different: sanitary flanges are more delicate and need to stay clean. The O-rings, according to DIN 11853, are usually made of FPM or EPDM.
Other countries
Flanges in the rest of the world are manufactured according to the ISO standards for materials, pressure ratings, etc. to which local standards including DIN, BS, and others, have been aligned.
Compact flanges
As the size of a compact flange increases it becomes relatively increasingly heavy and complex resulting in high procurement, installation and maintenance costs.
Large flange diameters in particular are difficult to work with, and inevitably require more space and have a more challenging handling and installation procedure, particularly on remote installations such as oil rigs.
The design of the flange face includes two independent seals. The first seal is created by application of seal seating stress at the flange heel, but it is not straightforward to ensure the function of this seal.
Theoretically, the heel contact will be maintained for pressure values up to 1.8 times the flange rating at room temperature.
Theoretically, the flange also remains in contact along its outer circumference at the flange faces for all allowable load levels that it is designed for.
The main seal is the IX seal ring. The seal ring force is provided by the elastic stored energy in the stressed seal ring. Any heel leakage will give internal pressure acting on the seal ring inside intensifying the sealing action. This however requires the IX ring to be retained in the theoretical location in the ring groove which is difficult to ensure and verify during installation.
The design aims at preventing exposure to oxygen and other corrosive agents, thus preventing corrosion of the flange faces, the stressed length of the bolts, and the seal ring. This, however, depends on the outer dust rim remaining in satisfactory contact, and on the inside fluid not being corrosive if it leaks into the bolt-circle void.
Applications of compact flanges
The initial cost of the theoretical higher performance compact flange is inevitably higher than a regular flange due to the closer tolerances and significantly more sophisticated design and installation requirements.
By way of example, compact flanges are often used across the following applications: subsea oil and gas or riser, cold work and cryogenics, gas injection, high temperature, and nuclear applications.
Train wheels
Most trains and trams stay on their tracks primarily due to the conical geometry of their wheels. They also have a flange on one side to keep the wheels, and hence the train, running on the rails when the limits of the geometry-based alignment are reached, either due to some emergency or defect, or simply because the curve radius is so small that self-steering normally provided by the coned wheel tread is no longer effective.
Vacuum flanges
A vacuum flange is a flange at the end of a tube used to connect vacuum chambers, tubing and vacuum pumps to each other.
Microwave
In microwave telecommunications, a flange is a type of cable joint that allows different types of waveguide to connect.
Several different microwave RF flange types exist, such as CAR, CBR, OPC, PAR, PBJ, PBR, PDR, UAR, UBR, UDR, icp and UPX.
Ski boots
Ski boots use flanges at the toe or heel to connect to the binding of the ski. The size and shape for flanges on alpine skiing boots is standardized in ISO 5355. Traditional telemark and cross country boots use the 75 mm Nordic Norm, but the toe flange is informally known as the "duckbill". New cross country bindings eliminate the flange entirely and use a steel bar embedded within the sole instead.
See also
Casing head
Closet flange
Victaulic
Swivel
References
Further reading
ASME B16.5: Standard Pipe Flanges up to and including 24 inches nominal
ASME B16.47: Standard Pipe Flanges above 24 inches
ASME Section II (Materials), Part A – Ferrous Material Specifications
ASME B16.47 Standard Pipe Flanges Yaang Pipe Industry
ANSI Flange Torque Lookup Tool
Piping
Plumbing
Mechanical engineering
Structural engineering
Train wheels
Nash equilibrium

In game theory, the Nash equilibrium is the most commonly-used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy (holding all other players' strategies fixed). The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly.
If each player has chosen a strategy (an action plan based on what has happened so far in the game) and no one can increase their own expected payoff by changing their strategy while the other players keep theirs unchanged, then the current set of strategy choices constitutes a Nash equilibrium.
If two players Alice and Bob choose strategies A and B, (A, B) is a Nash equilibrium if Alice has no other strategy available that does better than A at maximizing her payoff in response to Bob choosing B, and Bob has no other strategy available that does better than B at maximizing his payoff in response to Alice choosing A. In a game in which Carol and Dan are also players, (A, B, C, D) is a Nash equilibrium if A is Alice's best response to (B, C, D), B is Bob's best response to (A, C, D), and so forth.
Nash showed that there is a Nash equilibrium, possibly in mixed strategies, for every finite game.
Applications
Game theorists use Nash equilibrium to analyze the outcome of the strategic interaction of several decision makers. In a strategic interaction, the outcome for each decision-maker depends on the decisions of the others as well as their own. The simple insight underlying Nash's idea is that one cannot predict the choices of multiple decision makers if one analyzes those decisions in isolation. Instead, one must ask what each player would do taking into account what the player expects the others to do. Nash equilibrium requires that one's choices be consistent: no player wishes to undo their decision given what the others are deciding.
The concept has been used to analyze hostile situations such as wars and arms races (see prisoner's dilemma), and also how conflict may be mitigated by repeated interaction (see tit-for-tat). It has also been used to study to what extent people with different preferences can cooperate (see battle of the sexes), and whether they will take risks to achieve a cooperative outcome (see stag hunt). It has been used to study the adoption of technical standards, and also the occurrence of bank runs and currency crises (see coordination game). Other applications include traffic flow (see Wardrop's principle), how to organize auctions (see auction theory), the outcome of efforts exerted by multiple parties in the education process, regulatory legislation such as environmental regulations (see tragedy of the commons), natural resource management, analysing strategies in marketing, penalty kicks in football (see matching pennies), robot navigation in crowds, energy systems, transportation systems, evacuation problems and wireless communications.
History
Nash equilibrium is named after American mathematician John Forbes Nash Jr. The same idea was used in a particular application in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, each of several firms choose how much output to produce to maximize its profit. The best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. Cournot did not use the idea in any other applications, however, or define it generally.
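Cournot's setting can be made concrete with a short numeric sketch (illustrative only: the linear demand and cost numbers below are our assumptions, not from the article). With inverse demand P = a − b(q1 + q2) and constant marginal cost c, firm i's best response is q_i = (a − c − b·q_other) / (2b), and iterating best responses, in the spirit of Cournot's stability analysis, converges to the Cournot–Nash output q* = (a − c) / (3b) for each firm:

```python
# Best-response dynamics for a Cournot duopoly with linear demand
# P = a - b*(q1 + q2) and constant marginal cost c (illustrative numbers).

A, B, C = 120.0, 1.0, 30.0   # demand intercept, slope, marginal cost

def best_response(q_other):
    """Profit-maximizing output given the rival's output.

    From d/dq [ q*(A - B*(q + q_other)) - C*q ] = 0.
    """
    return max(0.0, (A - C - B * q_other) / (2 * B))

q1, q2 = 0.0, 0.0
for _ in range(100):          # repeated best responses (Cournot's dynamics)
    q1, q2 = best_response(q2), best_response(q1)

q_star = (A - C) / (3 * B)    # closed-form Cournot-Nash output: 30.0 per firm
assert abs(q1 - q_star) < 1e-6 and abs(q2 - q_star) < 1e-6
```

At the fixed point each firm's output maximizes its profit given the other's, which is exactly the pure-strategy Nash equilibrium described above.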
The modern concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible pure strategies (which might put 100% of the probability on one pure strategy; such pure strategies are a subset of mixed strategies). The concept of a mixed-strategy equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book The Theory of Games and Economic Behavior, but their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The contribution of Nash in his 1951 article "Non-Cooperative Games" was to define a mixed-strategy Nash equilibrium for any game with a finite set of actions and prove that at least one (mixed-strategy) Nash equilibrium must exist in such a game. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, "an equilibrium point is an n-tuple such that each player's mixed strategy maximizes [their] payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others." Putting the problem in this framework allowed Nash to employ the Kakutani fixed-point theorem in his 1950 paper to prove existence of equilibria. His 1951 paper used the simpler Brouwer fixed-point theorem for the same purpose.
Game theorists have discovered that in some circumstances Nash equilibrium makes invalid predictions or fails to make a unique prediction. They have proposed many solution concepts ('refinements' of Nash equilibria) designed to rule out implausible Nash equilibria. One particularly important issue is that some Nash equilibria may be based on threats that are not 'credible'. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. However, subsequent refinements and extensions of Nash equilibrium share the main insight on which Nash's concept rests: the equilibrium is a set of strategies such that each player's strategy is optimal given the choices of the others.
Definitions
Nash equilibrium
A strategy profile is a set of strategies, one for each player. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing their strategy. To see what this means, imagine that each player is told the strategies of the others. Suppose then that each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, can I benefit by changing my strategy?"
For instance, if any player's answer is "Yes", then that set of strategies is not a Nash equilibrium. But if every player prefers not to switch (or is indifferent between switching and not) then the strategy profile is a Nash equilibrium. Thus, each strategy in a Nash equilibrium is a best response to the other players' strategies in that equilibrium.
Formally, let S_i be the set of all possible strategies for player i, where i = 1, …, N. Let s* = (s_i*, s_{−i}*) be a strategy profile, a set consisting of one strategy for each player, where s_{−i}* denotes the N − 1 strategies of all the players except i. Let u_i(s_i, s_{−i}*) be player i's payoff as a function of the strategies. The strategy profile s* is a Nash equilibrium if

u_i(s_i*, s_{−i}*) ≥ u_i(s_i, s_{−i}*) for all s_i ∈ S_i.

A game can have more than one Nash equilibrium. Even if the equilibrium is unique, it might be weak: a player might be indifferent among several strategies given the other players' choices. It is unique and called a strict Nash equilibrium if the inequality is strict, so that one strategy is the unique best response:

u_i(s_i*, s_{−i}*) > u_i(s_i, s_{−i}*) for all s_i ∈ S_i, s_i ≠ s_i*.
The strategy set S_i can be different for different players, and its elements can be a variety of mathematical objects. Most simply, a player might choose between two strategies, e.g. S_i = {Yes, No}. Or the strategy set might be a finite set of conditional strategies responding to other players. Or it might be an infinite set, a continuum or unbounded, e.g. S_i = {Price : Price ≥ 0} such that Price is a non-negative real number. Nash's existence proofs assume a finite strategy set, but the concept of Nash equilibrium does not require it.
Variants
Pure/mixed equilibrium
A game can have a pure-strategy or a mixed-strategy Nash equilibrium. In the latter, not every player always plays the same strategy. Instead, there is a probability distribution over different strategies.
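A mixed equilibrium of a 2×2 game can be computed from the indifference condition: each player randomizes so that the opponent's two pure strategies earn equal expected payoff, leaving the opponent willing to randomize too. The sketch below is illustrative (the helper names are ours; matching pennies, mentioned under Applications, is the standard example) and recovers its unique 50/50 mixed equilibrium:

```python
# Mixed-strategy equilibrium of a 2x2 game via the indifference condition:
# the mixer's probability p on their first strategy is chosen so the
# OPPONENT is indifferent between the opponent's two strategies.

def indifference_mix(M):
    """Probability on the mixer's first strategy that makes the opponent
    indifferent, where M[i][j] is the OPPONENT's payoff when the mixer
    plays i and the opponent plays j.
    Solves p*M[0][0] + (1-p)*M[1][0] == p*M[0][1] + (1-p)*M[1][1]."""
    denom = (M[0][0] - M[0][1]) + (M[1][1] - M[1][0])
    return (M[1][1] - M[1][0]) / denom

def transpose(M):
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

# Matching pennies: the row player wins on a match, the column player otherwise.
row_payoff = [[1, -1], [-1, 1]]
col_payoff = [[-1, 1], [1, -1]]

p_row = indifference_mix(col_payoff)             # row's weight on "heads": 0.5
q_col = indifference_mix(transpose(row_payoff))  # column's weight on "heads": 0.5
```

Matching pennies has no pure-strategy equilibrium at all, which is why the probability distribution over strategies is essential here.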
Strict/non-strict equilibrium
Suppose that in the Nash equilibrium, each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, would I suffer a loss by changing my strategy?"
If every player's answer is "Yes", then the equilibrium is classified as a strict Nash equilibrium.
If instead, for some player, there is exact equality between the strategy in Nash equilibrium and some other strategy that gives exactly the same payout (i.e. the player is indifferent between switching and not), then the equilibrium is classified as a weak or non-strict Nash equilibrium.
Equilibria for coalitions
The Nash equilibrium defines stability only in terms of individual player deviations. In cooperative games such a concept is not convincing enough. Strong Nash equilibrium allows for deviations by every conceivable coalition. Formally, a strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way that benefits all of its members. However, the strong Nash concept is sometimes perceived as too "strong" in that the environment allows for unlimited private communication. In fact, strong Nash equilibrium has to be Pareto efficient. As a result of these requirements, strong Nash is too rare to be useful in many branches of game theory. However, in games such as elections with many more players than possible outcomes, it can be more common than a stable equilibrium.
A refined Nash equilibrium known as coalition-proof Nash equilibrium (CPNE) occurs when players cannot do better even if they are allowed to communicate and make "self-enforcing" agreement to deviate. Every correlated strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE. Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions less than a specified size, k. CPNE is related to the theory of the core.
Existence
Nash's existence theorem
Nash proved that if mixed strategies (where a player chooses probabilities of using various pure strategies) are allowed, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium, which might be a pure strategy for each player or might be a probability distribution over strategies for each player.
Nash equilibria need not exist if the set of choices is infinite and non-compact. For example:
A game where two players simultaneously name a number and the player naming the larger number wins does not have a Nash equilibrium, as the set of choices is not compact because it is unbounded.
Each of two players chooses a real number strictly less than 5 and the winner is whoever has the biggest number; no biggest number strictly less than 5 exists (if the number could equal 5, the Nash equilibrium would have both players choosing 5 and tying the game). Here, the set of choices is not compact because it is not closed.
However, a Nash equilibrium exists if the set of choices is compact with each player's payoff continuous in the strategies of all the players.
Rosen's existence theorem
Rosen extended Nash's existence theorem in several ways. He considers an n-player game, in which the strategy of each player i is a vector s_i in the Euclidean space R^(m_i). Denote m := m_1 + ... + m_n; so a strategy-tuple is a vector in R^m. Part of the definition of a game is a subset S of R^m such that the strategy-tuple must be in S. This means that the actions of players may potentially be constrained based on actions of other players. A common special case of the model is when S is a Cartesian product of convex sets S_1, ..., S_n, such that the strategy of player i must be in S_i. This represents the case that the actions of each player i are constrained independently of other players' actions. If the following conditions hold:
S is convex, closed and bounded;
Each payoff function u_i is continuous in the strategies of all players, and concave in s_i for every fixed value of s_−i.
Then a Nash equilibrium exists. The proof uses the Kakutani fixed-point theorem. Rosen also proves that, under certain technical conditions which include strict concavity, the equilibrium is unique.
Nash's result refers to the special case in which each Si is a simplex (representing all possible mixtures of pure strategies), and the payoff functions of all players are bilinear functions of the strategies.
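As a concrete instance of Rosen's setting (an illustration, not from the text), a Cournot duopoly with linear demand has convex, compact strategy sets and a payoff that is concave in each player's own quantity, so an equilibrium exists and, here, is unique. A minimal sketch with arbitrarily chosen demand intercept and cost:

```python
# Cournot duopoly as an instance of Rosen's concave-game setting (illustrative;
# the values of a and c are assumptions). Strategy sets [0, a] are convex and
# compact, and each payoff u_i(q_i, q_j) = q_i*(a - q_i - q_j - c) is concave in q_i.
a, c = 10.0, 1.0  # demand intercept and constant marginal cost (assumed values)

def best_response(q_other):
    # argmax over q_i >= 0 of q_i * (a - q_i - q_other - c)
    return max(0.0, (a - c - q_other) / 2)

q1 = q2 = 0.0
for _ in range(100):  # simultaneous best-response iteration
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 6), round(q2, 6))  # 3.0 3.0 -- the unique equilibrium (a - c)/3
```

The iteration converges here because the best-response map is a contraction, which is itself a consequence of the strict concavity of the payoffs.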
Rationality
The Nash equilibrium may sometimes appear non-rational in a third-person perspective. This is because a Nash equilibrium is not necessarily Pareto optimal.
Nash equilibrium may also have non-rational consequences in sequential games because players may "threaten" each other with threats they would not actually carry out. For such games the subgame perfect Nash equilibrium may be more meaningful as a tool of analysis.
Examples
Coordination game
The coordination game is a classic two-player, two-strategy game, as shown in the example payoff matrix to the right. There are two pure-strategy equilibria, (A,A) with payoff 4 for each player and (B,B) with payoff 2 for each. The combination (B,B) is a Nash equilibrium because if either player unilaterally changes their strategy from B to A, their payoff will fall from 2 to 1.
A famous example of a coordination game is the stag hunt. Two players may choose to hunt a stag or a rabbit, the stag providing more meat (4 utility units, 2 for each player) than the rabbit (1 utility unit). The caveat is that the stag must be cooperatively hunted, so if one player attempts to hunt the stag, while the other hunts the rabbit, the stag hunter will totally fail, for a payoff of 0, whereas the rabbit hunter will succeed, for a payoff of 1. The game has two equilibria, (stag, stag) and (rabbit, rabbit), because a player's optimal strategy depends on their expectation on what the other player will do. If one hunter trusts that the other will hunt the stag, they should hunt the stag; however if they think the other will hunt the rabbit, they too will hunt the rabbit. This game is used as an analogy for social cooperation, since much of the benefit that people gain in society depends upon people cooperating and implicitly trusting one another to act in a manner corresponding with cooperation.
Driving on a road against an oncoming car, and having to choose either to swerve on the left or to swerve on the right of the road, is also a coordination game. For example, with payoffs 10 meaning no crash and 0 meaning a crash, the coordination game can be defined with the following payoff matrix:
In this case there are two pure-strategy Nash equilibria, when both choose to either drive on the left or on the right. If we admit mixed strategies (where a pure strategy is chosen at random, subject to some fixed probability), then there are three Nash equilibria for the same case: two we have seen from the pure-strategy form, where the probabilities are (0%, 100%) for player one, (0%, 100%) for player two; and (100%, 0%) for player one, (100%, 0%) for player two respectively. We add another where the probabilities for each player are (50%, 50%).
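All three equilibria of the driving game can be checked directly. A short sketch using the payoffs described above (10 for matching sides, 0 for a crash):

```python
# Driving game: strategies 0 = Left, 1 = Right; payoff 10 when both choose the
# same side, 0 when they crash.
A = [[10, 0], [0, 10]]   # row player's payoffs
B = [[10, 0], [0, 10]]   # column player's payoffs (symmetric)

# Pure equilibria: neither player can gain by a unilateral deviation.
pure = [(i, j) for i in range(2) for j in range(2)
        if A[i][j] == max(A[k][j] for k in range(2))
        and B[i][j] == max(B[i][k] for k in range(2))]
print(pure)  # [(0, 0), (1, 1)]

# Mixed equilibrium: the column player's probability q of Left must make the
# row player indifferent: 10*q = 10*(1 - q)  =>  q = 1/2 (and symmetrically p).
q = 10 / (10 + 10)
print(q)  # 0.5
```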
Network traffic
An application of Nash equilibria is in determining the expected flow of traffic in a network. Consider the graph on the right, in which cars travel from point A to point D. What is the expected distribution of traffic in the network?
This situation can be modeled as a "game", where every traveler has a choice of 3 strategies and where each strategy is a route from A to D (one of ABD, ABCD, or ACD). The "payoff" of each strategy is the travel time of each route. In the graph on the right, a car travelling via ABD experiences travel time of (1 + x/100) + 2, where x is the number of cars traveling on edge AB. Thus, payoffs for any given strategy depend on the choices of the other players, as is usual. However, the goal, in this case, is to minimize travel time, not maximize it. Equilibrium will occur when the time on all paths is exactly the same. When that happens, no single driver has any incentive to switch routes, since it can only add to their travel time. For the graph on the right, if, for example, 100 cars are travelling from A to D, then equilibrium will occur when 25 drivers travel via ABD, 50 via ABCD, and 25 via ACD. Every driver now has a total travel time of 3.75 (to see this, a total of 75 cars take the AB edge, and likewise, 75 cars take the CD edge).
Notice that this distribution is not, actually, socially optimal. If the 100 cars agreed that 50 travel via ABD and the other 50 through ACD, then travel time for any single car would actually be 3.5, which is less than 3.75. This is also the Nash equilibrium if the path between B and C is removed, which means that adding another possible route can decrease the efficiency of the system, a phenomenon known as Braess's paradox.
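Both the equilibrium and the socially optimal split can be verified numerically. The edge costs below are assumptions — the graph itself is in a figure not reproduced here — but they are chosen to reproduce the 3.75 and 3.5 travel times quoted in the text:

```python
# Route travel times for a four-node network A, B, C, D. Edge costs (assumed):
#   A->B: 1 + x/100   B->D: 2   A->C: 2   C->D: 1 + x/100   B->C: 0.25
def times(n_abd, n_abcd, n_acd):
    x_ab = n_abd + n_abcd   # cars using edge A->B
    x_cd = n_acd + n_abcd   # cars using edge C->D
    t_abd = (1 + x_ab / 100) + 2
    t_abcd = (1 + x_ab / 100) + 0.25 + (1 + x_cd / 100)
    t_acd = 2 + (1 + x_cd / 100)
    return t_abd, t_abcd, t_acd

print(times(25, 50, 25))  # (3.75, 3.75, 3.75): equal times, so no one switches
# The coordinated 50/50 split is faster for everyone -- but not an equilibrium:
# the unused middle route would take only 3.25, tempting drivers to deviate.
print(times(50, 0, 50))   # (3.5, 3.25, 3.5)
```

The second printout shows why the better split unravels: a single driver switching to ABCD would cut their own time, which is the mechanism behind Braess's paradox.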
Competition game
This can be illustrated by a two-player game in which both players simultaneously choose an integer from 0 to 3 and they both win the smaller of the two numbers in points. In addition, if one player chooses a larger number than the other, then they have to give up two points to the other.
This game has a unique pure-strategy Nash equilibrium: both players choosing 0 (highlighted in light red). Any other strategy can be improved by a player switching their number to one less than that of the other player. In the adjacent table, if the game begins at the green square, it is in player 1's interest to move to the purple square and it is in player 2's interest to move to the blue square. Although it would not fit the definition of a competition game, if the game is modified so that the two players win the named amount if they both choose the same number, and otherwise win nothing, then there are 4 Nash equilibria: (0,0), (1,1), (2,2), and (3,3).
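The unique equilibrium at (0,0) can be confirmed by brute-force enumeration of the 4×4 payoff table implied by the rules above:

```python
def u(a, b):
    # payoff to a player choosing a against b: both win min(a, b), and the
    # player naming the larger number gives up two points to the other
    m = min(a, b)
    return m - 2 if a > b else m + 2 if a < b else m

nash = [(a, b) for a in range(4) for b in range(4)
        if u(a, b) == max(u(x, b) for x in range(4))
        and u(b, a) == max(u(y, a) for y in range(4))]
print(nash)  # [(0, 0)] -- the unique pure-strategy equilibrium
```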
Nash equilibria in a payoff matrix
There is an easy numerical way to identify Nash equilibria on a payoff matrix. It is especially helpful in two-person games where players have more than two strategies. In this case formal analysis may become too long. This rule does not apply to the case where mixed (stochastic) strategies are of interest. The rule goes as follows: if the first payoff number, in the payoff pair of the cell, is the maximum of the column of the cell and if the second number is the maximum of the row of the cell then the cell represents a Nash equilibrium.
We can apply this rule to a 3×3 matrix:
Using the rule, we can very quickly (much faster than with formal analysis) see that the Nash equilibria cells are (B,A), (A,B), and (C,C). Indeed, for cell (B,A), 40 is the maximum of the first column and 25 is the maximum of the second row. For (A,B), 25 is the maximum of the second column and 40 is the maximum of the first row; the same applies for cell (C,C). For other cells, either one or both of the duplet members are not the maximum of the corresponding rows and columns.
This said, the actual mechanics of finding equilibrium cells is obvious: find the maximum of a column and check if the second member of the pair is the maximum of the row. If these conditions are met, the cell represents a Nash equilibrium. Check all columns this way to find all NE cells. An N×N matrix may have between 0 and N×N pure-strategy Nash equilibria.
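The cell-scanning rule is straightforward to mechanize. The 3×3 bimatrix below is hypothetical — only the 40/25 values around cells (B,A) and (A,B) come from the text, since the article's actual matrix is in a figure not reproduced here — but it yields the same three equilibrium cells:

```python
# The rule as code. The bimatrix entries (other than the 40/25 pairs) are
# invented for illustration, chosen so the NE cells are (A,B), (B,A), (C,C).
payoffs = {  # (row, column) -> (row player's payoff, column player's payoff)
    ('A', 'A'): (0, 0),   ('A', 'B'): (25, 40), ('A', 'C'): (5, 10),
    ('B', 'A'): (40, 25), ('B', 'B'): (0, 0),   ('B', 'C'): (5, 15),
    ('C', 'A'): (10, 5),  ('C', 'B'): (15, 5),  ('C', 'C'): (10, 10),
}
labels = ['A', 'B', 'C']

def nash_cells(payoffs):
    cells = []
    for r in labels:
        for c in labels:
            u1, u2 = payoffs[(r, c)]
            if (u1 == max(payoffs[(r2, c)][0] for r2 in labels)        # column max
                    and u2 == max(payoffs[(r, c2)][1] for c2 in labels)):  # row max
                cells.append((r, c))
    return cells

print(nash_cells(payoffs))  # [('A', 'B'), ('B', 'A'), ('C', 'C')]
```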
Stability
The concept of stability, useful in the analysis of many kinds of equilibria, can also be applied to Nash equilibria.
A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold:
the player who did not change has no better strategy in the new circumstance
the player who did change is now playing with a strictly worse strategy.
If these cases are both met, then a player with the small change in their mixed strategy will return immediately to the Nash equilibrium. The equilibrium is said to be stable. If condition one does not hold then the equilibrium is unstable. If only condition one holds then there are likely to be an infinite number of optimal strategies for the player who changed.
In the "driving game" example above there are both stable and unstable equilibria. The equilibria involving mixed strategies with 100% probabilities are stable. If either player changes their probabilities slightly, they will both be at a disadvantage, and their opponent will have no reason to change their strategy in turn. The (50%,50%) equilibrium is unstable. If either player changes their probabilities (which would neither benefit nor damage the expectation of the player who made the change, if the other player's mixed strategy is still (50%,50%)), then the other player immediately has a better strategy at either (0%, 100%) or (100%, 0%).
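The instability of the (50%, 50%) equilibrium can be demonstrated with a small numerical perturbation, using the driving-game payoffs from above (10 for matching sides, 0 otherwise):

```python
# Driving-game payoffs: 10 when both choose the same side, 0 otherwise.
def expected(p_row_left, q_col_left):
    # row player's expected payoff (the game is symmetric)
    return 10 * (p_row_left * q_col_left
                 + (1 - p_row_left) * (1 - q_col_left))

eps = 0.01
p = 0.5 + eps  # row player perturbs slightly toward Left

# Column player's pure-strategy payoffs against the perturbed row player:
u_left, u_right = 10 * p, 10 * (1 - p)
print(u_left > u_right)  # True: the non-deviating player now has a strictly
                         # better reply, so condition one fails (instability)

# The deviation itself is costless while the opponent still mixes (50%, 50%):
print(expected(p, 0.5), expected(0.5, 0.5))  # both 5.0
```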
Stability is crucial in practical applications of Nash equilibria, since the mixed strategy of each player is not perfectly known, but has to be inferred from statistical distribution of their actions in the game. In this case unstable equilibria are very unlikely to arise in practice, since any minute change in the proportions of each strategy seen will lead to a change in strategy and the breakdown of the equilibrium.
Finally, in the 1980s, building on such ideas, Mertens-stable equilibria were introduced as a solution concept. Mertens-stable equilibria satisfy both forward induction and backward induction. In a game-theory context, stable equilibria now usually refer to Mertens-stable equilibria.
Occurrence
If a game has a unique Nash equilibrium and is played among players under certain conditions, then the NE strategy set will be adopted. Sufficient conditions to guarantee that the Nash equilibrium is played are:
The players all will do their utmost to maximize their expected payoff as described by the game.
The players are flawless in execution.
The players have sufficient intelligence to deduce the solution.
The players know the planned equilibrium strategy of all of the other players.
The players believe that a deviation in their own strategy will not cause deviations by any other players.
There is common knowledge that all players meet these conditions, including this one. So, not only must each player know the other players meet the conditions, but also they must know that they all know that they meet them, and know that they know that they know that they meet them, and so on.
Where the conditions are not met
Examples of game theory problems in which these conditions are not met:
The first condition is not met if the game does not correctly describe the quantities a player wishes to maximize. In this case there is no particular reason for that player to adopt an equilibrium strategy. For instance, the prisoner's dilemma is not a dilemma if either player is happy to be jailed indefinitely.
Intentional or accidental imperfection in execution. For example, a computer capable of flawless logical play facing a second flawless computer will result in equilibrium. Introduction of imperfection will lead to its disruption either through loss to the player who makes the mistake, or through negation of the common knowledge criterion leading to possible victory for the player. (An example would be a player suddenly putting the car into reverse in the game of chicken, ensuring a no-loss no-win scenario).
In many cases, the third condition is not met because, even though the equilibrium must exist, it is unknown due to the complexity of the game, for instance in Chinese chess. Or, if known, it may not be known to all players, as when playing tic-tac-toe with a small child who desperately wants to win (meeting the other criteria).
The criterion of common knowledge may not be met even if all players do, in fact, meet all the other criteria. Players wrongly distrusting each other's rationality may adopt counter-strategies to expected irrational play on their opponents’ behalf. This is a major consideration in "chicken" or an arms race, for example.
Where the conditions are met
In his Ph.D. dissertation, John Nash proposed two interpretations of his equilibrium concept, with the objective of showing how equilibrium points can be connected with observable phenomena.
This idea was formalized by R. Aumann and A. Brandenburger (1995, "Epistemic Conditions for Nash Equilibrium", Econometrica, 63, 1161–1180), who interpreted each player's mixed strategy as a conjecture about the behaviour of the other players and showed that if the game and the rationality of the players are mutually known and these conjectures are commonly known, then the conjectures must be a Nash equilibrium (a common prior assumption is needed for this result in general, but not in the case of two players; in that case, the conjectures need only be mutually known).
A second interpretation, which Nash referred to as the mass-action interpretation, is less demanding on players.
For a formal result along these lines, see Kuhn, H., et al. (1996), "The Work of John Nash in Game Theory", Journal of Economic Theory, 69, 153–185.
Due to the limited conditions in which NE can actually be observed, they are rarely treated as a guide to day-to-day behaviour, or observed in practice in human negotiations. However, as a theoretical concept in economics and evolutionary biology, the NE has explanatory power. The payoff in economics is utility (or sometimes money), and in evolutionary biology is gene transmission; both are the fundamental bottom line of survival. Researchers who apply game theory in these fields claim that strategies failing to maximize these for whatever reason will be competed out of the market or environment, which are ascribed the ability to test all strategies. This conclusion is drawn from the "stability" theory above. In these situations the assumption that the strategy observed is actually a NE has often been borne out by research.
NE and non-credible threats
The set of Nash equilibria is a superset of the set of subgame perfect Nash equilibria. A subgame perfect equilibrium is a Nash equilibrium whose strategy profile also induces a Nash equilibrium in every subgame of the game. This eliminates all non-credible threats, that is, strategies that contain non-rational moves intended to make the other player change their strategy.
The image to the right shows a simple sequential game that illustrates the issue with subgame imperfect Nash equilibria. In this game player one chooses left (L) or right (R), after which player two is called upon to be kind (K) or unkind (U) to player one. However, player two only stands to gain from being unkind if player one goes left. If player one goes right, the rational player two would de facto be kind to them in that subgame. However, the non-credible threat of being unkind at node 2(2) is still part of the blue (L, (U,U)) Nash equilibrium. Therefore, if rational behavior can be expected of both parties, the subgame perfect Nash equilibrium may be a more meaningful solution concept when such dynamic inconsistencies arise.
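Since the article's figure is not reproduced here, the payoffs below are hypothetical, chosen only so that player two gains from being unkind after L but not after R. Backward induction then selects the subgame perfect outcome and exposes the non-credible threat:

```python
# Hypothetical payoffs (the article's actual numbers are in a figure not
# reproduced here): player 2 gains from U only after L, so the threat of
# U after R is non-credible.
payoffs = {  # (player 1's move, player 2's move) -> (u1, u2)
    ('L', 'K'): (1, 1), ('L', 'U'): (0, 2),
    ('R', 'K'): (2, 1), ('R', 'U'): (-1, 0),
}

# Backward induction: player 2 best-responds within every subgame.
reply = {m1: max(['K', 'U'], key=lambda m2: payoffs[(m1, m2)][1]) for m1 in 'LR'}
# Player 1 anticipates those replies.
move1 = max('LR', key=lambda m1: payoffs[(m1, reply[m1])][0])
print(move1, reply)  # R {'L': 'U', 'R': 'K'} -- the subgame perfect equilibrium

# The Nash equilibrium (L, "U everywhere") survives only because the threat at
# the right-hand node is never tested: against U everywhere, 1 prefers L.
print(payoffs[('L', 'U')][0] > payoffs[('R', 'U')][0])  # True
```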
Proof of existence
Proof using the Kakutani fixed-point theorem
Nash's original proof (in his thesis) used Brouwer's fixed-point theorem (e.g., see below for a variant). This section presents a simpler proof via the Kakutani fixed-point theorem, following Nash's 1950 paper (he credits David Gale with the observation that such a simplification is possible).
To prove the existence of a Nash equilibrium, let r_i(σ_−i) be the best response of player i to the strategies of all other players:

r_i(σ_−i) = argmax over σ_i of u_i(σ_i, σ_−i).

Here, σ ∈ Σ, where Σ = Σ_i × Σ_−i, is a mixed-strategy profile in the set of all mixed strategies, and u_i is the payoff function for player i. Define a set-valued function r: Σ → 2^Σ by r(σ) = r_1(σ_−1) × ... × r_n(σ_−n). The existence of a Nash equilibrium is equivalent to r having a fixed point.
Kakutani's fixed point theorem guarantees the existence of a fixed point if the following four conditions are satisfied.
Σ is compact, convex, and nonempty.
r(σ) is nonempty.
r(σ) is upper hemicontinuous.
r(σ) is convex.
Condition 1. is satisfied from the fact that each Σ_i is a simplex, so their product Σ is compact. Convexity follows from players' ability to mix strategies. Σ is nonempty as long as players have strategies.
Conditions 2. and 3. are satisfied by way of Berge's maximum theorem. Because u_i is continuous and Σ is compact, r_i(σ_−i) is non-empty and upper hemicontinuous.
Condition 4. is satisfied as a result of mixed strategies. Suppose σ_i, σ′_i ∈ r_i(σ_−i); then λσ_i + (1 − λ)σ′_i ∈ r_i(σ_−i) for any λ ∈ [0, 1], i.e. if two strategies maximize payoffs, then a mix between the two strategies will yield the same payoff.
Therefore, there exists a fixed point in r and a Nash equilibrium.
When Nash made this point to John von Neumann in 1949, von Neumann famously dismissed it with the words, "That's trivial, you know. That's just a fixed-point theorem." (See Nasar, 1998, p. 94.)
Alternate proof using the Brouwer fixed-point theorem
We have a game G = (N, A, u) where N is the number of players and A = A_1 × ... × A_N is the action set for the players. All of the action sets A_i are finite. Let Σ = Σ_1 × ... × Σ_N denote the set of mixed strategies for the players. The finiteness of the A_i's ensures the compactness of Σ.
We can now define the gain functions. For a mixed strategy σ ∈ Σ, we let the gain for player i on action a ∈ A_i be

Gain_i(σ, a) = max(0, u_i(a, σ_−i) − u_i(σ)).

The gain function represents the benefit a player gets by unilaterally changing their strategy. We now define g = (g_1, ..., g_N) where

g_i(σ)(a) = σ_i(a) + Gain_i(σ, a)

for σ ∈ Σ, a ∈ A_i. We see that

Σ over a ∈ A_i of g_i(σ)(a) = 1 + Σ over a ∈ A_i of Gain_i(σ, a) > 0.

Next we define:

f(σ) = (f_1(σ), ..., f_N(σ)), where f_i(σ)(a) = g_i(σ)(a) / (Σ over b ∈ A_i of g_i(σ)(b)) for a ∈ A_i.

It is easy to see that each f_i(σ) is a valid mixed strategy in Σ_i. It is also easy to check that each f_i is a continuous function of σ, and hence f is a continuous function. As the cross product of a finite number of compact convex sets, Σ is also compact and convex. Applying the Brouwer fixed point theorem to f and Σ we conclude that f has a fixed point in Σ, call it σ*. We claim that σ* is a Nash equilibrium in G. For this purpose, it suffices to show that

Gain_i(σ*, a) = 0 for all i ∈ {1, ..., N} and a ∈ A_i.

This simply states that each player gains no benefit by unilaterally changing their strategy, which is exactly the necessary condition for a Nash equilibrium.
Now assume that the gains are not all zero. Therefore, there exist i ∈ {1, ..., N} and a ∈ A_i such that Gain_i(σ*, a) > 0. Then

C := Σ over b ∈ A_i of g_i(σ*)(b) = 1 + Σ over b ∈ A_i of Gain_i(σ*, b) > 1.

So let C be as above.
Also we shall denote Gain_i(σ*, ·) as the gain vector indexed by actions in A_i. Since σ* is the fixed point we have:

σ*_i = f_i(σ*) = (σ*_i + Gain_i(σ*, ·)) / C
⇒ C σ*_i = σ*_i + Gain_i(σ*, ·)
⇒ (C − 1) σ*_i = Gain_i(σ*, ·)
⇒ σ*_i = (1 / (C − 1)) Gain_i(σ*, ·).

Since C > 1 we have that σ*_i is some positive scaling of the vector Gain_i(σ*, ·). Now we claim that

σ*_i(a) (u_i(a, σ*_−i) − u_i(σ*)) = σ*_i(a) Gain_i(σ*, a) for all a ∈ A_i.

To see this, first if Gain_i(σ*, a) > 0 then this is true by definition of the gain function. Now assume that Gain_i(σ*, a) = 0. By our previous statements we have that

σ*_i(a) = (1 / (C − 1)) Gain_i(σ*, a) = 0

and so the left term is zero, giving us that the entire expression is 0 as needed.
So we finally have that

0 = u_i(σ*) − u_i(σ*)
  = (Σ over a ∈ A_i of σ*_i(a) u_i(a, σ*_−i)) − u_i(σ*)
  = Σ over a ∈ A_i of σ*_i(a) (u_i(a, σ*_−i) − u_i(σ*))
  = Σ over a ∈ A_i of σ*_i(a) Gain_i(σ*, a)        (by the claim above)
  = (1 / (C − 1)) Σ over a ∈ A_i of Gain_i(σ*, a)² > 0,

where the last inequality follows since Gain_i(σ*, ·) is a non-zero vector. But this is a clear contradiction, so all the gains must indeed be zero. Therefore, σ* is a Nash equilibrium for G as needed.
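The construction can be checked numerically on matching pennies (assuming the convention that A wins when the coins differ): the map built from the gain functions fixes the equilibrium (1/2, 1/2) and moves any non-equilibrium profile.

```python
# Matching pennies (assumed convention: A wins +1 when the coins differ).
def u(player, a, b):  # a, b in {0: H, 1: T}
    win = 1 if a != b else -1          # payoff to A
    return win if player == 0 else -win

def expected(player, p, q):            # p, q = probability of H for A and B
    return sum(u(player, a, b) * [p, 1 - p][a] * [q, 1 - q][b]
               for a in (0, 1) for b in (0, 1))

def f(p, q):
    # Gain_i(sigma, a) = max(0, u_i(a, sigma_-i) - u_i(sigma)); add the gains
    # to the current probabilities and renormalize, as in the proof.
    uA, uB = expected(0, p, q), expected(1, p, q)
    gA_H = max(0.0, expected(0, 1, q) - uA)   # A's gain from pure H
    gA_T = max(0.0, expected(0, 0, q) - uA)   # A's gain from pure T
    gB_H = max(0.0, expected(1, p, 1) - uB)
    gB_T = max(0.0, expected(1, p, 0) - uB)
    return ((p + gA_H) / (1 + gA_H + gA_T),
            (q + gB_H) / (1 + gB_H + gB_T))

print(f(0.5, 0.5))                # (0.5, 0.5): the equilibrium is a fixed point
print(f(0.8, 0.8) == (0.8, 0.8))  # False: non-equilibria are moved by f
```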
Computing Nash equilibria
If a player A has a dominant strategy s_A, then there exists a Nash equilibrium in which A plays s_A. In the case of two players A and B, there exists a Nash equilibrium in which A plays s_A and B plays a best response to s_A. If s_A is a strictly dominant strategy, A plays s_A in all Nash equilibria. If both A and B have strictly dominant strategies, there exists a unique Nash equilibrium in which each plays their strictly dominant strategy.
In games with mixed-strategy Nash equilibria, the probability of a player choosing any particular (so pure) strategy can be computed by assigning a variable to each strategy that represents a fixed probability for choosing that strategy. In order for a player to be willing to randomize, their expected payoff for each (pure) strategy should be the same. In addition, the sum of the probabilities for each strategy of a particular player should be 1. This creates a system of equations from which the probabilities of choosing each strategy can be derived.
Examples
In the matching pennies game, player A loses a point to B if A and B play the same strategy and wins a point from B if they play different strategies. To compute the mixed-strategy Nash equilibrium, assign A the probability p of playing H and (1 − p) of playing T, and assign B the probability q of playing H and (1 − q) of playing T.
Thus, a mixed-strategy Nash equilibrium in this game is for each player to randomly choose H or T with p = 1/2 and q = 1/2.
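The indifference computation generalizes to any 2×2 game with a fully mixed equilibrium. A sketch that solves the two indifference equations with a hand-derived closed form:

```python
# Solve q from one player's indifference between their pure strategies:
#   q*u(H,H) + (1-q)*u(H,T) = q*u(T,H) + (1-q)*u(T,T)
def indifference(u_hh, u_ht, u_th, u_tt):
    return (u_tt - u_ht) / ((u_hh - u_ht) - (u_th - u_tt))

# A's payoffs (wins when the coins differ) pin down B's mixing probability q.
q = indifference(-1, 1, 1, -1)
# B's payoffs (wins when the coins match) pin down A's mixing probability p.
p = indifference(1, -1, -1, 1)
print(p, q)  # 0.5 0.5
```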
Oddness of equilibrium points
In 1971, Robert Wilson came up with the "oddness theorem", which says that "almost all" finite games have a finite and odd number of Nash equilibria. In 1993, Harsanyi published an alternative proof of the result. "Almost all" here means that any game with an infinite or even number of equilibria is very special in the sense that if its payoffs were even slightly randomly perturbed, with probability one it would have an odd number of equilibria instead.
The prisoner's dilemma, for example, has one equilibrium, while the battle of the sexes has three—two pure and one mixed, and this remains true even if the payoffs change slightly. The free money game is an example of a "special" game with an even number of equilibria. In it, two players have to both vote "yes" rather than "no" to get a reward and the votes are simultaneous. There are two pure-strategy Nash equilibria, (yes, yes) and (no, no), and no mixed strategy equilibria, because the strategy "yes" weakly dominates "no". "Yes" is as good as "no" regardless of the other player's action, but if there is any chance the other player chooses "yes" then "yes" is the best reply. Under a small random perturbation of the payoffs, however, the probability that any two payoffs would remain tied, whether at 0 or some other number, is vanishingly small, and the game would have either one or three equilibria instead.
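The oddness claim can be illustrated by perturbing the free money game's payoffs — a sketch specialized to 2×2 games, counting pure equilibria by enumeration and the fully mixed equilibrium (if any) from the indifference equations:

```python
import random

def count_equilibria(A, B):
    # pure equilibria of a 2x2 bimatrix: each entry a best reply for both
    pure = sum(1 for i in (0, 1) for j in (0, 1)
               if A[i][j] == max(A[0][j], A[1][j])
               and B[i][j] == max(B[i][0], B[i][1]))
    # fully mixed equilibrium, if the indifference equations solve in (0, 1)
    dA = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    dB = B[0][0] - B[0][1] - B[1][0] + B[1][1]
    mixed = 0
    if dA != 0 and dB != 0:
        p = (B[1][1] - B[1][0]) / dB   # row's P(top): makes column indifferent
        q = (A[1][1] - A[0][1]) / dA   # column's P(left): makes row indifferent
        if 0 < p < 1 and 0 < q < 1:
            mixed = 1
    return pure + mixed

free = [[1, 0], [0, 0]]  # both must vote "yes" (strategy 0) to get the reward
print(count_equilibria(free, free))  # 2 -- the non-generic, even count

random.seed(0)
jitter = lambda M: [[x + random.uniform(-1e-3, 1e-3) for x in row] for row in M]
print(count_equilibria(jitter(free), jitter(free)) % 2)  # 1 -- odd once perturbed
```

After perturbation, (yes, yes) remains an equilibrium; (no, no) survives only if the jitter keeps both "no" payoffs ahead, and exactly in that case a mixed equilibrium also appears — so the total is always 1 or 3.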
See also
Notes
References
Bibliography
Game theory textbooks
Dixit, Avinash, Susan Skeath and David Reiley. Games of Strategy. W.W. Norton & Company. (Third edition in 2009.) An undergraduate text.
. Suitable for undergraduate and business students.
Fudenberg, Drew and Jean Tirole (1991) Game Theory MIT Press.
. Lucid and detailed introduction to game theory in an explicitly economic context.
Morgenstern, Oskar and John von Neumann (1947) The Theory of Games and Economic Behavior Princeton University Press.
. A modern introduction at the graduate level.
. A comprehensive reference from a computational perspective; see Chapter 3. Downloadable free online.
Original Nash papers
Nash, John (1950) "Equilibrium points in n-person games" Proceedings of the National Academy of Sciences 36(1):48-49.
Nash, John (1951) "Non-Cooperative Games" The Annals of Mathematics 54(2):286-295.
Other references
Mehlmann, A. (2000) The Game's Afoot! Game Theory in Myth and Paradox, American Mathematical Society.
Nasar, Sylvia (1998), A Beautiful Mind, Simon & Schuster.
Rubinstein, Aviad (2019) "Hardness of Approximation Between P and NP", ACM, ISBN 978-1-947487-23-9, DOI: https://doi.org/10.1145/3241304. Explains why computing a Nash equilibrium is a computationally hard problem.
External links
Complete Proof of Existence of Nash Equilibria
Simplified Form and Related Results
Game theory equilibrium concepts
Fixed points (mathematics)
1951 in economic history
Planning theory is the body of scientific concepts, definitions, behavioral relationships, and assumptions that define the body of knowledge of urban planning. There is no single unified planning theory, but various competing approaches. Whittemore identifies nine procedural theories that dominated the field between 1959 and 1983: the Rational-Comprehensive approach, the Incremental approach, the Transformative Incremental (TI) approach, the Transactive approach, the Communicative approach, the Advocacy approach, the Equity approach, the Radical approach, and the Humanist or Phenomenological approach.
Background
Urban planning can include urban renewal, by adapting urban planning methods to existing cities suffering from decline. Alternatively, it can concern the massive challenges associated with urban growth, particularly in the Global South. All in all, urban planning exists in various forms and addresses many different issues. The modern origins of urban planning lie in the movement for urban reform that arose as a reaction against the disorder of the industrial city in the mid-19th century. Many of the early influencers were inspired by anarchism, which was popular at the turn of the 19th and 20th centuries. The new imagined urban form was meant to go hand-in-hand with a new society, based upon voluntary co-operation within self-governing communities.
In the late 20th century, the term sustainable development has come to represent an ideal outcome in the sum of all planning goals. Sustainable architecture involves renewable materials and energy sources and is increasing in importance as an environmentally friendly solution.
Blueprint planning
Since at least the Renaissance and the Age of Enlightenment, urban planning had generally been assumed to be the physical planning and design of human communities. Therefore, it was seen as related to architecture and civil engineering, and thereby to be carried out by such experts. This kind of planning was physicalist and design-orientated, and involved the production of masterplans and blueprints which would show precisely what the 'end-state' of land use should be, similar to architectural and engineering plans. Similarly, the theory of urban planning was mainly interested in visionary planning and design which would demonstrate how the ideal city should be organised spatially.
Sanitary movement
Although it can be seen as an extension of the sort of civic pragmatism seen in Oglethorpe's plan for Savannah or William Penn's plan for Philadelphia, the roots of the rational planning movement lie in Britain's Sanitary movement (1800–1890). During this period, advocates such as Charles Booth argued for central organized, top-down solutions to the problems of industrializing cities. In keeping with the rising power of industry, the source of the planning authority in the Sanitary movement included both traditional governmental offices and private development corporations. In London and its surrounding suburbs, cooperation between these two entities created a network of new communities clustered around the expanding rail system.
Garden city movement
The Garden city movement was founded by Ebenezer Howard (1850-1928). His ideas were expressed in the book Garden Cities of To-morrow (1898). His influences included Benjamin Walter Richardson, who had published a pamphlet in 1876 calling for low population density, good housing, wide roads, an underground railway and for open space; Thomas Spence who had supported common ownership of land and the sharing of the rents it would produce; Edward Gibbon Wakefield who had pioneered the idea of colonizing planned communities to house the poor in Adelaide (including starting new cities separated by green belts at a certain point); James Silk Buckingham who had designed a model town with a central place, radial avenues and industry in the periphery; as well as Alfred Marshall, Peter Kropotkin and the back-to-the-land movement, which had all called for the moving of masses to the countryside.
Howard's vision was to combine the best of both the countryside and the city in a new environment called Town-Country. To make this happen, a group of individuals would establish a limited-dividend company to buy cheap agricultural land, which would then be developed with investment from manufacturers and housing for the workers. No more than 32,000 people would be housed in a settlement, spread over 1,000 acres. Around it would be a permanent green belt of 5,000 acres, with farms and institutions (such as mental institutions) which would benefit from the location. After reaching the limit, a new settlement would be started, connected by an inter-city rail, with the polycentric settlements together forming the "Social City". The lands of the settlements would be jointly owned by the inhabitants, who would use rents received from it to pay off the mortgage necessary to buy the land and then invest the rest in the community through social security. Actual garden cities were built by Howard in Letchworth, Brentham Garden Suburb, and Welwyn Garden City. The movement would also inspire the later New towns movement.
Linear city
Arturo Soria y Mata's idea of the Linear city (1882) replaced the traditional idea of the city as a centre and a periphery with the idea of constructing linear sections of infrastructure - roads, railways, gas, water, etc.- along an optimal line and then attaching the other components of the city along the length of this line. As compared to the concentric diagrams of Ebenezer Howard and other in the same period, Soria's linear city creates the infrastructure for a controlled process of expansion that joins one growing city to the next in a rational way, instead of letting them both sprawl. The linear city was meant to ‘ruralize the city and urbanize the countryside’, and to be universally applicable as a ring around existing cities, as a strip connecting two cities, or as an entirely new linear town across an unurbanized region. The idea was later taken up by Nikolay Alexandrovich Milyutin in the planning circles of the 1920s Soviet Union. The Ciudad Lineal was a practical application of the concept.
Regional planning movement
Patrick Geddes (1864-1932) was the founder of regional planning. His main influences were the geographers Élisée Reclus and Paul Vidal de La Blache, as well as the sociologist Pierre Guillaume Frédéric le Play. From these he received the idea of the natural region. According to Geddes, planning must start by surveying such a region by crafting a "Valley Section" which shows the general slope from mountains to the sea that can be identified across scale and place in the world, with the natural environment and the cultural environments produced by it included. This was encapsulated in the motto "Survey before Plan". He saw cities as being changed by technology into more regional settlements, for which he coined the term conurbation. Similar to the garden city movement, he also believed in adding green areas to these urban regions. The Regional Planning Association of America advanced his ideas, coming up with the 'regional city' which would have a variety of urban communities across a green landscape of farms, parks and wilderness with the help of telecommunication and the automobile. This had major influence on the County of London Plan, 1944.
City Beautiful movement
The City Beautiful movement was inspired by 19th century European capital cities such as Georges-Eugène Haussmann's Paris or the Vienna Ring Road. An influential figure was Daniel Burnham (1846-1912), who was the chief of construction of the World's Columbian Exposition in 1893. Urban problems such as the 1886 Haymarket affair in Chicago had created a perceived need to reform the morality of the city among some of the elites. Burnham's greatest achievement was the Chicago plan of 1909. His aim was "to restore to the city a lost visual and aesthetic harmony, thereby creating the physical prerequisite for the emergence of a harmonious social order", essentially pursuing social reform through slum clearance and the creation of public space, which also won it the support of the Progressivist movement. This was also believed to be economically advantageous by drawing in tourists and wealthy migrants. Because of this it has been referred to as "trickle-down urban development" and as "centrocentrist" for focusing only on the core of the city. Other major cities planned according to the movement principles included British colonial capitals in New Delhi, Harare, Lusaka, Nairobi and Kampala, as well as that of Canberra in Australia, and Albert Speer's plan for the Nazi capital Germania.
Towers in the park
Le Corbusier (1887–1965) pioneered a new urban form called towers in the park. His approach was based on defining the house as 'a machine to live in'. The Plan Voisin he devised for Paris, which was never fulfilled, would have involved the demolition of much of historic Paris in favour of 18 uniform 700-foot tower blocks. Ville Contemporaine and the Ville Radieuse formulated his basic principles, including decongestion of the city by increased density and open space by building taller on a smaller footprint. Wide avenues should also be built to the city centre by demolishing old structures, which was criticized for lack of environmental awareness. His generic ethos of planning was based on the rule of experts who would "work out their plans in total freedom from partisan pressures and special interests" and that "once their plans are formulated, they must be implemented without opposition". His influence on the Soviet Union helped inspire the 'urbanists' who wanted to build planned cities full of massive apartment blocks in Soviet countryside. The only city which he ever actually helped plan was Chandigarh in India. Brasília, planned by Oscar Niemeyer, also was heavily influenced by his thought. Both cities suffered from the issue of unplanned settlements growing outside them.
Decentralised planning
In the United States, Frank Lloyd Wright similarly identified vehicular mobility as a principal planning metric. Car-based suburbs had already been developed in the Country Club District in 1907–1908 (later including the world's first car-based shopping centre, the Country Club Plaza), as well as in Beverly Hills in 1914 and Palos Verdes Estates in 1923. Wright began to idealise this vision in his Broadacre City starting in 1924, with similarities to the garden city and regional planning movements. The fundamental idea was for technology to liberate individuals. In his Usonian vision, he described the city as "spacious, well-landscaped highways, grade crossings eliminated by a new kind of integrated by-passing or over- or under-passing all traffic in cultivated or living areas ... Giant roads, themselves great architecture, pass public service stations ... passing by farm units, roadside markets, garden schools, dwelling places, each on its acres of individually adorned and cultivated ground". This was justified as a democratic ideal: "Democracy is the ideal of reintegrated decentralization ... many free units developing strength as they learn by function and grow together in spacious mutual freedom." This vision was, however, criticized by Herbert Muschamp as contradictory in its call for individualism while relying on the master-architect to design it all.
After World War II, suburbs similar to Broadacre City spread throughout the US, but without the social or economic aspects of his ideas. A notable example was Levittown, built from 1947 to 1951. Such suburban designs were criticized for their lack of form by Lewis Mumford, since they lacked clear boundaries, and by Ian Nairn, because "Each building is treated in isolation, nothing binds it to the next one".
In the Soviet Union too, the so-called deurbanists (such as Moisei Ginzburg and Mikhail Okhitovich) advocated the use of electricity and new transportation technologies (especially the car) to disperse the population from the cities to the countryside, with the ultimate aim of a "townless, fully decentralized, and evenly populated country". In 1931, however, the Communist Party declared such views forbidden.
Opposition to blueprint planning
Throughout both the United States and Europe, the rational planning movement declined in the latter half of the 20th century. Key events in the United States included the demolition of the Pruitt–Igoe housing project in St. Louis and the national backlash against urban renewal projects, particularly urban expressway projects. An influential critic of such planning was Jane Jacobs, whose 1961 book The Death and Life of Great American Cities has been called "one of the most influential books in the short history of city planning". She attacked the garden city movement because its "prescription for saving the city was to do the city in" and because it "conceived of planning also as essentially paternalistic, if not authoritarian", while dismissing the Corbusians as egoistic. In contrast, she defended dense traditional inner-city neighborhoods such as Brooklyn Heights or North Beach, San Francisco, and argued that an urban neighbourhood required about 200–300 people per acre, as well as high net ground coverage at the expense of open space. She also advocated a diversity of land uses and building types, with the aim of maintaining a constant churn of people through the neighbourhood across the times of the day. This essentially meant defending urban environments as they had been before modern planning began changing them. As she believed such environments to be essentially self-organizing, her approach was effectively one of laissez-faire, and it has been criticized for being unable to guarantee "the development of good neighbourhoods".
The most radical opposition to blueprint planning came in a 1969 manifesto in the magazine New Society, which declared: "The whole concept of planning (the town-and-country kind at least) has gone cockeyed ... Somehow, everything must be watched; nothing must be allowed simply to 'happen.' No house can be allowed to be commonplace in the way that things just are commonplace: each project must be weighed, and planned, and approved, and only then built, and only after that discovered to be commonplace after all." Another form of opposition came from the advocacy planning movement, which opposed traditional top-down, technical planning.
Modernist planning
Cybernetics and modernism inspired the related rational process and systems approaches to urban planning in the 1960s, both imported into planning from other disciplines. The systems approach was a reaction to the shortcomings of the traditional view of planning, which neglected the social and economic sides of cities and the complexity and interconnectedness of urban life, and which lacked flexibility. The 'quantitative revolution' of the 1960s also created a drive for more scientific and precise thinking, while the rise of ecology reinforced the approach.
Systems theory
Systems theory is based on the conception of phenomena as 'systems': coherent entities composed of interconnected and interdependent parts. A city can in this way be conceptualised as a system whose interrelated parts, the different land uses, are connected by transport and other communications. The aim of urban planning thereby becomes planning and controlling the system. Similar ideas had been put forward by Geddes, who saw cities and their regions as analogous to organisms, though his ideas received little attention while planning was dominated by architects.
The idea of the city as a system meant that it became critical for planners to understand how cities functioned. It also meant that a change to one part of a city would have effects on other parts as well. Doubts were also raised about the goal of producing detailed blueprints of what cities should look like in the end; instead, the need was suggested for more flexible plans setting out trajectories rather than fixed futures. Planning should also be an ongoing process of monitoring and taking action in the city, rather than the one-time production of a blueprint. The systems approach also necessitated taking into account the economic and social aspects of cities, beyond just the aesthetic and physical ones.
Rational process approach
The focus on the procedural aspect of planning had already been pioneered by Geddes in his Survey–Analysis–Plan approach. However, this approach had several shortcomings. It did not consider the reasons for conducting a survey in the first place. It also suggested that only a single plan should be considered. Finally, it did not take into account the implementation stage of the plan, nor the further action of monitoring the plan's outcomes afterwards. The rational process, in contrast, identified five stages: (1) the definition of problems and aims; (2) the identification of alternatives; (3) the evaluation of alternatives; (4) implementation; (5) monitoring. This new approach represented a rejection of blueprint planning.
Incrementalism
Beginning in the late 1950s and early 1960s, critiques of the rational paradigm began to emerge and formed into several different schools of planning thought. The first of these schools is Lindblom's incrementalism. Lindblom described planning as "muddling through" and thought that practical planning required decisions to be made incrementally. This incremental approach meant choosing from a small number of policy options that have only a small number of consequences and are firmly bounded by reality, constantly adjusting the objectives of the planning process, and using multiple analyses and evaluations.
Mixed scanning model
The mixed scanning model, developed by Etzioni, takes an approach similar to Lindblom's. Etzioni (1968) suggested that organizations plan on two different levels: the tactical and the strategic. He posited that organizations could accomplish this by essentially scanning the environment on multiple levels and then choosing different strategies and tactics to address what they found there. While Lindblom's approach operated only on the functional level, Etzioni argued, the mixed scanning approach would allow planning organizations to work on both the functional and the more big-picture-oriented levels.
Political planning
In the 1960s, a view emerged of planning as an inherently normative and political activity. Advocates of this approach included Norman Dennis, Martin Meyerson, Edward C. Banfield, Paul Davidoff, and Norton E. Long, the latter remarking that: "Plans are policies and policies, in a democracy at any rate, spell politics. The question is not whether planning will reflect politics but whose politics it will reflect. What values and whose values will planners seek to implement? ... No longer can the planner take refuge in the neutrality of the objectivity of the personally uninvolved scientist." The choice between alternative end points in planning was a key issue seen as political.
Participatory planning
Participatory planning is an urban planning paradigm that emphasizes involving the entire community in the strategic and management processes of urban planning; or, community-level planning processes, urban or rural. It is often considered as part of community development. Participatory planning aims to harmonize views among all of its participants as well as prevent conflict between opposing parties. In addition, marginalized groups have an opportunity to participate in the planning process.
Patrick Geddes had first advocated the "real and active participation" of citizens when working in the British Raj, arguing against the "Dangers of Municipal Government from above", which would cause "detachment from public and popular feeling, and consequently, before long, from public and popular needs and usefulness". Further on, self-build was researched by Raymond Unwin in the 1930s in his Town Planning in Practice. The Italian anarchist architect Giancarlo De Carlo then argued in 1948 that "The housing problem cannot be solved from above. It is a problem of the people, and it will not be solved, or even boldly faced, except by the concrete will and action of the people themselves", and that planning should exist "as the manifestation of communal collaboration". Through the Architectural Association School of Architecture, his ideas reached John Turner, who began working in Peru with Eduardo Neira and went on to work in Lima from the mid-1950s to the mid-1960s. There he found that the barrios were not slums but rather highly organised and well-functioning settlements. As a result, he came to the conclusion that: "When dwellers control the major decisions and are free to make their own contributions in the design, construction or management of their housing, both this process and the environment produced stimulate individual and social well-being. When people have no control over nor responsibility for key decisions in the housing process, on the other hand, dwelling environments may instead become a barrier to personal fulfillment and a burden on the economy." The role of the government was to provide a framework within which people could work freely, for example by providing them the necessary resources, infrastructure and land.
Self-build was later again taken up by Christopher Alexander, who led a project called People Rebuild Berkeley in 1972, with the aim to create "self-sustaining, self-governing" communities, though it ended up being closer to traditional planning.
Synoptic planning
After the "fall" of blueprint planning in the late 1950s and early 1960s, the synoptic model began to emerge as a dominant force in planning. Lane (2005) describes synoptic planning as having four central elements:
"(1) an enhanced emphasis on the specification of goals and targets; (2) an emphasis on quantitative analysis and predication of the environment; (3) a concern to identify and evaluate alternative policy options; and (4) the evaluation of means against ends (page 289)."
Public participation was first introduced into this model, and it was generally integrated into the systems process described above. The problem, however, was that the idea of a single public interest still dominated attitudes, effectively devaluing participation by suggesting that the public interest is relatively easy to identify and therefore requires only the most minimal form of participation.
Transactive planning
Transactive planning was a radical break from previous models. Instead of treating public participation as a method used in addition to the traditional planning process, participation became a central goal. For the first time, the public was encouraged to take on an active role in the policy-setting process, while the planner took on the role of a distributor of information and a source of feedback. Transactive planning focuses on interpersonal dialogue that develops ideas which will be turned into action. One of its central goals is mutual learning, in which the planner gains more knowledge of the community while citizens become more educated about planning issues.
Advocacy planning
Formulated in the 1960s by lawyer and planning scholar Paul Davidoff, the advocacy planning model takes the perspective that there are large inequalities in the political system and in the bargaining process between groups, leaving large numbers of people unorganized and unrepresented in the process. It concerns itself with ensuring that all people are equally represented in the planning process by advocating for the interests of the underprivileged and seeking social change. Again, public participation is a central tenet of this model. A plurality of public interests is assumed, and the role of the planner is essentially that of a facilitator who either advocates directly for underrepresented groups or encourages them to become part of the process.
Radical planning
Radical planning is a stream of urban planning which seeks to manage development in an equitable and community-based manner. The seminal text of the radical planning movement is Foundations for a Radical Concept in Planning (1973) by Stephen Grabow and Allen Heskin, who provided a critique of planning as elitist, centralizing and change-resistant, and proposed a new paradigm based upon systems change, decentralization, communal society, facilitation of human development and consideration of ecology. They were joined by Shean McConnell, head of the Department of Town Planning at the Polytechnic of the South Bank, with his 1981 work Theories for Planning.
In 1987 John Friedmann entered the fray with Planning in the Public Domain: From Knowledge to Action, promoting a radical planning model based on "decolonization", "democratization", "self-empowerment" and "reaching out". Friedmann described this model as an "Agropolitan development" paradigm, emphasizing the re-localization of primary production and manufacture. In "Toward a Non-Euclidian Mode of Planning" (1993) Friedmann further promoted the urgency of decentralizing planning, advocating a planning paradigm that is normative, innovative, political, transactive and based on a social learning approach to knowledge and policy.
Bargaining model
The bargaining model views planning as the result of give and take among a number of interests who are all involved in the process. It argues that this bargaining is the best way to conduct planning within the bounds of legal and political institutions. A distinctive feature of this model is that it makes public participation the central dynamic in the decision-making process: decisions are made first and foremost by the public, and the planner plays a more minor role.
Communicative approach
The communicative approach to planning is perhaps the most difficult to explain. It focuses on using communication to help the different interests in the process understand each other. The idea is that each individual approaches a conversation with his or her own subjective experience in mind, and that from that conversation shared goals and possibilities will emerge. Again, participation plays a central role in this model, which seeks to include a broad range of voices to enhance the debate and negotiation that are supposed to form the core of actual plan making. In this model, participation is fundamental to the planning process happening at all: without the involvement of concerned interests, there is no planning.

Looking at each of these models, it becomes clear that participation is shaped not only by the public in a given area or by the attitude of the planning organization and the planners who work for it. In fact, public participation is largely influenced by how planning is defined, how planning problems are defined, the kinds of knowledge that planners choose to employ, and how the planning context is set. Though some might argue that it is too difficult to involve the public through transactive, advocacy, bargaining and communicative models because transportation is in some ways more technical than other fields, it is important to note that transportation is perhaps unique among planning fields in that its systems depend on the interaction of a number of individuals and organizations.
Process
Changes to the planning process
Strategic urban planning over the past decades has witnessed a metamorphosis of the role of the urban planner in the planning process. Growing citizen demand for democratic planning and development processes has played a huge role in allowing the public to make important decisions as part of the planning process. Community organizers and social workers are now deeply involved in planning at the grassroots level. The term advocacy planning was coined by Paul Davidoff in his influential 1965 paper "Advocacy and Pluralism in Planning", which acknowledged the political nature of planning, urged planners to recognize that their actions are not value-neutral, and encouraged minority and underrepresented voices to be part of planning decisions. Benveniste argued that planners had a political role to play and had to bend some truth to power if their plans were to be implemented.
Developers have also played huge roles in development, particularly through planned projects. Many recent developments have resulted from large- and small-scale developers who purchased land, designed the district and constructed the development from scratch. The Melbourne Docklands, for example, was largely an initiative pushed by private developers to redevelop the waterfront into a high-end residential and commercial district.
Recent theories of urban planning, espoused, for example, by Nikos Salingaros, see the city as an adaptive system that grows according to processes similar to those of plants, and argue that urban planning should take its cues from such natural processes. Such theories also advocate participation by inhabitants in the design of the urban environment, as opposed to simply leaving all development to large-scale construction firms.
In the process of creating an urban plan or urban design, carrier-infill is one mechanism of spatial organization in which the city's figure and ground components are considered separately. The urban figure, namely buildings, is represented as total possible building volumes, which are left to be designed by architects in the following stages. The urban ground, namely in-between spaces and open areas, is designed to a higher level of detail. The carrier-infill approach is defined by an urban design performing as the carrying structure that creates the shape and scale of the spaces, including future building volumes that are then infilled by architects' designs. The contents of the carrier structure may include street pattern, landscape architecture, open space, waterways, and other infrastructure. The infill structure may contain zoning, building codes, quality guidelines, and solar access based upon a solar envelope. Carrier-infill urban design is differentiated from complete urban design, such as in the monumental axis of Brasília, in which the urban design and architecture were created together.
In carrier-infill urban design or urban planning, the negative space of the city, including landscape, open space, and infrastructure, is designed in detail. The positive space, typically building sites for future construction, is represented only as unresolved volumes. The volumes represent the total possible building envelope, which can then be infilled by individual architects.
See also
Index of urban planning articles
Index of urban studies articles
List of planned cities
List of planning journals
List of urban planners
List of urban theorists
MONU – magazine on urbanism
Planetizen
Transition Towns (network)
Transportation demand management
Urban acupuncture
Urban vitality
References
Notes
Bibliography
(A standard text for many college and graduate courses in city planning in America)
Dalley, Stephanie, 1989, Myths from Mesopotamia: Creation, the Flood, Gilgamesh, and Others, Oxford World's Classics, London, pp. 39–136
Hoch, Charles, Linda C. Dalton and Frank S. So, editors (2000). The Practice of Local Government Planning, Intl City County Management Assn; 3rd edition. (The "Green Book")
Kemp, Roger L. and Carl J. Stephani (2011). "Cities Going Green: A Handbook of Best Practices." McFarland and Co., Inc., Jefferson, NC, USA, and London, England, UK.
Santamouris, Matheos (2006). Environmental Design of Urban Buildings: An Integrated Approach.
Shrady, Nicholas, The Last Day: Wrath, Ruin & Reason in The Great Lisbon Earthquake of 1755, Penguin, 2008.
Tunnard, Christopher and Boris Pushkarev (1963). Man-Made America: Chaos or Control?: An Inquiry into Selected Problems of Design in the Urbanized Landscape, New Haven: Yale University Press. (This book won the National Book Award, strictly America; a time capsule of photography and design approach.)
Wheeler, Stephen (2004). "Planning Sustainable and Livable Cities", Routledge; 3rd edition.
Yiftachel, Oren, 1995, "The Dark Side of Modernism: Planning as Control of an Ethnic Minority," in Sophie Watson and Katherine Gibson, eds., Postmodern Cities and Spaces (Oxford and Cambridge, MA: Blackwell), pp. 216–240.
A Short Introduction to Radical Planning Theory and Practice, Doug Aberley Ph.D. MCIP, Winnipeg Inner City Research Alliance Summer Institute, June 2003
McConnell, Shean. Theories for Planning, 1981, David & Charles, London.
Further reading
Urban Planning, 1794–1918: An International Anthology of Articles, Conference Papers, and Reports, Selected, Edited, and Provided with Headnotes by John W. Reps, Professor Emeritus, Cornell University.
City Planning According to Artistic Principles, Camillo Sitte, 1889
Missing Middle Housing: Responding to the Demand for Walkable Urban Living by Daniel Parolek of Opticos Design, Inc., 2012
Tomorrow: A Peaceful Path to Real Reform, Ebenezer Howard, 1898
The Improvement of Towns and Cities, Charles Mulford Robinson, 1901
Town Planning in Practice, Raymond Unwin, 1909
The Principles of Scientific Management, Frederick Winslow Taylor, 1911
Cities in Evolution, Patrick Geddes, 1915
The Image of the City, Kevin Lynch, 1960
The Concise Townscape, Gordon Cullen, 1961
The Death and Life of Great American Cities, Jane Jacobs, 1961
The City in History, Lewis Mumford, 1961
The City is the Frontier, Charles Abrams, Harper & Row Publishing, New York, 1965.
A Pattern Language, Christopher Alexander, Sara Ishikawa and Murray Silverstein, 1977
What Do Planners Do?: Power, Politics, and Persuasion, Charles Hoch, American Planning Association, 1994.
Planning the Twentieth-Century American City, Christopher Silver and Mary Corbin Sies (Eds.), Johns Hopkins University Press, 1996
"The City Shaped: Urban Patterns and Meanings Through History", Spiro Kostof, 2nd Edition, Thames and Hudson Ltd, 1999
The American City: A Social and Cultural History, Daniel J. Monti, Jr., Oxford, England and Malden, Massachusetts: Blackwell Publishers, 1999. 391 pp.
Urban Development: The Logic Of Making Plans, Lewis D. Hopkins, Island Press, 2001.
Readings in Planning Theory, 4th edition, Susan Fainstein and James DeFilippis, Oxford, England and Malden, Massachusetts: Blackwell Publishers, 2016.
Taylor, Nigel, (2007), Urban Planning Theory since 1945, London, Sage.
Planning for the Unplanned: Recovering from Crises in Megacities, by Aseem Inam (published by Routledge USA, 2005).
Environmental social science
Urban geography
Urban design
"Engineering",
"Environmental_science"
] | 6,859 | [
"Urban planning",
"Environmental social science",
"Architecture"
] |
Writer
A writer is a person who uses written words in different writing styles, genres and techniques to communicate ideas, to inspire feelings and emotions, or to entertain. Writers may develop different forms of writing such as novels, short stories, monographs, travelogues, plays, screenplays, teleplays, songs, and essays as well as reports, educational material, and news articles that may be of interest to the general public. Writers' works are nowadays published across a wide range of media. Skilled writers who are able to use language to express ideas well often contribute significantly to the cultural content of a society.
The term "writer" is also used elsewhere in the arts and music, such as songwriter or a screenwriter, but also a stand-alone "writer" typically refers to the creation of written language. Some writers work from an oral tradition.
Writers can produce material across a number of genres, fictional or non-fictional. Some use multiple media, such as graphics or illustration, to enhance the communication of their ideas, and in rare instances creative writers are able to communicate via music as well as words. A more recent demand has come from civil and government readers for the work of non-fiction technical writers, whose skills create understandable, interpretive documents of a practical or scientific kind.
As well as producing their own written works, writers often write about how they write (their writing process); why they write (that is, their motivation); and also comment on the work of other writers (criticism). Writers work professionally or non-professionally, that is, for payment or without payment and may be paid either in advance, or on acceptance, or only after their work is published. Payment is only one of the motivations of writers and many are not paid for their work.
The term writer has been used as a synonym of author, although the latter term has a somewhat broader meaning and is used to convey legal responsibility for a piece of writing, even if its composition is anonymous, unknown or collaborative. Author most often refers to the writer of a book.
Types
Writers choose from a range of literary genres to express their ideas. Most writing can be adapted for use in another medium. For example, a writer's work may be read privately or recited or performed in a play or film. Satire for example, may be written as a poem, an essay, a film, a comic play, or a part of journalism. The writer of a letter may include elements of criticism, biography, or journalism.
Many writers work across genres. The genre sets the parameters but all kinds of creative adaptation have been attempted: novel to film; poem to play; history to musical. Writers may begin their career in one genre and change to another. For example, historian William Dalrymple began in the genre of travel literature and also writes as a journalist. Many writers have produced both fiction and non-fiction works and others write in a genre that crosses the two. For example, writers of historical romances, such as Georgette Heyer, create characters and stories set in historical periods. In this genre, the accuracy of the history and the level of factual detail in the work both tend to be debated. Some writers write both creative fiction and serious analysis, sometimes using other names to separate their work. Dorothy Sayers, for example, wrote crime fiction but was also a playwright, essayist, translator, and critic.
Literary and creative
Poet
Poets make maximum use of language to achieve an emotional and sensory effect as well as a cognitive one. To create these effects, they use rhyme and rhythm, and they also exploit the properties of words through a range of other techniques such as alliteration and assonance. A common topic is love and its vicissitudes. Shakespeare's best-known love story Romeo and Juliet, for example, written in a variety of poetic forms, has been performed in innumerable theaters and made into at least eight cinematic versions. John Donne is another poet renowned for his love poetry.
Novelist
Satirist
A satirist uses wit to ridicule the shortcomings of society or individuals, with the intent of revealing stupidity. Usually, the subject of the satire is a contemporary issue such as ineffective political decisions or politicians, although human vices such as greed are also a common and prevalent subject. Philosopher Voltaire wrote a satire about optimism called Candide, which was subsequently turned into an opera, and many well known lyricists wrote for it. There are elements of Absurdism in Candide, just as there are in the work of contemporary satirist Barry Humphries, who writes comic satire for his character Dame Edna Everage to perform on stage.
Satirists use different techniques such as irony, sarcasm, and hyperbole to make their point and they choose from the full range of genres – the satire may be in the form of prose or poetry or dialogue in a film, for example. One of the most well-known satirists is Jonathan Swift who wrote the four-volume work Gulliver's Travels and many other satires, including A Modest Proposal and The Battle of the Books.
Short story writer
A short story writer is a writer of short stories, works of fiction that can be read in a single sitting.
Performative
Librettist
Libretti (the plural of libretto) are the texts for musical works such as operas. The Venetian poet and librettist Lorenzo Da Ponte, for example, wrote the libretto for some of Mozart's greatest operas. Luigi Illica and Giuseppe Giacosa were Italian librettists who wrote for Giacomo Puccini. Most opera composers collaborate with a librettist but unusually, Richard Wagner wrote both the music and the libretti for his works himself.
Lyricist
Usually writing in verses and choruses, a lyricist specializes in writing lyrics, the words that accompany or underscore a song or opera. In the case of Tom Lehrer, these were satirical. Lyricist Noël Coward, who wrote musicals and songs such as "Mad Dogs and Englishmen" and the recited song "I Went to a Marvellous Party", also wrote plays and films and performed on stage and screen. Writers of lyrics, such as these two, adapt other writers' work as well as create entirely original parts.
Playwright
A playwright writes plays which may or may not be performed on a stage by actors. A play's narrative is driven by dialogue. Like novelists, playwrights usually explore a theme by showing how people respond to a set of circumstances. As writers, playwrights must make the language and the dialogue succeed in terms of the characters who speak the lines as well as in the play as a whole. Since most plays are performed, rather than read privately, the playwright has to produce a text that works in spoken form and can also hold an audience's attention over the period of the performance. Plays tell "a story the audience should care about", so writers have to cut anything that works against that. Plays may be written in prose or verse. Shakespeare wrote plays in iambic pentameter, as does Mike Bartlett in his play King Charles III (2014).
Playwrights also adapt or re-write other works, such as plays written earlier or literary works originally in another genre. Famous playwrights such as Henrik Ibsen or Anton Chekhov have had their works adapted several times. The plays of the early Greek playwrights Sophocles, Euripides, and Aeschylus are still performed. Adaptations of a playwright's work may be faithful to the original or creatively interpreted. If the writers' purpose in re-writing the play is to make a film, they will have to prepare a screenplay. Shakespeare's plays, for example, while still regularly performed in the original form, are often adapted and abridged, especially for the cinema. An example of a creative modern adaptation of a play that nonetheless used the original writer's words is Baz Luhrmann's version of Romeo and Juliet. The amendment of the name to Romeo + Juliet indicates to the audience that the version will be different from the original. Tom Stoppard's play Rosencrantz and Guildenstern Are Dead is a play inspired by Shakespeare's Hamlet that takes two of Shakespeare's most minor characters and creates a new play in which they are the protagonists.
Screenwriter
Screenwriters write a screenplay – or script – that provides the words for media productions such as films, television series and video games. Screenwriters may start their careers by writing the screenplay speculatively; that is, they write a script with no advance payment, solicitation or contract. On the other hand, they may be employed or commissioned to adapt the work of a playwright or novelist or other writer. Self-employed writers who are paid by contract to write are known as freelancers and screenwriters often work under this type of arrangement.
Screenwriters, playwrights and other writers are inspired by the classic themes and often use similar and familiar plot devices to explore them. For example, Shakespeare's Hamlet contains a "play within a play", which the hero uses to demonstrate the king's guilt. Hamlet enlists the co-operation of the actors to set up the play as a thing "wherein I'll catch the conscience of the king". Teleplay writer Joe Menosky deploys the same "play within a play" device in an episode of the science fiction television series Star Trek: Voyager. The bronze-age playwright/hero enlists the support of a Star Trek crew member to create a play that will convince the ruler (or "patron", as he is called) of the futility of war.
Speechwriter
A speechwriter prepares the text for a speech to be given before a group or crowd on a specific occasion and for a specific purpose. They are often intended to be persuasive or inspiring, such as the speeches given by skilled orators like Cicero; charismatic or influential political leaders like Nelson Mandela; or for use in a court of law or parliament. The writer of the speech may be the person intended to deliver it, or it might be prepared by a person hired for the task on behalf of someone else. Such is the case when speechwriters are employed by many senior-level elected officials and executives in both government and private sectors.
Interpretive and academic
Biographer
Biographers write an account of another person's life. Richard Ellmann (1918–1987), for example, was an eminent and award-winning biographer whose work focused on the Irish writers James Joyce, William Butler Yeats, and Oscar Wilde. For the Wilde biography, he won the 1989 Pulitzer Prize for Biography.
Critic
Critics consider and assess the extent to which a work succeeds in its purpose. The work under consideration may be literary, theatrical, musical, artistic, or architectural. In assessing the success of a work, the critic takes account of why it was done – for example, why a text was written, for whom, in what style, and under what circumstances. After making such an assessment, critics write and publish their evaluation, adding the value of their scholarship and thinking to substantiate any opinion. The theory of criticism is an area of study in itself: a good critic understands and is able to incorporate the theory behind the work they are evaluating into their assessment. Some critics are already writers in another genre. For example, they might be novelists or essayists. Influential and respected writer/critics include the art critic Charles Baudelaire (1821–1867) and the literary critic James Wood (born 1965), both of whom have books published containing collections of their criticism. Some critics are poor writers and produce only superficial or unsubstantiated work. Hence, while anyone can be an uninformed critic, the notable characteristics of a good critic are understanding, insight, and an ability to write well.
Editor
An editor prepares literary material for publication. The material may be the editor's own original work but more commonly, an editor works with the material of one or more other people. There are different types of editor. Copy editors format text to a particular style and/or correct errors in grammar and spelling without changing the text substantively. On the other hand, an editor may suggest or undertake significant changes to a text to improve its readability, sense or structure. This latter type of editor can go so far as to excise some parts of the text, add new parts, or restructure the whole. The work of editors of ancient texts or manuscripts or collections of works results in differing editions. For example, there are many editions of Shakespeare's plays by notable editors who also contribute original introductions to the resulting publication. Editors who work on journals and newspapers have varying levels of responsibility for the text. They may write original material, in particular editorials, select what is to be included from a range of items on offer, format the material, and/or fact check its accuracy.
Encyclopaedist
Encyclopaedists create organised bodies of knowledge. Denis Diderot (1713–1784) is renowned for his contributions to the Encyclopédie. The encyclopaedist Bernardino de Sahagún (1499–1590) was a Franciscan whose Historia general de las cosas de Nueva España is a vast encyclopedia of Mesoamerican civilization, commonly referred to as the Florentine Codex, after the Italian manuscript library which holds the best-preserved copy.
Essayist
Essayists write essays, which are original pieces of writing of moderate length in which the author makes a case in support of an opinion. They are usually in prose, but some writers have used poetry to present their argument.
Historian
A historian is a person who studies and writes about the past and is regarded as an authority on it. The purpose of a historian is to employ historical analysis to create coherent narratives that explain "what happened" and "why or how it happened". Professional historians typically work in colleges and universities, archival centers, government agencies, museums, and as freelance writers and consultants. Edward Gibbon's six-volume History of the Decline and Fall of the Roman Empire influenced the development of historiography.
Lexicographer
Writers who create dictionaries are called lexicographers. One of the most famous is Samuel Johnson (1709–1784), whose Dictionary of the English Language was regarded not only as a great personal scholarly achievement but also as a dictionary of such pre-eminence that it would have been referred to by writers such as Jane Austen.
Researcher/Scholar
Researchers and scholars who write about their discoveries and ideas sometimes have profound effects on society. Scientists and philosophers are good examples because their new ideas can revolutionise the way people think and how they behave. Three of the best known examples of such a revolutionary effect are Nicolaus Copernicus, who wrote De revolutionibus orbium coelestium (1543); Charles Darwin, who wrote On the Origin of Species (1859); and Sigmund Freud, who wrote The Interpretation of Dreams (1899).
These three highly influential, and initially very controversial, works changed the way people understood their place in the world. Copernicus's heliocentric view of the cosmos displaced humans from their previously accepted place at the center of the universe; Darwin's evolutionary theory placed humans firmly within, as opposed to above, the order of nature; and Freud's ideas about the power of the unconscious mind overcame the belief that humans were consciously in control of all their own actions.
Translator
Translators have the task of finding some equivalence in another language to a writer's meaning, intention and style. Translators whose work has had very significant cultural effect include Al-Ḥajjāj ibn Yūsuf ibn Maṭar, who translated Elements from Greek into Arabic, and Jean-François Champollion, who deciphered Egyptian hieroglyphs with the result that he could publish the first translation of the Rosetta Stone hieroglyphs in 1822. Difficulties with translation are exacerbated when words or phrases incorporate rhymes, rhythms, or puns; or when they have connotations in one language that are non-existent in another. For example, the title of Le Grand Meaulnes by Alain-Fournier is supposedly untranslatable because "no English adjective will convey all the shades of meaning that can be read into the simple [French] word 'grand' which takes on overtones as the story progresses." Translators have also become a part of events where political figures who speak different languages meet to discuss relations between countries or to resolve political conflicts. In such settings it is critical that the translator convey meaning accurately, since even a small error can have drastic consequences.
Reportage
Blogger
Writers of blogs, which have appeared on the World Wide Web since the 1990s, need no authorisation to be published. The contents of these short opinion pieces or "posts" form a commentary on issues of specific interest to readers, who can use the same technology to interact with the author with an immediacy hitherto impossible. The ability to link to other sites means that some blog writers – and their writing – may become suddenly and unpredictably popular. Malala Yousafzai, a young Pakistani education activist, rose to prominence due to her blog for the BBC.
A blog writer is using the technology to create a message that is in some ways like a newsletter and in other ways, like a personal letter. "The greatest difference between a blog and a photocopied school newsletter, or an annual family letter photocopied and mailed to a hundred friends, is the potential audience and the increased potential for direct communication between audience members". Thus, as with other forms of letters the writer knows some of the readers, but one of the main differences is that "some of the audience will be random" and "that presumably changes the way we [writers] write." It has been argued that blogs owe a debt to Renaissance essayist Michel de Montaigne, whose Essais ("attempts"), were published in 1580, because Montaigne "wrote as if he were chatting to his readers: just two friends, whiling away an afternoon in conversation".
Columnist
Columnists write regular pieces for newspapers and other periodicals, usually containing a lively and entertaining expression of opinion. Some columnists have had their best work published as a collection in a book so that readers can re-read what would otherwise no longer be available. Columns are quite short pieces of writing, so columnists often write in other genres as well. An example is the columnist Elizabeth Farrelly, who is also an architecture critic and author of books.
Diarist
Writers who record their experiences, thoughts, or emotions in a sequential form over a period of time in a diary are known as diarists. Their writings can provide valuable insights into historical periods, specific events, or individual personalities. Examples include Samuel Pepys (1633–1703), an English administrator and Member of Parliament, whose detailed private diary provides eyewitness accounts of events during the 17th century, most notably of the Great Fire of London. Anne Frank (1929–1945) was a Jewish girl hiding in the Netherlands whose diary, begun when she was 13 and kept from 1942 to 1944, records both her experiences as a persecuted Jew during World War II and those of an adolescent dealing with intra-family relationships.
Journalist
Journalists write reports about current events after investigating them and gathering information. Some journalists write reports about predictable or scheduled events such as social or political meetings. Others are investigative journalists who need to undertake considerable research and analysis in order to write an explanation or account of something complex that was hitherto unknown or not understood. Investigative journalists are often reporting criminal or corrupt activity, which puts them at personal risk and makes it likely that attempts will be made to attack or suppress what they write. An example is Bob Woodward, a journalist who investigated and wrote about criminal activities by US President Richard Nixon.
Memoirist
Writers of memoirs produce accounts from the memories of their own lives, which are considered unusual, important, or scandalous enough to be of interest to general readers. Although a memoir is meant to be factual, the choice of genre alerts readers to the likelihood of some inaccuracies or a bias towards an idiosyncratic perception. A memoir, for example, is allowed to have a much more selective set of experiences than an autobiography, which is expected to be more complete and to make a greater attempt at balance. Well-known memoirists include Frances Vane, Viscountess Vane, and Giacomo Casanova.
Utilitarian
Ghostwriter
Ghostwriters write for, or in the style of, someone else so the credit goes to the person on whose behalf the writing is done.
Letter writer
Writers of letters use a reliable form of transmission of messages between individuals, and surviving sets of letters provide insight into the motivations, cultural contexts, and events in the lives of their writers. Peter Abelard (1079–1142), philosopher, logician, and theologian, is known not only for the heresy contained in some of his work, and the punishment of having to burn his own book, but also for the letters he wrote to Héloïse d'Argenteuil.
The letters (or epistles) of Paul the Apostle were so influential that over the two thousand years of Christian history, Paul became "second only to Jesus in influence and the amount of discussion and interpretation generated".
Report writer
Report writers are people who gather information, organise and document it so that it can be presented to some person or authority in a position to use it as the basis of a decision. Well-written reports influence policies as well as decisions. For example, Florence Nightingale (1820–1910) wrote reports that were intended to effect administrative reform in matters concerning health in the army. She documented her experience in the Crimean War and showed her determination to see improvements: "...after six months of incredible industry she had put together and written with her own hand her Notes affecting the Health, Efficiency and Hospital Administration of the British Army. This extraordinary composition, filling more than eight hundred closely printed pages, laying down vast principles of far-reaching reform, discussing the minutest detail of a multitude of controversial subjects, containing an enormous mass of information of the most varied kinds – military, statistical, sanitary, architectural" became for a long time, the "leading authority on the medical administration of armies".
The logs and reports of master mariner William Bligh contributed to his being honourably acquitted at the court-martial inquiring into the loss of HMS Bounty.
Scribe
A scribe writes ideas and information on behalf of another, sometimes copying from another document, sometimes from oral instruction on behalf of an illiterate person, sometimes transcribing from another medium such as a tape recording, shorthand, or personal notes.
For over 500 years in Western Europe, being able to write was a rare achievement, so the monks who copied texts were the scribes responsible for saving many texts from earlier times. The monasteries, where monks who knew how to read and write lived, provided an environment stable enough for writing. Irish monks, for example, came to Europe in about 600 and "found manuscripts in places like Tours and Toulouse", which they copied. The monastic writers also illustrated their books with highly skilled artwork using gold and rare colours.
Technical writer
A technical writer prepares instructions or manuals, such as user guides or owner's manuals for users of equipment to follow. Technical writers also write different procedures for business, professional or domestic use. Since the purpose of technical writing is practical rather than creative, its most important quality is clarity. The technical writer, unlike the creative writer, is required to adhere to the relevant style guide.
Process and methods
Writing process
There is a range of approaches that writers take to the task of writing. Each writer needs to find their own process and most describe it as more or less a struggle.
Sometimes writers have had the bad fortune to lose their work and have had to start again. Before the invention of photocopiers and electronic text storage, a writer's work had to be stored on paper, which meant it was very susceptible to fire in particular. (In earlier times, writers used vellum and clay, which were more robust materials.) Writers whose work was destroyed before completion include L. L. Zamenhof, the inventor of Esperanto, whose years of work were thrown into the fire by his father because he was afraid that "his son would be thought a spy working in code".
Essayist and historian Thomas Carlyle lost the only copy of a manuscript for The French Revolution: A History when it was mistakenly thrown into the fire by a maid. He wrote it again from the beginning. Writers usually develop a personal schedule. Angus Wilson, for example, wrote for a number of hours every morning.
Writer's block is a relatively common experience among writers, especially professional writers, when for a period of time the writer feels unable to write for reasons other than lack of skill or commitment.
Sole
Most writers write alone – typically they are engaged in a solitary activity that requires them to struggle with both the concepts they are trying to express and the best way to express them. This may mean choosing the best genre or genres as well as choosing the best words. Writers often develop idiosyncratic solutions to the problem of finding the right words to put on a blank page or screen. "Didn't Somerset Maugham also write facing a blank wall? ... Goethe couldn't write a line if there was another person anywhere in the same house, or so he said at some point."
Collaborative
Collaborative writing means that two or more authors contribute to a piece of writing. In this approach, it is highly likely the writers will collaborate on editing the piece too. The more usual process is that editing is done by an independent editor after the writer submits a draft version.
In some cases, such as that between a librettist and composer, a writer will collaborate with another artist on a creative work. One of the best known of these types of collaborations is that between Gilbert and Sullivan. Librettist W. S. Gilbert wrote the words for the comic operas created by the partnership.
Committee
Occasionally, a writing task is given to a committee of writers. The most well-known example is the task of translating the Bible into English, sponsored by King James I of England in 1604 and accomplished by six committees, based at Westminster, Oxford, and Cambridge, who were allocated different sections of the text. The resulting Authorized King James Version, published in 1611, has been described as an "everlasting miracle" because its writers (that is, its Translators) sought to "hold themselves consciously poised between the claims of accessibility and beauty, plainness and richness, simplicity and majesty, the people and the king", with the result that the language communicates itself "in a way which is quite unaffected, neither literary nor academic, not historical, nor reconstructionist, but transmitting a nearly incredible immediacy from one end of human civilisation to another."
Multimedia
Some writers support the verbal part of their work with images or graphics that are an integral part of the way their ideas are communicated. William Blake is one of the rare poets who created his own paintings and drawings as integral parts of works such as his Songs of Innocence and of Experience. Cartoonists are writers whose work depends heavily on hand-drawn imagery. Other writers, especially writers for children, incorporate painting or drawing in more or less sophisticated ways. Shaun Tan, for example, is a writer who uses imagery extensively, sometimes combining fact, fiction and illustration, sometimes for a didactic purpose, sometimes on commission. Children's writers Beatrix Potter, May Gibbs, and Theodor Seuss Geisel are as well known for their illustrations as for their texts.
Crowd sourced
Some writers contribute very small sections to a piece of writing that accumulates as a result. This method is particularly suited to very large works, such as dictionaries and encyclopaedias. The best known example of the former is the Oxford English Dictionary, under the editorship of lexicographer James Murray, who received prolific and helpful contributions from W. C. Minor, at the time an inmate of a hospital for the criminally insane.
The best known example of the latter – an encyclopaedia that is crowdsourced – is Wikipedia, which relies on millions of writers and editors such as Simon Pulsifer worldwide.
Motivations
Writers have many different reasons for writing, among which is usually some combination of self-expression and recording facts, history or research results. The many physician writers, for example, have combined their observation and knowledge of the human condition with their desire to write and contributed many poems, plays, translations, essays and other texts. Some writers write extensively on their motivation and on the likely motivations of other writers. For example, George Orwell's essay "Why I Write" (1946) takes this as its subject. As to "what constitutes success or failure to a writer", it has been described as "a complicated business, where the material rubs up against the spiritual, and psychology plays a big part".
Command
Some writers are the authors of specific military orders whose clarity will determine the outcome of a battle. Among the most controversial and unsuccessful was Lord Raglan's order at the Charge of the Light Brigade, which being vague and misinterpreted, led to defeat with many casualties.
Develop skill/explore ideas
Some writers use the writing task to develop their own skill (in writing itself or in another area of knowledge) or explore an idea while they are producing a piece of writing. Philologist J. R. R. Tolkien, for example, created a new language for his fantasy books.
Entertain
Some genres are a particularly appropriate choice for writers whose chief purpose is to entertain. Among them are limericks, many comics and thrillers. Writers of children's literature seek to entertain children but are also usually mindful of the educative function of their work as well.
Influence
Anger has motivated many writers, including Martin Luther, angry at religious corruption, who wrote the Ninety-five Theses in 1517 to reform the church, and Émile Zola (1840–1902), who wrote the public letter J'Accuse in 1898 to bring public attention to government injustice, as a consequence of which he had to flee from his native France to England. Such writers have affected ideas, opinion or policy significantly.
Payment
Writers may write a particular piece for payment (even if at other times, they write for another reason), such as when they are commissioned to create a new work, transcribe an original one, translate another writer's work, or write for someone who is illiterate or inarticulate. In some cases, writing has been the only way an individual could earn an income. Frances Trollope is an example of a woman who wrote to save herself and her family from penury, at a time when there were very few socially acceptable employment opportunities for women. Her book about her experiences in the United States, called Domestic Manners of the Americans, became a great success, "even though she was over fifty and had never written before in her life", after which "she continued to write hard, carrying this on almost entirely before breakfast". According to her writer son Anthony Trollope, "her books saved the family from ruin".
Teach
Aristotle, who was tutor to Alexander the Great, wrote to support his teaching. He wrote two treatises for the young prince, "On Monarchy" and "On Colonies", and his dialogues also appear to have been written "as lecture notes or discussion papers for use in his philosophy school at the Athens Lyceum between 334 and 323 BC." They encompass both his 'scientific' writings (metaphysics, physics, biology, meteorology, and astronomy, as well as logic and argument) and his 'non-scientific' works (poetry, oratory, ethics, and politics), and were "major elements in traditional Greek and Roman education".
Writers of textbooks also use writing to teach and there are numerous instructional guides to writing itself. For example, many people will find it necessary to make a speech "in the service of your company, church, civic club, political party, or other organization" and so, instructional writers have produced texts and guides for speechmaking.
Tell a story
Many writers use their skill to tell the story of their people, community or cultural tradition, especially one with a personal significance. Examples include Shmuel Yosef Agnon; Miguel Ángel Asturias; Doris Lessing; Toni Morrison; Isaac Bashevis Singer; and Patrick White.
Writers such as Mario Vargas Llosa, Herta Müller, and Erich Maria Remarque write about the effect of conflict, dispossession and war.
Seek a lover
Writers use prose, poetry, and letters as part of courtship rituals. Edmond Rostand's play Cyrano de Bergerac, written in verse, is about both the power of love and the power of the self-doubting writer/hero's writing talent.
Authorship
Pen names
Writers sometimes use a pseudonym, otherwise known as a pen name or "nom de plume". The reasons they do this include to separate their writing from other work (or other types of writing) for which they are known; to enhance the possibility of publication by reducing prejudice (such as against women writers or writers of a particular race); to reduce personal risk (such as political risks from individuals, groups or states that disagree with them); or to make their name better suit another language.
Examples of well-known writers who used a pen name include: George Eliot (1819–1880), whose real name was Mary Anne (or Marian) Evans; George Orwell (1903–1950), whose real name was Eric Blair; George Sand (1804–1876), whose real name was Lucile Aurore Dupin; Dr. Seuss (1904–1991), whose real name was Theodor Seuss Geisel; Stendhal (1783–1842), whose real name was Marie-Henri Beyle; and Mark Twain (1835–1910), whose real name was Samuel Langhorne Clemens.
Apart from the large numbers of works attributable only to "Anonymous", there are a large number of writers who were once known and are now unknown. Efforts are made to find and re-publish these writers' works. One example is the publication of books like Japan As Seen and Described by Famous Writers (a 2010 reproduction of a pre-1923 publication) by "Anonymous". Another example is the founding of a Library and Study Centre for the Study of Early English Women's Writing in Chawton, England.
Fictional writers
Some fictional writers are very well known because of the strength of their characterization by the real writer or the significance of their role as writer in the plot of a work. Examples of this type of fictional writer include Edward Casaubon, a fictional scholar in George Eliot's Middlemarch, and Edwin Reardon, a fictional writer in George Gissing's New Grub Street. Casaubon's efforts to complete an authoritative study affect the decisions taken by the protagonists in Eliot's novel and inspire significant parts of the plot. In Gissing's work, Reardon's efforts to produce high quality writing put him in conflict with another character, who takes a more commercial approach. Robinson Crusoe is a fictional writer who was originally credited by the real writer (Daniel Defoe) as being the author of the confessional letters in the work of the same name. Bridget Jones is a comparable fictional diarist created by writer Helen Fielding. Both works became well-known and popular; their protagonists and story were developed further through many adaptations, including film versions. Cyrano de Bergerac was a real writer who created a fictional character with his own name. The Sibylline Books, a collection of prophecies, were supposed to have been purchased from the Cumaean Sibyl by the last king of Rome. Since they were consulted during periods of crisis, it could be said that they are a case of real works created by a fictional writer.
Writers of sacred texts
Religious texts or scriptures are the texts which different religious traditions consider to be sacred, or of central importance to their religious tradition. Some religions and spiritual movements believe that their sacred texts are divinely or supernaturally revealed or inspired, while others have individual authors.
Controversial writing
Skilled writers influence ideas and society, so there are many instances where a writer's work or opinion has been unwelcome and controversial. In some cases, they have been persecuted or punished. Aware that their writing will cause controversy or put themselves and others into danger, some writers self-censor; or withhold their work from publication; or hide their manuscripts; or use some other technique to preserve and protect their work. Two of the most famous examples are Leonardo da Vinci and Charles Darwin. Leonardo "had the habit of conversing with himself in his writings and of putting his thoughts into the clearest and most simple form". He used "left-handed or mirror writing" (a technique described as "so characteristic of him") to protect his scientific research from other readers. The fear of persecution, social disgrace, and being proved incorrect are regarded as contributing factors to Darwin's delaying the publication of his radical and influential work On the Origin of Species.
One of the results of controversies caused by a writer's work is scandal, which is a negative public reaction that causes damage to reputation and depends on public outrage. It has been said that it is possible to scandalise the public because the public "wants to be shocked in order to confirm its own sense of virtue". The scandal may be caused by what the writer wrote or by the style in which it was written. In either case, the content or the style is likely to have broken with tradition or expectation. Making such a departure may in fact be part of the writer's intention or at least, part of the result of introducing innovations into the genre in which they are working. For example, novelist D. H. Lawrence challenged ideas of what was acceptable as well as what was expected in form. These may be regarded as literary scandals, just as, in a different way, are the scandals involving writers who mislead the public about their identity, such as Norma Khouri or Helen Darville who, in deceiving the public, are considered to have committed fraud.
Writers may also cause the more usual type of scandal – whereby the public is outraged by the opinions, behaviour or life of the individual (an experience not limited to writers). Poet Paul Verlaine outraged society with his behaviour and treatment of his wife and child as well as his lover. Among the many writers whose writing or life was affected by scandals are Oscar Wilde, Lord Byron, Jean-Paul Sartre, Albert Camus, and H. G. Wells. One of the most famously scandalous writers was the Marquis de Sade who offended the public both by his writings and by his behaviour.
Punishment
The consequence of scandal for a writer may be censorship or discrediting of the work, or social ostracism of its creator. In some instances, punishment, persecution, or prison follow. The list of journalists killed in Europe, the list of journalists killed in the United States, and the list of journalists killed in Russia are examples. Others include:
The Balibo Five, a group of Australian television journalists who were killed while attempting to report on Indonesian incursions into Portuguese Timor in 1975.
Dietrich Bonhoeffer (1906–1945), an influential theologian who wrote The Cost of Discipleship and was hanged for his resistance to Nazism.
Galileo Galilei (1564–1642), who was sentenced to imprisonment for heresy as a consequence of writing in support of the then controversial theory of heliocentrism, although the sentence was almost immediately commuted to house arrest.
Antonio Gramsci (1891–1937), who wrote political theory and criticism and was imprisoned for this by the Italian Fascist regime.
Günter Grass (1927–2015), whose poem "What Must Be Said" led to his being declared persona non grata in Israel.
Peter Greste (born 1965), a journalist who was imprisoned in Egypt for news reporting which was "damaging to national security."
Primo Levi (1919–1987), one of many Jews imprisoned during World War II, who wrote an account of his incarceration called If This Is a Man.
Sima Qian (145 or 135 BC – 86 BC) who "successfully defended a vilified master from defamatory charges" and was given "the choice between castration or execution." He "became a eunuch and had to bury his own book ... in order to protect it from the authorities."
Salman Rushdie (born 1947), whose novel The Satanic Verses was banned and burned internationally after causing such a worldwide storm that a fatwā was issued against him. Though Rushdie survived, numerous others were killed in incidents connected to the novel.
Roberto Saviano (born 1979), whose best-selling book Gomorrah provoked the Neapolitan Camorra, annoyed Silvio Berlusconi and led to him receiving permanent police protection.
Simon Sheppard (born 1957) who was imprisoned in the UK for inciting racial hatred.
Aleksandr Solzhenitsyn (1918–2008), who used his experience of imprisonment as the subject of his writing in One Day in the Life of Ivan Denisovich and Cancer Ward—the latter, while legally published in the Soviet Union, had to gain the approval of the USSR Union of Writers.
William Tyndale (died 1536), who was executed because he translated the Bible into English.
Protection and representation
The organisation Reporters Without Borders (also known by its French name: Reporters Sans Frontières) was set up to help protect writers and advocate on their behalf.
The professional and industrial interests of writers are represented by various national or regional guilds or unions. Examples include writers guilds in Australia and Great Britain and unions in Arabia, Armenia, Azerbaijan, Canada, Estonia, Hungary, Ireland, Moldova, Philippines, Poland, Quebec, Romania, Russia, Sudan, and Ukraine. In the United States, there is both a writers guild and a National Writers Union.
Awards
There are many awards for writers whose writing has been adjudged excellent. Among them are the many literary awards given by individual countries, such as the Prix Goncourt and the Pulitzer Prize, as well as international awards such as the Nobel Prize in Literature. Russian writer Boris Pasternak (1890–1960), under pressure from his government, reluctantly declined the Nobel Prize that he won in 1958.
See also
Academic publishing
Hack writer
Lists of writers
List of women writers
List of non-binary writers
List of writers' conferences
Genre fiction
Professional writing
Website content writer
Writer's voice
Betty Abah
References
External links
Communication design
Articles containing video clips
Writing | Writer | ["Engineering"] | 8,811 | ["Design", "Communication design"] |
45,383 | https://en.wikipedia.org/wiki/Ecoregion | An ecoregion (ecological region) is an ecologically and geographically defined area that is smaller than a bioregion, which in turn is smaller than a biogeographic realm. Ecoregions cover relatively large areas of land or water, and contain characteristic, geographically distinct assemblages of natural communities and species. The biodiversity of flora, fauna and ecosystems that characterise an ecoregion tends to be distinct from that of other ecoregions. In theory, biodiversity or conservation ecoregions are relatively large areas of land or water where the probability of encountering different species and communities at any given point remains relatively constant, within an acceptable range of variation (largely undefined at this point).
Ecoregions are also known as "ecozones" ("ecological zones"), although that term may also refer to biogeographic realms.
Three caveats are appropriate for all bio-geographic mapping approaches. Firstly, no single bio-geographic framework is optimal for all taxa. Ecoregions reflect the best compromise for as many taxa as possible. Secondly, ecoregion boundaries rarely form abrupt edges; rather, ecotones and mosaic habitats bound them. Thirdly, most ecoregions contain habitats that differ from their assigned biome. Biogeographic provinces may originate due to various barriers, including physical (plate tectonics, topographic highs), climatic (latitudinal variation, seasonal range) and ocean chemical related (salinity, oxygen levels).
History
The history of the term is somewhat vague. It has been used in many contexts: forest classifications (Loucks, 1962), biome classifications (Bailey, 1976, 2014), biogeographic classifications (WWF/Global 200 scheme of Olson & Dinerstein, 1998), etc.
The phrase "ecological region" was widely used throughout the 20th century by biologists and zoologists to define specific geographic areas in research. In the early 1970s, the term 'ecoregion' was introduced (short for ecological region), and R.G. Bailey published the first comprehensive map of U.S. ecoregions in 1976. The term was used widely in scholarly literature in the 1980s and 1990s, and in 2001 scientists at the U.S. conservation organization World Wildlife Fund (WWF) codified and published the first global-scale map of Terrestrial Ecoregions of the World (TEOW), led by D. Olson, E. Dinerstein, E. Wikramanayake, and N. Burgess. While the two approaches are related, the Bailey ecoregions (nested in four levels) give more importance to ecological criteria and climate zones, while the WWF ecoregions give more importance to biogeography, that is, the distribution of distinct species assemblages.
The TEOW framework originally delineated 867 terrestrial ecoregions nested into 14 major biomes, contained within the world's 8 major biogeographical realms. Subsequent regional papers by the co-authors covering Africa, Indo-Pacific, and Latin America differentiate between ecoregions and bioregions, referring to the latter as "geographic clusters of ecoregions that may span several habitat types, but have strong biogeographic affinities, particularly at taxonomic levels higher than the species level (genus, family)". The specific goal of the authors was to support global biodiversity conservation by providing a "fourfold increase in resolution over that of the 198 biotic provinces of Dasmann (1974) and the 193 units of Udvardy (1975)." In 2007, a comparable set of Marine Ecoregions of the World (MEOW) was published, led by M. Spalding, and in 2008 a set of Freshwater Ecoregions of the World (FEOW) was published, led by R. Abell.
In 2017, an updated terrestrial ecoregions dataset was released in the paper "An Ecoregion-Based Approach to Protecting Half the Terrestrial Realm" led by E. Dinerstein with 48 co-authors. Using recent advances in satellite imagery, the authors refined the ecoregion perimeters and reduced the total number to 846 (and later 844), which can be explored on a web application developed by Resolve and Google Earth Engine.
Definition and categorization
An ecoregion is a "recurring pattern of ecosystems associated with characteristic combinations of soil and landform that characterise that region". Omernik (2004) elaborates on this by defining ecoregions as: "areas within which there is spatial coincidence in characteristics of geographical phenomena associated with differences in the quality, health, and integrity of ecosystems". "Characteristics of geographical phenomena" may include geology, physiography, vegetation, climate, hydrology, terrestrial and aquatic fauna, and soils, and may or may not include the impacts of human activity (e.g. land use patterns, vegetation changes). There is significant, but not absolute, spatial correlation among these characteristics, making the delineation of ecoregions an imperfect science. Another complication is that environmental conditions across an ecoregion boundary may change very gradually, e.g. the prairie-forest transition in the midwestern United States, making it difficult to identify an exact dividing boundary. Such transition zones are called ecotones.
Ecoregions can be categorized using an algorithmic approach or a holistic, "weight-of-evidence" approach where the importance of various factors may vary. An example of the algorithmic approach is Robert Bailey's work for the U.S. Forest Service, which uses a hierarchical classification that first divides land areas into very large regions based on climatic factors, and subdivides these regions, based first on dominant potential vegetation, and then by geomorphology and soil characteristics. The weight-of-evidence approach is exemplified by James Omernik's work for the United States Environmental Protection Agency, subsequently adopted (with modification) for North America by the Commission for Environmental Cooperation.
The intended purpose of ecoregion delineation may affect the method used. For example, the WWF ecoregions were developed to aid in biodiversity conservation planning, and place a greater emphasis than the Omernik or Bailey systems on floral and faunal differences between regions. The WWF classification defines an ecoregion as:
A large area of land or water that contains a geographically distinct assemblage of natural communities that:
(a) Share a large majority of their species and ecological dynamics;
(b) Share similar environmental conditions, and;
(c) Interact ecologically in ways that are critical for their long-term persistence.
According to WWF, the boundaries of an ecoregion approximate the original extent of the natural communities prior to any major recent disruptions or changes. WWF has identified 867 terrestrial ecoregions, and approximately 450 freshwater ecoregions across the Earth.
Importance
The use of the term ecoregion is an outgrowth of a surge of interest in ecosystems and their functioning. In particular, there is awareness of issues relating to spatial scale in the study and management of landscapes. It is widely recognized that interlinked ecosystems combine to form a whole that is "greater than the sum of its parts". There are many attempts to respond to ecosystems in an integrated way to achieve "multi-functional" landscapes, and various interest groups from agricultural researchers to conservationists are using the "ecoregion" as a unit of analysis.
The "Global 200" is the list of ecoregions identified by WWF as priorities for conservation.
Terrestrial
Terrestrial ecoregions are land ecoregions, as distinct from freshwater and marine ecoregions. In this context, terrestrial is used to mean "of land" (soil and rock), rather than the more general sense "of Earth" (which includes land and oceans).
WWF (World Wildlife Fund) ecologists currently divide the land surface of the Earth into eight biogeographical realms containing 867 smaller terrestrial ecoregions (see list). The WWF effort is a synthesis of many previous efforts to define and classify ecoregions.
The eight realms follow the major floral and faunal boundaries, identified by botanists and zoologists, that separate the world's major plant and animal communities. Realm boundaries generally follow continental boundaries, or major barriers to plant and animal distribution, like the Himalayas and the Sahara. The boundaries of ecoregions are often not as decisive or well recognized, and are subject to greater disagreement.
Ecoregions are classified by biome type, which are the major global plant communities determined by rainfall and climate. Forests, grasslands (including savanna and shrubland), and deserts (including xeric shrublands) are distinguished by climate (tropical and subtropical vs. temperate and boreal climates) and, for forests, by whether the trees are predominantly conifers (gymnosperms), or whether they are predominantly broadleaf (angiosperms) and mixed (broadleaf and conifer). Biome types like Mediterranean forests, woodlands, and scrub; tundra; and mangroves host very distinct ecological communities, and are recognized as distinct biome types as well.
Marine
Marine ecoregions are: "Areas of relatively homogeneous species composition, clearly distinct from adjacent systems…. In ecological terms, these are strongly cohesive units, sufficiently large to encompass ecological or life history processes for most sedentary species." They have been defined by The Nature Conservancy (TNC) and World Wildlife Fund (WWF) to aid in conservation activities for marine ecosystems. Forty-three priority marine ecoregions were delineated as part of WWF's Global 200 efforts. The scheme used to designate and classify marine ecoregions is analogous to that used for terrestrial ecoregions. Major habitat types are identified: polar, temperate shelves and seas, temperate upwelling, tropical upwelling, tropical coral, pelagic (trades and westerlies), abyssal, and hadal (ocean trench). These correspond to the terrestrial biomes.
The Global 200 classification of marine ecoregions is not developed to the same level of detail and comprehensiveness as that of the terrestrial ecoregions; only the priority conservation areas are listed.
See Global 200 Marine ecoregions for a full list of marine ecoregions.
In 2007, TNC and WWF refined and expanded this scheme to provide a system of comprehensive near shore (to 200 meters depth) Marine Ecoregions of the World (MEOW). The 232 individual marine ecoregions are grouped into 62 marine provinces, which in turn group into 12 marine realms, which represent the broad latitudinal divisions of polar, temperate, and tropical seas, with subdivisions based on ocean basins (except for the southern hemisphere temperate oceans, which are based on continents).
Major marine biogeographic realms, analogous to the eight terrestrial biogeographic realms, represent large regions of the ocean basins: Arctic, Temperate Northern Atlantic, Temperate Northern Pacific, Tropical Atlantic, Western Indo-Pacific, Central Indo-Pacific, Eastern Indo-Pacific, Tropical Eastern Pacific, Temperate South America, Temperate Southern Africa, Temperate Australasia, and Southern Ocean.
A similar system of identifying areas of the oceans for conservation purposes is the system of large marine ecosystems (LMEs), developed by the US National Oceanic and Atmospheric Administration (NOAA).
Freshwater
A freshwater ecoregion is a large area encompassing one or more freshwater systems that contains a distinct assemblage of natural freshwater communities and species. The freshwater species, dynamics, and environmental conditions within a given ecoregion are more similar to each other than to those of surrounding ecoregions and together form a conservation unit. Freshwater systems include rivers, streams, lakes, and wetlands. Freshwater ecoregions are distinct from terrestrial ecoregions, which identify biotic communities of the land, and marine ecoregions, which are biotic communities of the oceans.
A map of Freshwater Ecoregions of the World, released in 2008, has 426 ecoregions covering virtually the entire non-marine surface of the earth.
World Wildlife Fund (WWF) identifies twelve major habitat types of freshwater ecoregions: Large lakes, large river deltas, polar freshwaters, montane freshwaters, temperate coastal rivers, temperate floodplain rivers and wetlands, temperate upland rivers, tropical and subtropical coastal rivers, tropical and subtropical floodplain rivers and wetlands, tropical and subtropical upland rivers, xeric freshwaters and endorheic basins, and oceanic islands. The freshwater major habitat types reflect groupings of ecoregions with similar biological, chemical, and physical characteristics and are roughly equivalent to biomes for terrestrial systems.
The Global 200, a set of ecoregions identified by WWF whose conservation would achieve the goal of saving a broad diversity of the Earth's ecosystems, includes a number of areas highlighted for their freshwater biodiversity values. The Global 200 preceded Freshwater Ecoregions of the World and incorporated information from regional freshwater ecoregional assessments that had been completed at that time.
See also
Crisis ecoregion
Lists of ecoregions
References
Bibliography
Sources related to the WWF scheme:
Main papers:
Abell, R., M. Thieme, C. Revenga, M. Bryer, M. Kottelat, N. Bogutskaya, B. Coad, N. Mandrak, S. Contreras-Balderas, W. Bussing, M. L. J. Stiassny, P. Skelton, G. R. Allen, P. Unmack, A. Naseka, R. Ng, N. Sindorf, J. Robertson, E. Armijo, J. Higgins, T. J. Heibel, E. Wikramanayake, D. Olson, H. L. Lopez, R. E. d. Reis, J. G. Lundberg, M. H. Sabaj Perez, and P. Petry. (2008). Freshwater ecoregions of the world: A new map of biogeographic units for freshwater biodiversity conservation. BioScience 58:403–414.
Olson, D. M., Dinerstein, E., Wikramanayake, E. D., Burgess, N. D., Powell, G. V. N., Underwood, E. C., D'Amico, J. A., Itoua, I., Strand, H. E., Morrison, J. C., Loucks, C. J., Allnutt, T. F., Ricketts, T. H., Kura, Y., Lamoreux, J. F., Wettengel, W. W., Hedao, P., Kassem, K. R. (2001). Terrestrial ecoregions of the world: a new map of life on Earth. BioScience 51(11):933–938.
Spalding, M. D. et al. (2007). Marine ecoregions of the world: a bioregionalization of coastal and shelf areas. BioScience 57: 573–583.
Africa:
Burgess, N., J.D. Hales, E. Underwood, and E. Dinerstein (2004). Terrestrial Ecoregions of Africa and Madagascar: A Conservation Assessment. Island Press, Washington, D.C.
Thieme, M.L., R. Abell, M.L.J. Stiassny, P. Skelton, B. Lehner, G.G. Teugels, E. Dinerstein, A.K. Toham, N. Burgess & D. Olson. 2005. Freshwater ecoregions of Africa and Madagascar: A conservation assessment. Washington, DC: WWF.
Latin America
Dinerstein, E., Olson, D., Graham, D.J. et al. (1995). A Conservation Assessment of the Terrestrial Ecoregions of Latin America and the Caribbean. World Bank, Washington, D.C.
Olson, D. M., E. Dinerstein, G. Cintron, and P. Iolster. 1996. A conservation assessment of mangrove ecosystems of Latin America and the Caribbean. Final report for The Ford Foundation. World Wildlife Fund, Washington, D.C.
Olson, D. M., B. Chernoff, G. Burgess, I. Davidson, P. Canevari, E. Dinerstein, G. Castro, V. Morisset, R. Abell, and E. Toledo. 1997. Freshwater biodiversity of Latin America and the Caribbean: a conservation assessment. Draft report. World Wildlife Fund-U.S., Wetlands International, Biodiversity Support Program, and United States Agency for International Development, Washington, D.C.
North America
Abell, R.A. et al. (2000). Freshwater Ecoregions of North America: A Conservation Assessment. Washington, DC: Island Press.
Ricketts, T.H. et al. 1999. Terrestrial Ecoregions of North America: A Conservation Assessment. Washington (DC): Island Press.
Russia and Indo-Pacific
Krever, V., Dinerstein, E., Olson, D. and Williams, L. 1994. Conserving Russia's Biological Diversity: an analytical framework and initial investment portfolio. WWF, Switzerland.
Wikramanayake, E., E. Dinerstein, C. J. Loucks, D. M. Olson, J. Morrison, J. L. Lamoreux, M. McKnight, and P. Hedao. 2002. Terrestrial ecoregions of the Indo-Pacific: a conservation assessment. Island Press, Washington, DC, USA.
Others:
Brunckhorst, D. 2000. Bioregional planning: resource management beyond the new millennium. Harwood Academic Publishers: Sydney, Australia.
Busch, D.E. and J.C. Trexler. eds. 2003. Monitoring Ecosystems: Interdisciplinary approaches for evaluating ecoregional initiatives. Island Press. 447 pages.
External links
WWF WildFinder (interactive on-line map of ecoregions with additional information about animal species)
Activist network cultivating Ecoregions/Bioregions
World Map of Ecoregions
Biogeography | Ecoregion | ["Biology"] | 3,782 | ["Biogeography"] |
45,394 | https://en.wikipedia.org/wiki/Neotropical%20realm | The Neotropical realm is one of the eight biogeographic realms constituting Earth's land surface. Physically, it includes the tropical terrestrial ecoregions of the Americas and the entire South American temperate zone.
Definition
In biogeography, the Neotropic or Neotropical realm is one of the eight terrestrial realms. This realm includes South America, Central America, the Caribbean Islands, and southern North America. In Mexico, the Yucatán Peninsula and southern lowlands, and most of the east and west coastlines, including the southern tip of the Baja California Peninsula, are Neotropical. In the United States, southern Florida and coastal Central Florida are considered Neotropical.
The realm also includes temperate southern South America. In contrast, the Neotropical Floristic Kingdom excludes southernmost South America, which instead is placed in the Antarctic kingdom.
The Neotropic is delimited by similarities in fauna or flora. Its fauna and flora are distinct from the Nearctic realm (which includes most of North America) because of the long separation of the two continents. The formation of the Isthmus of Panama joined the two continents two to three million years ago, precipitating the Great American Interchange, an important biogeographical event.
The Neotropic includes more tropical rainforest (tropical and subtropical moist broadleaf forests) than any other realm, extending from southern Mexico through Central America and northern South America to southern Brazil, including the vast Amazon rainforest. These rainforest ecoregions are one of the most important reserves of biodiversity on Earth. These rainforests are also home to a diverse array of indigenous peoples, who to varying degrees persist in their autonomous and traditional cultures and subsistence within this environment. The number of these peoples who are as yet relatively untouched by external influences continues to decline significantly, however, along with the near-exponential expansion of urbanization, roads, pastoralism and forest industries which encroach on their customary lands and environment. Nevertheless, amidst these declining circumstances this vast "reservoir" of human diversity continues to survive, albeit much depleted. In South America alone, some 350–400 indigenous languages and dialects are still living (down from an estimated 1,500 at the time of first European contact), in about 37 distinct language families and a further number of unclassified and isolate languages. Many of these languages and their cultures are also endangered. Accordingly, conservation in the Neotropical realm is a hot political concern, and raises many arguments about development versus indigenous versus ecological rights and access to or ownership of natural resources.
Major ecological regions
The World Wide Fund for Nature (WWF) subdivides the realm into bioregions, defined as "geographic clusters of ecoregions that may span several habitat types, but have strong biogeographic affinities, particularly at taxonomic levels higher than the species level (genus, family)."
Laurel forests and other cloud forests are subtropical and mild temperate forests, found in areas with high humidity and relatively stable, mild temperatures. Tropical rainforests (tropical and subtropical moist broadleaf forests) are prominent in southern North America, Amazonia, the Caribbean, Central America, the Northern Andes and the Central Andes.
Amazonia
The Amazonia bioregion is mostly covered by tropical moist broadleaf forest, including the vast Amazon rainforest, which stretches from the Andes Mountains to the Atlantic Ocean, and the lowland forests of the Guianas. The bioregion also includes tropical savanna and tropical dry forest ecoregions.
Caribbean
Central America
Central Andes
The Central Andes lie between the gulfs of Guayaquil and Penas and thus encompass southern Ecuador, Chile, Peru, western Bolivia, and northwest and western Argentina.
Eastern South America
Eastern South America includes the Caatinga xeric shrublands of northeastern Brazil, the broad Cerrado grasslands and savannas of the Brazilian Plateau, and the Pantanal and Chaco grasslands. The diverse Atlantic forests of eastern Brazil are separated from the forests of Amazonia by the Caatinga and Cerrado, and are home to a distinct flora and fauna.
Northern Andes
North of the Gulf of Guayaquil, in Ecuador and Colombia, a series of accreted oceanic terranes (discrete allochthonous fragments) has developed, constituting the Baudo, or Coastal, Mountains and the Cordillera Occidental.
Orinoco
The Orinoco is a region of humid forested broadleaf forest and wetland primarily comprising the drainage basin for the Orinoco River and other adjacent lowland forested areas. This region includes most of Venezuela and parts of Colombia, as well as Trinidad and Tobago.
Southern South America
The temperate forest ecoregions of southwestern South America, including the temperate rain forests of the Valdivian temperate rain forests and Magellanic subpolar forests ecoregions, and the Juan Fernández Islands and Desventuradas Islands, are a refuge for the ancient Antarctic flora, which includes trees like the southern beech (Nothofagus), podocarps, the alerce (Fitzroya cupressoides), and Araucaria pines like the monkey-puzzle tree (Araucaria araucana). These rainforests are endangered by extensive logging and their replacement by fast-growing non-native pines and eucalyptus.
History
South America was originally part of the supercontinent of Gondwana, which included Africa, Australia, India, New Zealand, and Antarctica, and the Neotropic shares many plant and animal lineages with these other continents, including marsupial mammals and the Antarctic flora.
After the final breakup of the Gondwana about 110 million years ago, South America was separated from Africa and drifted north and west. 66 million years ago, the Cretaceous–Paleogene extinction event altered local flora and fauna. Much later, about two to three million years ago, South America was joined with North America by the formation of the Isthmus of Panama, which allowed a biotic exchange between the two continents, the Great American Interchange. South American species like the ancestors of the Virginia opossum (Didelphis virginiana) and the armadillo moved into North America, and North Americans like the ancestors of South America's camelids, including the llama (Lama glama), moved south. The long-term effect of the exchange was the extinction of many South American species, mostly by outcompetition by northern species.
Endemic animals and plants
Animals
The Neotropical realm has 31 endemic bird families, which is over twice the number of any other realm. They include tanagers, rheas, tinamous, curassows, antbirds, ovenbirds, toucans, and seriemas. Bird families originally unique to the Neotropics include hummingbirds (family Trochilidae) and wrens (family Troglodytidae).
Mammal groups originally unique to the Neotropics include:
Order Xenarthra: anteaters, sloths, and armadillos
New World monkeys
Solenodontidae, the solenodons
Caviomorpha rodents, including capybaras, guinea pigs, hutias, and chinchillas
American opossums (order Didelphimorphia) and shrew opossums (order Paucituberculata)
The Neotropical realm has 63 endemic fish families and subfamilies, which is more than any other realm. Neotropical fishes include more than 5,700 species, and represent at least 66 distinct lineages in continental freshwaters (Albert and Reis, 2011). The well-known red-bellied piranha is endemic to the Neotropic realm, occupying a larger geographic area than any other piranha species. Some fish groups originally unique to the Neotropics include:
Order Gymnotiformes: Neotropical electric fishes
Family Characidae: tetras and allies
Family Loricariidae: armoured catfishes
Subfamily Cichlinae: Neotropical cichlids
Subfamily Poeciliinae: guppies and relatives
Examples of other animal groups that are entirely or mainly restricted to the Neotropical region include:
Caimans
New World coral snakes
Poison dart frogs
Dactyloidae ("anoles")
Rock iguanas (Cyclura)
Preponini and Anaeini butterflies (including Agrias)
Brassolini and Morphini butterflies (including Caligo and Morpho)
Callicorini butterflies
Heliconiini butterflies
Ithomiini butterflies
Riodininae butterflies
Eumaeini butterflies
Firetips or firetail skipper butterflies
Euglossini bees
Augochlorini bees
Pseudostigmatidae ("giant damselflies")
Mantoididae (short-bodied mantises)
Canopidae, Megarididae, and Phloeidae (pentatomoid bugs)
Aetalionidae and Melizoderidae (treehoppers)
Gonyleptidae (harvestmen)
Plants
According to Simberloff, as of 1984 there were a total of 92,128 species of flowering plants (angiosperms) in the Neotropics.
Plant families endemic and partly subendemic to the realm are, according to Takhtajan (1978), Hymenophyllopsidaceae, Marcgraviaceae, Caryocaraceae, Pellicieraceae, Quiinaceae, Peridiscaceae, Bixaceae, Cochlospermaceae, Tovariaceae, Lissocarpaceae (Lissocarpa), Brunelliaceae, Dulongiaceae, Columelliaceae, Julianiaceae, Picrodendraceae, Goupiaceae, Desfontainiaceae, Plocospermataceae, Tropaeolaceae, Dialypetalanthaceae (Dialypetalanthus), Nolanaceae (Nolana), Calyceraceae, Heliconiaceae, Cannaceae, Thurniaceae and Cyclanthaceae.
Plant families that originated in the Neotropic include Bromeliaceae, Cannaceae and Heliconiaceae.
Plant species with economic importance originally unique to the Neotropic include:
Potato (Solanum tuberosum)
Tomato (Solanum lycopersicum)
Cacao tree (Theobroma cacao), source of cocoa and chocolate
Maize (Zea mays)
Passion fruit (Passiflora edulis)
Guava (Psidium guajava)
Lima bean (Phaseolus lunatus)
Cotton (Gossypium barbadense)
Cassava (Manihot esculenta)
Sweet potato (Ipomoea batatas)
Amaranth (Amaranthus caudatus)
Quinoa (Chenopodium quinoa)
Neotropical terrestrial ecoregions
Citations
General and cited bibliography
Albert, J. S., and R. E. Reis (2011). Historical Biogeography of Neotropical Freshwater Fishes. University of California Press, Berkeley. 424 pp.
Bequaert, Joseph C. "An Introductory Study of Polistes in the United States and Canada with Descriptions of Some New North and South American Forms (Hymenoptera; Vespidæ)". Journal of the New York Entomological Society 48.1 (1940): 1-31.
Cox, C. B.; P. D. Moore (1985). Biogeography: An Ecological and Evolutionary Approach (Fourth Edition). Blackwell Scientific Publications, Oxford.
Dinerstein, E., Olson, D. Graham, D. J. et al. (1995). A Conservation Assessment of the Terrestrial Ecoregions of Latin America and the Caribbean. World Bank, Washington, D.C.
Olson, D. M., B. Chernoff, G. Burgess, I. Davidson, P. Canevari, E. Dinerstein, G. Castro, V. Morisset, R. Abell, and E. Toledo. 1997. Freshwater biodiversity of Latin America and the Caribbean: a conservation assessment. Draft report. World Wildlife Fund-U.S., Wetlands International, Biodiversity Support Program, and United States Agency for International Development, Washington, D.C.
Reis, R. E., S. O. Kullander, and C. J. Ferraris Jr. 2003. Check List of the Freshwater Fishes of South and Central America. Edipucrs, Porto Alegre. 729 pp.
Udvardy, M. D. F. (1975). A classification of the biogeographical provinces of the world. IUCN Occasional Paper no. 18. Morges, Switzerland: IUCN.
van der Sleen, Peter, and James S. Albert, eds. Field Guide to the Fishes of the Amazon, Orinoco, and Guianas. Princeton University Press, 2017.
External links
List of terrestrial ecoregions
Eco-Index, a bilingual searchable reference of conservation and research projects in the Neotropics; a service of the Rainforest Alliance
NeoTropic
Acosta, Guillermo et al., 2018. "Climate change and peopling of the Neotropics during the Pleistocene-Holocene transition". Boletín de la Sociedad Geológica Mexicana.
Biogeographic realms
Biogeography
Natural history of Central America
Natural history of North America
Natural history of South America
Natural history of the Caribbean
Phytogeography
Dame Jane Morris Goodall (born Valerie Jane Morris-Goodall; 3 April 1934), formerly Baroness Jane van Lawick-Goodall, is an English zoologist, primatologist and anthropologist. She is considered the world's foremost expert on chimpanzees, after 60 years studying the social and family interactions of wild chimpanzees. Goodall first went to Gombe Stream National Park in Tanzania to observe its chimpanzees in 1960.
She is the founder of the Jane Goodall Institute and the Roots & Shoots programme and has worked extensively on conservation and animal welfare issues. As of 2022, she is on the board of the Nonhuman Rights Project. In April 2002, she was named a United Nations Messenger of Peace. Goodall is an honorary member of the World Future Council.
Early life
Valerie Jane Morris-Goodall was born in April 1934 in Hampstead, London, to businessman Mortimer Herbert Morris-Goodall (1907–2001) and Margaret Myfanwe Joseph (1906–2000), a novelist from Milford Haven, Pembrokeshire, who wrote under the name Vanne Morris-Goodall.
The family later moved to Bournemouth, and Goodall attended Uplands School, an independent school in nearby Poole.
As a child, Goodall's father gave her a stuffed toy chimpanzee named Jubilee as an alternative to a teddy bear. Goodall has said her fondness for it sparked her early love of animals, commenting, "My mother's friends were horrified by this toy, thinking it would frighten me and give me nightmares." Jubilee still sits on Goodall's dresser in London.
Africa
Goodall had always been drawn to animals and Africa, which brought her to the farm of a friend in the Kenya highlands in 1957. From there, she obtained work as a secretary, and acting on her friend's advice, she telephoned Louis Leakey, the Kenyan archaeologist and palaeontologist, with no other thought than to make an appointment to discuss animals. Leakey, believing that the study of existing great apes could provide indications of the behaviour of early hominids, was looking for a chimpanzee researcher, though he kept the idea to himself. Instead, he proposed that Goodall work for him as a secretary. After obtaining approval from his co-researcher and wife, British paleoanthropologist Mary Leakey, Louis sent Goodall to Olduvai Gorge in Tanganyika (now part of Tanzania), where he laid out his plans.
In 1958, Leakey sent Goodall to London to study primate behaviour with Osman Hill and primate anatomy with John Napier. Leakey raised funds, and on 14 July 1960, Goodall went to Gombe Stream National Park, becoming the first of what would come to be called The Trimates. She was accompanied by her mother, whose presence was necessary to satisfy the requirements of David Anstey, chief warden, who was concerned for their safety. Goodall credits her mother with encouraging her to pursue a career in primatology, a male-dominated field at the time. Goodall has said that women were not accepted in the field when she started her research in the late 1950s. Today, the field of primatology is made up almost evenly of men and women, in part thanks to the trailblazing of Goodall and her encouragement of young women to join the field.
Leakey arranged funding, and in 1962 he sent Goodall, who had no degree, to the University of Cambridge. She was the eighth person allowed to study for a PhD at Cambridge without first having obtained a bachelor's degree. She went to Newnham College, Cambridge, where she received a Bachelor of Arts in natural sciences in 1964, then moved to the newly founded Darwin College, Cambridge, for a Doctor of Philosophy in ethology. Her thesis, Behaviour of free-living chimpanzees, detailing her first five years of study at the Gombe Reserve, was completed in 1966 under the supervision of Robert Hinde.
On 19 June 2006, the Open University of Tanzania awarded her an honorary Doctor of Science degree.
Work
Research at Gombe Stream National Park
Goodall studied chimpanzee social and family life beginning with the Kasakela chimpanzee community in Gombe Stream National Park, Tanzania, in 1960. She found that "it isn't only human beings who have personality, who are capable of rational thought [and] emotions like joy and sorrow." She also observed behaviours such as hugs, kisses, pats on the back, and even tickling, actions we consider "human". Goodall insists that these gestures are evidence of "the close, supportive, affectionate bonds that develop between family members and other individuals within a community, which can persist throughout a life span of more than 50 years."
Goodall's research at Gombe Stream challenged two long-standing beliefs of the day: that only humans could construct and use tools, and that chimpanzees were vegetarians. While observing one chimpanzee feeding at a termite mound, she watched him repeatedly place stalks of grass into termite holes, then remove them from the hole covered with clinging termites, effectively "fishing" for termites. The chimpanzees would also take twigs from trees and strip off the leaves to make the twig more effective, a form of object modification that is the rudimentary beginnings of toolmaking. Humans had long distinguished themselves from the rest of the animal kingdom as "Man the Toolmaker". In response to Goodall's revolutionary findings, Louis Leakey wrote, "We must now redefine man, redefine tool, or accept chimpanzees as human!"
In contrast to the peaceful and affectionate behaviours she observed, Goodall also found an aggressive side of chimpanzee nature at Gombe Stream. She discovered that chimpanzees will systematically hunt and eat smaller primates such as colobus monkeys. Goodall watched a hunting group isolate a colobus monkey high in a tree and block all possible exits; then one chimpanzee climbed up and captured and killed the colobus. The others then each took parts of the carcass, sharing with other members of the troop in response to begging behaviours. The chimpanzees at Gombe kill and eat as much as one-third of the colobus population in the park each year. This alone was a major scientific find that challenged previous conceptions of chimpanzee diet and behaviour.
Goodall also observed the tendency for aggression and violence within chimpanzee troops. Goodall observed dominant females deliberately killing the young of other females in the troop to maintain their dominance, sometimes going as far as cannibalism. She says of this revelation, "During the first ten years of the study I had believed [...] that the Gombe chimpanzees were, for the most part, rather nicer than human beings. [...] Then suddenly we found that chimpanzees could be brutal—that they, like us, had a darker side to their nature." She described the 1974–1978 Gombe Chimpanzee War in her 1990 memoir, Through a Window: My Thirty Years with the Chimpanzees of Gombe. Her findings revolutionised contemporary knowledge of chimpanzee behaviour and were further evidence of the social similarities between humans and chimpanzees.
Goodall set herself apart from convention by naming the animals in her studies of primates instead of assigning each a number. Numbering was a nearly universal practice at the time and was thought to be important in avoiding emotional attachment to the subject being studied and thus losing objectivity. Goodall wrote in 1993: "When, in the early 1960s, I brazenly used such words as 'childhood', 'adolescence', 'motivation', 'excitement', and 'mood' I was much criticised. Even worse was my crime of suggesting that chimpanzees had 'personalities'. I was ascribing human characteristics to nonhuman animals and was thus guilty of that worst of ethological sins—anthropomorphism."
Setting herself apart from other researchers also led her to develop a close bond with the chimpanzees and to become the only human ever accepted into chimpanzee society. She was the lowest-ranking member of a troop for a period of 22 months.
Among those whom Goodall named during her years in Gombe were:
David Greybeard, a grey-chinned male who first warmed up to Goodall;
Goliath, a friend of David Greybeard, originally the alpha male named for his bold nature;
Mike, who through his cunning and improvisation displaced Goliath as the alpha male;
Humphrey, a big, strong, bullysome male;
Gigi, a large, sterile female who delighted in being the "aunt" of any young chimps or humans;
Mr. McGregor, a belligerent older male;
Flo, a motherly, high-ranking female with a bulbous nose and ragged ears, and her children; Figan, Faben, Freud, Fifi, and Flint;
Frodo, Fifi's second-oldest child, an aggressive male who would frequently attack Jane and ultimately forced her to leave the troop when he became alpha male.
Jane Goodall Institute
In 1977, Goodall established the Jane Goodall Institute (JGI), which supports the Gombe research, and she is a global leader in the effort to protect chimpanzees and their habitats. With nineteen offices around the world, the JGI is widely recognised for community-centred conservation and development programs in Africa. Its global youth program, Roots & Shoots, began in 1991 when a group of 16 local teenagers met with Goodall on her back porch in Dar es Salaam, Tanzania. They were eager to discuss a range of problems they knew about from first-hand experience that caused them deep concern. The organisation has over 10,000 groups in over 100 countries.
In 1992, Goodall founded the Tchimpounga Chimpanzee Rehabilitation Centre in the Republic of Congo to care for chimpanzees orphaned by the bush-meat trade. The centre houses more than a hundred chimpanzees across its three islands.
In 1994, Goodall founded the Lake Tanganyika Catchment Reforestation and Education (TACARE or "Take Care") pilot project to protect chimpanzees' habitat from deforestation by reforesting hills around Gombe while educating neighbouring communities in sustainability and agricultural training. The TACARE project also supports young girls by offering them access to reproductive health education and scholarships to finance their college tuition.
Owing to an overflow of handwritten notes, photographs, and data piling up at Goodall's home in Dar es Salaam in the mid-1990s, the Jane Goodall Institute's Center for Primate Studies was created at the University of Minnesota to house and organise this data. All of the original Jane Goodall archives were moved there and have been digitised, analysed, and placed in an online database. On 17 March 2011, Duke University spokesman Karl Bates announced that the archives would move to Duke, with Anne E. Pusey, Duke's chair of evolutionary anthropology, overseeing the collection. Pusey, who managed the archives in Minnesota and worked with Goodall in Tanzania, had worked at Duke for a year.
In 2018 and 2020, Goodall partnered with friend and CEO Michael Cammarata on two natural product lines from Schmidt's Naturals and Neptune Wellness Solutions. Five percent of every sale benefited the Jane Goodall Institute.
As of 2004, Goodall devotes virtually all of her time to advocacy on behalf of chimpanzees and the environment, travelling nearly 300 days a year. Goodall is also on the advisory council for the world's largest chimpanzee sanctuary outside of Africa, Save the Chimps in Fort Pierce, Florida.
Jane Goodall is an advisory board member for The Society for the Protection of Underground Networks (SPUN).
Activism
Goodall credits the 1986 Understanding Chimpanzees conference, hosted by the Chicago Academy of Sciences, with shifting her focus from observation of chimpanzees to a broader and more intense concern with animal-human conservation. She is the former president of Advocates for Animals, an organisation based in Edinburgh, Scotland, that campaigns against the use of animals in medical research, zoos, farming and sport.
She is a vegetarian and advocates the diet for ethical, environmental, and health reasons. In The Inner World of Farm Animals (2009), Goodall writes that farm animals are "far more aware and intelligent than we ever imagined and, despite having been bred as domestic slaves, they are individual beings in their own right. As such, they deserve our respect. And our help. Who will plead for them if we are silent?" Goodall has also said: "Thousands of people who say they 'love' animals sit down once or twice a day to enjoy the flesh of creatures who have been treated with so little respect and kindness just to make more meat." In 2021, Goodall became a vegan and authored a cookbook titled Eat Meat Less.
Goodall is an outspoken environmental advocate, speaking on the effects of climate change on endangered species such as chimpanzees. Goodall and her foundation collaborated with NASA, using satellite imagery from the Landsat series to address the effects of deforestation on chimpanzees and local communities in Western Africa by offering villagers information on how to reduce deforestation and preserve their environment. To ensure the safe and ethical treatment of animals during ethological studies, Goodall and Professor Marc Bekoff founded the organisation Ethologists for the Ethical Treatment of Animals in 2000.
In 2008, Goodall gave a lecture entitled "Reason for Hope" at the University of San Diego's Joan B. Kroc Institute for Peace & Justice, and in the same year demanded the European Union end the use of medical research on animals and ensure more funding for alternative methods of medical research. She controversially described Edinburgh Zoo's new primate enclosure as a "wonderful facility" where monkeys "are probably better off [than those] living in the wild in an area like Budongo, where one in six gets caught in a wire snare, and countries like Congo, where chimpanzees, monkeys and gorillas are shot for food commercially." This was in conflict with Advocates for Animals' position on captive animals. In June that year, she resigned the presidency of the organisation which she had held since 1998, citing her busy schedule and explaining, "I just don't have time for them." Goodall is a patron of the population concern charity Population Matters and is an ambassador for Disneynature.
In 2010, Goodall, through JGI, formed a coalition with a number of organizations such as the Wildlife Conservation Society (WCS) and the Humane Society of the United States (HSUS) and petitioned to list all chimpanzees, including those that are captive, as endangered. In 2015, the U.S. Fish and Wildlife Service (USFWS) announced that they would accept this rule and that all chimpanzees would be classified as endangered. In 2011, she became a patron of the Australian animal protection group Voiceless. "I have for decades been concerned about factory farming, in part because of the tremendous harm inflicted on the environment, but also because of the shocking ongoing cruelty perpetuated on millions of sentient beings."
In 2012, she took on the role of challenger for the Engage in Conservation Challenge with The DO School, formerly known as the D&F Academy. She worked with a group of aspiring social entrepreneurs to create a workshop to engage young people in conserving biodiversity, and to tackle a perceived global lack of awareness of the issue. In 2014, Goodall wrote to Air France executives, criticizing the airline's continued transport of monkeys to laboratories. Goodall called the practice "cruel" and "traumatic" for the monkeys involved. The same year, Goodall also wrote to the National Institutes of Health (NIH) to criticize maternal deprivation experiments on baby monkeys in NIH laboratories.
Prior to the 2015 UK general election, she endorsed the parliamentary candidacy of the Green Party's Caroline Lucas. She is a critic of fox hunting and signed a letter to Members of Parliament in 2015 opposing the Conservative prime minister David Cameron's plan to amend the Hunting Act 2004.
In August 2019, Goodall was honoured for her contributions to science with a bronze sculpture in Midtown Manhattan alongside nine other women, part of the Statues for Equality project. In 2020 she advocated for ecocide (mass damage or destruction of nature) to be made an international crime, stating “The concept of Ecocide is long overdue. It could lead to an important change in the way people perceive – and respond to – the current environmental crisis.” That same year, Goodall vowed to plant 5 million trees, part of the 1 trillion tree initiative founded by the World Economic Forum. In 2021, Goodall called on the EU Commission to abolish caging of farm animals.
In 2021, Goodall joined the Rewriting Extinction campaign to fight the climate and biodiversity crisis through comics. She is listed as a contributor to the book The Most Important Comic Book on Earth: Stories to Save the World which was released on 28 October 2021 by DK.
Opinions
Goodall is known to support the possibility that undiscovered species of primates may still exist today, including cryptids such as Sasquatch, Yeren and other types of Bigfoot. She has talked about this possibility in various interviews and debates. In 2012, when the Huffington Post asked her about it, Goodall replied: "I'm fascinated and would actually love them to exist," adding, "Of course, it's strange that there has never been a single authentic hide or hair of the Bigfoot, but I've read all the accounts."
Religion and spirituality
Goodall was raised in a Christian congregationalist family. As a young woman, she took night classes in Theosophy. Her family were occasional churchgoers, but Goodall began attending more regularly as a teenager when the church appointed a new minister, Trevor Davies. "He was highly intelligent and his sermons were powerful and thought-provoking... I could have listened to his voice for hours... I fell madly in love with him... Suddenly, no one had to encourage me to go to church. Indeed, there were never enough services for my liking." Of her later discovery of the atheism and agnosticism of many of her scientific colleagues, Goodall wrote that "[f]ortunately, by the time I got to Cambridge I was twenty-seven years old and my beliefs had already moulded so that I was not influenced by these opinions."
In her 1999 book Reason for Hope: A Spiritual Journey, Goodall describes the implications of a mystical experience she had at Notre Dame Cathedral in 1977: "Since I cannot believe that this was the result of chance, I have to admit anti-chance. And so I must believe in a guiding power in the universe – in other words, I must believe in God." When asked if she believes in God, Goodall said in September 2010: "I don't have any idea of who or what God is. But I do believe in some great spiritual power. I feel it particularly when I'm out in nature. It's just something that's bigger and stronger than what I am or what anybody is. I feel it. And it's enough for me." When asked in the same year if she still considers herself a Christian, Goodall told the Guardian "I suppose so; I was raised as a Christian." and stated that she saw no contradiction between evolution and belief in God.
In her foreword to the 2017 book The Intelligence of the Cosmos by Ervin Laszlo, a philosopher of science who advocates quantum consciousness theory, Goodall wrote: "we must accept that there is an Intelligence driving the process [of evolution], that the Universe and life on Earth are inspired and in-formed by an unknown and unknowable Creator, a Supreme Being, a Great Spiritual Power."
Personal life
Goodall has married twice. On 28 March 1964, she married a Dutch nobleman, wildlife photographer Baron Hugo van Lawick, at Chelsea Old Church, London, and became known during their marriage as Baroness Jane van Lawick-Goodall. The couple had a son, Hugo Eric Louis (born 1967); they divorced in 1974. The following year, she married Derek Bryceson, a member of Tanzania's parliament and the director of that country's national parks. Bryceson died of cancer in October 1980. Owing to his position in the Tanzanian government as head of the country's national park system, Bryceson could protect Goodall's research project and implement an embargo on tourism at Gombe.
Goodall has stated that dogs are her favourite animal.
Goodall has prosopagnosia, which makes it difficult to recognize familiar faces.
Criticism
Feeding stations
Many standard methods aim to avoid interference by observers, and in particular some believe that the use of feeding stations to attract Gombe chimpanzees has altered normal foraging and feeding patterns and social relationships. This argument is the focus of a book published by Margaret Power in 1991. It has been suggested that higher levels of aggression and conflict with other chimpanzee groups in the area were due to the feeding, which could have created the "wars" between chimpanzee social groups described by Goodall, aspects of which she did not witness in the years before artificial feeding began at Gombe. Thus, some regard Goodall's observations as distortions of normal chimpanzee behaviour.
Goodall herself acknowledged that feeding contributed to aggression within and between groups, but maintained that the effect was limited to alteration of the intensity and not the nature of chimpanzee conflict, and further suggested that feeding was necessary for the study to be effective at all. Craig Stanford of the Jane Goodall Research Institute at the University of Southern California states that researchers conducting studies with no artificial provisioning have a difficult time viewing any social behaviour of chimpanzees, especially those related to inter-group conflict.
Some studies, such as those by Crickette Sanz in the Goualougo Triangle (Congo) and Christophe Boesch in the Taï National Park (Ivory Coast), have not shown the aggression observed in the Gombe studies. However, other primatologists disagree that the studies are flawed; for example, Jim Moore provides a critique of Margaret Power's assertions, and some studies of other chimpanzee groups have shown aggression similar to that in Gombe even in the absence of feeding.
Plagiarism and Seeds of Hope
On 22 March 2013, Hachette Book Group announced that Goodall's and co-author Gail Hudson's new book, Seeds of Hope, would not be released on 2 April as planned due to the discovery of plagiarised portions. A reviewer for The Washington Post found unattributed sections that were copied from websites about organic tea and tobacco and an "amateurish astrology site", as well as from Wikipedia. Goodall apologised and stated, "It is important to me that the proper sources are credited, and I will be working diligently with my team to address all areas of concern. My goal is to ensure that when this book is released it is not only up to the highest of standards, but also that the focus be on the crucial messages it conveys." The book was released on 1 April 2014, after review and the addition of 57 pages of endnotes.
In popular culture
Gary Larson cartoon incident
One of Gary Larson's Far Side cartoons shows two chimpanzees grooming. One finds a blonde human hair on the other and inquires, "Conducting a little more 'research' with that Jane Goodall tramp?" Goodall herself was in Africa at the time, and the Jane Goodall Institute thought this was in bad taste and had its lawyers draft a letter to Larson and his distribution syndicate in which they described the cartoon as an "atrocity". They were stymied by Goodall herself: when she returned and saw the cartoon, she stated that she found the cartoon amusing.
Since then, all profits from sales of a shirt featuring this cartoon have gone to the Jane Goodall Institute. Goodall wrote a preface to The Far Side Gallery 5, detailing her version of the controversy, and the institute's letter was included next to the cartoon in the complete Far Side collection. She praised Larson's creative ideas, which often compare and contrast the behaviour of humans and animals. In 1988, when Larson visited Goodall's research facility in Tanzania, he was attacked by a chimpanzee named Frodo.
Lego
On 3 March 2022, in celebration of Women's History Month and International Women's Day, The Lego Group issued set number 40530, A Jane Goodall Tribute, depicting a Jane Goodall minifigure and three chimpanzees in an African forest scene.
Awards and recognition
Goodall has received many honours for her environmental and humanitarian work, as well as others. She was named a Dame Commander of the Order of the British Empire in an Investiture held at Buckingham Palace in 2004. In April 2002, Secretary-General Kofi Annan named Goodall a United Nations Messenger of Peace. Her other honours include the Tyler Prize for Environmental Achievement, the French Legion of Honour, Medal of Tanzania, Japan's prestigious Kyoto Prize, the Benjamin Franklin Medal in Life Science, the Gandhi-King Award for Nonviolence and the Spanish Prince of Asturias Awards.
Goodall is also a member of the advisory board of BBC Wildlife magazine and a patron of Population Matters (formerly the Optimum Population Trust).
Goodall has received many tributes, honours, and awards from local governments, schools, institutions, and charities around the world. Goodall is honoured by The Walt Disney Company with a plaque on the Tree of Life at Disney's Animal Kingdom theme park, alongside a carving of her beloved David Greybeard, the original chimpanzee that approached Goodall during her first year at Gombe. She is a member of both the American Academy of Arts and Sciences and the American Philosophical Society.
In 2010, Dave Matthews and Tim Reynolds held a benefit concert at DAR Constitution Hall in Washington DC to commemorate "Gombe 50: a global celebration of Jane Goodall's pioneering chimpanzee research and inspiring vision for our future". Time magazine named Goodall as one of the 100 most influential people in the world in 2019. In 2021, she received the Templeton Prize.
On 31 December 2021, Goodall was the guest editor of the BBC Radio Four Today programme. She chose Francis Collins to be presenter of Thought for the Day.
In 2022, Goodall received the Stephen Hawking Medal for Science Communication for her long-term study of the social and family interactions of wild chimpanzees.
In April 2023, Goodall was made Officer in the Order of Orange-Nassau in a ceremony in The Hague, the Netherlands.
In October 2024, Goodall gave "A Speech for History" at UNESCO. She delivered an optimistic message on conservation and the role everyone can play in preserving the planet by educating youth and communities to protect and respect the natural world.
In January 2025, Goodall was awarded the Presidential Medal of Freedom.
Works
Books
1969 My Friends the Wild Chimpanzees Washington, DC: National Geographic Society
1971 Innocent Killers (with H. van Lawick). Boston: Houghton Mifflin; London: Collins
1971 In the Shadow of Man Boston: Houghton Mifflin; London: Collins. Published in 48 languages
1986 The Chimpanzees of Gombe: Patterns of Behavior Boston: Belknap Press of Harvard University Press. Published also in Japanese and Russian. R.R. Hawkins Award for the Outstanding Technical, Scientific or Medical book of 1986, to Belknap Press of Harvard University Press, Boston. The Wildlife Society (USA) Award for "Outstanding Publication in Wildlife Ecology and Management"
1990 Through a Window: 30 years observing the Gombe chimpanzees London: Weidenfeld & Nicolson; Boston: Houghton Mifflin. Translated into more than 15 languages. 1991 Penguin edition, UK. American Library Association "Best" list among Nine Notable Books (Nonfiction) for 1991
1991 Visions of Caliban (co-authored with Dale Peterson, PhD). Boston: Houghton Mifflin. New York Times "Notable Book" for 1993. Library Journal "Best Sci-Tech Book" for 1993
1999 Brutal Kinship (with Michael Nichols). New York: Aperture Foundation
1999 Reason For Hope; A Spiritual Journey (with Phillip Berman). New York: Warner Books, Inc. Translated into Japanese and Portuguese
2000 40 Years At Gombe New York: Stewart, Tabori, and Chang
2000 Africa In My Blood (edited by Dale Peterson). New York: Houghton Mifflin Company
2001 Beyond Innocence: An Autobiography in Letters, the later years (edited by Dale Peterson). New York: Houghton Mifflin Company
2002 The Ten Trusts: What We Must Do To Care for the Animals We Love (with Marc Bekoff). San Francisco: Harper San Francisco
2005 Harvest for Hope: A Guide to Mindful Eating New York: Warner Books, Inc.
2009 Hope for Animals and Their World: How Endangered Species Are Being Rescued from the Brink Grand Central Publishing
2013 Seeds of Hope: Wisdom and Wonder from the World of Plants (with Gail Hudson) Grand Central Publishing
2021 The Book of Hope, with Douglas Abrams and Gail Hudson, Viking
Children's books
1972 Grub: The Bush Baby (with H. van Lawick). Boston: Houghton Mifflin
1988 My Life with the Chimpanzees New York: Byron Preiss Visual Publications, Inc. Translated into French, Japanese and Chinese. Parenting's Reading-Magic Award for "Outstanding Book for Children," 1989
1989 The Chimpanzee Family Book Saxonville, MA: Picture Book Studio; Munich: Neugebauer Press; London: Picture Book Studio. Translated into more than 15 languages, including Japanese and Swahili. The UNICEF Award for the best children's book of 1989. Austrian state prize for best children's book of 1990.
1989 Jane Goodall's Animal World: Chimps New York: Macmillan
1989 Animal Family Series: Chimpanzee Family; Lion Family; Elephant Family; Zebra Family; Giraffe Family; Baboon Family; Hyena Family; Wildebeest Family Toronto: Madison Marketing Ltd
1994 With Love, New York / London: North-South Books. Translated into German, French, Italian, and Japanese
1999 Dr. White (illustrated by Julie Litty). New York: North-South Books
2000 The Eagle & the Wren (illustrated by Alexander Reichstein). New York: North-South Books
2001: Chimpanzees I Love: Saving Their World and Ours New York: Scholastic Press
2002 (Foreword) "Slowly, Slowly, Slowly," Said the Sloth by Eric Carle. Philomel Books
2004 Rickie and Henri: A True Story (with Alan Marks) Penguin Young Readers Group
Films
Goodall is the subject of more than 40 films:
1965 Miss Goodall and the Wild Chimpanzees National Geographic Society
1973 Jane Goodall and the World of Animal Behavior: The Wild Dogs of Africa with Hugo van Lawick
1975 Miss Goodall: The Hyena Story, in The World of Animal Behavior series, 16mm; 1979 version for DiscoVision, not released on LaserDisc
1976 Lions of the Serengeti an episode of The World About Us on BBC2
1984 Among the Wild Chimpanzees National Geographic Special
1988 People of the Forest with Hugo van Lawick
1990 Chimpanzee Alert in the Nature Watch Series, Central Television
1990 The Life and Legend of Jane Goodall National Geographic Society.
1990 The Gombe Chimpanzees Bavarian Television
1995 Fifi's Boys for the Natural World series for the BBC
1996 Chimpanzee Diary for BBC2 Animal Zone
1997 Animal Minds for BBC
Goodall voiced herself in the animated TV series The Wild Thornberrys.
2000 Jane Goodall: Reason For Hope PBS special produced by KTCA
2001
2002 Jane Goodall's Wild Chimpanzees (IMAX format), in collaboration with Science North
2005 Jane Goodall's Return to Gombe for Animal Planet
2006 Chimps, So Like Us HBO film nominated for 1990 Academy Award
2007 When Animals Talk, We Should Listen theatrical documentary feature co-produced by Animal Planet
2010 Jane's Journey theatrical documentary feature co-produced by Animal Planet
2012 Chimpanzee theatrical nature documentary feature co-produced by Disneynature
2017 Jane biographical documentary film National Geographic Studios, in association with Public Road Productions. The film is directed and written by Brett Morgen, music by Philip Glass
2018 Zayed's Antarctic Lights, Dr Jane featured in the Environment Agency-Abu Dhabi film that screened on National Geographic-Abu Dhabi and won a World Medal at the New York Film and TV Awards.
2019 Exploring Hans Hass Dr Jane Goodall featured in the biographical documentary film about the legendary diving pioneer and filmmaker Hans Hass
2020 Jane Goodall: The Hope, biographical documentary film, National Geographic Studios, produced by Lucky 8
2023 Jane Goodall: Reasons for Hope is an IMAX format documentary about successful projects to restore earth's wildlife habitat, animals, birds and environment.
See also
Animal Faith
USC Jane Goodall Research Center
Nonhuman Rights Project
Dian Fossey, the trimate who studied gorillas until her murder
Birutė Galdikas, the trimate who dedicated herself to orangutan study
Steven M. Wise
Washoe
List of animal rights advocates
Timeline of women in science
References
External links
The Jane Goodall Institute official website
Jane Goodall at Discover Magazine
Jane Goodall interviewed by Alyssa McDonald in July 2010 for the New Statesman
Jane Goodall – Overpopulation in the Developing World at Fora TV
Lecture transcript and video of Goodall's speech at the Joan B. Kroc Institute for Peace & Justice at the University of San Diego, April 2008
Jane Goodall extended film interview with transcripts for the 'Why Are We Here?' documentary series.
A Conversation with Jane Goodall (audio interview)
"On Being" radio interview with Krista Tippett, broadcast August 2020
1934 births
20th-century British anthropologists
20th-century British biologists
20th-century English women scientists
20th-century English scientists
21st-century British anthropologists
21st-century British biologists
21st-century English women scientists
21st-century English scientists
Living people
Alumni of Newnham College, Cambridge
Alumni of Darwin College, Cambridge
Animal cognition writers
Fellows of Darwin College, Cambridge
Articles containing video clips
Baronesses of the Netherlands
Benjamin Franklin Medal (Franklin Institute) laureates
British people of Welsh descent
British veganism activists
British women anthropologists
Conservation biologists
Dames Commander of the Order of the British Empire
English anthropologists
English cookbook writers
English women biologists
Ethologists
Kyoto laureates in Basic Sciences
Members of the European Academy of Sciences and Arts
Members of the Society of Woman Geographers
Officers of the Legion of Honour
Recipients of the President's Medal (British Academy)
Templeton Prize laureates
People from Hampstead
Primatologists
Scientists from London
Sustainability advocates
United Nations Messengers of Peace
University of Southern California faculty
Vegan cookbook writers
Women ethologists
Women founders
Writers about Africa
Members of the American Philosophical Society
Smart growth is an urban planning and transportation theory that concentrates growth in compact walkable urban centers to avoid sprawl. It also advocates compact, transit-oriented, walkable, bicycle-friendly land use, including neighborhood schools, complete streets, and mixed-use development with a range of housing choices. The term "smart growth" is particularly used in North America. In Europe and particularly the UK, the terms "compact city", "urban densification" or "urban intensification" have often been used to describe similar concepts, which have influenced government planning policies in the UK, the Netherlands and several other European countries.
Smart growth values long-range, regional considerations of sustainability over a short-term focus. Its sustainable development goals are to achieve a unique sense of community and place; expand the range of transportation, employment, and housing choices; equitably distribute the costs and benefits of development; preserve and enhance natural and cultural resources; and promote public health.
Basic concept
Smart growth is a theory of land development that accepts that growth and development will continue to occur, and so seeks to direct that growth in an intentional, comprehensive way. Its proponents include urban planners, architects, developers, community activists, and historic preservationists. The term "smart growth" is an attempt to reframe the conversation from "growth" versus "no growth" (or NIMBY) to good/smart growth versus bad/dumb growth. Proponents seek to distinguish smart growth from urban sprawl, which they claim causes most of the problems that fuel opposition to urban growth, such as traffic congestion and environmental degradation. Smart growth principles are directed at developing sustainable communities that provide a greater range of transportation and housing choices and prioritize infill and redevelopment in existing communities rather than development of "greenfield" farmland or natural lands. Some of the fundamental aims for the benefits of residents and the communities are increasing family income and wealth, providing safe walking routes to schools, fostering livable, safe and healthy places, stimulating economic activity (both locally and regionally), and developing, preserving and investing in built and natural resources.
Smart growth "principles" describe the elements of community that are envisioned, and smart growth "regulations" describe the various approaches to implementation, that is, how federal, state, and municipal governments choose to fulfill smart growth principles. Some of these regulatory approaches, such as urban growth boundaries, predate the use of the term "smart growth". One of the earliest efforts to establish smart growth as an explicit regulatory framework was put forth by the American Planning Association (APA). In 1997, the APA introduced a project called Growing Smart and published the "Growing Smart Legislative Guidebook: Model Statutes for Planning and the Management of Change." The U.S. Environmental Protection Agency (EPA) defines smart growth as "a range of development and conservation strategies that help protect our health and natural environment and make our communities more attractive, economically stronger, and more socially diverse." The smart growth agenda is comprehensive and ambitious, but its implementation is problematic, because controlling outward expansion means limiting the availability of single-family homes and reliance on the automobile, the mainstays of the traditional American lifestyle.
Smart growth is related to, or may be used in combination with, the following concepts:
New Urbanism
Growth management
New community design
Sustainable development
Resource stewardship
Land preservation
Preventing urban sprawl
Creating sense of place
Development Best Practices
Preservation development
Sustainable transport
Triple Bottom Line (TBL) accounting - people, planet, profit
The Three Pillars - human, natural, and created capital
The smart growth approach to development is multifaceted and can encompass a variety of techniques. For example, in the state of Massachusetts smart growth is enacted by a combination of techniques including increasing housing density along transit nodes, conserving farm land, and mixing residential and commercial use areas. Perhaps the most descriptive term to characterize this concept is Traditional Neighborhood Development, which recognizes that smart growth and related concepts are not necessarily new, but are a response to car culture and sprawl. Many favor the term New Urbanism, which invokes a new, but traditional way of looking at urban planning.
There are a range of best practices associated with smart growth. These include supporting existing communities, redeveloping underutilized sites, enhancing economic competitiveness, providing more transportation choices, developing livability measures and tools, promoting equitable and affordable housing, providing a vision for sustainable growth, enhancing integrated planning and investment, aligning, coordinating, and leveraging government policies, redefining housing affordability and making the development process transparent.
Related, but somewhat different, are the overarching goals of smart growth, and they include: making the community more competitive for new businesses, providing alternative places to shop, work, and play, creating a better "Sense of Place," providing jobs for residents, increasing property values, improving quality of life, expanding the tax base, preserving open space, controlling growth, and improving safety.
Basic principles
There are 10 accepted principles that define smart growth:
Mix land uses.
Take advantage of compact building design.
Create a range of housing opportunities and choices.
Create walkable neighborhoods.
Foster distinctive, attractive communities with a strong sense of place.
Preserve open space, farmland, natural beauty, and critical environmental areas.
Strengthen and direct development towards existing communities.
Provide a variety of transportation choices.
Make development decisions predictable, fair, and cost effective.
Encourage community and stakeholder collaboration in development decisions.
History
Transportation and community planners began to promote the idea of compact cities and communities and adopt many of the regulatory approaches associated with smart growth in the early 1970s. The cost and difficulty of acquiring land (particularly in historic and/or areas designated as conservancies) to build and widen highways caused some politicians to reconsider basing transportation planning on motor vehicles.
The Congress for the New Urbanism, with architect Peter Calthorpe, promoted and popularized the idea of urban villages that relied on public transportation, bicycling, and walking instead of automobile use. Architect Andrés Duany promoted changing design codes to promote a sense of community, and to discourage driving. Colin Buchanan and Stephen Plowden helped to lead the debate in the United Kingdom.
The Local Government Commission, which presents the annual New Partners for Smart Growth conference, adopted the original Ahwahnee Principles in 1991, which articulate many of the major principles now generally accepted as part of the smart growth movement, such as transit-oriented development, a focus on walking distance, greenbelts and wildlife corridors, and infill and redevelopment. The document was co-authored by several of the founders of the New Urbanist movement. The Local Government Commission has been co-sponsoring smart growth-related conferences since 1997; the New Partners for Smart Growth Conference started under that name circa 2002.
Smart Growth America, an organization devoted to promoting smart growth in the United States, was founded in 2002. This organization leads an evolving coalition of national and regional organizations, most of which predated its founding, such as 1000 Friends of Oregon, founded in 1975, and the Congress for the New Urbanism, founded in 1993. The EPA launched its smart growth program in 1995.
Rationale for smart growth
Smart growth is an alternative to urban sprawl, traffic congestion, disconnected neighborhoods, and urban decay. Its principles challenge old assumptions in urban planning, such as the value of detached houses and automobile use.
Environmental protection
Environmentalists promote smart growth by advocating urban growth boundaries, or green belts, as they have been termed in England since the 1930s.
Public health
Transit-oriented development can improve the quality of life and encourage a healthier, pedestrian-based lifestyle with less pollution. EPA suggests that smart growth can help reduce air pollution, improve water quality, and reduce greenhouse gas emissions.
Reaction to existing subsidies
Smart growth advocates claim that much of the urban sprawl of the 20th century was due to government subsidies for infrastructure that redistribute the true costs of sprawl. Examples include subsidies for highway building, fossil fuels, and electricity.
Electrical subsidies
With electricity, there is a cost associated with extending and maintaining the service delivery system, as with water and sewage, but there also is a loss in the commodity being delivered. The farther from the generator, the more power is lost in distribution. According to the Department of Energy's (DOE) Energy Information Administration (EIA), 9 percent of energy is lost in transmission.
Current average cost pricing, where customers pay the same price per unit of power regardless of the true cost of their service, subsidizes sprawl development. With electricity deregulation, some states now charge customers/developers fees for extending distribution to new locations rather than rolling such costs into utility rates.
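The cross-subsidy argument above can be illustrated with a small back-of-the-envelope calculation. The sketch below is purely hypothetical: the generation cost, loss fractions, and line costs are assumed numbers chosen for illustration, not figures from any utility or from this article.

```python
# Illustrative sketch (all numbers are assumptions): under uniform
# average-cost pricing, customers near the generator cross-subsidize
# distant customers whose service is more expensive to deliver.

def delivered_cost(generation_cost_kwh, loss_fraction, line_cost_kwh):
    """True cost per kWh billed, accounting for transmission losses
    and the cost of extending/maintaining the distribution line."""
    # To deliver 1 kWh, 1 / (1 - loss_fraction) kWh must be generated.
    return generation_cost_kwh / (1.0 - loss_fraction) + line_cost_kwh

near = delivered_cost(0.08, 0.04, 0.01)  # urban customer: 4% losses, short lines
far = delivered_cost(0.08, 0.12, 0.04)   # exurban customer: 12% losses, long lines

# Average-cost pricing charges both customers the same per-kWh rate.
uniform_price = (near + far) / 2

# The distant customer pays less than their true cost of service;
# the shortfall is an implicit subsidy to low-density development.
subsidy_to_far = far - uniform_price
```

Under these assumed numbers the exurban customer's true cost per kWh is roughly 40 percent higher than the urban customer's, yet both pay the same uniform rate, which is the distortion the tiered approaches described below are meant to correct.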
New Jersey, for example, has implemented a plan that divides the state into five planning areas, some of which are designated for growth, while others are protected. The state is developing a series of incentives to coax local governments into changing zoning laws that will be compatible with the state plan. The New Jersey Board of Public Utilities recently proposed a revised rule that presents a tiered approach to utility financing. In areas not designated for growth, utilities and their ratepayers are forbidden to cover the costs of extending utility lines to new developments—and developers will be required to pay the full cost of public utility infrastructure. In designated growth areas that have local smart plans endorsed by the State Planning Commission, developers will be refunded the cost of extending utility lines to new developments at two times the rate of the revenue received by developers in smart growth areas that do not have approved plans.
Elements
Growth is "smart growth" to the extent that it includes the elements listed below.
Compact neighborhoods
Compact, livable urban neighborhoods attract more people and business. Creating such neighborhoods is a critical element of reducing urban sprawl and protecting the climate. Such a tactic includes adopting redevelopment strategies and zoning policies that channel housing and job growth into urban centers and neighborhood business districts, to create compact, walkable, and bike- and transit-friendly hubs. This sometimes requires local governmental bodies to implement code changes that allow increased height and density downtown and regulations that not only eliminate minimum parking requirements for new development but establish a maximum number of allowed spaces. Other topics fall under this concept:
mixed-use development
inclusion of affordable housing
restrictions or limitations on suburban design forms (e.g., detached houses on individual lots, strip malls and surface parking lots)
inclusion of parks and recreation areas
In sustainable architecture, the recent movements of New Urbanism and New Classical Architecture promote a sustainable approach to construction that appreciates and develops smart growth, architectural tradition, and classical design. This is in contrast to modernist and globally uniform architecture, and both movements oppose solitary housing estates and suburban sprawl. Both trends started in the 1980s.
Transit-oriented development
Transit-oriented development (TOD) is a residential or commercial area designed to maximize access to public transport; its mixed-use, compact neighborhoods tend to use transit at all times of the day.
Many cities striving to implement better TOD strategies seek to secure funding to create new public transportation infrastructure and improve existing services. Other measures might include regional cooperation to increase efficiency and expand services, and moving buses and trains more frequently through high-use areas. Other topics fall under this concept:
Transportation demand management measures
road pricing system (tolling)
commercial parking taxes
Pedestrian- and bicycle-friendly design
Biking and walking instead of driving can reduce emissions, save money on fuel and maintenance, and foster a healthier population. Pedestrian- and bicycle-friendly improvements include bike lanes on main streets, an urban bike-trail system, bike parking, pedestrian crossings, and associated master plans. The most pedestrian- and bike-friendly variant of smart growth and New Urbanism is New Pedestrianism because motor vehicles are on a separate grid.
Others
preserving open space and critical habitat, reusing land, and protecting water supplies and air quality
transparent, predictable, fair and cost-effective rules for development
historic preservation
Setting aside large areas where development is prohibited, nature is able to run its course, providing fresh air and clean water.
Expansion around already existing areas allows public services to be located where people are living without taking away from the core city neighborhoods in large urban areas.
Developing around preexisting areas decreases the socioeconomic segregation allowing society to function more equitably, generating a tax base for housing, educational and employment programs.
Policy tools
Zoning ordinances
The most widely used tool for achieving smart growth is modification of local zoning laws. Zoning laws are applicable to most cities and counties in the United States. Smart growth advocates often seek to modify zoning ordinances to increase the density of development and redevelopment allowed in or near existing towns and neighborhoods and/or restrict new development in outlying or environmentally sensitive areas. Additional density incentives can be offered for development of brownfield and greyfield land or for providing amenities such as parks and open space. Zoning ordinances typically include minimum parking requirements. Reductions in or elimination of parking minimums, or the imposition of parking maximums, can also reduce the amount of parking built with new development, increasing land available for parks and other community amenities.
Urban growth boundaries
Related to zoning ordinances, an urban growth boundary (UGB) is a tool used in some U.S. cities to contain high density development to certain areas. The first urban growth boundary in the United States was established in 1958 in Kentucky. Subsequently, urban growth boundaries were established in Oregon in the 1970s and Florida in the 1980s. Some believe that UGBs contributed to the escalation of housing prices from 2000 to 2006, as they limited the supply of developable land. However, this is not completely substantiated because prices continued to rise even after municipalities expanded their growth boundaries.
Transfer of development rights
Transfer of development rights (TDR) systems are intended to allow property owners in areas deemed desirable for growth (such as infill and brownfield sites) to purchase the right to build at higher densities from owners of properties in areas deemed undesirable for growth such as environmental lands, farmlands or lands outside of an urban growth boundary. TDR programs have been implemented in over 200 U.S. communities.
Provision of social infrastructure
Systematic provision of infrastructure such as schools, libraries, sporting facilities and community facilities is an integral component of smart growth communities. This is commonly known as 'social infrastructure' or 'community infrastructure'. In Australia, for example, most new suburban developments are master planned, and key social infrastructure is planned at the outset.
Environmental impact assessments
One popular approach to assist in smart growth in democratic countries is for lawmakers to require prospective developers to prepare environmental impact assessments of their plans as a condition for state and/or local governments to give them permission to build their buildings. These reports often indicate how significant impacts generated by the development will be mitigated, the cost of which is usually paid by the developer. These assessments are frequently controversial. Conservationists, neighborhood advocacy groups and NIMBYs are often skeptical about such impact reports, even when they are prepared by independent agencies and subsequently approved by the decision makers rather than the promoters. Conversely, developers will sometimes strongly resist being required to implement the mitigation measures required by the local government as they may be quite costly.
In communities practicing these smart growth policies, developers comply with local codes and requirements. Consequently, developer compliance builds communal trust because it demonstrates a genuine interest in the environmental quality of the community.
Communities implementing smart growth
EPA presented awards for smart growth achievement between 2002 and 2015. The awardees comprised 64 projects in 28 states. Among the localities receiving awards were:
Arlington County, Virginia
Minneapolis and Saint Paul, Minnesota
Davidson, North Carolina
Denver, Colorado.
The smart growth network has recognized these U.S. communities for implementing smart growth principles:
The Kentlands; Gaithersburg, Maryland (for live-work units)
East Liberty; Pittsburgh, Pennsylvania (establishing downtown retail)
Moore Square Museums Magnet Middle School; Raleigh, North Carolina (for being located downtown)
Garfield Park; Chicago, Illinois (retaining transit options)
Murphy Park; St. Louis, Missouri (bringing the features of suburban living to the city)
New Jersey Pine Barrens, South Jersey (for transfer of development rights away from undeveloped land)
Chesterfield Township, New Jersey (for township-wide transfer of development rights away from forest and farmland and development of the several-hundred-acre New Urbanism community of Old York Village)
The European Union has recognized these cities and regions for implementing "smart specialization" which originated from smart growth principles:
Navarre, Spain (Improving education and developing projects for medical tourism and green vehicles)
Flanders, Belgium (Spending funds on transportation, healthcare services, and technological innovation)
Lower Austria (Cooperating with neighboring regions to develop new markets for local companies)
In May 2011, the European Union released a Regional Policy report on smart growth policy for 2020. The report stated that smart specialization was the strategy for focusing Europe's resources and applying smart growth principles.
In July 2011, The Atlantic magazine called the BeltLine, a series of housing, trail, and transit projects along a 22-mile (35-km) long disused rail corridor surrounding the core of Atlanta, the United States' "most ambitious smart growth project".
In Savannah, Georgia (US) the historic Oglethorpe Plan has been shown to contain most of the elements of smart growth in its network of wards, each of which has a central civic square. The plan has demonstrated its resilience to changing conditions, and the city is using the plan as a model for growth in newer areas.
In Melbourne, Australia, almost all new outer-suburban developments are master planned, guided by the principles of smart growth.
Smart growth, urban sprawl and automobile dependency
Whether smart growth (or the "compact city") does or can reduce problems of automobile dependency associated with urban sprawl has been fiercely contested over several decades. A 2007 meta-study by Keith Bartholomew of the University of Utah found that reductions in driving associated with compact development scenarios averaged 8 percent, ranging up to 31.7 percent, with the variation explained by the degree of land-use mixing and density. An influential 1989 study by Peter Newman and Jeff Kenworthy compared 32 cities across North America, Australia, Europe and Asia. The study has been criticised for its methodology, but its main finding, that denser cities, particularly in Asia, have lower car use than sprawling cities, particularly in North America, has been largely accepted, although the relationship is clearer at the extremes across continents than within countries, where conditions are more similar.
Within cities, studies from across many countries (mainly in the developed world) have shown that denser urban areas with a greater mixture of land use and better public transport tend to have lower car use than less dense suburban and ex-urban residential areas. This usually holds true even after controlling for socio-economic factors such as differences in household composition and income. This does not necessarily imply that suburban sprawl causes high car use, however. One confounding factor, which has been the subject of many studies, is residential self-selection: people who prefer to drive tend to move towards low-density suburbs, whereas people who prefer to walk, cycle or use transit tend to move towards higher-density urban areas better served by public transport. Some studies have found that, when self-selection is controlled for, the built environment has no significant effect on travel behaviour. More recent studies using more sophisticated methodologies have generally refuted these findings: density, land use and public transport accessibility can influence travel behaviour, although social and economic factors, particularly household income, usually exert a stronger influence.
Paradox of intensification
Reviewing the evidence on urban intensification, smart growth and their effects on travel behaviour Melia et al. (2011) found support for the arguments of both supporters and opponents of smart growth. Planning policies which increase population densities in urban areas do tend to reduce car use, but the effect is a weak one, so doubling the population density of a particular area will not halve the frequency or distance of car use.
For example, Portland, Oregon, a U.S. city which has pursued smart growth policies, substantially increased its population density between 1990 and 2000, when other U.S. cities of a similar size were decreasing in density. As predicted by the paradox, traffic volumes and congestion both increased more rapidly than in the other cities, despite a substantial increase in transit use.
These findings led them to propose the paradox of intensification, which states "Ceteris paribus, urban intensification which increases population density will reduce per capita car use, with benefits to the global environment, but will also increase concentrations of motor traffic, worsening the local environment in those locations where it occurs".
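The arithmetic behind the paradox can be sketched with a simple elasticity model. The elasticity value used below is hypothetical, chosen only to illustrate the "weak effect" described above; it is not a figure from Melia et al.

```python
# Sketch of the paradox of intensification, assuming a hypothetical
# elasticity of per-capita car use with respect to population density.

def per_capita_car_use_multiplier(density_ratio, elasticity=-0.3):
    """Multiplier on per-capita car use when density changes by density_ratio.
    A negative elasticity means car use falls as density rises."""
    return density_ratio ** elasticity

density_ratio = 2.0  # population density doubles
per_capita = per_capita_car_use_multiplier(density_ratio)  # falls, but nowhere near halving

# Total motor traffic concentrated in the area: twice the people,
# each driving somewhat less, still yields more local traffic.
local_traffic = density_ratio * per_capita
```

With an assumed elasticity of -0.3, doubling density cuts per-capita car use by only about a fifth, so total traffic in the densified area still rises by roughly 60 percent: the global benefit and the local cost of the paradox in one calculation.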
At the citywide level it may be possible, through a range of positive measures to counteract the increases in traffic and congestion which would otherwise result from increasing population densities: Freiburg im Breisgau in Germany is one example of a city which has been more successful in this respect.
This study also reviewed evidence on the local effects of building at higher densities. At the level of the neighbourhood or individual development positive measures (e.g. improvements to public transport) will usually be insufficient to counteract the traffic effect of increasing population density. This leaves policy-makers with four choices: intensify and accept the local consequences, sprawl and accept the wider consequences, a compromise with some element of both, or intensify accompanied by more radical measures such as parking restrictions, closing roads to traffic and carfree zones.
In contrast, the city of Cambridge, Massachusetts reported that its Kendall Square neighborhood saw a 40% increase in commercial space accompanied by a 14% decrease in traffic.
A report by CEOs for Cities, "Driven Apart," showed that while denser cities in the United States may have more congested commutes, those commutes are also shorter on average in both time and distance. This is in contrast to cities where commuters face less congestion but drive longer distances, resulting in commutes that take as long or longer.
Proponents
Edward L. Glaeser
Rollin Stanley
Criticism
Robert Bruegmann, professor of art history, architecture, and urban planning at the University of Illinois at Chicago and author of Sprawl: A Compact History, stated that historical attempts to combat urban sprawl have failed, and that the high population density of Los Angeles, currently the most dense urban area in the United States, "lies at the root of many of the woes experienced by L.A. today."
Wendell Cox is a vocal opponent of smart growth policies. He argued before the United States Senate Committee on Environment and Public Works that "smart growth strategies tend to intensify the very problems they are purported to solve." Cox and Joshua Utt analyzed smart growth and sprawl, and argued that:
Our analysis indicates that the Current Urban Planning Assumptions are of virtually no value in predicting local government expenditures per capita. The lowest local government expenditures per capita are not in the higher density, slower growing, and older municipalities.
On the contrary, the actual data indicate that the lowest expenditures per capita tend to be in medium- and lower-density municipalities (though not the lowest density); medium- and faster-growing municipalities; and newer municipalities. This is after 50 years of unprecedented urban decentralization, which seems to be more than enough time to have developed the purported urban sprawl-related higher local government expenditures. It seems unlikely that the higher expenditures that did not develop due to sprawl in the last 50 years will evolve in the next 20 - despite predictions to the contrary in The Costs of Sprawl 2000 research.
It seems much more likely that the differences in municipal expenditures per capita are the result of political, rather than economic factors, especially the influence of special interests.
The phrase "smart growth" implies that other growth and development theories are not "smart". There is debate about whether transit-proximate development constitutes smart growth when it is not transit-oriented. The National Motorists Association does not object to smart growth as a whole, but strongly objects to traffic calming, which is intended to reduce automobile accidents and fatalities, but may also reduce automobile usage and increase alternate forms of public transportation.
In 2002 the National Center for Public Policy Research, a self-described conservative think tank, published an economic study entitled "Smart Growth and Its Effects on Housing Markets: The New Segregation" which termed smart growth "restricted growth" and suggested that smart growth policies disfavor minorities and the poor by driving up housing prices.
Some libertarian groups, such as the Cato Institute, criticize smart growth on the grounds that it leads to greatly increased land values, and people with average incomes can no longer afford to buy detached houses.
A number of ecological economists claim that industrial civilization has already "overshot" the carrying capacity of the Earth, and "smart growth" is mostly an illusion. Instead, a steady state economy would be needed to bring human societies back into a necessary balance with the ability of the ecosystem to sustain humans (and other species).
A study released in November 2009 characterized the smart-growth policies in the U.S. state of Maryland as a failure, concluding that "[t]here is no evidence after ten years that [smart-growth laws] have had any effect on development patterns." Factors include a lack of incentives for builders to redevelop older neighborhoods and limits on the ability of state planners to force local jurisdictions to approve high-density developments in "smart-growth" areas. Buyers demand low-density development, and voters tend to oppose high-density developments near them.
Beginning in 2010, groups generally associated with the Tea Party movement began to identify smart growth as an outgrowth of the United Nations Agenda 21, which they viewed as an attempt by international interests to force a "sustainable" lifestyle on the United States. However, planning groups and even some smart growth opponents counter that smart growth concepts and groups predate the 1992 Agenda 21 conference. In addition, the term "sustainable development" as used in the Agenda 21 report is often misread to mean real estate development, when in the United Nations and foreign aid context it typically refers to the much broader concept of human development, which addresses a broader slate of economic, health, poverty, and education issues.
See also
Related topics
New Urbanism
Community Preservation Act
Garden city movement
Planned community
Principles of Intelligent Urbanism
Slow architecture
Sustainable city
Traditional Neighborhood Development (TND)
Urban renewal
Urban vitality
Agenda 21
Soft law
Sustainable development
Organizations
Futurewise.org
Smart Growth America
Greenbelt Alliance
HUD USER
Regulatory Barriers Clearinghouse
PolicyLink
References
Further reading
"Urban Alchemy" — about the need for efficient transit to serve smart growth
"Smart Growth: A Critical Review of the State of the Art" — by Aseem Inam, chapter in book, Companion to Urban Design, edited by Tridib Banerjee and Anastasia Loukaitou-Sideris (published by Routledge UK, 2011)
Effect of Smart Growth Policies on Travel Demand, Transportation Research Board, SHRP 2 Report S2-C16-RR-1, 2014.
External links
Smart Growth Planning
SmartCode 7.0 A model for New Urbanism Planning Codes in PDF Format
Smart Growth America organization
Coalition for Smarter Growth
Smart Growth Online
New Urbanism
Urban studies and planning terminology
Sustainable transport
Community building
Sustainable urban planning
Environmental movement

The environmental movement (sometimes referred to as the ecology movement) is a social movement that aims to protect the natural world from harmful environmental practices in order to create sustainable living. Environmentalists advocate the just and sustainable management of resources and stewardship of the environment through changes in public policy and individual behavior. In its recognition of humanity as a participant in (not an enemy of) ecosystems, the movement is centered on ecology, health, and human rights.
The environmental movement is an international movement, represented by a range of environmental organizations, from enterprises to grassroots groups, and varies from country to country. Due to its large membership, varying and strong beliefs, and occasionally speculative nature, the environmental movement is not always united in its goals. At its broadest, the movement includes private citizens, professionals, religious devotees, politicians, scientists, nonprofit organizations, and individual advocates such as former Wisconsin Senator Gaylord Nelson and Rachel Carson in the 20th century.
History
Early awareness
The origins of the environmental movement lay in response to increasing levels of smoke pollution in the atmosphere during the Industrial Revolution. The emergence of great factories and the concomitant immense growth in coal consumption gave rise to an unprecedented level of air pollution in industrial centers; after 1900 the large volume of industrial chemical discharges added to the growing load of untreated human waste. Under increasing political pressure from the urban middle-class, the first large-scale, modern environmental laws came in the form of Britain's Alkali Acts, passed in 1863, to regulate the deleterious air pollution (gaseous hydrochloric acid) given off by the Leblanc process, used to produce soda ash.
Early interest in the environment was a feature of the Romantic movement in the early 19th century. The poet William Wordsworth had travelled extensively in England's Lake District and wrote that it is a "sort of national property in which every man has a right and interest who has an eye to perceive and a heart to enjoy".
Conservation movement
The modern conservation movement was first manifested in the forests of India, with the practical application of scientific conservation principles. The conservation ethic that began to evolve included three core principles: human activity damaged the environment, there was a civic duty to maintain the environment for future generations, and scientific, empirically based methods should be applied to ensure this duty was carried out. James Ranald Martin was prominent in promoting this ideology, publishing many medico-topographical reports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments.
The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation programme based on scientific principles. This was the first case of state management of forests in the world. Eventually, the government under Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation programme in the world in 1855, a model that soon spread to other colonies, as well as the United States. In 1860, the Department banned the use of shifting cultivation. Hugh Cleghorn's 1861 manual, The forests and gardens of South India, became the definitive work on the subject and was widely used by forest assistants in the subcontinent.
Dietrich Brandis joined the British service in 1856 as superintendent of the teak forests of Pegu division in eastern Burma. During that time Burma's teak forests were controlled by militant Karen tribals. He introduced the "taungya" system, in which Karen villagers provided labour for clearing, planting, and weeding teak plantations. He also formulated new forest legislation and helped establish research and training institutions, and founded the Imperial Forestry School at Dehradun.
Formation of environmental protection societies
The late 19th century saw the formation of the first wildlife conservation societies.
The zoologist Alfred Newton published a series of investigations into the Desirability of establishing a 'Close-time' for the preservation of indigenous animals between 1872 and 1903. His advocacy for legislation to protect animals from hunting during the mating season influenced the passage of the Sea Birds Preservation Act in 1869, the first nature protection law in the world, and led to the formation of the Plumage League (later the Royal Society for the Protection of Birds) in 1889. The society acted as a protest group campaigning against the use of great crested grebe and kittiwake skins and feathers in fur clothing, and attracted growing support from the suburban middle classes.
For most of the century from 1850 to 1950, however, the primary environmental cause was the mitigation of air pollution. The Coal Smoke Abatement Society was formed in 1898, making it one of the oldest environmental NGOs. It was founded by the artist Sir William Blake Richmond, frustrated with the pall cast by coal smoke. Although there were earlier pieces of legislation, the Public Health Act 1875 required all furnaces and fireplaces to consume their own smoke.
Systematic and general efforts on behalf of the environment only began in the late 19th century; it grew out of the amenity movement in Britain in the 1870s, which was a reaction to industrialization, the growth of cities, and worsening air and water pollution. Starting with the formation of the Commons Preservation Society in 1865, the movement championed rural preservation against the encroachments of industrialisation. Robert Hunter, solicitor for the society, worked with Hardwicke Rawnsley, Octavia Hill, and John Ruskin to lead a successful campaign to prevent the construction of railways to carry slate from the quarries, which would have ruined the unspoilt valleys of Newlands and Ennerdale. This success led to the formation of the Lake District Defence Society (later to become The Friends of the Lake District).
In 1893 Hill, Hunter and Rawnsley agreed to set up a national body to coordinate environmental conservation efforts across the country; the "National Trust for Places of Historic Interest or Natural Beauty" was formally inaugurated in 1894. The organisation obtained secure footing through the National Trust Bill, which gave the trust the status of a statutory corporation; the bill was passed in August 1907.
An early "Back-to-Nature" movement, which anticipated the romantic ideal of modern environmentalism, was advocated by intellectuals such as John Ruskin, William Morris, and Edward Carpenter, who were all against consumerism, pollution and other activities that were harmful to the natural world. The movement was a reaction to the urban conditions of the industrial towns, where sanitation was awful, pollution levels intolerable and housing terribly cramped. Idealists championed the rural life as a mythical Utopia and advocated a return to it. John Ruskin argued that people should return to a "small piece of English ground, beautiful, peaceful, and fruitful. We will have no steam engines upon it ... we will have plenty of flowers and vegetables ... we will have some music and poetry; the children will learn to dance to it and sing it."
Practical ventures in the establishment of small cooperative farms were even attempted and old rural traditions, without the "taint of manufacture or the canker of artificiality", were enthusiastically revived, including the Morris dance and the maypole.
The movement in the United States began in the late 19th century, out of concerns for protecting the natural resources of the West, with individuals such as John Muir and Henry David Thoreau making key philosophical contributions. Thoreau was interested in people's relationship with nature and studied this by living close to nature in a simple life. He published his experiences in the book Walden, which argues that people should become intimately close with nature. Muir came to believe in nature's inherent right, especially after spending time hiking in Yosemite Valley and studying both the ecology and geology. He successfully lobbied Congress to form Yosemite National Park and went on to set up the Sierra Club in 1892. The conservationist principles as well as the belief in an inherent right of nature were to become the bedrock of modern environmentalism. However, the early movement in the U.S. developed with a contradiction: preservationists like John Muir wanted land and nature set aside for its own sake, while conservationists, such as Gifford Pinchot (appointed as the first Chief of the US Forest Service, from 1905 to 1910), wanted to manage natural resources for human use.
20th century
In the 20th century, environmental ideas continued to grow in popularity and recognition. Efforts were beginning to be made to save wildlife, particularly the American bison. The death of the last passenger pigeon as well as the endangerment of the American bison helped to focus the minds of conservationists and popularize their concerns. In 1916, the National Park Service was founded by U.S. President Woodrow Wilson. Pioneers of the movement called for more efficient and professional management of natural resources. They fought for reform because they believed the destruction of forests, fertile soil, minerals, wildlife, and water resources would lead to the downfall of society. The group that has been the most active in recent years is the climate movement.
The U.S. movement began to take off after World War II, as people began to recognize the costs of environmental negligence, disease, and the expansion of air and water pollution through several environmental disasters that occurred post-World War II. Aldo Leopold published A Sand County Almanac in 1949. He believed in a land ethic that recognized maintaining the "beauty, integrity, and health of natural systems" as a moral and ethical imperative.
Another major literary force in the promotion of the environmental movement was Rachel Carson's 1962 book Silent Spring, about declining bird populations due to DDT, an insecticide and pollutant, and about man's attempts to control nature through the use of synthetic substances. Her core message was that readers should recognize the complexity and fragility of ecosystems and the threats facing them. In 1958, Carson had started work on her last book, with the idea that nature needs human protection. She was influenced by concerns over radioactive fallout, smog, food additives, and pesticide use. Carson's main focus was on pesticides, which led her to identify nature as fragile, and she believed the use of technology was dangerous to humans and other species.
Both of these books helped bring the issues into the public eye. Rachel Carson's Silent Spring sold over two million copies and is linked to a nationwide ban on DDT and the creation of the EPA.
Beginning in 1969 and continuing into the 1970s, Illinois-based environmental activist James F. Phillips engaged in numerous covert anti-pollution campaigns using the pseudonym "the Fox." His activities included plugging illegal sewage outfall pipes and dumping toxic wastewater produced by a US Steel factory inside the company's Chicago corporate office. Phillips' "ecotage" campaigns attracted considerable media attention and subsequently inspired other direct action protests against environmental destruction.
The first Earth Day was celebrated on April 22, 1970. Its founder, former Wisconsin Senator Gaylord Nelson, was inspired to create this day of environmental education and awareness after seeing the oil spill off the coast of Santa Barbara in 1969. Greenpeace was created in 1971 as an organization that believed that political advocacy and legislation were ineffective or inefficient solutions and supported non-violent action. 1980 saw the creation of Earth First!, a group with an ecocentric view of the world – believing in equality between the rights of humans to flourish, the rights of all other species to flourish and the rights of life-sustaining systems to flourish.
In the 1950s, 1960s, and 1970s, several events illustrated the magnitude of environmental damage caused by humans. In 1954, a hydrogen bomb test at Bikini Atoll exposed the 23-man crew of the Japanese fishing vessel Lucky Dragon 5 to radioactive fallout. The test is known as Castle Bravo, the largest thermonuclear device ever detonated by the United States and the first in a series of high-yield thermonuclear weapon design tests. In 1967 the oil tanker Torrey Canyon ran aground off the coast of Cornwall, and in 1969 oil spilled from an offshore well in California's Santa Barbara Channel. In 1971, the conclusion of a lawsuit in Japan drew international attention to the effects of decades of mercury poisoning on the people of Minamata.
At the same time, emerging scientific research drew new attention to existing and hypothetical threats to the environment and humanity. Among those raising these concerns was Paul R. Ehrlich, whose book The Population Bomb (1968) revived Malthusian concerns about the impact of exponential population growth. Biologist Barry Commoner generated a debate about growth, affluence and "flawed technology." Additionally, an association of scientists and political leaders known as the Club of Rome published its report The Limits to Growth in 1972, drawing attention to the growing pressure on natural resources from human activities.
Meanwhile, technological accomplishments such as nuclear proliferation and photos of the Earth from outer space provided both new insights and new reasons for concern over Earth's seemingly small and unique place in the universe.
In 1972, the United Nations Conference on the Human Environment was held in Stockholm, and for the first time united the representatives of multiple governments in discussion relating to the state of the global environment. This conference led directly to the creation of government environmental agencies and the UN Environment Program.
By the mid-1970s anti-nuclear activism had moved beyond local protests and politics to gain a wider appeal and influence. Although it lacked a single co-ordinating organization the anti-nuclear movement's efforts gained a great deal of attention, especially in the United Kingdom and United States. In the aftermath of the Three Mile Island accident in 1979, many mass demonstrations took place. The largest one was held in New York City in September 1979 and involved 200,000 people.
Since the 1970s, public awareness, environmental sciences, ecology, and technology have advanced to include modern focus points like ozone depletion, global climate change, acid rain, mutation breeding, genetically modified crops and genetically modified livestock. With mutation breeding, crop cultivars were created by exposing seeds to chemicals or radiation. Many of these cultivars are still being used today. Genetically modified plants and animals are said by some environmentalists to be inherently bad because they are unnatural. Others point out the possible benefits of GM crops, such as water conservation through corn modified to be less "thirsty" and decreased pesticide use through insect-resistant crops. They also point out that some genetically modified livestock have accelerated growth, which means shorter production cycles and, again, a more efficient use of feed.
Besides genetically modified crops and livestock, synthetic biology is also on the rise, and environmentalists argue that it too carries risks if such organisms were ever to end up in nature, since unlike conventional genetic modification, synthetic biology can even use base pairs that do not exist in nature.
In the early 1990s, multiple environmental activists in the United States became targets of violent attacks.
21st century
In 2022, Global Witness reported that, in the preceding decade, more than 1,700 land and environmental defenders were killed, about one every two days. Brazil, Colombia, Philippines, and Mexico were the deadliest countries. Violence and intimidation against environmental activists have also been reported in Central and Eastern Europe. In Romania, anti-logging activists have been killed, while in Belarus, the government arrested several environmental activists and dissolved their organizations. Belarus has also withdrawn from the Aarhus Convention.
In 2023, the Environmental Protection Agency announced on January 10 that the first $100 million in federal environmental justice funding would open up to community organizations, local governments and other qualified applicants in the coming weeks.
United States
Growing out of the conservation movement at the beginning of the 20th century, the contemporary environmental movement's roots can be traced back to Rachel Carson's 1962 book Silent Spring, Murray Bookchin's 1962 book Our Synthetic Environment, and Paul R. Ehrlich's 1968 The Population Bomb. American environmentalists have campaigned against nuclear weapons and nuclear power in the 1960s and 1970s, acid rain in the 1980s, ozone depletion and deforestation in the 1990s, and most recently climate change and global warming.
The United States passed many pieces of environmental legislation in the 1970s, such as the Clean Water Act, the Clean Air Act, the Endangered Species Act, and the National Environmental Policy Act. These remain the foundations for current environmental standards.
In the 1990s, the anti-environmental 'Wise Use' movement emerged in the United States.
Timeline of US environmental history
1832 – Hot Springs Reservation
1864 – Yosemite Valley
1872 – Yellowstone National Park
1892 – Sierra Club
1916 – National Park Service Organic Act
1916 – National Audubon Society
1949 – UN Scientific Conference on the Conservation and Utilization of Resources
1961 – World Wildlife Fund
1964 – Land and Water Conservation Act
1964 – National Wilderness Preservation System
1968 – National Trails System Act
1968 – National Wild and Scenic Rivers System/Wild and Scenic Rivers Act
1969 – National Environmental Policy Act
1970 – First Earth Day- 22 April
1970 – Clean Air Act
1970 – Environmental Protection Agency
1971 – Greenpeace
1972 – Clean Water Act
1973 – Endangered Species Act
1980 – Earth First!
1992 – UN Earth Summit in Rio de Janeiro
1997 – Kyoto Protocol commits state parties to reduce greenhouse gas emissions
2017 – First National CleanUp Day
2022 – Inflation Reduction Act
Latin America
After the International Environmental Conference in Stockholm in 1972, Latin American officials returned with a high hope of growth and protection of the fairly untouched natural resources. Governments spent millions of dollars, and created departments and pollution standards. However, the outcomes have not always been what officials had initially hoped. Activists blame this on growing urban populations and industrial growth. Many Latin American countries have had a large inflow of immigrants that are living in substandard housing. Enforcement of the pollution standards is lax and penalties are minimal; in Venezuela, the largest penalty for violating an environmental law is a 50,000 bolivar fine ($3,400) and three days in jail. In the 1970s and 1980s, many Latin American countries were transitioning from military dictatorships to democratic governments.
Brazil
In 1992, Brazil came under scrutiny with the United Nations Conference on Environment and Development in Rio de Janeiro. Brazil has a history of little environmental awareness. It has the highest biodiversity in the world and also the highest amount of habitat destruction. One-third of the world's forests lie in Brazil. It is home to the largest river, the Amazon, and the largest rainforest, the Amazon Rainforest. People have raised funds to create state parks and increase the consciousness of people who have destroyed forests and polluted waterways. From 1973 to the 1990s, and then in the 2000s, indigenous communities and rubber tappers also carried out blockades that protected much rainforest. Brazil is home to several organizations that have fronted the environmental movement. The Blue Wave Foundation was created in 1989 and has partnered with advertising companies to promote national education campaigns to keep Brazil's beaches clean. Funatura was created in 1986 and is a wildlife sanctuary program. Pro-Natura International is a private environmental organization created in 1986.
From the late 2000s onwards community resistance saw the formerly pro-mining southeastern state of Minas Gerais cancel a number of projects that threatened to destroy forests. In northern Brazil’s Pará state the Movimento dos Trabalhadores Rurais Sem Terra (Landless Workers Movement) and others campaigned and took part in occupations and blockades against the environmentally harmful Carajás iron ore mine.
Europe
In 1952 the Great Smog of London killed thousands of people and led the UK to create the first Clean Air Act in 1956. In 1957 the first major nuclear accident occurred at Windscale in northern England. The supertanker Torrey Canyon ran aground off the coast of Cornwall in 1967, causing the first major oil spill, which killed marine life along the coast. In 1972, in Stockholm, the United Nations Conference on the Human Environment created the UN Environment Programme. The EU's environmental policy was formally founded by a European Council declaration, and the first five-year environment programme was adopted. The main idea of the declaration was that prevention is better than cure and that the polluter should pay.
In the 1980s the green parties that had been created a decade before began to have some political success. In 1986, there was a nuclear accident in Chernobyl, Ukraine, and a large-scale environmental campaign was staged in Ukraine that year. The end of the 1980s and start of the 1990s saw the fall of communism across Central and Eastern Europe, the fall of the Berlin Wall, and the reunification of East and West Germany. In 1992 there was a UN summit held in Rio de Janeiro where Agenda 21 was adopted. The Kyoto Protocol was created in 1997, setting specific targets and deadlines to reduce global greenhouse gas emissions. The Kyoto Protocol has 192 signatories, including the European Union, Cook Islands, Niue, and all UN member states except Andorra, Canada, South Sudan, and the United States. In the 1990s blockades were held in Germany, the UK, France and the Netherlands to protect forests and other areas from clearing for road construction. In the early 2000s, activists believed that environmental policy concerns were overshadowed by energy security, globalism, and terrorism. Since that time major movements have arisen concerning issues such as climate change and fracking.
Asia
Middle East
The environmental movement is reaching the less developed world with different degrees of success. The Arab world, including the Middle East and North Africa, has different adaptations of the environmental movement. Countries on the Persian Gulf have high incomes and rely heavily on the large amount of energy resources in the area. Each country in the Arab world has varying combinations of low or high amounts of natural resources and low or high amounts of labor.
The League of Arab States has one specialized sub-committee, of 12 standing specialized subcommittees in the Foreign Affairs Ministerial Committees, which deals with environmental issues. Countries in the League of Arab States have demonstrated an interest in environmental issues on paper, but some environmental activists have doubts about the level of commitment; being a part of the world community may have obliged these countries to portray concern for the environment. The initial level of environmental awareness may be the creation of a ministry of the environment, and the year of a ministry's establishment is also indicative of the level of engagement. Saudi Arabia was the first to establish environmental law, in 1992, followed by Egypt in 1994. Somalia is the only country without environmental law. In 2010 the Environmental Performance Index listed Algeria as the top Arab country, at 42 of 163; Morocco was at 52 and Syria at 56. The Environmental Performance Index measures the ability of a country to actively manage and protect its environment and the health of its citizens. A weighted index is created by giving 50% weight to the environmental health objective (health) and 50% to ecosystem vitality (ecosystem); values range from 0–100. No Arab countries were in the top quartile, and 7 countries were in the lowest quartile.
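The 50/50 weighting described above amounts to a simple average of the two objective scores. A minimal sketch of that combination (an illustration only, not the official EPI methodology, which aggregates many underlying indicators into each objective before weighting):

```python
def epi_score(health: float, ecosystem: float) -> float:
    """Combine the two EPI objectives, each scored 0-100, into an
    overall 0-100 score using the 50/50 weighting described above."""
    if not (0 <= health <= 100 and 0 <= ecosystem <= 100):
        raise ValueError("objective scores must lie in the range 0-100")
    return 0.5 * health + 0.5 * ecosystem

# A country scoring 60 on environmental health and 40 on
# ecosystem vitality receives an overall score of 50.
print(epi_score(60, 40))  # -> 50.0
```

Because the two weights are equal, a country's overall score moves half a point for every one-point change in either objective.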
South Korea and Taiwan
South Korea and Taiwan experienced similar growth in industrialization from 1965 to 1990 with few environmental controls. South Korea's Han River and Nakdong River were so polluted by unchecked dumping of industrial waste that they were close to being classified as biologically dead. Taiwan's formula for balanced growth was to prevent industrial concentration and encourage manufacturers to set up in the countryside. This led to 20% of the farmland being polluted by industrial waste and 30% of the rice grown on the island was contaminated with heavy metals. Both countries had spontaneous environmental movements drawing participants from different classes. Their demands were linked with issues of employment, occupational health, and agricultural crisis. They were also quite militant; the people learned that protesting can bring results. The polluting factories were forced to make immediate improvements to the conditions or pay compensation to victims. Some were even forced to shut down or move locations. The people were able to force the government to come out with new restrictive rules on toxins, industrial waste, and air pollution. All of these new regulations caused the migration of those polluting industries from Taiwan and South Korea to China and other countries in Southeast Asia with more relaxed environmental laws.
China
China's environmental movement is characterized by the rise of environmental NGOs, policy advocacy, spontaneous alliances, and protests that often only occur at the local level. Environmental protests in China are increasingly expanding their scope of concerns, calling for broader participation "in the name of the public."
The Chinese have realized the ability of riots and protests to bring success, which has contributed to a rise in disputes in China of 30% since 2005, to more than 50,000 events. Protests cover topics such as environmental issues, land loss, income, and political issues. They have also grown in size, from about 10 people or fewer in the mid-1990s to 52 people per incident in 2004. China has more relaxed environmental laws than other countries in Asia, so many polluting factories have relocated to China, causing pollution there.
Water pollution, water scarcity, soil pollution, soil degradation, and desertification are issues currently in discussion in China. The groundwater table of the North China Plain is dropping by 1.5 m (5 ft) per year. This groundwater table lies in the region of China that produces 40% of the country's grain. The Center for Legal Assistance to Pollution Victims works to confront legal issues associated with environmental justice by hearing court cases that expose the narratives of victims of environmental pollution. As China continues domestic economic reforms and integration into global markets, new linkages emerge between China's domestic environmental degradation and the global ecological crisis.
Comparing the experience of China, South Korea, Japan and Taiwan reveals that the impact of environmental activism is heavily modified by domestic political context, particularly the level of integration of mass-based protests and policy advocacy NGOs. Hinted by the history of neighboring Japan and South Korea, the possible convergence of NGOs and anti-pollution protests will have significant implications for Chinese environmental politics in the coming years.
India
Environmental and public health is an ongoing struggle within India. The first seed of an environmental movement in India was the foundation in 1964 of Dasholi Gram Swarajya Sangh, a labour cooperative started by Chandi Prasad Bhatt. It was inaugurated by Sucheta Kriplani and founded on land donated by Shyma Devi. This initiative was eventually followed up with the Chipko movement starting in 1974.
The most severe single event underpinning the movement was the Bhopal gas leakage on 3 December 1984. 40 tons of methyl isocyanate was released, immediately killing 2,259 people and ultimately affecting 700,000 citizens.
India has a national campaign against Coca-Cola and Pepsi Cola plants due to their practices of drawing groundwater and contaminating fields with sludge. The movement is characterized by local struggles against intensive aquaculture farms. The most influential part of the environmental movement in India is the anti-dam movement. Dam creation has been thought of as a way for India to catch up with the West by connecting to the power grid with giant dams, coal or oil-powered plants, or nuclear plants. Jhola Aandolan, a mass movement, campaigns against the use of polyethylene carry bags and promotes cloth, jute, and paper carry bags to protect the environment and nature. Activists in the Indian environmental movement consider global warming, rising sea levels, and retreating glaciers (which decrease the amount of water flowing into streams) to be the biggest challenges they face in the early twenty-first century.
The Eco Revolution movement was started by the Eco Needs Foundation in 2008 in Aurangabad, Maharashtra; it seeks the participation of children, youth, researchers, and spiritual and political leaders to organise awareness programmes and conferences. Child activists against air pollution and greenhouse gas emissions in India include Licypriya Kangujam. From the mid to late 2010s a coalition of urban and Indigenous communities came together to protect Aarey, a forest located in the suburbs of Mumbai. Farming and indigenous communities have also opposed pollution and clearing caused by mining in states such as Goa, Odisha, and Chhattisgarh.
Bangladesh
Mithun Roy Chowdhury, President, Save Nature & Wildlife (SNW), Bangladesh, insisted that the people of Bangladesh raise their voice against Tipaimukh Dam, being constructed by the Government of India. He said the Tipaimukh Dam project will be another "death trap for Bangladesh like the Farakka Barrage," which would lead to an environmental disaster for 50 million people in the Meghna River basin. He said that this project will start desertification in Bangladesh.
Bangladesh was ranked the most polluted country in the world due to defective automobiles, particularly diesel-powered vehicles, and hazardous gases from industry. The air is a hazard to Bangladesh's human health, ecology, and economic progress.
Africa
South Africa
In 2022, a court in South Africa confirmed the constitutional right of the country's citizens to an environment that is not harmful to their health, which includes the right to clean air. The case is referred to as the "Deadly Air" case. The area concerned includes one of South Africa's largest cities, Ekurhuleni, and a large portion of the Mpumalanga province.
Oceania
Australia
New Zealand
Scope of the movement
Environmental science is the study of the interactions among the physical, chemical, and biological components of the environment.
Ecology, or ecological science, is the scientific study of the distribution and abundance of living organisms and how these properties are affected by interactions between the organisms and their environment.
Primary focus points
The environmental movement is broad in scope and can include any topic related to the environment, conservation, and biology, as well as the preservation of landscapes, flora, and fauna for a variety of purposes and uses. See List of environmental issues. When an act of violence is committed against someone or some institution in the name of environmental defense it is referred to as eco-terrorism.
The conservation movement seeks to protect natural areas for sustainable consumption, as well as traditional (hunting, fishing, trapping) and spiritual use.
Environmental conservation is the process in which one is involved in conserving the natural aspects of the environment. Whether through reforestation, recycling, or pollution control, environmental conservation sustains the natural quality of life.
The environmental health movement dates at least to the Progressive Era, and focuses on urban standards like clean water, efficient sewage handling, and stable population growth. Environmental health can also deal with nutrition, preventive medicine, aging, and other concerns specific to human well-being. Environmental health is also seen as an indicator of the state of the environment, or an early warning system for what may happen to humans.
Environmental justice is a movement that began in the U.S. in the 1980s and seeks an end to environmental racism and to protect low-income and minority communities from disproportionate exposure to highways, garbage dumps, and factories. The Environmental Justice movement seeks to link "social" and "ecological" environmental concerns, while at the same time preventing de facto racism and classism. This makes it particularly well suited to the construction of labor-environmental alliances.
Ecology movement could involve the Gaia Theory, as well as Value of Earth and other interactions between humans, science, and responsibility.
Bright green environmentalism is a currently popular sub-movement, which emphasizes the idea that through technology, good design and more thoughtful use of energy and resources, people can live responsible, sustainable lives while enjoying prosperity.
Light green and dark green environmentalism are yet other sub-movements, respectively distinguished by seeing environmentalism as a lifestyle choice (light greens) and by promoting a reduction in human numbers and/or a relinquishment of technology (dark greens).
Deep Ecology is an ideological spinoff of the ecology movement that views the diversity and integrity of the planetary ecosystem, in and for itself, as its primary value.
The anti-nuclear movement opposes the use of various nuclear technologies. The initial anti-nuclear objective was nuclear disarmament and later the focus began to shift to other issues, mainly opposition to the use of nuclear power. There have been many large anti-nuclear demonstrations and protests. Major anti-nuclear groups include Campaign for Nuclear Disarmament, Friends of the Earth, Greenpeace, International Physicians for the Prevention of Nuclear War, and the Nuclear Information and Resource Service.
The pro-nuclear movement consists of people, including former opponents of nuclear energy, who calculate that the threat to humanity from climate change is far worse than any risk associated with nuclear energy.
Environmental law and theory
Property rights
Many environmental lawsuits question the legal rights of property owners, and whether the general public has a right to intervene with detrimental practices occurring on someone else's land. Environmental law organizations exist all across the world, such as the Environmental Law and Policy Center in the midwestern United States.
Citizens' rights
One of the earliest lawsuits to establish that citizens may sue for environmental and aesthetic harms was Scenic Hudson Preservation Conference v. Federal Power Commission, decided in 1965 by the Second Circuit Court of Appeals. The case helped halt the construction of a power plant on Storm King Mountain in New York State. See also United States environmental law and David Sive, an attorney who was involved in the case.
Nature's rights
Christopher D. Stone's 1972 essay, "Should trees have standing?" addressed the question of whether natural objects themselves should have legal rights. In the essay, Stone suggests that his argument is valid because many current rightsholders (women, children) were once seen as objects.
Environmental reactivism
Numerous criticisms and ethical ambiguities have led to growing concerns about technology, including the use of potentially harmful pesticides, water additives like fluoride, and the extremely dangerous ethanol-processing plants.
When residents living near proposed developments organize opposition, they are sometimes called "NIMBYs", short for "not in my back yard".
Just Stop Oil, an environmentalist activist group, along with other activists, is drawing attention to the issue of climate change and how it is affecting human ways of life.
King Charles has used public events to engage with business and community leaders about environmental issues.
Environmentalism today
Today, the sciences of ecology and environmental science, in addition to any aesthetic goals, provide the basis of unity to some of the serious environmentalists. As more information is gathered in scientific fields, more scientific issues like biodiversity, as opposed to mere aesthetics, are a concern to environmentalists. Conservation biology is a rapidly developing field.
In recent years, the environmental movement has increasingly focused on global warming as one of the top issues. As concerns about climate change moved more into the mainstream, from the connections drawn between global warming and Hurricane Katrina to Al Gore's 2006 documentary film An Inconvenient Truth, more and more environmental groups refocused their efforts. In the United States, 2007 witnessed the largest grassroots environmental demonstration in years, Step It Up 2007, with rallies in over 1,400 communities and all 50 states for real global warming solutions.
Publicity and widespread organising of school strike for the climate began after Swedish schoolgirl Greta Thunberg staged a protest in August 2018 outside the Swedish Riksdag (parliament). The September 2019 climate strikes were likely the largest climate strikes in world history.
In 2019, a survey found that climate breakdown is viewed as the most important issue facing the world in seven out of the eight countries surveyed.
Many religious organizations and individual churches now have programs and activities dedicated to environmental issues. The religious movement is often supported by interpretation of scriptures. Most major religious groups are represented including Jewish, Islamic, Anglican, Orthodox, Evangelical, Zoroastrian, Christian and Catholic.
Radical environmentalism
Radical environmentalism emerged from an ecocentrism-based frustration with the co-option of mainstream environmentalism. The radical environmental movement aspires to what scholar Christopher Manes calls "a new kind of environmental activism: iconoclastic, uncompromising, discontented with traditional conservation policy, at times illegal ..." Radical environmentalism presupposes a need to reconsider Western ideas of religion and philosophy (including capitalism, patriarchy and globalization) sometimes through "resacralising" and reconnecting with nature.
Greenpeace represents an organization with a radical approach, but has contributed in serious ways towards understanding of critical issues, and has a science-oriented core with radicalism as a means to media exposure. Groups like Earth First! take a much more radical posture. Some radical environmentalist groups, like Earth First! and the Earth Liberation Front, illegally sabotage or destroy infrastructural capital.
Criticisms
Conservative critics of the movement characterize it as radical and misguided. They especially criticize the United States Endangered Species Act, which has come under scrutiny lately, and the Clean Air Act, which they say conflict with private property rights, corporate profits, and the nation's overall economic growth. Critics also challenge the scientific evidence for global warming. They argue that the environmental movement has diverted attention from more pressing issues. Western environmental activists have also been criticized for performative activism, eco-colonialism, and enacting white savior tropes, especially celebrities who promote conservation in developing countries.
Deforestation, air pollution, and endangered species have all been appearing as controversial issues in Western literature for hundreds, and in some cases, thousands of years.
See also
Anti-consumerism
Chemical Leasing
Carbon Neutrality
Car-free movement
Earth Science
Earth Strike
Ecofascism
Ecological economics
Ecological modernization
Ecopsychology
Ecosia
Eco-socialism
Environmental justice
Environmental philosophy
Environmental organizations
Family planning
Free-market environmentalism
Green anarchism
Green movement
Green seniors
Green syndicalism
Holistic management
National Cleanup Day
Natural environment
Political ecology
Positive environmentalism
Sexecology
Social ecology
Substitute good
Sustainability
Sustainability and systemic change resistance
Technogaianism
Timeline of environmental events
Voluntary Human Extinction Movement
List of women climate scientists and activists
References
Further reading
Brinkley, Douglas. Silent Spring Revolution: John F. Kennedy, Rachel Carson, Lyndon Johnson, Richard Nixon, and the Great Environmental Awakening (2022)
Gottlieb, Robert. Forcing the Spring: The Transformation of the American Environmental Movement (Island Press, 1993). ISBN 978-1559638326
Guha, Ramachandra. Environmentalism: A Global History (Longman, 1999)
Hawken, Paul. Blessed Unrest (Penguin, 2007)
Kamieniecki, Sheldon, ed. Environmental Politics in the International Arena: Movements, Parties, Organizations, and Policy (SUNY Press, 1993)
Kennedy, Emily Huddart. Eco-Types: Five Ways of Caring about the Environment (Princeton UP, 2013). Finds five responses: Eco-Engaged (highly engaged moralistic liberals); Self-Effacing (concerned, but doubt they can do much); Optimists (conservatives comfortable with today's environment); Fatalists (pessimists); and the Indifferent (who just don't care).
Kline, Benjamin. First Along the River: A Brief History of the U.S. Environmental Movement (4th ed., 2011)
Martin, Laura. Wild by Design: The Rise of Ecological Restoration (Harvard UP, 2022). ISBN 9780674979420
McCormick, John. The Global Environmental Movement (John Wiley, 1995)
Rosier, Paul C. Environmental Justice in North America (Routledge, 2024)
Shabecoff, Philip. A Fierce Green Fire: The American Environmental Movement (Island Press, 2003)
Taylor, Dorceta. The Rise of the American Conservation Movement (Duke UP, 2016). ISBN 978-0-8223-6198-5
Wapner, Paul. Environmental Activism and World Civil Politics (SUNY Press, 1996)
Environmental movements
Environmentalism
Environmental social science concepts
Probability measure

In mathematics, a probability measure is a real-valued function defined on a set of events in a σ-algebra that satisfies measure properties such as countable additivity. The difference between a probability measure and the more general notion of measure (which includes concepts like area or volume) is that a probability measure must assign value 1 to the entire space.
Intuitively, the additivity property says that the probability assigned to the union of two disjoint (mutually exclusive) events by the measure should be the sum of the probabilities of the events; for example, the value assigned to the outcome "1 or 2" in a throw of a dice should be the sum of the values assigned to the outcomes "1" and "2".
Probability measures have applications in diverse fields, from physics to finance and biology.
Definition
The requirements for a set function μ to be a probability measure on a σ-algebra are that:
μ must return results in the unit interval [0, 1], returning 0 for the empty set and 1 for the entire space.
μ must satisfy the countable additivity property that for all countable collections {E_k} of pairwise disjoint sets:
μ(⋃_k E_k) = Σ_k μ(E_k)
For example, given three elements 1, 2 and 3 with probabilities 1/4, 1/4 and 1/2, the value assigned to {1, 3} is 1/4 + 1/2 = 3/4.
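On a finite sample space the definition can be checked directly in code. The following Python sketch is illustrative only; the point masses 1/4, 1/4 and 1/2 are assumed values chosen to make the additivity check concrete:

```python
from fractions import Fraction

# Toy probability measure on the finite sample space {1, 2, 3}.
# The point masses below are assumed for illustration.
weights = {1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 2)}

def mu(event):
    """Measure of an event, i.e. of any subset of the sample space."""
    return sum(weights[x] for x in event)

assert mu(set()) == 0          # the empty set gets measure 0
assert mu({1, 2, 3}) == 1      # the whole space gets measure 1
# Additivity for the disjoint events {1} and {3}: 1/4 + 1/2 = 3/4.
assert mu({1}) + mu({3}) == mu({1, 3})
```

On a finite space, countable additivity degenerates to the finite additivity checked above.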
The conditional probability based on the intersection of events is defined as:
μ(B | A) = μ(A ∩ B) / μ(A)
which satisfies the probability measure requirements so long as μ(A) is not zero.
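That the conditional measure μ(· | A) is again a probability measure can be verified mechanically on a toy finite example; the weights below are assumed purely for illustration:

```python
from fractions import Fraction

# Assumed point masses on the sample space {1, 2, 3}, for illustration only.
weights = {1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 2)}

def mu(event):
    return sum(weights[x] for x in event)

def conditional(A):
    """Return the set function B -> mu(A & B) / mu(A); requires mu(A) != 0."""
    if mu(A) == 0:
        raise ValueError("conditioning event must have nonzero measure")
    return lambda B: mu(A & B) / mu(A)

mu_given_A = conditional({1, 3})
assert mu_given_A(set()) == 0        # still assigns 0 to the empty set
assert mu_given_A({1, 2, 3}) == 1    # still assigns 1 to the whole space
assert mu_given_A({1}) == Fraction(1, 3)  # (1/4) / (3/4)
```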
Probability measures are distinct from the more general notion of fuzzy measures, in which there is no requirement that the fuzzy values sum up to 1, and the additive property is replaced by an order relation based on set inclusion.
Example applications
Market measures which assign probabilities to financial market spaces based on actual market movements are examples of probability measures which are of interest in mathematical finance; for example, in the pricing of financial derivatives. For instance, a risk-neutral measure is a probability measure which assumes that the current value of assets is the expected value of the future payoff taken with respect to that same risk neutral measure (i.e. calculated using the corresponding risk neutral density function), and discounted at the risk-free rate. If there is a unique probability measure that must be used to price assets in a market, then the market is called a complete market.
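As a minimal sketch of the risk-neutral idea (all market numbers here are hypothetical, not from the article), a one-step binomial model picks the probability q so that the discounted stock price is a martingale, and then prices a European call as a discounted expectation under that measure:

```python
import math

# One-step binomial sketch of risk-neutral pricing. All market numbers
# (prices, rate, up/down factors) are assumed for illustration.
S0, u, d = 100.0, 1.2, 0.8   # current stock price and up/down move factors
r = 0.05                     # risk-free rate, continuously compounded, one period
K = 100.0                    # strike of a European call option

# Risk-neutral probability q: chosen so that the discounted expected stock
# price equals today's price (the martingale condition).
q = (math.exp(r) - d) / (u - d)

payoff_up = max(S0 * u - K, 0.0)    # option payoff if the stock moves up
payoff_down = max(S0 * d - K, 0.0)  # option payoff if the stock moves down

# Price = expected payoff under the risk-neutral measure, discounted.
price = math.exp(-r) * (q * payoff_up + (1 - q) * payoff_down)
```

By construction, exp(-r) · (q·S0·u + (1−q)·S0·d) equals S0, which is exactly what makes q the "risk-neutral" probability rather than an empirical one.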
Not all measures that intuitively represent chance or likelihood are probability measures. For instance, although the fundamental concept of a system in statistical mechanics is a measure space, such measures are not always probability measures. In general, in statistical physics, if we consider sentences of the form "the probability of a system S assuming state A is p" the geometry of the system does not always lead to the definition of a probability measure under congruence, although it may do so in the case of systems with just one degree of freedom.
Probability measures are also used in mathematical biology. For instance, in comparative sequence analysis a probability measure may be defined for the likelihood that a variant may be permissible for an amino acid in a sequence.
Ultrafilters can be understood as {0, 1}-valued probability measures, allowing for many intuitive proofs based upon measures. For instance, Hindman's Theorem can be proven from the further investigation of these measures, and their convolution in particular.
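A principal ultrafilter gives the simplest concrete example: a subset gets "measure" 1 exactly when it contains a fixed point. The sketch below (illustrative only; the helper name is not from the article) checks the {0, 1}-valued, finitely additive behaviour on a small set:

```python
# A principal ultrafilter on a set, viewed as a {0, 1}-valued finitely
# additive "measure": a subset has measure 1 iff it contains the fixed point.
def principal_measure(point):
    def mu(subset):
        return 1 if point in subset else 0
    return mu

space = {1, 2, 3, 4}
mu = principal_measure(2)   # the ultrafilter of all sets containing 2

assert mu(space) == 1 and mu(set()) == 0
# Ultrafilter dichotomy: exactly one of A and its complement gets measure 1.
assert all(mu(A) + mu(space - A) == 1 for A in [{1}, {2}, {1, 2}, {3, 4}])
# Finite additivity on disjoint sets:
assert mu({1}) + mu({2}) == mu({1, 2})
```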
See also
Probability distribution
References
Further reading
Distinguishing probability measure, function and distribution, Math Stack Exchange
External links
Experiment (probability theory)
Measures (measure theory)
Conservation movement

The conservation movement, also known as nature conservation, is a political, environmental, and social movement that seeks to manage and protect natural resources, including animal, fungus, and plant species as well as their habitat for the future. Conservationists are concerned with leaving the environment in a better state than the condition they found it in. Evidence-based conservation seeks to use high quality scientific evidence to make conservation efforts more effective.
The early conservation movement evolved out of necessity to maintain natural resources such as fisheries, wildlife management, water, soil, as well as conservation and sustainable forestry. The contemporary conservation movement has broadened from the early movement's emphasis on use of sustainable yield of natural resources and preservation of wilderness areas to include preservation of biodiversity. Some say the conservation movement is part of the broader and more far-reaching environmental movement, while others argue that they differ both in ideology and practice. Conservation is seen as differing from environmentalism and it is generally a conservative school of thought which aims to preserve natural resources expressly for their continued sustainable use by humans.
History
Early history
The conservation movement can be traced back to John Evelyn's work Sylva, which was presented as a paper to the Royal Society in 1662. Published as a book two years later, it was one of the most highly influential texts on forestry ever published. Timber resources in England were becoming dangerously depleted at the time, and Evelyn advocated the importance of conserving the forests by managing the rate of depletion and ensuring that felled trees were replenished.
Khejarli massacre
The Bishnoi narrate the story of Amrita Devi, a member of the sect who inspired as many as 363 other Bishnois to go to their deaths in protest of the cutting down of Khejri trees on 12 September 1730. The Maharaja of Jodhpur, Abhay Singh, requiring wood for the construction of a new palace, sent soldiers to cut trees in the village of Khejarli, which was called Jehnad at that time. Noticing their actions, Amrita Devi hugged a tree in an attempt to stop them. Her family then adopted the same strategy, as did other local people when the news spread. She told the soldiers that she considered their actions to be an insult to her faith and that she was prepared to die to save the trees. The soldiers did indeed kill her and others until Abhay Singh was informed of what was going on and intervened to stop the massacre.
Some of the 363 Bishnois who were killed protecting the trees were buried in Khejarli, where a simple grave with four pillars was erected. Every year, in September, i.e., Shukla Dashmi of Bhadrapad (Hindi month) the Bishnois assemble there to commemorate the sacrifice made by their people to preserve the trees.
The field developed during the 18th century, especially in Prussia and France where scientific forestry methods were developed. These methods were first applied rigorously in British India from the early 19th century. The government was interested in the use of forest produce and began managing the forests with measures to reduce the risk of wildfire in order to protect the "household" of nature, as it was then termed. This early ecological idea was in order to preserve the growth of delicate teak trees, which was an important resource for the Royal Navy.
Concerns over teak depletion were raised as early as 1799 and 1805 when the Navy was undergoing a massive expansion during the Napoleonic Wars; this pressure led to the first formal conservation Act, which prohibited the felling of small teak trees. The first forestry officer was appointed in 1806 to regulate and preserve the trees necessary for shipbuilding.
This promising start received a setback in the 1820s and 30s, when laissez-faire economics and complaints from private landowners brought these early conservation attempts to an end.
In 1837, American poet George Pope Morris published "Woodman, Spare that Tree!", a Romantic poem urging a lumberjack to avoid an oak tree that has sentimental value. The poem was set to music later that year by Henry Russell. Lines from the song have been quoted by environmentalists.
Origins of the modern conservation movement
Conservation was revived in the mid-19th century, with the first practical application of scientific conservation principles to the forests of India. The conservation ethic that began to evolve included three core principles: that human activity damaged the environment, that there was a civic duty to maintain the environment for future generations, and that scientific, empirically based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing many medico-topographical reports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments. Edward Percy Stebbing warned of desertification of India. The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state management of forests in the world.
These local attempts gradually received more attention by the British government as the unregulated felling of trees continued unabated. In 1850, the British Association in Edinburgh formed a committee to study forest destruction at the behest of Hugh Cleghorn a pioneer in the nascent conservation movement.
He had become interested in forest conservation in Mysore in 1847 and gave several lectures at the Association on the failure of agriculture in India. These lectures influenced the government under Governor-General Lord Dalhousie to introduce the first permanent and large-scale forest conservation program in the world in 1855, a model that soon spread to other colonies, as well as the United States. In the same year, Cleghorn organised the Madras Forest Department and in 1860 the department banned the use of shifting cultivation. Cleghorn's 1861 manual, The forests and gardens of South India, became the definitive work on the subject and was widely used by forest assistants in the subcontinent. In 1861, the Forest Department extended its remit into the Punjab.
Sir Dietrich Brandis, a German forester, joined the British service in 1856 as superintendent of the teak forests of Pegu division in eastern Burma. During that time Burma's teak forests were controlled by militant Karen tribals. He introduced the "taungya" system, in which Karen villagers provided labor for clearing, planting and weeding teak plantations. After seven years in Burma, Brandis was appointed Inspector General of Forests in India, a position he served in for 20 years. He formulated new forest legislation and helped establish research and training institutions. The Imperial Forest School at Dehradun was founded by him.
Germans were prominent in the forestry administration of British India. As well as Brandis, Berthold Ribbentrop and Sir William P.D. Schlich brought new methods to Indian conservation, the latter becoming the Inspector-General in 1883 after Brandis stepped down. Schlich helped to establish the journal Indian Forester in 1874, and became the founding director of the first forestry school in England at Cooper's Hill in 1885. He authored the five-volume Manual of Forestry (1889–96) on silviculture, forest management, forest protection, and forest utilization, which became the standard and enduring textbook for forestry students.
Conservation in the United States
The American movement received its inspiration from 19th century works that exalted the inherent value of nature, quite apart from human usage. Author Henry David Thoreau (1817–1862) made key philosophical contributions that exalted nature. Thoreau was interested in peoples' relationship with nature and studied this by living close to nature in a simple life. He published his experiences in the book Walden, which argued that people should become intimately close with nature. The ideas of Sir Brandis, Sir William P.D. Schlich and Carl A. Schenck were also very influential—Gifford Pinchot, the first chief of the USDA Forest Service, relied heavily upon Brandis' advice for introducing professional forest management in the U.S. and on how to structure the Forest Service.
Both conservationists and preservationists appeared in political debates during the Progressive Era (the 1890s–early 1920s). There were three main positions.
Laissez-faire: The laissez-faire position held that owners of private property, including lumber and mining companies, should be allowed to do anything they wished on their properties. Environmental protection therefore becomes their choice. Businesses are pressured somewhat by the incentive of occupational preservation which requires that they not wholly destroy or consume the resources they rely upon. Said businesses need to innovate or pivot in the event that the exhaustion of a resource is imminent.
Conservationists: The conservationists, led by future President Theodore Roosevelt and his close ally George Bird Grinnell, were motivated by the wanton waste that was taking place at the hand of market forces, including logging and hunting. This practice resulted in placing a large number of North American game species on the edge of extinction. Roosevelt believed that the laissez-faire approach of the U.S. Government was too wasteful and inefficient. In any case, they noted, most of the natural resources in the western states were already owned by the federal government. The best course of action, they argued, was a long-term plan devised by national experts to maximize the long-term economic benefits of natural resources. To accomplish the mission, Roosevelt and Grinnell formed the Boone and Crockett Club, whose members were some of the best minds and influential men of the day. Its contingent of conservationists, scientists, politicians, and intellectuals became Roosevelt's closest advisers during his march to preserve wildlife and habitat across North America.
Preservationists: Preservationists, led by John Muir (1838–1914), argued that the conservation policies were not strong enough to protect the interest of the natural world because they continued to focus on the natural world as a source of economic production.
The debate between conservation and preservation reached its peak in the public debates over the construction of California's Hetch Hetchy dam in Yosemite National Park which supplies the water supply of San Francisco. Muir, leading the Sierra Club, declared that the valley must be preserved for the sake of its beauty: "No holier temple has ever been consecrated by the heart of man."
President Roosevelt put conservationist issues high on the national agenda. He worked with all the major figures of the movement, especially his chief advisor on the matter, Gifford Pinchot, and was deeply committed to conserving natural resources. He encouraged the Newlands Reclamation Act of 1902 to promote federal construction of dams to irrigate small farms, and placed extensive public lands under federal protection. Roosevelt set aside more federal land for national parks and nature preserves than all of his predecessors combined.
Roosevelt established the United States Forest Service, signed into law the creation of five national parks, and signed the 1906 Antiquities Act, under which he proclaimed 18 new national monuments. He also established the first 51 bird reserves, four game preserves, and 150 national forests, including Shoshone National Forest, the nation's first. The area of the United States that he placed under public protection was vast.
Gifford Pinchot had been appointed by McKinley as chief of the Division of Forestry in the Department of Agriculture. In 1905, his department gained control of the national forest reserves. Pinchot promoted private use (for a fee) under federal supervision. In 1907, Roosevelt designated a large expanse of new national forests just minutes before a deadline.
In May 1908, Roosevelt sponsored the Conference of Governors held in the White House, with a focus on natural resources and their most efficient use. Roosevelt delivered the opening address: "Conservation as a National Duty".
In 1903 Roosevelt toured the Yosemite Valley with John Muir, who had a very different view of conservation, and tried to minimize commercial use of water resources and forests. Working through the Sierra Club he founded, Muir succeeded in 1905 in having Congress transfer the Mariposa Grove and Yosemite Valley to the federal government. While Muir wanted nature preserved for its own sake, Roosevelt subscribed to Pinchot's formulation, "to make the forest produce the largest amount of whatever crop or service will be most useful, and keep on producing it for generation after generation of men and trees."
Theodore Roosevelt's view on conservationism remained dominant for decades; Franklin D. Roosevelt authorised the building of many large-scale dams and water projects, as well as the expansion of the National Forest System to buy out sub-marginal farms. In 1937, the Pittman–Robertson Federal Aid in Wildlife Restoration Act was signed into law, providing funding for state agencies to carry out their conservation efforts.
Since 1970
Environmentalism reemerged on the national agenda in 1970, with Republican Richard Nixon playing a major role, especially with his creation of the Environmental Protection Agency. The debates over the public lands and environmental politics played a supporting role in the decline of liberalism and the rise of modern environmentalism. Although Americans consistently rank environmental issues as "important", polling data indicates that in the voting booth voters rank environmental issues low relative to other political concerns.
The growth of the Republican party's political power in the inland West (apart from the Pacific coast) was facilitated by the rise of popular opposition to public lands reform. Successful Democrats in the inland West and Alaska typically take more conservative positions on environmental issues than Democrats from the Coastal states. Conservatives drew on new organizational networks of think tanks, industry groups, and citizen-oriented organizations, and they began to deploy new strategies that affirmed the rights of individuals to their property, protection of extraction rights, to hunt and recreate, and to pursue happiness unencumbered by the federal government at the expense of resource conservation.
In 2019, Bram Büscher and Robert Fletcher proposed the idea of convivial conservation. Drawing on social movements and concepts such as environmental justice and structural change, convivial conservation advances a post-capitalist approach that rejects both human-nature dichotomies and capitalist political economies. Built on a politics of equity, structural change and environmental justice, it is considered a radical theory because it targets the structural political economy of modern nation states and the need for structural change. The approach reconfigures the nature-human relationship so that humans are recognized as part of nature; the emphasis on nature as for and by humans creates a human responsibility to care for the environment as a way of caring for themselves. It also redefines nature as not only pristine and untouched, but cultivated by humans in everyday settings. The theory envisions a long-term process of structural change away from capitalist valuation toward a system emphasizing everyday and local living. Convivial conservation thus includes humans in nature rather than excluding them from conservation. While other conservation theories integrate some of these elements, none move away from both dichotomies and capitalist valuation principles.
The five elements of convivial conservation
The promotion of nature for, to and by humans
The movement away from the concept of conservation as saving only nonhuman nature
Emphasis on the long-term democratic engagement with nature rather than elite access and tourism
The movement away from the spectacle of nature and instead focusing on the mundane ‘everyday nature’
The democratic management of nature, with nature as commons and in context
Racism and the conservation movement
The early years of the environmental and conservation movements were rooted in the safeguarding of game to support the recreation activities of elite white men, such as sport hunting. This led to an economy supporting and perpetuating these activities, as well as continued wilderness conservation serving the corporate interests supplying hunters with the equipment needed for their sport. Game parks in England and the United States allowed wealthy hunters and fishermen to deplete wildlife, while hunting by Indigenous groups, laborers, the working class, and poor citizens, especially for sustenance, was vigorously monitored. Scholars have shown that the establishment of the U.S. national parks, while setting aside land for preservation, was also a continuation of preserving the land for the recreation and enjoyment of elite white hunters and nature enthusiasts.
While Theodore Roosevelt was one of the leading activists for the conservation movement in the United States, he also believed that threats to the natural world were equally threats to white Americans. Roosevelt and his contemporaries held that the cities, industries and factories overtaking the wilderness and threatening native plants and animals were also consuming the racial vigor that they believed made white Americans superior. Roosevelt believed strongly that white male virility depended on wildlife for its vigor, and that, consequently, depleting wildlife would result in a racially weaker nation. This led Roosevelt to support the passage of many immigration restrictions, eugenics laws and wildlife preservation laws. For instance, Roosevelt established the first national parks through the Antiquities Act of 1906 while also endorsing the removal of Indigenous Americans from their tribal lands within the parks. This move was promoted and endorsed by other leaders of the conservation movement, including Frederick Law Olmsted, a leading landscape architect, conservationist, and supporter of the national park system, and Gifford Pinchot, a leading eugenicist and conservationist. Furthering the economic exploitation of the environment and national parks for wealthy whites was the beginning of ecotourism in the parks, which included allowing some Indigenous Americans to remain so that tourists could get what was considered the full "wilderness experience".
Another long-term supporter, partner, and inspiration to Roosevelt, Madison Grant, was a well known American eugenicist and conservationist. Grant worked alongside Roosevelt in the American conservation movement and was even secretary and president of the Boone and Crockett Club. In 1916, Grant published the book "The Passing of the Great Race, or The Racial Basis of European History", which based its premise on eugenics and outlined a hierarchy of races, with white, "Nordic" men at the top, and all other races below. The German translation of this book was used by Nazi Germany as the source for many of their beliefs and was even proclaimed by Hitler to be his "Bible".
One of the first established conservation agencies in the United States is the National Audubon Society. Founded in 1905, its priority was to protect and conserve various waterbird species. However, the first state-level Audubon group was created in 1896 by Harriet Hemenway and Minna B. Hall to convince women to refrain from buying hats made with bird feathers, a common practice at the time. The organization is named after John James Audubon, a naturalist and legendary bird painter. Audubon was also a slaveholder who included many racist tales in his books. Despite these views, Audubon found black and Indigenous people to be scientifically useful, often drawing on their local knowledge in his books and relying on them to collect specimens for him.
The ideology of the conservation movement in Germany paralleled that of the U.S. and England. Early German naturalists of the 20th century turned to the wilderness to escape the industrialization of cities. However, many of these early conservationists became part of and influenced the Nazi party. Like elite and influential Americans of the early 20th century, they embraced eugenics and racism and promoted the idea that Nordic people are superior.
World Wide Fund for Nature
The World Wide Fund for Nature (WWF) is an international non-governmental organization founded in 1961, working in the field of the wilderness preservation, and the reduction of human impact on the environment. It was formerly named the "World Wildlife Fund", which remains its official name in Canada and the United States.
WWF is the world's largest conservation organization with over five million supporters worldwide, working in more than 100 countries, supporting around 1,300 conservation and environmental projects. They have invested over $1 billion in more than 12,000 conservation initiatives since 1995. WWF is a foundation with 55% of funding from individuals and bequests, 19% from government sources (such as the World Bank, DFID, USAID) and 8% from corporations in 2014.
WWF aims to "stop the degradation of the planet's natural environment and to build a future in which humans live in harmony with nature." WWF has published the Living Planet Report every two years since 1998; it is based on a Living Planet Index and an ecological footprint calculation. In addition, WWF has launched several notable worldwide campaigns, including Earth Hour and Debt-for-Nature Swap, and its current work is organized around six areas: food, climate, freshwater, wildlife, forests, and oceans.
"Conservation Far" approach
Institutions such as the WWF have historically caused displacement and division between Indigenous populations and the lands they inhabit, a result of the organization's historically colonial, paternalistic, and neoliberal approaches to conservation. Claus, in her article "Drawing the Sea Near: Satoumi and Coral Reef Conservation in Okinawa", expands on this approach, called "conservation far", in which access to lands is open to external foreign entities, such as researchers or tourists, but prohibited to local populations; the conservation initiatives therefore take place "far" away. The external entity is largely unaware of the customs and values held by the communities surrounding the land and of their role within it.
"Conservation near" approach
In Japan, the town of Shiraho had traditional ways of tending to nature that were lost to colonization and militarization by the United States. The return to traditional sustainability practices constituted a "conservation near" approach, which engages those living near the lands in conservation efforts and holds them accountable for their direct effects on preservation. While conservation-far treats visuals and sight as the main medium of interaction between people and the environment, conservation-near permits a hands-on, full sensory experience. The emphasis on observation alone stems from its association with intellect, while the bodily or "primitive" consciousness offered as an alternative has historically been associated with lower intelligence and people of color. In recent years, institutions such as WWF have investigated a new, integrated approach to conservation: socionatural relationships centered on reciprocity and empathy, making conservation efforts accountable to the local community and its ways of life and responsive to the values, ideals, and beliefs of locals. Japanese seascapes are often integral to residents' identity and include historical memories and spiritual engagements that need to be recognized and considered. Involving communities gives residents a stake in the issue, leading to long-term solutions that emphasize sustainable resource use and the empowerment of communities, and allowing conservation efforts to take local cultural values into account rather than imposing the ideals of foreign activists.
Areas of concern
Deforestation and overpopulation are issues affecting all regions of the world. The consequent destruction of wildlife habitat has prompted the creation of conservation groups in other countries, some founded by local hunters who have witnessed declining wildlife populations first hand. The conservation movement also considered it highly important to address living conditions and overpopulation in cities.
Boreal forest and the Arctic
The idea of incentive conservation is a modern one, but its practice has clearly defended some of the sub-Arctic wildernesses and their wildlife for thousands of years, especially by indigenous peoples such as the Evenk, Yakut, Sami, Inuit and Cree. The fur trade and hunting by these peoples have preserved these regions for thousands of years. Ironically, the pressure upon them now comes from non-renewable resources such as oil, sometimes used to make the synthetic clothing that is advocated as a humane substitute for fur. (See Raccoon dog for a case study of the conservation of an animal through the fur trade.) Similarly, in the case of the beaver, hunting and the fur trade were thought to have brought about the animal's demise, when in fact they were an integral part of its conservation. For many years children's books stated, and some still do, that the decline in the beaver population was due to the fur trade. In reality, however, the decline in beaver numbers resulted from habitat destruction and deforestation, as well as the animal's continued persecution as a pest (it causes flooding). In Cree lands, by contrast, where the population valued the animal for meat and fur, it continued to thrive. The Inuit defend their relationship with the seal in response to outside critics.
Latin America (Bolivia)
The Izoceño-Guaraní of Santa Cruz Department, Bolivia, is a tribe of hunters who were influential in establishing the Capitania del Alto y Bajo Isoso (CABI). CABI promotes economic growth and survival of the Izoceno people while discouraging the rapid destruction of habitat within Bolivia's Gran Chaco. They are responsible for the creation of the 34,000 square kilometre Kaa-Iya del Gran Chaco National Park and Integrated Management Area (KINP). The KINP protects the most biodiverse portion of the Gran Chaco, an ecoregion shared with Argentina, Paraguay and Brazil. In 1996, the Wildlife Conservation Society joined forces with CABI to institute wildlife and hunting monitoring programs in 23 Izoceño communities. The partnership combines traditional beliefs and local knowledge with the political and administrative tools needed to effectively manage habitats. The programs rely solely on voluntary participation by local hunters who perform self-monitoring techniques and keep records of their hunts. The information obtained by the hunters participating in the program has provided CABI with important data required to make educated decisions about the use of the land. Hunters have been willing participants in this program because of pride in their traditional activities, encouragement by their communities and expectations of benefits to the area.
Africa (Botswana)
In order to discourage illegal South African hunting parties and ensure future local use and sustainability, indigenous hunters in Botswana began lobbying for and implementing conservation practices in the 1960s. The Fauna Preservation Society of Ngamiland (FPS) was formed in 1962 by the husband-and-wife team of Robert Kay and June Kay, environmentalists working in conjunction with the Batawana tribes to preserve wildlife habitat.
The FPS promotes habitat conservation and provides local education for preservation of wildlife. Conservation initiatives were met with strong opposition from the Botswana government because of the monies tied to big-game hunting. In 1963, BaTawanga Chiefs and tribal hunter/adventurers in conjunction with the FPS founded Moremi National Park and Wildlife Refuge, the first area to be set aside by tribal people rather than governmental forces. Moremi National Park is home to a variety of wildlife, including lions, giraffes, elephants, buffalo, zebra, cheetahs and antelope, and covers an area of 3,000 square kilometers. Most of the groups involved with establishing this protected land were involved with hunting and were motivated by their personal observations of declining wildlife and habitat.
See also
References
Further reading
World
Barton, Gregory A. Empire, Forestry and the Origins of Environmentalism, (2002), covers British Empire
Clover, Charles. The End of the Line: How overfishing is changing the world and what we eat. (2004) Ebury Press, London.
Haq, Gary, and Alistair Paul. Environmentalism since 1945 (Routledge, 2013).
Jones, Eric L. "The History of Natural Resource Exploitation in the Western World," Research in Economic History, 1991 Supplement 6, pp 235–252
McNeill, John R. Something New Under the Sun: An Environmental History of the Twentieth Century (2000).
Regional studies
Africa
Adams, Jonathan S.; McShane, Thomas O. Myth of Wild Africa: Conservation without Illusion (1992) 266p; covers 1900 to 1980s
Anderson, David; Grove, Richard. Conservation in Africa: People, Policies & Practice (1988), 355pp
Bolaane, Maitseo. "Chiefs, Hunters & Adventurers: The Foundation of the Okavango/Moremi National Park, Botswana". Journal of Historical Geography. 31.2 (Apr. 2005): 241–259.
Carruthers, Jane. "Africa: Histories, Ecologies, and Societies," Environment and History, 10 (2004), pp. 379–406;
Showers, Kate B. Imperial Gullies: Soil Erosion and Conservation in Lesotho (2005) 346pp
Asia-Pacific
Bolton, Geoffrey. Spoils and Spoilers: Australians Make Their Environment, 1788-1980 (1981) 197pp
Economy, Elizabeth. The River Runs Black: The Environmental Challenge to China's Future (2010)
Elvin, Mark. The Retreat of the Elephants: An Environmental History of China (2006)
Grove, Richard H.; Damodaran, Vinita Jain; Sangwan, Satpal. Nature and the Orient: The Environmental History of South and Southeast Asia (1998) 1036pp
Johnson, Erik W., Saito, Yoshitaka, and Nishikido, Makoto. "Organizational Demography of Japanese Environmentalism," Sociological Inquiry, Nov 2009, Vol. 79 Issue 4, pp 481–504
Thapar, Valmik. Land of the Tiger: A Natural History of the Indian Subcontinent (1998) 288pp
Latin America
Boyer, Christopher. Political Landscapes: Forests, Conservation, and Community in Mexico. Duke University Press (2015)
Dean, Warren. With Broadax and Firebrand: The Destruction of the Brazilian Atlantic Forest (1997)
Evans, S. The Green Republic: A Conservation History of Costa Rica. University of Texas Press. (1999)
Funes Monzote, Reinaldo. From Rainforest to Cane Field in Cuba: An Environmental History since 1492 (2008)
Melville, Elinor G. K. A Plague of Sheep: Environmental Consequences of the Conquest of Mexico (1994)
Miller, Shawn William. An Environmental History of Latin America (2007)
Noss, Andrew and Imke Oetting. "Hunter Self-Monitoring by the Izoceño -Guarani in the Bolivian Chaco". Biodiversity & Conservation. 14.11 (2005): 2679–2693.
Simonian, Lane. Defending the Land of the Jaguar: A History of Conservation in Mexico (1995) 326pp
Wakild, Emily. An Unexpected Environment: National Park Creation, Resource Custodianship, and the Mexican Revolution. University of Arizona Press (2011).
Europe and Russia
Arnone Sipari, Lorenzo, Scritti scelti di Erminio Sipari sul Parco Nazionale d'Abruzzo (1922–1933) (2011), 360pp.
Barca, Stefania, and Ana Delicado. "Anti-nuclear mobilisation and environmentalism in Europe: A view from Portugal (1976–1986)." Environment and History 22.4 (2016): 497–520. online
Bonhomme, Brian. Forests, Peasants and Revolutionaries: Forest Conservation & Organization in Soviet Russia, 1917–1929 (2005) 252pp.
Cioc, Mark. The Rhine: An Eco-Biography, 1815–2000 (2002).
Dryzek, John S., et al. Green states and social movements: environmentalism in the United States, United Kingdom, Germany, and Norway (Oxford UP, 2003).
Jehlicka, Petr. "Environmentalism in Europe: an east-west comparison." in Social change and political transformation (Routledge, 2018) pp. 112–131.
Simmons, I.G. An Environmental History of Great Britain: From 10,000 Years Ago to the Present (2001).
Uekotter, Frank. The greenest nation?: A new history of German environmentalism (MIT Press, 2014).
Weiner, Douglas R. Models of Nature: Ecology, Conservation and Cultural Revolution in Soviet Russia (2000) 324pp; covers 1917 to 1939.
United States
Bates, J. Leonard. "Fulfilling American Democracy: The Conservation Movement, 1907 to 1921", The Mississippi Valley Historical Review, (1957), 44#1 pp. 29–57. in JSTOR
Brinkley, Douglas G. The Wilderness Warrior: Theodore Roosevelt and the Crusade for America, (2009) excerpt and text search
Cawley, R. McGreggor. Federal Land, Western Anger: The Sagebrush Rebellion and Environmental Politics (1993), on conservatives
Flippen, J. Brooks. Nixon and the Environment (2000).
Hays, Samuel P. Beauty, Health, and Permanence: Environmental Politics in the United States, 1955–1985 (1987), the standard scholarly history
Hays, Samuel P. A History of Environmental Politics since 1945 (2000), shorter standard history
Hays, Samuel P. Conservation and the Gospel of Efficiency (1959), on Progressive Era.
King, Judson. The Conservation Fight, From Theodore Roosevelt to the Tennessee Valley Authority (2009)
Nash, Roderick. Wilderness and the American Mind, (3rd ed. 1982), the standard intellectual history
Rothman, Hal K. The Greening of a Nation? Environmentalism in the United States since 1945 (1998)
Scheffer, Victor B. The Shaping of Environmentalism in America (1991).
Sellers, Christopher. Crabgrass Crucible: Suburban Nature and the Rise of Environmentalism in Twentieth-Century America (2012)
Strong, Douglas H. Dreamers & Defenders: American Conservationists. (1988) online edition, good biographical studies of the major leaders
Taylor, Dorceta E. The Rise of the American Conservation Movement: Power, Privilege, and Environmental Protection (Duke U.P. 2016) x, 486 pp.
Turner, James Morton, "The Specter of Environmentalism": Wilderness, Environmental Politics, and the Evolution of the New Right. The Journal of American History 96.1 (2009): 123-47 online at History Cooperative
Vogel, David. California Greenin': How the Golden State Became an Environmental Leader (2018) 280 pp online review
Historiography
Cioc, Mark, Björn-Ola Linnér, and Matt Osborn, "Environmental History Writing in Northern Europe," Environmental History, 5 (2000), pp. 396–406
Bess, Michael, Mark Cioc, and James Sievert, "Environmental History Writing in Southern Europe," Environmental History, 5 (2000), pp. 545–56;
Coates, Peter. "Emerging from the Wilderness (or, from Redwoods to Bananas): Recent Environmental History in the United States and the Rest of the Americas," Environment and History, 10 (2004), pp. 407–38
Hay, Peter. Main Currents in Western Environmental Thought (2002), standard scholarly history excerpt and text search
McNeill, John R. "Observations on the Nature and Culture of Environmental History," History and Theory, 42 (2003), pp. 5–43.
Robin, Libby, and Tom Griffiths, "Environmental History in Australasia," Environment and History, 10 (2004), pp. 439–74
Worster, Donald, ed. The Ends of the Earth: Perspectives on Modern Environmental History (1988)
External links
A history of conservation in New Zealand
For Future Generations, a Canadian documentary on conservation and national parks
Environmental conservation
Environmental ethics
Environmental movements
Political ecology
Political ecology is the study of the relationships between political, economic, and social factors and environmental issues and changes. Political ecology differs from apolitical ecological studies by politicizing environmental issues and phenomena.
The academic discipline offers wide-ranging studies integrating ecological social sciences with political economy in topics such as degradation and marginalization, environmental conflict, conservation and control, and environmental identities and social movements.
Origins
The term "political ecology" was first coined by Frank Thone in an article published in 1935. It has been widely used since then in the context of human geography and human ecology, but with no systematic definition. Anthropologist Eric R. Wolf gave it a second life in 1972 in an article entitled "Ownership and Political Ecology", in which he discusses how local rules of ownership and inheritance "mediate between the pressures emanating from the larger society and the exigencies of the local ecosystem", but did not develop the concept further. Other origins include other early works of Eric R. Wolf, Michael J. Watts, Susanna Hecht, and others in the 1970s and 1980s.
The origins of the field in the 1970s and 1980s were a result of the development of development geography and cultural ecology, particularly the work of Piers Blaikie on the sociopolitical origins of soil erosion. Historically, political ecology has focused on phenomena in and affecting the developing world; since the field's inception, "research has sought primarily to understand the political dynamics surrounding material and discursive struggles over the environment in the third world".
Scholars in political ecology are drawn from a variety of academic disciplines, including geography, anthropology, development studies, political science, economics, sociology, forestry, and environmental history.
Overview
Political ecology's broad scope and interdisciplinary nature lends itself to multiple definitions and understandings. However, common assumptions across the field give the term relevance. Raymond L. Bryant and Sinéad Bailey developed three fundamental assumptions in practising political ecology:
First, changes in the environment do not affect society in a homogenous way: political, social, and economic differences account for uneven distribution of costs and benefits.
Second, "any change in environmental conditions must affect the political and economic status quo."
Third, the unequal distribution of costs and benefits and the reinforcing or reducing of pre-existing inequalities has political implications in terms of the altered power relationships that then result.
In addition, political ecology attempts to provide critiques and alternatives in the interplay of the environment and political, economic and social factors. Paul Robbins asserts that the discipline has a "normative understanding that there are very likely better, less coercive, less exploitative, and more sustainable ways of doing things".
From these assumptions, political ecology can be used to:
inform policymakers and organizations of the complexities surrounding environment and development, thereby contributing to better environmental governance.
understand the decisions that communities make about the natural environment in the context of their political environment, economic pressure, and societal regulations.
look at how unequal relations in and among societies affect the natural environment, especially in context of government policy.
Scope and influences
Political ecology's movement as a field since its inception in the 1970s has complicated its scope and goals. Through the discipline's history, certain influences have grown more and less influential in determining the focus of study. Peter A. Walker traces the importance of the ecological sciences in political ecology. He points to the transition, for many critics, from a 'structuralist' approach through the 1970s and 1980s, in which ecology maintains a key position in the discipline, to a 'poststructuralist' approach with an emphasis on the 'politics' in political ecology. This turn has raised questions about how the field differs from environmental politics, as well as about its use of the term 'ecology'. Political ecological research has shifted from investigating political influence on the earth's surface to focusing on spatial-ecological influences on politics and power—a scope reminiscent of environmental politics.
Much has been drawn from cultural ecology, a form of analysis that showed how culture depends upon, and is influenced by, the material conditions of society (political ecology has largely eclipsed cultural ecology as a form of analysis according to Walker.) As Walker states, "whereas cultural ecology and systems theory emphasize[s] adaptation and homeostasis, political ecology emphasize[s] the role of political economy as a force of maladaptation and instability".
Political ecologists often use political economy frameworks to analyze environmental issues. Early and prominent examples of this were Silent Violence: Food, Famine and Peasantry in Northern Nigeria by Michael Watts in 1983, which traced the famine in northern Nigeria during the 1970s to the effects of colonialism, rather than an inevitable consequence of the drought in the Sahel, and The Political Economy of Soil Erosion in Developing Countries by Piers Blaikie in 1985, which traced land degradation in Africa to colonial policies of land appropriation, rather than over-exploitation by African farmers.
Relationship to anthropology and geography
Originating in the 18th and 19th centuries with philosophers such as Adam Smith, Karl Marx, and Thomas Malthus, political economy attempted to explain the relationships between economic production and political processes. It tended toward overly structuralist explanations, focusing on the role of individual economic relationships in the maintenance of social order. Eric Wolf used political economy in a neo-Marxist framework which began addressing the role of local cultures as a part of the world capitalist system, refusing to see those cultures as "primitive isolates". But environmental effects on political and economic processes were under-emphasised.
Conversely, Julian Steward and Roy Rappaport's theories of cultural ecology are sometimes credited with shifting the functionalist-oriented anthropology of the 1950s and 1960s and incorporating ecology and environment into ethnographic study.
Geographers and anthropologists worked with the strengths of both to form the basis of political ecology, which focuses on issues of power, recognizing the importance of explaining environmental impacts on cultural processes without separating out political and economic contexts.
The application of political ecology in the work of anthropologists and geographers differs. While any approach will take both the political/economic and the ecological into account, the emphasis can be unequal. Some, such as geographer Michael Watts, focus on how the assertion of power impacts on access to environmental resources. His approach tends to see environmental harm as both a cause and an effect of “social marginalization”.
Political ecology has strengths and weaknesses. At its core, it contextualizes political and ecological explanations of human behavior. But as Walker points out, it has failed to offer "compelling counter-narratives" to "widely influential and popular yet deeply flawed and unapologetic neo-Malthusian rants such as Robert Kaplan's (1994) 'The coming anarchy' and Jared Diamond's (2005) Collapse" (385). Ultimately, applying political ecology to policy decisions – especially in the US and Western Europe – will remain problematic as long as there is resistance to Marxist and neo-Marxist theory.
Andrew Vayda and Bradley Walters (1999) criticize political ecologists for presupposing "the importance ... of certain kinds of political factors in the explanation of environmental changes" (167). Vayda and Walters's response to overly political approaches in political ecology is to encourage what they call "event ecology", focusing on human responses to environmental events without presupposing the impact of political processes on environmental events. The critique has not been taken up widely. One example of work that builds on event ecology, adding a more explicit focus on the role of power dynamics and the need to include local peoples' voices, is Penna-Firme (2013), "Political and Event Ecology: critiques and opportunities for collaboration".
Relationship to conservation
There is a divergence of ideas between conservation science and political ecology. With conservationists establishing protected areas to conserve biodiversity, "political ecologists have devoted some energy to the study of protected areas, which is unsurprising given political ecology's overall interest in forms of access to, and control over resources". The argument against enclosure of land for conservation is that it harms local people and their livelihood systems by denying them access. As Dove and Carpenter state, "indigenous people have important environmental knowledge which could contribute to conservation". The objection by political ecologists is that land use regulations are made by NGOs and the government, denying access, denying the ability of local people to conserve species and areas themselves, and rendering them more vulnerable through dispossession.
Power perspective in political ecology
Power is at the core of political ecology. For Greenberg and Park, political ecology is a way of creating a synergy between a political economy that aligns power distribution with ecological analysis and economic activities, within a broader view of bio-environmental relations. Bryant explains political ecology as the political dynamic associated with "discursive struggle" and material struggle over the environment in less developed nations, showing how unequal power relations constitute a politicized environment. In Robbins's view, political ecology is empirical exploration that shows the changes occurring in an environment in clear connection to power.
With power taking the central role in political ecology, there is a need to clarify the perspectives of power and the contributors to these perspectives.
Actor-oriented power perspectives:
According to actor-oriented power perspectives, power is exercised by actors, contrary to the presumption that power is a force that passes through individuals without their awareness. Fredrik Engelstad, a Norwegian sociologist, explained the concept of power as the combination of relationality, causality, and intentionality. The implication is that actors are significant carriers of power: through action a certain intention is achieved (intentionality), action occurs between at least two actors (relationality), and action produces the intended results (causality). Viewing power from the actor-oriented angle, Dowding argued that power is linked to agency, which does not take away the importance of structure: structures both constrain and enable actors' use of power.
A foundational contribution to actor-oriented power theory was made by Max Weber (1964), who explained power as people's ability to realize their will irrespective of the resistance posed by others. An instance given by Robert Dahl is the case where actor A exercises power over actor B by getting actor B to execute a task that actor B would otherwise not do. The extreme case of this is when a group of individuals is compelled to carry out a task contrary to their thought or will.
Svarstad, Benjaminsen, and Overå held that actor-oriented power theory helps provide conceptual distinctions with useful insight into the theoretical elements that are vital in studying political ecology. While some actors exercise or try to exercise power in diverse ways, other actors encounter resistance from their opponents and other forces. One instance of such forces is more powerful opponents resisting the fulfilment of actors' intentions; another comes in the form of institutional and structural constraints on the outcomes of intended actions.
Scholars of political ecology often emphasize the use of power both by actors who exercise environmental interventions and by actors who resist such interventions. When environmental interventions result in environmental degradation, political ecologists tend to support the actors who resist them. Actors exercising environmental interventions include corporate, governmental, and non-governmental organizations, while actors who resist them include groups such as peasants, fishermen, and pastoralists, who exercise counter-power through various kinds of resistance or active involvement.
Neo-Marxist power perspectives
Among the foundations of political ecology is Marxist political economy, which centers on the inequalities that emerged from global capitalism. However, although several perspectives of power in political ecology are influenced directly or indirectly by Marx, his own power perspectives are seldom explicitly highlighted. Marx's main focus under capitalism is on class relations and the stability with which these class relations are reproduced. Marx also placed human agency at the center of his power concept, with human agency being socially conditioned, as seen in his quote below:
"Men make their history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past (Marx 1852:5)".
Thus, Marx's theory of power, which formed his perspective on power, rests on the understanding of human agency as constrained by social structure. While structure produces the potential for and extent of power exertion, human agency reproduces the structure. This is illustrated by Isaac (1987) using the example of the powerful David Rockefeller (1915–2017), as quoted below:
"But a social theory of power must explain what kinds of social relations exist and how power is distributed by these relations, such that it is possible for David Rockefeller to have the power that he has. To do this is not to deny that it is he who possesses this power, nor to deny those personal attributes determining the particular manner in which he exercises it. It is simply to insist that the power individuals possess has social conditions of existence and that it is these conditions that should be the primary focus of theoretical analysis".
Poststructuralist power perspectives
The poststructuralist power perspective stems from Michel Foucault's work and its application in political ecology. It can be divided into three dimensions: biopower, governmentality, and discursive power.
Biopower indicates that, to secure life, governments are concerned with improving health and quality of life among populations. In his work, Foucault explained how, through the knowledge produced by power, people have learned how they should behave. In doing so, Foucault separated sovereign power from biopower: where sovereign power is summed up as "take life or let live", biopower is "make live and let die". As the human species continually develops in relation to nature, the sovereign intervenes, acting on environmental conditions if the human species is to be altered. The aim of biopower, in terms of governance and knowledge, is therefore to establish environmental issues as core concerns.
Political ecology emphasizes that understanding how power works in environmental governance follows Foucault's notion of "governmentality". Foucault sees governmentality as the means employed by a government to make its citizens behave in line with the government's priorities. Fletcher separates governmentality into four kinds. The first is "discipline", which ensures that citizens internalize specific manners such as ethical standards and social norms. The second is "truth", a way of governing citizens using truth-defining standards such as religion. The third is "neoliberal rationality", a motivational structure formed and used to improve outcomes. The fourth is "sovereign power", used to govern based on rules and punishment for breaking the rules. According to Fletcher, these governmentalities may conflict, work alone, or overlap. The first two depend on citizens internalizing government priorities; the latter two do not, but are still seen as important.
Lastly, "discursive power" manifests when actors (corporate, governmental, and non-governmental organizations) lead people or groups to adopt and contribute to the reproduction of the discourses those actors produce. Unlike in other fields, discourses in political ecology are studied in line with a critical realist epistemology. In some instances the formation of discursive power can be traced to a state's colonial era, when efforts were made to appropriate new territories. While Foucault's work forms the basis of political-ecological discursive power, there also exist perspectives beyond Foucault's that allow wider space for human agency.
Comparing biopower, governmentality, and discursive power: governmentality and discursive power can be regarded as theoretical perspectives of significant importance, while biopower can be regarded as a topical concern that Foucault identified as being at the core of modern-day governments.
Political ecologists
Some prominent contemporary scholars include:
Anthony Bebbington
Piers Blaikie
Murray Bookchin
Harold Brookfield
Raymond L. Bryant
Michael R. Dove
Robyn Eckersley
Arturo Escobar
Andre Gorz
Félix Guattari
Susanna Hecht
Ivan Illich
Giorgos Kallis
Alain Lipietz
William Moseley
Richard Peet
Paul Robbins
Ariel Salleh
Farhana Sultana
Erik Swyngedouw
Bhaskar Vira
Michael Watts
Karl Zimmerer
Related journals
Scholarly journals that have been key to the development (and critique) of this field include:
Annals of the Association of American Geographers
Antipode
Capitalism Nature Socialism
Development and Change
Journal of Peasant Studies
Ecological Economics
Ecology
Economic Geography
Environment and Planning
Futures
Gender, Place & Culture
Geoforum
Human Ecology
Journal of Political Ecology
New Left Review
Progress in Human Geography
Progress in Physical Geography
Oryx (journal)
See also
Agroecology
Criticism of capitalism
Cultural ecology
Development geography
Ecofeminism
Ecological crisis
Eco-socialism
Ecogovernmentality
Environmental justice
Environmental Politics
Environmental racism
Environmental sociology
Feminist political ecology
Green nationalism
Human behavioral ecology
List of ecology topics
Political economy
Social ecology
Social-ecology
References
Notes
Bibliography
Blaikie, P., and Brookfield, H. Land Degradation and Society. Methuen: 1987.
Blaikie, Piers. 1985. The Political Economy of Soil Erosion in Developing Countries. London; New York: Longman.
Bryant, Raymond L. 1998. Power, knowledge and political ecology in the third world: a review, Progress in Physical Geography 22(1):79-94.
Bryant, R. (ed.) 2015. International Handbook of Political Ecology. Edward Elgar
Bryant, Raymond L. and Sinead Bailey. 1997. Third World Political Ecology. Routledge.
Dove, Michael R., and Carol Carpenter, eds. 2008. Environmental Anthropology: A Historical Reader. MA: Blackwell.
Escobar, Arturo. 1996. “Construction Nature: elements for a post-structuralist political ecology”. Futures 28(4): 325-343.
Garí, Josep A. 2000. The Political Ecology of Biodiversity: Biodiversity conservation and rural development at the indigenous and peasant grassroots. D.Phil. Dissertation, University of Oxford. British Library No. 011720099 (DSC D213318).
Garí, Josep A. 2000. La ecología política de la biodiversidad. Ecología Política 20: 15-24.
Greenberg, James B. and Thomas K. Park. 1994. Political Ecology, Journal of Political Ecology 1: 1-12.
Hecht, Susanna & Alexander Cockburn. 1990 [Updated edition 2010]. Fate of the Forest: Developers, Destroyers, and Defenders of the Amazon. University of Chicago Press.
Hershkovitz, Linda. 1993. Political Ecology and Environmental Management in the Loess Plateau, China, Human Ecology 21(4): 327-353.
Martinez-Alier, Joan. 2002. The Environmentalism of the Poor: A Study of Ecological Conflicts and Valuation. Edward Elgar.
Milstein, T. & Castro-Sotomayor, J. 2020. "Routledge Handbook of Ecocultural Identity." London, UK: Routledge.
Paulson, Susan, Lisa L. Gezon, and Michael Watts. 2003. Locating the Political in Political Ecology: An Introduction, Human Organization 62(3): 205-217.
Peet, Richard and Michael Watts. 1993. Introduction: Development Theory and Environment in an Age of Market Triumphalism, Economic Geography 68(3): 227-253.
Peet, Richard, Paul Robbins, and Michael Watts. (eds.) 2011. Global Political Ecology. Routledge.
Peet, Richard and Michael Watts. eds. 1996. Liberation ecologies: environment, development, social movements. Routledge.
Peluso, Nancy Lee. 1992. Rich Forests, Poor People: Resource Control and Resistance in Java. University of California Press.
Peluso Nancy Lee & Michael Watts (eds.). 2001. Violent Environments. Cornell University Press.
Perreault, T., G. Bridge and J. McCarthy (eds.). 2015. Routledge Handbook of Political Ecology. Routledge.
Perry, Richard J. 2003. Five Key Concepts in Anthropological Thinking. Upper Saddle River, NJ: Prentice Hall.
Ritzer, George. 2008. Modern Sociological Theory. Boston: McGraw-Hill.
Robbins, Paul. 2012. Political Ecology: A Critical Introduction. 2nd ed. Blackwell.
Rocheleau, D. 1995. Gender and a Feminist Political Ecology Perspective, IDS Institute for Development Studies 26(1): 9-16.
Salleh, Ariel (ed.) 2009. Eco-Sufficiency & Global Justice: Women write Political Ecology. London: Pluto Press.
Salleh, Ariel. 2017. Ecofeminism in Clive Spash (ed.) Routledge Handbook of Ecological Economics. London: Routledge.
Sayre, Nathan. 2002. Species of Capital: Ranching, Endangered Species, and Urbanization in the Southwest. University of Arizona Press.
Sutton, Mark Q. and E. N. Anderson. 2004. Introduction to Cultural Ecology. Altamira.
Vayda, Andrew P. and Bradley B. Walters. 1999. Against Political Ecology, Human Ecology 27(1): 167-179.
Walker, Peter A. 2005. Political ecology: where is the ecology? Progress in Human Geography 29(1):73–82.
Walker, Peter A. 2006. Political ecology: where is the policy? Progress in Human Geography 30(3): 382-395.
Watts, Michael. 1983 [reprinted 2013]. Silent Violence: Food, Famine and Peasantry in Northern Nigeria. University of California Press.
Watts, Michael. 2000. “Political Ecology.” In Sheppard, E. and T. Barnes (eds.), A Companion to Economic Geography. Blackwell.
Wolf, Eric. 1972. Ownership and Political Ecology, Anthropological Quarterly 45(3): 201-205.
External links
Cultural and Political Ecology Specialty Group of the Association of American Geographers. Archive of newsletters, officers, award and honor recipients, as well as other resources associated with this community of scholars.
Ecology
Anthropology
Political geography
Ecology terminology
Environmental policy
Human-Environment interaction
Crossbar switch

In electronics and telecommunications, a crossbar switch (cross-point switch, matrix switch) is a collection of switches arranged in a matrix configuration. A crossbar switch has multiple input and output lines that form a crossed pattern of interconnecting lines between which a connection may be established by closing a switch located at each intersection, the elements of the matrix. Originally, a crossbar switch consisted literally of crossing metal bars that provided the input and output paths. Later implementations achieved the same switching topology in solid-state electronics. The crossbar switch is one of the principal telephone exchange architectures, together with a rotary switch, memory switch, and a crossover switch.
General properties
A crossbar switch is an assembly of individual switches between a set of inputs and a set of outputs. The switches are arranged in a matrix. If the crossbar switch has M inputs and N outputs, then a crossbar has a matrix with M × N cross-points or places where connections can be made. At each crosspoint is a switch; when closed, it connects one of the inputs to one of the outputs. A given crossbar is a single layer, non-blocking switch. A crossbar switching system is also called a coordinate switching system.
Collections of crossbars can be used to implement multiple-layer and blocking switches. In a blocking switch, a connection already in progress may prevent a free input from reaching a free output; in a non-blocking switch, concurrent connections can always be made between any unused input and any unused output.
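The behaviour of a single non-blocking crossbar can be sketched as an M × N matrix of crosspoint states. This is only a model — a real crossbar closes crosspoints in hardware — and the `crossbar_connect`/`crossbar_disconnect` names are illustrative, not from any particular device:

```c
#include <assert.h>
#include <stdbool.h>

#define M 4 /* inputs  */
#define N 4 /* outputs */

/* state[i][j] == true means the crosspoint joining input i to
   output j is closed. */
static bool state[M][N];

/* Close crosspoint (i, j). It fails only if input i or output j
   is itself already in use, never because of traffic elsewhere. */
bool crossbar_connect(int i, int j) {
    for (int k = 0; k < N; k++) if (state[i][k]) return false; /* input busy  */
    for (int k = 0; k < M; k++) if (state[k][j]) return false; /* output busy */
    state[i][j] = true;
    return true;
}

void crossbar_disconnect(int i, int j) { state[i][j] = false; }
```

Because a connection is refused only when the requested input or output is already occupied, any free input can always reach any free output — the non-blocking property of a single crossbar.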
Applications
Crossbar switches are commonly used in information processing applications such as telephony and circuit switching, but they are also used in applications such as mechanical sorting machines.
The matrix layout of a crossbar switch is also used in some semiconductor memory devices. Here the bars are extremely thin metal wires, and the switches are fusible links. The fuses are blown or opened using high voltage and read using low voltage. Such devices are called programmable read-only memories. At the 2008 NSTI Nanotechnology Conference a paper was presented that discussed a nanoscale crossbar implementation of an adding circuit used as an alternative to logic gates for computation.
Matrix arrays are fundamental to modern flat-panel displays. Thin-film-transistor LCDs have a transistor at each crosspoint, so they could be considered to include a crossbar switch as part of their structure.
For video switching in home and professional theater applications, a crossbar switch (or a matrix switch, as it is more commonly called in this application) is used to distribute the output of multiple video appliances simultaneously to every monitor or every room throughout a building. In a typical installation, all the video sources are located on an equipment rack, and are connected as inputs to the matrix switch.
Where central control of the matrix is practical, a typical rack-mount matrix switch offers front-panel buttons to allow manual connection of inputs to outputs. An example of such a usage might be a sports bar, where numerous programs are displayed simultaneously. Ordinarily, a sports bar would install a separate desk top box for each display for which independent control is desired. The matrix switch enables the operator to route signals at will, so that only enough set top boxes are needed to cover the total number of unique programs to be viewed, while making it easier to control sound from any program in the overall sound system.
Such switches are used in high-end home theater applications. Video sources typically shared include set-top receivers or DVD changers; the same concept applies to audio. The outputs are wired to televisions in individual rooms. The matrix switch is controlled via an Ethernet or RS-232 connection by a whole-house automation controller, such as those made by AMX, Crestron, or Control4, which provides the user interface that enables the user in each room to select which appliance to watch. The actual user interface varies by system brand, and might include a combination of on-screen menus, touch-screens, and handheld remote controls. The system is necessary to enable the user to select the program they wish to watch from the same room they will watch it from, otherwise it would be necessary for them to walk to the equipment rack.
The special crossbar switches used in distributing satellite TV signals are called multiswitches.
Implementations
Historically, a crossbar switch consisted of metal bars associated with each input and output, together with some means of controlling movable contacts at each cross-point. The first switches used metal pins or plugs to bridge a vertical and horizontal bar. In the later part of the 20th century, the use of mechanical crossbar switches declined and the term described any rectangular array of switches in general. Modern crossbar switches are usually implemented with semiconductor technology. An important emerging class of optical crossbars is implemented with microelectromechanical systems (MEMS) technology.
Mechanical
A type of mid-20th-century telegraph exchange consisted of a grid of vertical and horizontal brass bars with a hole at each intersection (c.f. top picture). The operator inserted a metal pin to connect one telegraph line to another.
Electromechanical switching in telephony
A telephony crossbar switch is an electromechanical device for switching telephone calls. The first design of what is now called a crossbar switch was the Bell company Western Electric's coordinate selector of 1915. To save money on control systems, this system was organized on the stepping switch or selector principle rather than the link principle. It was little used in America, but the Swedish governmental agency Televerket manufactured its own design (the Gotthilf Betulander design from 1919, inspired by the Western Electric system), and used it in Sweden from 1926 until the digitization in the 1980s in small and medium-sized A204 model switches. The system design used in AT&T Corporation's 1XB crossbar exchanges, which entered revenue service in 1938, developed by Bell Telephone Labs, was inspired by the Swedish design but was based on the rediscovered link principle. In 1945, a similar design by Swedish Televerket was installed in Sweden, making it possible to increase the capacity of the A204 model switch. Delayed by the Second World War, several million urban 1XB lines were installed from the 1950s in the United States.
In 1950, the Swedish Ericsson company developed their own versions of the 1XB and A204 systems for the international market. In the early 1960s, the company's sales of crossbar switches exceeded those of their rotating 500-switching system, as measured in the number of lines. Crossbar switching quickly spread to the rest of the world, replacing most earlier designs like the Strowger (step-by-step) and Panel systems in larger installations in the U.S. Graduating from entirely electromechanical control on introduction, they were gradually elaborated to have full electronic control and a variety of calling features including short-code and speed-dialing. In the UK the Plessey Company produced a range of TXK crossbar exchanges, but their widespread rollout by the British Post Office began later than in other countries, and then was inhibited by the parallel development of TXE reed relay and electronic exchange systems, so they never achieved a large number of customer connections although they did find some success as tandem switch exchanges.
Crossbar switches use switching matrices made from a two-dimensional array of contacts arranged in an x–y format. These switching matrices are operated by a series of horizontal bars arranged over the contacts. Each such select bar can be rocked up or down by electromagnets to provide access to two levels of the matrix. A second set of vertical hold bars is set at right angles to the first (hence the name, "crossbar") and also operated by electromagnets. The select bars carry spring-loaded wire fingers that enable the hold bars to operate the contacts beneath the bars. When the select and then the hold electromagnets operate in sequence to move the bars, they trap one of the spring fingers to close the contacts beneath the point where two bars cross. This then makes the connection through the switch as part of setting up a calling path through the exchange. Once connected, the select magnet is then released so it can use its other fingers for other connections, while the hold magnet remains energized for the duration of the call to maintain the connection. The crossbar switching interface was referred to as the TXK or TXC (telephone exchange crossbar) switch in the UK.
However, the Bell System Type B crossbar switch of the 1960s was made in the largest quantity. The majority were 200-point switches, with twenty verticals and ten levels of three wires. Each select bar carries ten fingers so that any of the ten circuits assigned to the ten verticals can connect to either of two levels. Five select bars, each able to rotate up or down, mean a choice of ten links to the next stage of switching. Each crosspoint in this particular model connected six wires. The vertical off-normal contacts next to the hold magnets are lined up along the bottom of the switch. They perform logic and memory functions, and the hold bar keeps them in the active position as long as the connection is up. The horizontal off-normals on the sides of the switch are activated by the horizontal bars when the butterfly magnets rotate them. This only happens while the connection is being set up, since the butterflies are only energized then.
The majority of Bell System switches were made to connect three wires, including the tip and ring of a balanced pair circuit and a sleeve lead for control. Many connected six wires, either for two distinct circuits or for a four-wire circuit or other complex connection. The Bell System Type C miniature crossbar of the 1970s was similar, but the fingers projected forward from the back and the select bars held paddles to move them. The majority of Type C switches had twelve levels; ten-level ones were less common. The Northern Electric Minibar used in the SP1 switch was similar but even smaller. The ITT Pentaconta Multiswitch of the same era usually had 22 verticals, 26 levels, and six to twelve wires. Ericsson crossbar switches sometimes had only five verticals.
Instrumentation
For instrumentation use, James Cunningham, Son and Company made high-speed, very-long-life crossbar switches with physically small mechanical parts which permitted faster operation than telephone-type crossbar switches. Many of their switches had the mechanical Boolean AND function of telephony crossbar switches, but other models had individual relays (one coil per crosspoint) in matrix arrays, connecting the relay contacts to [x] and [y] buses. These latter types were equivalent to separate relays; there was no logical AND function built in. Cunningham crossbar switches had precious-metal contacts capable of handling millivolt signals.
Telephone exchange
Early crossbar exchanges were divided into an originating side and a terminating side, while the later and prominent Canadian and US SP1 switch and 5XB switch were not. When a user picked up the telephone handset, the resulting line loop operating the user's line relay caused the exchange to connect the user's telephone to an originating sender, which returned the user a dial tone. The sender then recorded the dialed digits and passed them to the originating marker, which selected an outgoing trunk and operated the various crossbar switch stages to connect the calling user to it. The originating marker then passed the trunk call completion requirements (type of pulsing, resistance of the trunk, etc.) and the called party's details to the sender and released. The sender then relayed this information to a terminating sender (which could be on either the same or a different exchange). This sender then used a terminating marker to connect the calling user, via the selected incoming trunk, to the called user, and caused the controlling relay set to send the ring signal to the called user's phone, and return ringing tone to the caller.
The crossbar switch itself was simple: exchange design moved all the logical decision-making to the common control elements, which were very reliable as relay sets. The design criteria specified only two hours of downtime for service every forty years, which was a large improvement over earlier electromechanical systems. The exchange design concept lent itself to incremental upgrades, as the control elements could be replaced separately from the call switching elements. The minimum size of a crossbar exchange was comparatively large, but in city areas with a large installed line capacity the whole exchange occupied less space than other exchange technologies of equivalent capacity. For this reason they were also typically the first switches to be replaced with digital systems, which were even smaller and more reliable.
Two principles of crossbar switching existed. An early method was based on the selector principle, which used crossbar switches to implement the same switching fabric used with Strowger switches. In this principle, each crossbar switch would receive one dialed digit, corresponding to one of several groups of switches or trunks. The switch would then find an idle switch or trunk among those selected and connect to it. Each crossbar switch could only handle one call at a time; thus, an exchange with a hundred 10×10 switches in five stages could only have twenty conversations in progress. Distributed control meant there was no common point of failure, but also meant that the setup stage lasted for the ten seconds or so the caller took to dial the required number. In control occupancy terms this comparatively long interval degrades the traffic capacity of a switch.
Starting with the 1XB switch, the later and more common method was based on the link principle, and used the switches as crosspoints. Each moving contact was connected to the other contacts on the same level by bare-strip wiring, often nicknamed banjo wiring, and wired to a link on one of the inputs of a switch in the next stage. The switch could handle its portion of as many calls as it had levels or verticals. Thus an exchange with forty 10×10 switches in four stages could have one hundred conversations in progress. The link principle was more efficient, but required a complex control system to find idle links through the switching fabric.
This meant common control, as described above: all the digits were recorded, then passed to the common control equipment, the marker, to establish the call at all the separate switch stages simultaneously. A marker-controlled crossbar system had in the marker a highly vulnerable central control; this was invariably protected by having duplicate markers. The great advantage was that the control occupancy on the switches was of the order of one second or less, representing the operate and release lags of the X-then-Y armatures of the switches. The only downside of common control was the need to provide enough digit recorders to deal with the greatest forecast originating traffic level on the exchange.
The Plessey TXK1 or 5005 design used an intermediate form, in which a clear path was marked through the switching fabric by distributed logic, and then closed through all at once.
Crossbar exchanges remain in revenue service only in a few telephone networks. Preserved installations are maintained in museums, such as the Museum of Communications in Seattle, Washington, and the Science Museum in London.
Semiconductor
Semiconductor implementations of crossbar switches typically consist of a set of input amplifiers or retimers connected to a series of interconnects within a semiconductor device. A similar set of interconnects are connected to output amplifiers or retimers. At each cross-point where the bars cross, a pass transistor is implemented which connects the bars. When the pass transistor is enabled, the input is connected to the output.
As computer technologies have improved, crossbar switches have found uses in systems such as the multistage interconnection networks that connect the various processing units in a uniform memory access parallel processor to the array of memory elements.
Arbitration
A standard problem in using crossbar switches is that of setting the crosspoints. In the classic telephony application of crossbars, the crosspoints are closed, and open as the telephone calls come and go. In Asynchronous Transfer Mode or packet switching applications, the crosspoints must be made and broken at each decision interval. In high-speed switches, the settings of all of the crosspoints must be determined and then set millions or billions of times per second. One approach for making these decisions quickly is through the use of a wavefront arbiter.
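A full wavefront arbiter is beyond a short sketch, but the core task — choosing, each decision interval, a set of crosspoints such that every input and every output carries at most one connection — can be illustrated with a simplified per-output round-robin arbiter. All names here are hypothetical, and this sequential scan stands in for logic that real hardware evaluates in parallel:

```c
#include <assert.h>

#define M 4 /* inputs  */
#define N 4 /* outputs */

static int last[N]; /* previously granted input per output, for fairness */

/* request[i][j] != 0 means input i wants output j this interval.
   On return, grant[i][j] marks at most one input per output and
   at most one output per input. */
void arbitrate(int request[M][N], int grant[M][N]) {
    int input_taken[M] = {0};
    for (int j = 0; j < N; j++) {
        for (int i = 0; i < M; i++) grant[i][j] = 0;
        /* Scan inputs starting just after the last winner, so
           grants rotate among competing inputs over time. */
        for (int k = 1; k <= M; k++) {
            int i = (last[j] + k) % M;
            if (request[i][j] && !input_taken[i]) {
                grant[i][j] = 1;
                input_taken[i] = 1;
                last[j] = i;
                break;
            }
        }
    }
}
```

Unlike this loop, a hardware wavefront arbiter resolves all crosspoints in parallel, which is what makes millions or billions of decisions per second feasible.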
See also
Matrix mixer
Nonblocking minimal spanning switch - describes how to combine crossbar switches into larger switches.
RF switch matrix
References
Further reading
External links
Images on an Ericsson ARF crossbar switch
Switches
Telephone exchange equipment
Electronic circuits
Control flow

In computer science, control flow (or flow of control) is the order in which individual statements, instructions or function calls of an imperative program are executed or evaluated. The emphasis on explicit control flow distinguishes an imperative programming language from a declarative programming language.
Within an imperative programming language, a control flow statement is a statement that results in a choice being made as to which of two or more paths to follow. For non-strict functional languages, functions and language constructs exist to achieve the same result, but they are usually not termed control flow statements.
A set of statements is in turn generally structured as a block, which in addition to grouping, also defines a lexical scope.
Interrupts and signals are low-level mechanisms that can alter the flow of control in a way similar to a subroutine, but usually occur as a response to some external stimulus or event (that can occur asynchronously), rather than execution of an in-line control flow statement.
At the level of machine language or assembly language, control flow instructions usually work by altering the program counter. For some central processing units (CPUs), the only control flow instructions available are conditional or unconditional branch instructions, also termed jumps.
Categories
The kinds of control flow statements supported by different languages vary, but can be categorized by their effect:
Continuation at a different statement (unconditional branch or jump)
Executing a set of statements only if some condition is met (choice - i.e., conditional branch)
Executing a set of statements zero or more times, until some condition is met (i.e., loop - the same as conditional branch)
Executing a set of distant statements, after which the flow of control usually returns (subroutines, coroutines, and continuations)
Stopping the program, preventing any further execution (unconditional halt)
Primitives
Labels
A label is an explicit name or number assigned to a fixed position within the source code, and which may be referenced by control flow statements appearing elsewhere in the source code. A label marks a position within source code and has no other effect.
Line numbers are an alternative to a named label used in some languages (such as BASIC). They are whole numbers placed at the start of each line of text in the source code. Languages which use these often impose the constraint that the line numbers must increase in value in each following line, but may not require that they be consecutive. For example, in BASIC:
10 LET X = 3
20 PRINT X
In other languages such as C and Ada, a label is an identifier, usually appearing at the start of a line and immediately followed by a colon. For example, in C:
Success: printf("The operation was successful.\n");
The language ALGOL 60 allowed both whole numbers and identifiers as labels (both linked by colons to the following statement), but few if any other ALGOL variants allowed whole numbers. Early Fortran compilers only allowed whole numbers as labels. Beginning with Fortran 90, alphanumeric labels have also been allowed.
Goto
The goto statement (a combination of the English words go and to, and pronounced accordingly) is the most basic form of unconditional transfer of control.
Although the keyword may either be in upper or lower case depending on the language, it is usually written as:
goto label
The effect of a goto statement is to cause the next statement to be executed to be the statement appearing at (or immediately after) the indicated label.
Goto statements have been considered harmful by many computer scientists, notably Dijkstra.
Subroutines
The terminology for subroutines varies; they may alternatively be known as routines, procedures, functions (especially if they return results) or methods (especially if they belong to classes or type classes).
In the 1950s, computer memories were very small by current standards so subroutines were used mainly to reduce program size. A piece of code was written once and then used many times from various other places in a program.
Today, subroutines are more often used to help make a program more structured, e.g., by isolating some algorithm or hiding some data access method. If many programmers are working on one program, subroutines are one kind of modularity that can help divide the work.
Sequence
In structured programming, the ordered sequencing of successive commands is considered one of the basic control structures, which is used as a building block for programs alongside iteration, recursion and choice.
Minimal structured control flow
In May 1966, Böhm and Jacopini published an article in Communications of the ACM which showed that any program with gotos could be transformed into a goto-free form involving only choice (IF THEN ELSE) and loops (WHILE condition DO xxx), possibly with duplicated code and/or the addition of Boolean variables (true/false flags). Later authors showed that choice can be replaced by loops (and yet more Boolean variables).
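As a minimal sketch of this transformation (the program and its labels are hypothetical), a tiny goto-style program can be encoded in Python using only a loop and a choice, with an auxiliary variable standing in for the program counter:

```python
# A goto-style program as a state machine: the "label to jump to" is held in
# a variable, and one while loop plus if/elif replaces every jump, as the
# Böhm–Jacopini result guarantees is possible.
def run(x):
    label = "start"
    while label != "done":        # the single loop
        if label == "start":      # choice replaces each goto target
            x += 1
            label = "test"
        elif label == "test":
            label = "start" if x < 5 else "done"
    return x
```

The cost predicted by the theorem is visible here: the extra variable `label` and the dispatching `if`/`elif` did not exist in the original jump-based program.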
That such minimalism is possible does not mean that it is necessarily desirable; computers theoretically need only one machine instruction (subtract one number from another and branch if the result is negative), but practical computers have dozens or even hundreds of machine instructions.
Other research showed that control structures with one entry and one exit were much easier to understand than any other form, mainly because they could be used anywhere as a statement without disrupting the control flow. In other words, they were composable. (Later developments, such as non-strict programming languages – and more recently, composable software transactions – have continued this strategy, making components of programs even more freely composable.)
Some academics took a purist approach to the Böhm–Jacopini result and argued that even instructions like break and return from the middle of loops are bad practice as they are not needed in the Böhm–Jacopini proof, and thus they advocated that all loops should have a single exit point. This purist approach is embodied in the language Pascal (designed in 1968–1969), which up to the mid-1990s was the preferred tool for teaching introductory programming in academia. The direct application of the Böhm–Jacopini theorem may result in additional local variables being introduced in the structured chart, and may also result in some code duplication. Pascal is affected by both of these problems and according to empirical studies cited by Eric S. Roberts, student programmers had difficulty formulating correct solutions in Pascal for several simple problems, including writing a function for searching an element in an array. A 1980 study by Henry Shapiro cited by Roberts found that using only the Pascal-provided control structures, the correct solution was given by only 20% of the subjects, while no subject wrote incorrect code for this problem if allowed to write a return from the middle of a loop.
Control structures in practice
Most programming languages with control structures have an initial keyword which indicates the type of control structure involved. Languages then divide as to whether or not control structures have a final keyword.
No final keyword: ALGOL 60, C, C++, Go, Haskell, Java, Pascal, Perl, PHP, PL/I, Python, PowerShell. Such languages need some way of grouping statements together:
ALGOL 60 and Pascal: begin ... end
C, C++, Go, Java, Perl, PHP, and PowerShell: curly brackets { ... }
PL/I: DO ... END
Python: uses indent level (see Off-side rule)
Haskell: either indent level or curly brackets can be used, and they can be freely mixed
Lua: uses do ... end
Final keyword: Ada, APL, ALGOL 68, Modula-2, Fortran 77, Mythryl, Visual Basic. The forms of the final keyword vary:
Ada: final keyword is end + space + initial keyword e.g., if ... end if, loop ... end loop
APL: final keyword is :End optionally + initial keyword, e.g., :If ... :End or :If ... :EndIf, :Select ... :End or :Select ... :EndSelect, however, if adding an end condition, the end keyword becomes :Until
ALGOL 68, Mythryl: initial keyword spelled backwards e.g., if ... fi, case ... esac
Fortran 77: final keyword is END + initial keyword e.g., IF ... ENDIF, DO ... ENDDO
Modula-2: same final keyword END for everything
Visual Basic: every control structure has its own keyword. If ... End If; For ... Next; Do ... Loop; While ... Wend
Choice
If-then-(else) statements
Conditional expressions and conditional constructs are features of a programming language that perform different computations or actions depending on whether a programmer-specified Boolean condition evaluates to true or false.
IF..GOTO. A form found in unstructured languages, mimicking a typical machine code instruction, would jump to (GOTO) a label or line number when the condition was met.
IF..THEN..(ENDIF). Rather than being restricted to a jump, any simple statement, or nested block, could follow the THEN keyword. This is a structured form.
IF..THEN..ELSE..(ENDIF). As above, but with a second action to be performed if the condition is false. This is one of the most common forms, with many variations. Some require a terminal ENDIF, others do not. C and related languages do not require a terminal keyword, or a 'then', but do require parentheses around the condition.
Conditional statements can be and often are nested inside other conditional statements. Some languages allow ELSE and IF to be combined into ELSEIF, avoiding the need to have a series of ENDIF or other final statements at the end of a compound statement.
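For illustration, Python's elif keyword plays this combined ELSE/IF role; the function below is a made-up example:

```python
def sign(x):
    if x > 0:
        return "positive"
    elif x < 0:        # elif avoids nesting a fresh if inside each else
        return "negative"
    else:
        return "zero"
```

Without elif, each extra case would open another nested if block (and, in languages that use them, require its own closing ENDIF).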
Less common variations include:
Some languages, such as early Fortran, have a three-way or arithmetic if, testing whether a numeric value is negative, zero, or positive.
Some languages have a functional form of an if statement, for instance Lisp's cond.
Some languages have an operator form of an if statement, such as C's ternary operator.
Perl supplements a C-style if with when and unless.
Smalltalk uses ifTrue and ifFalse messages to implement conditionals, rather than any fundamental language construct.
Case and switch statements
Switch statements (or case statements, or multiway branches) compare a given value with specified constants and take action according to the first constant to match. There is usually a provision for a default action ("else", "otherwise") to be taken if no match succeeds. Switch statements can allow compiler optimizations, such as lookup tables. In dynamic languages, the cases may not be limited to constant expressions, and might extend to pattern matching, as in a shell script case statement, where the pattern *) implements the default case as a glob matching any string. Case logic can also be implemented in functional form, as in SQL's decode statement.
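In a language without a dedicated switch construct, the same multiway branch can be sketched as an explicit lookup table; this Python function and its status codes are illustrative only:

```python
def describe(code):
    # The dict plays the role of a compiler-generated lookup table;
    # .get() supplies the default ("otherwise") case.
    table = {200: "OK", 404: "not found", 500: "server error"}
    return table.get(code, "unknown")
```

A dictionary dispatch like this is the data-structure analogue of the lookup-table optimization that compilers apply to switch statements.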
Loops
A loop is a sequence of statements which is specified once but which may be carried out several times in succession. The code "inside" the loop (the body of the loop, shown below as xxx) is obeyed a specified number of times, or once for each of a collection of items, or until some condition is met, or indefinitely. When one of those items is itself also a loop, it is called a "nested loop".
In functional programming languages, such as Haskell and Scheme, both recursive and iterative processes are expressed with tail recursive procedures instead of looping constructs that are syntactic.
Count-controlled loops
Most programming languages have constructions for repeating a loop a certain number of times.
In most cases counting can go downwards instead of upwards and step sizes other than 1 can be used.
In these examples, if N < 1 then the body of the loop may execute once (with I having value 1) or not at all, depending on the programming language.
In many programming languages, only integers can be reliably used in a count-controlled loop. Floating-point numbers are represented imprecisely due to hardware constraints, so a loop such as
for X := 0.1 step 0.1 to 1.0 do
might be repeated 9 or 10 times, depending on rounding errors, the hardware, and the compiler version. Furthermore, if the increment of X occurs by repeated addition, accumulated rounding errors may mean that the value of X in each iteration differs quite significantly from the expected sequence 0.1, 0.2, 0.3, ..., 1.0.
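The effect is easy to reproduce. This Python sketch of the loop above accumulates 0.1 by repeated addition; on IEEE-754 doubles it iterates 11 times rather than the intended 10:

```python
# The intended loop "for X := 0.1 step 0.1 to 1.0" run with binary floats.
x = 0.0
count = 0
while x < 1.0:
    x += 0.1      # 0.1 has no exact binary representation
    count += 1
# After ten additions x == 0.9999999999999999, still < 1.0, so an
# eleventh iteration runs; count ends up 11, not 10.
```

This is why count-controlled loops should use integers: `for i in range(10)` executes exactly 10 times regardless of hardware.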
Condition-controlled loops
Most programming languages have constructions for repeating a loop until some condition changes. Some variations test the condition at the start of the loop; others test it at the end. If the test is at the start, the body may be skipped completely; if it is at the end, the body is always executed at least once.
A control break is a value change detection method used within ordinary loops to trigger processing for groups of values. Values are monitored within the loop and a change diverts program flow to the handling of the group event associated with them.
DO UNTIL (End-of-File)
   IF new-zipcode <> current-zipcode
      display_tally(current-zipcode, zipcount)
      current-zipcode = new-zipcode
      zipcount = 0
   ENDIF
   zipcount++
LOOP
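The same control-break logic can be sketched in Python with itertools.groupby, which starts a new group at each change of value (the zipcode records below are made-up sample data, sorted as a control break requires):

```python
from itertools import groupby

# Hypothetical input: zipcodes already sorted, one record per entry.
records = ["10001", "10001", "10002", "10002", "10002", "30301"]

tallies = []
for zipcode, group in groupby(records):      # value change => new group
    tallies.append((zipcode, sum(1 for _ in group)))  # the "group event"
print(tallies)
```

Each (zipcode, count) pair corresponds to one firing of the group-change handling in the pseudocode above.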
Collection-controlled loops
Several programming languages (e.g., Ada, D, C++11, Smalltalk, PHP, Perl, Object Pascal, Java, C#, MATLAB, Visual Basic, Ruby, Python, JavaScript, Fortran 95 and later) have special constructs which allow implicit looping through all elements of an array, or all members of a set or collection.
Smalltalk: someCollection do: [:eachElement |xxx].
Object Pascal: for Item in Collection do begin xxx end;
D: foreach (item; myCollection) { xxx }
Perl: foreach someArray { xxx }
PHP: foreach ($someArray as $k => $v) { xxx }
Java: Collection<String> coll; for (String s : coll) {}
C#: foreach (string s in myStringCollection) { xxx }
PowerShell: someCollection | ForEach-Object { $_ }
Fortran: forall ( index = first:last:step... )
Scala has for-expressions, which generalise collection-controlled loops, and also support other uses, such as asynchronous programming. Haskell has do-expressions and comprehensions, which together provide similar functionality to for-expressions in Scala.
General iteration
General iteration constructs such as C's for statement and Common Lisp's do form can be used to express any of the above sorts of loops, and others, such as looping over some number of collections in parallel. Where a more specific looping construct can be used, it is usually preferred over the general iteration construct, since it often makes the purpose of the expression clearer.
Infinite loops
Infinite loops are used to assure a program segment loops forever or until an exceptional condition arises, such as an error. For instance, an event-driven program (such as a server) should loop forever, handling events as they occur, only stopping when the process is terminated by an operator.
Infinite loops can be implemented using other control flow constructs. Most commonly, in unstructured programming this is jump back up (goto), while in structured programming this is an indefinite loop (while loop) set to never end, either by omitting the condition or explicitly setting it to true, as while (true) .... Some languages have special constructs for infinite loops, typically by omitting the condition from an indefinite loop. Examples include Ada (loop ... end loop), Fortran (DO ... END DO), Go (for { ... }), and Ruby (loop do ... end).
Often, an infinite loop is unintentionally created by a programming error in a condition-controlled loop, wherein the loop condition uses variables that never change within the loop.
Continuation with next iteration
Sometimes within the body of a loop there is a desire to skip the remainder of the loop body and continue with the next iteration of the loop. Some languages provide a statement such as continue (most languages), skip, cycle (Fortran), or next (Perl and Ruby), which will do this. The effect is to prematurely terminate the innermost loop body and then resume as normal with the next iteration. If the iteration is the last one in the loop, the effect is to terminate the entire loop early.
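A brief Python illustration (summing odd numbers; the filter condition is arbitrary):

```python
total = 0
for n in range(10):
    if n % 2 == 0:
        continue      # skip the rest of the body; resume with the next n
    total += n        # reached only for odd n
assert total == 25    # 1 + 3 + 5 + 7 + 9
```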
Redo current iteration
Some languages, like Perl and Ruby, have a redo statement that restarts the current iteration from the start.
Restart loop
Ruby has a retry statement that restarts the entire loop from the initial iteration.
Early exit from loops
When using a count-controlled loop to search through a table, it might be desirable to stop searching as soon as the required item is found. Some programming languages provide a statement such as break (most languages), Exit (Visual Basic), or last (Perl), whose effect is to terminate the current loop immediately and transfer control to the statement immediately after that loop. Another term for early-exit loops is loop-and-a-half.
The following example is done in Ada which supports both early exit from loops and loops with test in the middle. Both features are very similar and comparing both code snippets will show the difference: early exit must be combined with an if statement while a condition in the middle is a self-contained construct.
with Ada.Text_IO;
with Ada.Integer_Text_IO;
procedure Print_Squares is
   X : Integer;
begin
   Read_Data : loop
      Ada.Integer_Text_IO.Get(X);
      exit Read_Data when X = 0;
      Ada.Text_IO.Put (X * X);
      Ada.Text_IO.New_Line;
   end loop Read_Data;
end Print_Squares;
Python supports conditional execution of code depending on whether a loop was exited early (with a break statement) or not by using an else-clause with the loop. For example,
for n in set_of_numbers:
if isprime(n):
print("Set contains a prime number")
break
else:
print("Set did not contain any prime numbers")
The else clause in the above example is linked to the for statement, and not the inner if statement. Both Python's for and while loops support such an else clause, which is executed only if early exit of the loop has not occurred.
Some languages support breaking out of nested loops; in theory circles, these are called multi-level breaks. One common use example is searching a multi-dimensional table. This can be done either via multilevel breaks (break out of N levels), as in bash and PHP, or via labeled breaks (break out and continue at given label), as in Go, Java and Perl. Alternatives to multilevel breaks include single breaks, together with a state variable which is tested to break out another level; exceptions, which are caught at the level being broken out to; placing the nested loops in a function and using return to effect termination of the entire nested loop; or using a label and a goto statement. C does not include a multilevel break, and the usual alternative is to use a goto to implement a labeled break. Python does not have a multilevel break or continue – this was proposed in PEP 3136, and rejected on the basis that the added complexity was not worth the rare legitimate use.
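One of the alternatives listed above, placing the nested loops in a function and using return, can be sketched in Python (the table search is an illustrative example):

```python
# Python lacks a multi-level break (PEP 3136 was rejected), so a function
# whose return statement terminates both loops at once is a common idiom.
def find(table, target):
    for i, row in enumerate(table):
        for j, value in enumerate(row):
            if value == target:
                return (i, j)    # exits both loops and the function
    return None                  # loops completed without a match
```

Compared with a flag variable tested at each level, the return makes the early exit explicit at the point where the item is found.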
The notion of multi-level breaks is of some interest in theoretical computer science, because it gives rise to what is today called the Kosaraju hierarchy. In 1973 S. Rao Kosaraju refined the structured program theorem by proving that it is possible to avoid adding additional variables in structured programming, as long as arbitrary-depth, multi-level breaks from loops are allowed. Furthermore, Kosaraju proved that a strict hierarchy of programs exists: for every integer n, there exists a program containing a multi-level break of depth n that cannot be rewritten as a program with multi-level breaks of depth less than n without introducing added variables.
One can also return out of a subroutine executing the looped statements, breaking out of both the nested loop and the subroutine. There are other proposed control structures for multiple breaks, but these are generally implemented as exceptions instead.
In his 2004 textbook, David Watt uses Tennent's notion of sequencer to explain the similarity between multi-level breaks and return statements. Watt notes that a class of sequencers known as escape sequencers, defined as "sequencer that terminates execution of a textually enclosing command or procedure", encompasses both breaks from loops (including multi-level breaks) and return statements. As commonly implemented, however, return sequencers may also carry a (return) value, whereas the break sequencer as implemented in contemporary languages usually cannot.
Loop variants and invariants
Loop variants and loop invariants are used to express correctness of loops.
In practical terms, a loop variant is an integer expression which has an initial non-negative value. The variant's value must decrease during each loop iteration but must never become negative during the correct execution of the loop. Loop variants are used to guarantee that loops will terminate.
A loop invariant is an assertion which must be true before the first loop iteration and remain true after each iteration. This implies that when a loop terminates correctly, both the exit condition and the loop invariant are satisfied. Loop invariants are used to monitor specific properties of a loop during successive iterations.
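As an illustration, this Python sketch of integer division by repeated subtraction checks a loop invariant and a decreasing variant with assert statements (the asserts are for exposition only; they are not how Eiffel or JML express these annotations):

```python
def divide(a, b):
    # Computes quotient and remainder of a // b for a >= 0, b > 0.
    q, r = 0, a
    while r >= b:
        assert a == q * b + r and r >= 0   # invariant: true before each pass
        variant_before = r                 # variant: the non-negative value r
        r -= b
        q += 1
        assert r < variant_before          # variant strictly decreases,
                                           # so the loop must terminate
    return q, r
```

When the loop exits, both the invariant (a == q*b + r) and the exit condition (r < b) hold, which together establish correctness of the result.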
Some programming languages, such as Eiffel contain native support for loop variants and invariants. In other cases, support is an add-on, such as the Java Modeling Language's specification for loop statements in Java.
Loop sublanguage
Some Lisp dialects provide an extensive sublanguage for describing loops. An early example can be found in Conversational LISP of Interlisp. Common Lisp provides a loop macro which implements such a sublanguage.
Loop system cross-reference table
while (true) does not count as an infinite loop for this purpose, because it is not a dedicated language structure.
C's for (init; test; increment) loop is a general loop construct, not specifically a counting one, although it is often used for that.
Deep breaks may be accomplished in APL, C, C++ and C# through the use of labels and gotos.
Iteration over objects was added in PHP 5.
A counting loop can be simulated by iterating over an incrementing list or generator, for instance, Python's range().
Deep breaks may be accomplished through the use of exception handling.
There is no special construct, since the while function can be used for this.
There is no special construct, but users can define general loop functions.
The C++11 standard introduced the range-based for. In the STL, there is a std::for_each template function which can iterate on STL containers and call a unary function for each element. The functionality can also be constructed as a macro on these containers.
Count-controlled looping is effected by iteration across an integer interval; early exit by including an additional condition for exit.
Eiffel supports a reserved word retry, however it is used in exception handling, not loop control.
Requires Java Modeling Language (JML) behavioral interface specification language.
Requires loop variants to be integers; transfinite variants are not supported.
D supports infinite collections, and the ability to iterate over those collections. This does not require any special construct.
Deep breaks can be achieved using GO TO and procedures.
Common Lisp predates the concept of a generic collection type.
Structured non-local control flow
Many programming languages, especially those favoring more dynamic styles of programming, offer constructs for non-local control flow. These cause the flow of execution to jump out of a given context and resume at some predeclared point. Conditions, exceptions and continuations are three common sorts of non-local control constructs; more exotic ones also exist, such as generators, coroutines and the async keyword.
Conditions
The earliest Fortran compilers had statements for testing exceptional conditions. These included the IF ACCUMULATOR OVERFLOW, IF QUOTIENT OVERFLOW, and IF DIVIDE CHECK statements. In the interest of machine independence, they were not included in FORTRAN IV and the Fortran 66 Standard. However since Fortran 2003 it is possible to test for numerical issues via calls to functions in the IEEE_EXCEPTIONS module.
PL/I has some 22 standard conditions (e.g., ZERODIVIDE, SUBSCRIPTRANGE, ENDFILE) which can be raised and which can be intercepted by an ON condition action statement; programmers can also define and use their own named conditions.
Like the unstructured if, only one statement can be specified, so in many cases a GOTO is needed to decide where the flow of control should resume.
Unfortunately, some implementations had a substantial overhead in both space and time (especially SUBSCRIPTRANGE), so many programmers tried to avoid using conditions.
Common Syntax examples:
ON condition GOTO label
Exceptions
Modern languages have a specialized structured construct for exception handling which does not rely on the use of GOTO or (multi-level) breaks or returns. For example, in C++ one can write:
try {
xxx1 // Somewhere in here
xxx2 // use: throw someValue;
xxx3
} catch (someClass& someId) { // catch value of someClass
actionForSomeClass
} catch (someType& anotherId) { // catch value of someType
actionForSomeType
} catch (...) { // catch anything not already caught
actionForAnythingElse
}
Any number and variety of catch clauses can be used above. If there is no catch matching a particular throw, control percolates back through subroutine calls and/or nested blocks until a matching catch is found or until the end of the main program is reached, at which point the program is forcibly stopped with a suitable error message.
Via C++'s influence, catch is the keyword reserved for declaring a pattern-matching exception handler in other languages popular today, like Java or C#. Some other languages like Ada use the keyword exception to introduce an exception handler and then may even employ a different keyword (when in Ada) for the pattern matching. A few languages like AppleScript incorporate placeholders in the exception handler syntax to automatically extract several pieces of information when the exception occurs. This approach is exemplified below by the on error construct from AppleScript:
try
set myNumber to myNumber / 0
on error e number n from f to t partial result pr
if ( e = "Can't divide by zero" ) then display dialog "You must not do that"
end try
David Watt's 2004 textbook also analyzes exception handling in the framework of sequencers (introduced in this article in the section on early exits from loops). Watt notes that an abnormal situation, generally exemplified with arithmetic overflows or input/output failures like file not found, is a kind of error that "is detected in some low-level program unit, but [for which] a handler is more naturally located in a high-level program unit". For example, a program might contain several calls to read files, but the action to perform when a file is not found depends on the meaning (purpose) of the file in question to the program and thus a handling routine for this abnormal situation cannot be located in low-level system code. Watt further notes that introducing status-flag testing in the caller, as single-exit structured programming or even (multi-exit) return sequencers would entail, results in a situation where "the application code tends to get cluttered by tests of status flags" and that "the programmer might forgetfully or lazily omit to test a status flag. In fact, abnormal situations represented by status flags are by default ignored!" Watt notes that in contrast to status flags testing, exceptions have the opposite default behavior, causing the program to terminate unless the program deals with the exception explicitly in some way, possibly by adding explicit code to ignore it. Based on these arguments, Watt concludes that jump sequencers or escape sequencers are less suitable than a dedicated exception sequencer with the semantics discussed above.
In Object Pascal, D, Java, C#, and Python a finally clause can be added to the try construct. No matter how control leaves the try the code inside the finally clause is guaranteed to execute. This is useful when writing code that must relinquish an expensive resource (such as an opened file or a database connection) when finished processing:
FileStream stm = null; // C# example
try
{
stm = new FileStream("logfile.txt", FileMode.Create);
return ProcessStuff(stm); // may throw an exception
}
finally
{
if (stm != null)
stm.Close();
}
Since this pattern is fairly common, C# has a special syntax:
using (var stm = new FileStream("logfile.txt", FileMode.Create))
{
return ProcessStuff(stm); // may throw an exception
}
Upon leaving the using-block, the compiler guarantees that the stm object is released, effectively binding the variable to the file stream while abstracting from the side effects of initializing and releasing the file. Python's with statement and Ruby's block argument to File.open are used to similar effect.
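A minimal sketch of Python's with statement used this way (the file name is a placeholder, written under the system temporary directory):

```python
import os
import tempfile

# The with statement guarantees the file is closed on any exit path,
# normal or exceptional, like the try/finally pattern above.
path = os.path.join(tempfile.gettempdir(), "logfile.txt")
with open(path, "w") as stm:
    stm.write("processing started\n")
# Leaving the block has already closed the file.
assert stm.closed
```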
All the languages mentioned above define standard exceptions and the circumstances under which they are thrown. Users can throw exceptions of their own; C++ allows users to throw and catch almost any type, including basic types like int, whereas other languages like Java are less permissive.
Continuations
Async
C# 5.0 introduced the async keyword for supporting asynchronous I/O in a "direct style".
Generators
Generators, also known as semicoroutines, allow control to be yielded to a consumer method temporarily, typically using a yield keyword. Like the async keyword, this supports programming in a "direct style".
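A minimal Python generator, where yield suspends the function and hands a value to the consumer:

```python
def countdown(n):
    while n > 0:
        yield n      # suspend here; control returns to the consumer
        n -= 1       # execution resumes at this point on the next request
```

Each request for the next value (e.g., via list() or a for loop) resumes the function exactly where it left off, so the loop's state is kept in ordinary local variables rather than in an explicit iterator object.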
Coroutines
Coroutines are functions that can yield control to each other - a form of co-operative multitasking without threads.
Coroutines can be implemented as a library if the programming language provides either continuations or generators - so the distinction between coroutines and generators in practice is a technical detail.
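A sketch of this cooperative control transfer built on Python generators (the producer/consumer split is an illustrative example):

```python
received = []

def consumer():
    # A semicoroutine-style consumer: suspends at yield until sent a value.
    while True:
        item = yield            # yield control back, wait for the producer
        received.append(item)

c = consumer()
next(c)                         # prime the generator up to its first yield
for item in [1, 2, 3]:
    c.send(item)                # transfer control (and a value) to the consumer
```

Control alternates between the producing loop and the consumer at every send(), with no threads involved.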
Non-local control flow cross reference
Proposed control structures
In a spoof Datamation article in 1973, R. Lawrence Clark suggested that the GOTO statement could be replaced by the COMEFROM statement, and provided some entertaining examples. COMEFROM was implemented in the esoteric programming language INTERCAL.
Donald Knuth's 1974 article "Structured Programming with go to Statements", identifies two situations which were not covered by the control structures listed above, and gave examples of control structures which could handle these situations. Despite their utility, these constructs have not yet found their way into mainstream programming languages.
Loop with test in the middle
The following was proposed by Dahl in 1972:
loop
   xxx1
while test;
   xxx2
repeat;

For example, copying characters until end of file:

loop
   read(char);
while not atEndOfFile;
   write(char);
repeat;
If xxx1 is omitted, we get a loop with the test at the top (a traditional while loop). If xxx2 is omitted, we get a loop with the test at the bottom, equivalent to a do while loop in many languages. If while is omitted, we get an infinite loop. The construction here can be thought of as a do loop with the while check in the middle. Hence this single construction can replace several constructions in most programming languages.
Languages lacking this construct generally emulate it using an equivalent infinite-loop-with-break idiom:
while (true) {
xxx1
if (not test)
break
xxx2
}
A possible variant is to allow more than one while test; within the loop, but the use of exitwhen (see next section) appears to cover this case better.
In Ada, the above loop construct (loop-while-repeat) can be represented using a standard infinite loop (loop - end loop) that has an exit when clause in the middle (not to be confused with the exitwhen statement in the following section).
with Ada.Text_IO;
with Ada.Integer_Text_IO;
procedure Print_Squares is
   X : Integer;
begin
   Read_Data : loop
      Ada.Integer_Text_IO.Get(X);
      exit Read_Data when X = 0;
      Ada.Text_IO.Put (X * X);
      Ada.Text_IO.New_Line;
   end loop Read_Data;
end Print_Squares;
Naming a loop (like Read_Data in this example) is optional but permits leaving the outer loop of several nested loops.
Multiple early exit/exit from nested loops
This construct was proposed by Zahn in 1974. A modified version is presented here.
exitwhen EventA or EventB or EventC;
xxx
exits
EventA: actionA
EventB: actionB
EventC: actionC
endexit;
exitwhen is used to specify the events which may occur within xxx; their occurrence is indicated by using the name of the event as a statement. When some event does occur, the relevant action is carried out, and then control passes to just after endexit. This construction provides a very clear separation between determining that some situation applies, and the action to be taken for that situation.
exitwhen is conceptually similar to exception handling, and exceptions or similar constructs are used for this purpose in many languages.
The following simple example involves searching a two-dimensional table for a particular item.
exitwhen found or missing;
for I := 1 to N do
for J := 1 to M do
if table[I,J] = target then found;
missing;
exits
found: print ("item is in table");
missing: print ("item is not in table");
endexit;
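Since exitwhen is conceptually similar to exception handling, the search above can be emulated in Python with exceptions standing in for the named events (Found and Missing are hypothetical classes introduced for this sketch):

```python
# Zahn's events become exception classes; raising one plays the role of
# naming the event as a statement, and the except clauses are the "exits".
class Found(Exception): pass
class Missing(Exception): pass

def report(table, target):
    try:
        for row in table:              # the nested search loops (xxx)
            for value in row:
                if value == target:
                    raise Found        # event: item located
        raise Missing                  # event: loops finished without a hit
    except Found:
        return "item is in table"
    except Missing:
        return "item is not in table"
```

As in Zahn's construct, detecting the situation (inside the loops) is kept separate from the action taken for it (in the handlers).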
Security
One way to attack a piece of software is to redirect the flow of execution of a program. A variety of control-flow integrity techniques, including stack canaries, buffer overflow protection, shadow stacks, and vtable pointer verification, are used to defend against these attacks.
See also
Branch (computer science)
Control-flow analysis
Control-flow diagram
Control-flow graph
Control table
Coroutine
Cyclomatic complexity
Drakon-chart
Flowchart
Goto
Jeroo, helps learn control structures
Main loop
Recursion
Scheduling (computing)
Spaghetti code
Structured programming
Subroutine
Switch statement, alters control flow conditionally
Zahn's construct
Notes
References
Further reading
Hoare, C. A. R. "Partition: Algorithm 63," "Quicksort: Algorithm 64," and "Find: Algorithm 65." Comm. ACM 4, 321–322, 1961.
External links
Go To Statement Considered Harmful
A Linguistic Contribution of GOTO-less Programming
Iteration in programming
Programming language comparisons
Recursion
Articles with example Ada code
Articles with example ALGOL 60 code
Articles with example ALGOL 68 code
Articles with example C code
Articles with example C++ code
Articles with example C Sharp code
Articles with example D code
Articles with example Fortran code
Articles with example Haskell code
Articles with example Java code
Articles with example JavaScript code
Articles with example Lisp (programming language) code
Articles with example MATLAB/Octave code
Articles with example Pascal code
Articles with example Perl code
Articles with example PHP code
Articles with example Python (programming language) code
Articles with example Ruby code
Articles with example Smalltalk code | Control flow | [
"Mathematics",
"Technology"
] | 7,549 | [
"Mathematical logic",
"Recursion",
"Computing comparisons",
"Programming language comparisons"
] |
45,468 | https://en.wikipedia.org/wiki/Pareto%20efficiency | In welfare economics, a Pareto improvement formalizes the idea of an outcome being "better in every possible way". A change is called a Pareto improvement if it leaves at least one person in society better-off without leaving anyone else worse off than they were before. A situation is called Pareto efficient or Pareto optimal if all possible Pareto improvements have already been made; in other words, there are no longer any ways left to make one person better-off, without making some other person worse-off.
In social choice theory, the same concept is sometimes called the unanimity principle, which says that if everyone in a society (non-strictly) prefers A to B, society as a whole also non-strictly prefers A to B. The Pareto front consists of all Pareto-efficient situations.
In addition to the context of efficiency in allocation, the concept of Pareto efficiency also arises in the context of efficiency in production vs. x-inefficiency: a set of outputs of goods is Pareto-efficient if there is no feasible re-allocation of productive inputs such that output of one product increases while the outputs of all other goods either increase or remain the same.
Besides economics, the notion of Pareto efficiency has also been applied to selecting alternatives in engineering and biology. Each option is first assessed, under multiple criteria, and then a subset of options is identified with the property that no other option can categorically outperform the specified option. It is a statement of impossibility of improving one variable without harming other variables in the subject of multi-objective optimization (also termed Pareto optimization).
History
The concept is named after Vilfredo Pareto (1848–1923), an Italian civil engineer and economist, who used the concept in his studies of economic efficiency and income distribution.
Pareto originally used the word "optimal" for the concept, but this is somewhat of a misnomer: Pareto's concept more closely aligns with an idea of "efficiency", because it does not identify a single "best" (optimal) outcome. Instead, it only identifies a set of outcomes that might be considered optimal, by at least one person.
Overview
Formally, a state is Pareto-optimal if there is no alternative state where at least one participant's well-being is higher, and nobody else's well-being is lower. If there is a state change that satisfies this condition, the new state is called a "Pareto improvement". When no Pareto improvements are possible, the state is a "Pareto optimum".
In other words, Pareto efficiency is when it is impossible to make one party better off without making another party worse off. This state indicates that resources can no longer be allocated in a way that makes one party better off without harming other parties. In a state of Pareto efficiency, resources are allocated in the most efficient way possible.
Pareto efficiency is mathematically represented when, for a strategy profile s, there is no other strategy profile s' such that ui(s') ≥ ui(s) for every player i and uj(s') > uj(s) for some player j. In this notation, s and s' are strategy profiles, ui is the utility or benefit of player i, and j is some player.
Efficiency is an important criterion for judging behavior in a game. In a notable and often analyzed game known as Prisoner's Dilemma, depicted below as a normal-form game, this concept of efficiency can be observed, in that the strategy profile (Cooperate, Cooperate) is more efficient than (Defect, Defect).

               Cooperate    Defect
  Cooperate    (-1, -1)     (-5, 0)
  Defect       (0, -5)      (-2, -2)
Using the definition above, let s be (Defect, Defect), with utility profile (-2, -2), and let s' be (Cooperate, Cooperate), with utility profile (-1, -1). Then ui(s') > ui(s) for all i. Thus Both Cooperate is a Pareto improvement over Both Defect, which means that Both Defect is not Pareto-efficient. Furthermore, neither of the remaining strategy profiles, with utility profiles (0, -5) and (-5, 0), is a Pareto improvement over Both Cooperate, since -5 < -1. Thus Both Cooperate is Pareto-efficient.
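The dominance checks in this example can be carried out mechanically. A small sketch (the encoding of outcomes as utility tuples is an illustration, not from the article):

```python
def pareto_dominates(s_new, s_old):
    """True if s_new makes no player worse off and some player better off."""
    return (all(a >= b for a, b in zip(s_new, s_old))
            and any(a > b for a, b in zip(s_new, s_old)))

# Utility profiles of the four Prisoner's Dilemma outcomes.
profiles = {
    "Both Cooperate": (-1, -1),
    "Both Defect": (-2, -2),
    "Cooperate, Defect": (-5, 0),
    "Defect, Cooperate": (0, -5),
}

# (Cooperate, Cooperate) is a Pareto improvement over (Defect, Defect)...
print(pareto_dominates(profiles["Both Cooperate"], profiles["Both Defect"]))  # True
# ...and nothing dominates (Cooperate, Cooperate), so it is Pareto-efficient.
print(any(pareto_dominates(p, profiles["Both Cooperate"])
          for p in profiles.values()))  # False
```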
In zero-sum games, every outcome is Pareto-efficient.
A special case of a state is an allocation of resources. The formal presentation of the concept in an economy is the following: Consider an economy with n agents and k goods. Then an allocation {x1, ..., xn}, where xi ∈ R^k for all i, is Pareto-optimal if there is no other feasible allocation {x1', ..., xn'} where, for utility function ui for each agent i, ui(xi') ≥ ui(xi) for all i, with uj(xj') > uj(xj) for some j. Here, in this simple economy, "feasibility" refers to an allocation where the total amount of each good that is allocated sums to no more than the total amount of the good in the economy. In a more complex economy with production, an allocation would consist both of consumption vectors and production vectors, and feasibility would require that the total amount of each consumed good is no greater than the initial endowment plus the amount produced.
Under the assumptions of the first welfare theorem, a competitive market leads to a Pareto-efficient outcome. This result was first demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu. However, the result only holds under the assumptions of the theorem: markets exist for all possible goods, there are no externalities, markets are perfectly competitive, and market participants have perfect information.
In the absence of perfect information or complete markets, outcomes will generally be Pareto-inefficient, per the Greenwald–Stiglitz theorem.
The second welfare theorem is essentially the reverse of the first welfare theorem. It states that under similar, ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free market system, although it may also require a lump-sum transfer of wealth.
Pareto efficiency and market failure
An ineffective distribution of resources in a free market is known as market failure. Given that there is room for improvement, market failure implies Pareto inefficiency.
For instance, excessive use of negative commodities (such as drugs and cigarettes) results in expenses to non-smokers as well as early mortality for smokers. Cigarette taxes may help individuals stop smoking while also raising money to address ailments brought on by smoking.
Pareto efficiency and equity
A Pareto improvement may be seen, but this does not always imply that the result is desirable or equitable; inequality could still exist after a Pareto improvement. Once a Pareto-efficient state is reached, however, any further change will violate the "do no harm" principle, because at least one person will be made worse off.
A society may be Pareto-efficient but have significant levels of inequality. If there were three persons and a pie, the most equitable course of action would be to split the pie into three equal portions. Yet splitting it in half and giving it to two individuals would also be considered Pareto-efficient: the third person does not lose out (even if he does not partake in the pie), and he could only be made better off by taking pie from the others.
Pareto efficiency occurs on the production-possibility frontier. When an economy is functioning on the frontier, such as at point A, B, or C, it is impossible to raise the output of goods without decreasing the output of services.
Pareto order
If multiple sub-goals f1, ..., fm (with m > 1) exist, combined into a vector-valued objective function f = (f1, ..., fm), finding a unique optimum generally becomes challenging. This is due to the absence of a total order relation on R^m which would not always prioritize one target over another target (like the lexicographical order does). In the multi-objective optimization setting, various solutions can be "incomparable", as there is no total order relation to facilitate the comparison. Only the Pareto order is applicable:
Consider a vector-valued minimization problem: y Pareto dominates y' if and only if yi ≤ yi' for all i = 1, ..., m, and yj < yj' for some j. We then write y ≺ y', where ≺ is the Pareto order. This means that y is not worse than y' in any goal but is better (since smaller) in at least one goal j. The Pareto order is a strict partial order, though it is not a product order (neither non-strict nor strict).
If f(x) ≺ f(x'), then this defines a preorder in the search space, and we say that x Pareto dominates the alternative x' and write x ≺ x'.
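The Pareto order for minimization, and the resulting set of non-dominated points, can be computed directly. A brief sketch (the example point set is hypothetical):

```python
def pareto_dominates(y, y_alt):
    """y precedes y_alt under minimization: no coordinate worse, one strictly better."""
    return (all(a <= b for a, b in zip(y, y_alt))
            and any(a < b for a, b in zip(y, y_alt)))

def pareto_set(points):
    """Points not dominated by any other point (the Pareto front)."""
    return [p for p in points if not any(pareto_dominates(q, p) for q in points)]

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (2, 5)]
print(pareto_set(pts))  # [(1, 4), (2, 2), (4, 1)]
```

Note that (1, 4) and (4, 1) are incomparable under the Pareto order, so both survive; only the dominated points (3, 3) and (2, 5) are removed.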
Variants
Weak Pareto efficiency
Weak Pareto efficiency is a situation that cannot be strictly improved for every individual.
Formally, a strong Pareto improvement is defined as a situation in which all agents are strictly better-off (in contrast to just "Pareto improvement", which requires that one agent is strictly better-off and the other agents are at least as good). A situation is weak Pareto-efficient if it has no strong Pareto improvements.
Any strong Pareto improvement is also a weak Pareto improvement. The opposite is not true; for example, consider a resource allocation problem with two resources, which Alice values at {10, 0}, and George values at {5, 5}. Consider the allocation giving all resources to Alice, where the utility profile is (10, 0):
It is a weak PO, since no other allocation is strictly better to both agents (there are no strong Pareto improvements).
But it is not a strong PO, since the allocation in which George gets the second resource is strictly better for George and weakly better for Alice (it is a weak Pareto improvement); its utility profile is (10, 5).
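The weak-versus-strong distinction in this example can be checked by enumerating the four discrete allocations of the two resources. The enumeration below is an illustrative sketch; the utility profiles follow from the valuations above:

```python
def pareto_improvement(new, old):
    """No agent worse off, at least one strictly better off."""
    return (all(a >= b for a, b in zip(new, old))
            and any(a > b for a, b in zip(new, old)))

def strong_pareto_improvement(new, old):
    """Every agent strictly better off."""
    return all(a > b for a, b in zip(new, old))

# (Alice utility, George utility) for each allocation of the two resources,
# given Alice's valuations (10, 0) and George's valuations (5, 5).
allocations = [(10, 0),   # both resources to Alice
               (0, 10),   # both resources to George
               (10, 5),   # resource 1 to Alice, resource 2 to George
               (0, 5)]    # resource 1 to George, resource 2 to Alice

base = (10, 0)  # all resources to Alice
# Weakly Pareto-efficient: no allocation strictly improves on it for everyone.
print(any(strong_pareto_improvement(p, base) for p in allocations))  # False
# But not (strongly) Pareto-efficient: (10, 5) is a weak Pareto improvement.
print(pareto_improvement((10, 5), base))  # True
```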
A market does not require local nonsatiation to get to a weak Pareto optimum.
Constrained Pareto efficiency
Constrained Pareto efficiency is a weakening of Pareto optimality, accounting for the fact that a potential planner (e.g., the government) may not be able to improve upon a decentralized market outcome, even if that outcome is inefficient. This will occur if it is limited by the same informational or institutional constraints as are individual agents.
An example is of a setting where individuals have private information (for example, a labor market where the worker's own productivity is known to the worker but not to a potential employer, or a used-car market where the quality of a car is known to the seller but not to the buyer) which results in moral hazard or an adverse selection and a sub-optimal outcome. In such a case, a planner who wishes to improve the situation is unlikely to have access to any information that the participants in the markets do not have. Hence, the planner cannot implement allocation rules which are based on the idiosyncratic characteristics of individuals; for example, "if a person is of type A, they pay price p1, but if of type B, they pay price p2" (see Lindahl prices). Essentially, only anonymous rules are allowed (of the sort "Everyone pays price p") or rules based on observable behavior; "if any person chooses x at price px, then they get a subsidy of ten dollars, and nothing otherwise". If there exists no allowed rule that can successfully improve upon the market outcome, then that outcome is said to be "constrained Pareto-optimal".
Fractional Pareto efficiency
Fractional Pareto efficiency is a strengthening of Pareto efficiency in the context of fair item allocation. An allocation of indivisible items is fractionally Pareto-efficient (fPE or fPO) if it is not Pareto-dominated even by an allocation in which some items are split between agents. This is in contrast to standard Pareto efficiency, which only considers domination by feasible (discrete) allocations.
As an example, consider an item allocation problem with two items, which Alice values at {3, 2} and George values at {4, 1}. Consider the allocation giving the first item to Alice and the second to George, where the utility profile is (3, 1):
It is Pareto-efficient, since any other discrete allocation (without splitting items) makes someone worse-off.
However, it is not fractionally Pareto-efficient, since it is Pareto-dominated by the allocation giving Alice 1/2 of the first item and the whole second item, and the other 1/2 of the first item to George; its utility profile is (3.5, 2).
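A quick numeric check of this example (the fractional-allocation encoding is illustrative, not from the article):

```python
# Alice values the items (3, 2); George values them (4, 1).
ALICE, GEORGE = (3, 2), (4, 1)

def utility_profile(alice_share):
    """alice_share[i] is the fraction of item i given to Alice;
    George receives the remainder of each item."""
    u_alice = sum(f * v for f, v in zip(alice_share, ALICE))
    u_george = sum((1 - f) * v for f, v in zip(alice_share, GEORGE))
    return (u_alice, u_george)

discrete = utility_profile((1, 0))    # item 1 to Alice, item 2 to George
split = utility_profile((0.5, 1))     # half of item 1 plus item 2 to Alice

print(discrete)  # (3, 1)
print(split)     # (3.5, 2.0)
# The split allocation Pareto-dominates the discrete one:
print(all(s > d for s, d in zip(split, discrete)))  # True
```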
Ex-ante Pareto efficiency
When the decision process is random, such as in fair random assignment or random social choice or fractional approval voting, there is a difference between ex-post and ex-ante Pareto efficiency:
Ex-post Pareto efficiency means that any outcome of the random process is Pareto-efficient.
Ex-ante Pareto efficiency means that the lottery determined by the process is Pareto-efficient with respect to the expected utilities. That is: no other lottery gives a higher expected utility to one agent and at least as high expected utility to all agents.
If some lottery L is ex-ante PE, then it is also ex-post PE. Proof: suppose that one of the ex-post outcomes x of L is Pareto-dominated by some other outcome y. Then, by moving some probability mass from x to y, one attains another lottery L' that ex-ante Pareto-dominates L.
The opposite is not true: ex-ante PE is stronger than ex-post PE. For example, suppose there are two objects: a car and a house. Alice values the car at 2 and the house at 3; George values the car at 2 and the house at 9. Consider the following two lotteries:
With probability 1/2, give car to Alice and house to George; otherwise, give car to George and house to Alice. The expected utility is 2.5 for Alice and 5.5 for George. Both allocations are ex-post PE, since the one who got the car cannot be made better-off without harming the one who got the house.
With probability 1, give car to Alice, then with probability 1/3 give the house to Alice, otherwise give it to George. The expected utility is 3 for Alice and 6 for George. Again, both allocations are ex-post PE.
While both lotteries are ex-post PE, lottery 1 is not ex-ante PE, since it is Pareto-dominated by lottery 2.
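The expected-utility comparison can be verified with exact rational arithmetic. A sketch (the lottery encoding as probability/utility pairs is an assumption of this illustration):

```python
from fractions import Fraction

# Car worth 2 to both agents; house worth 3 to Alice and 9 to George.

def expected_profile(lottery):
    """lottery: list of (probability, (u_alice, u_george)) pairs."""
    return tuple(sum(p * u[i] for p, u in lottery) for i in range(2))

half, third = Fraction(1, 2), Fraction(1, 3)

# Lottery 1: coin flip over who gets the car and who gets the house.
lottery1 = expected_profile([(half, (2, 9)),    # Alice car, George house
                             (half, (3, 2))])   # Alice house, George car
# Lottery 2: Alice always gets the car; house to Alice with probability 1/3.
lottery2 = expected_profile([(third, (2 + 3, 0)),   # Alice gets both objects
                             (1 - third, (2, 9))])  # George gets the house

assert lottery1 == (Fraction(5, 2), Fraction(11, 2))  # 2.5 for Alice, 5.5 for George
assert lottery2 == (3, 6)
# Lottery 2 ex-ante Pareto-dominates lottery 1:
assert all(b > a for a, b in zip(lottery1, lottery2))
```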
Another example involves dichotomous preferences. There are 5 possible outcomes (a, b, c, d, e) and 6 voters. The voters' approval sets are {a, c}, {a, d}, {a, e}, {b, c}, {b, d}, {b, e}. All five outcomes are PE, so every lottery is ex-post PE. But the lottery selecting c, d, e with probability 1/3 each is not ex-ante PE, since it gives an expected utility of 1/3 to each voter, while the lottery selecting a, b with probability 1/2 each gives an expected utility of 1/2 to each voter.
Bayesian Pareto efficiency
Bayesian efficiency is an adaptation of Pareto efficiency to settings in which players have incomplete information regarding the types of other players.
Ordinal Pareto efficiency
Ordinal Pareto efficiency is an adaptation of Pareto efficiency to settings in which players report only rankings on individual items, and we do not know for sure how they rank entire bundles.
Pareto efficiency and equity
Although an outcome may be a Pareto improvement, this does not imply that the outcome is equitable. It is possible that inequality persists even after a Pareto improvement. Although it is frequently used in conjunction with the idea of Pareto optimality, the term "efficiency" refers to the process of increasing societal productivity. It is possible for a society to have Pareto efficiency while also having high levels of inequality. Consider the following scenario: there is a pie and three persons; the most equitable way would be to divide the pie into three equal portions. However, if the pie is divided in half and shared between two people, it is considered Pareto-efficient, meaning that the third person does not lose out (despite the fact that he does not receive a piece of the pie). When making judgments, it is critical to consider a variety of aspects, including social efficiency, overall welfare, and issues such as diminishing marginal value.
Pareto efficiency and market failure
In order to fully understand market failure, one must first comprehend market success: the ability of a set of idealized competitive markets to achieve a Pareto-optimal equilibrium allocation of resources. Market failure is the circumstance in which the conclusion of the first fundamental theorem of welfare economics is erroneous; that is, when the allocations made through markets are not efficient. In a free market, market failure is an inefficient allocation of resources. Because improvement is feasible, market failure implies Pareto inefficiency. For example, excessive consumption of demerit goods (drugs/tobacco) results in external costs to non-smokers, as well as premature death for smokers who do not quit. An increase in the price of cigarettes could motivate people to quit smoking while also raising funds for the treatment of smoking-related ailments.
Approximate Pareto efficiency
Given some ε > 0, an outcome is called ε-Pareto-efficient if no other outcome gives all agents at least the same utility, and one agent a utility at least a factor of (1 + ε) higher. This captures the notion that improvements by a factor smaller than (1 + ε) are negligible and should not be considered a breach of efficiency.
Pareto-efficiency and welfare-maximization
Suppose each agent i is assigned a positive weight ai. For every allocation x, define the welfare of x as the weighted sum of utilities of all agents in x: Wa(x) = a1·u1(x1) + ... + an·un(xn).
Let xa be an allocation that maximizes the welfare Wa(x) over all allocations x.
It is easy to show that the allocation xa is Pareto-efficient: since all weights are positive, any Pareto improvement would increase the sum, contradicting the definition of xa.
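This argument can be illustrated by brute force. The following sketch uses randomly generated utility profiles and hypothetical weights; the guarantee itself comes from the positivity of the weights:

```python
import random

random.seed(1)
# Candidate allocations, each represented by its utility profile for 3 agents.
profiles = [tuple(random.randint(0, 9) for _ in range(3)) for _ in range(20)]
weights = (0.5, 0.3, 0.2)  # positive weights a_i

def welfare(u):
    """Weighted utilitarian welfare: sum_i a_i * u_i."""
    return sum(a * v for a, v in zip(weights, u))

def pareto_dominates(p, q):
    return (all(a >= b for a, b in zip(p, q))
            and any(a > b for a, b in zip(p, q)))

best = max(profiles, key=welfare)
# With positive weights, any Pareto improvement would strictly raise the
# welfare sum, so the welfare maximizer cannot be Pareto-dominated:
print(any(pareto_dominates(p, best) for p in profiles))  # False
```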
Japanese neo-Walrasian economist Takashi Negishi proved that, under certain assumptions, the opposite is also true: for every Pareto-efficient allocation x, there exists a positive vector a such that x maximizes Wa. A shorter proof is provided by Hal Varian.
Use in engineering
The notion of Pareto efficiency has been used in engineering. Given a set of choices and a way of valuing them, the Pareto front (or Pareto set, or Pareto frontier) is the set of choices that are Pareto-efficient. By restricting attention to the set of choices that are Pareto-efficient, a designer can make trade-offs within this set, rather than considering the full range of every parameter.
Use in public policy
Modern microeconomic theory has drawn heavily upon the concept of Pareto efficiency for inspiration. Pareto and his successors have tended to describe this technical definition of optimal resource allocation in the context of it being an equilibrium that can theoretically be achieved within an abstract model of market competition. It has therefore very often been treated as a corroboration of Adam Smith's "invisible hand" notion. More specifically, it motivated the debate over "market socialism" in the 1930s.
However, because the Pareto-efficient outcome is difficult to assess in the real world when issues including asymmetric information, signalling, adverse selection, and moral hazard are introduced, most people do not take the theorems of welfare economics as accurate descriptions of the real world. Therefore, the significance of the two welfare theorems of economics is in their ability to generate a framework that has dominated neoclassical thinking about public policy. That framework is that the welfare economics theorems allow the political economy to be studied in the following two situations: "market failure" and "the problem of redistribution".
Analysis of "market failure" can be understood by the literature surrounding externalities. When comparing the "real" economy to the complete contingent markets economy (which is considered efficient), the inefficiencies become clear. These inefficiencies, or externalities, are then able to be addressed by mechanisms, including property rights and corrective taxes.
Analysis of "the problem with redistribution" deals with the observed political question of how income or commodity taxes should be utilized. The theorem tells us that no taxation is Pareto-efficient and that taxation with redistribution is Pareto-inefficient. Because of this, most of the literature is focused on finding solutions where, given a tax structure, no person could be made better off by a change in the available taxes.
Use in biology
Pareto optimisation has also been studied in biological processes. In bacteria, genes were shown to be either inexpensive to make (resource-efficient) or easier to read (translation-efficient). Natural selection acts to push highly expressed genes towards the Pareto frontier for resource use and translational efficiency. Genes near the Pareto frontier were also shown to evolve more slowly (indicating that they are providing a selective advantage).
Common misconceptions
It would be incorrect to treat Pareto efficiency as equivalent to societal optimization, as the latter is a normative concept, which is a matter of interpretation that typically would account for the consequence of degrees of inequality of distribution. An example would be the interpretation of one school district with low property tax revenue versus another with much higher revenue as a sign that more equal distribution occurs with the help of government redistribution.
Criticism
Some commentators contend that Pareto efficiency could potentially serve as an ideological tool. By implying that capitalism is self-regulating, it makes it likely that embedded structural problems such as unemployment would be treated as deviations from the equilibrium or norm, and thus neglected or discounted.
Pareto efficiency does not require a totally equitable distribution of wealth, which is another aspect that draws in criticism. An economy in which a wealthy few hold the vast majority of resources can be Pareto-efficient. A simple example is the distribution of a pie among three people. The most equitable distribution would assign one third to each person. However, the assignment of, say, a half section to each of two individuals and none to the third is also Pareto-optimal despite not being equitable, because none of the recipients could be made better off without decreasing someone else's share; and there are many other such distribution examples. An example of a Pareto-inefficient distribution of the pie would be allocation of a quarter of the pie to each of the three, with the remainder discarded.
The liberal paradox elaborated by Amartya Sen shows that when people have preferences about what other people do, the goal of Pareto efficiency can come into conflict with the goal of individual liberty.
Lastly, it is proposed that Pareto efficiency has to some extent inhibited discussion of other possible criteria of efficiency. As Wharton School professor Ben Lockwood argues, one possible reason is that any other efficiency criterion established in the neoclassical domain will, in the end, reduce to Pareto efficiency.
See also
Admissible decision rule, analog in decision theory
Arrow's impossibility theorem
Bayesian efficiency
Fundamental theorems of welfare economics
Deadweight loss
Economic efficiency
Highest and best use
Kaldor–Hicks efficiency
Marginal utility
Market failure, when a market result is not Pareto-optimal
Maximal element, concept in order theory
Maxima of a point set
Multi-objective optimization
Nash equilibrium
Pareto-efficient envy-free division
Social Choice and Individual Values'' for the "(weak) Pareto principle"
TOTREP
Welfare economics
References
Pareto, V (1906). Manual of Political Economy. Oxford University Press. https://global.oup.com/academic/product/manual-of-political-economy-9780199607952?cc=ca&lang=en&.
Further reading
Book preview.
Game theory
Law and economics
Welfare economics
Management theory
Mathematical optimization
Electoral system criteria
Vilfredo Pareto | Pareto efficiency | [
"Mathematics"
] | 4,865 | [
"Mathematical optimization",
"Mathematical analysis",
"Game theory"
] |
45,473 | https://en.wikipedia.org/wiki/Lynn%20Margulis | Lynn Margulis (born Lynn Petra Alexander; March 5, 1938 – November 22, 2011) was an American evolutionary biologist, and was the primary modern proponent for the significance of symbiosis in evolution. In particular, Margulis transformed and fundamentally framed current understanding of the evolution of cells with nuclei by proposing it to have been the result of symbiotic mergers of bacteria. Margulis was also the co-developer of the Gaia hypothesis with the British chemist James Lovelock, proposing that the Earth functions as a single self-regulating system, and was the principal defender and promulgator of the five kingdom classification of Robert Whittaker.
Throughout her career, Margulis' work could arouse intense objections, and her formative paper, "On the Origin of Mitosing Cells", appeared in 1967 after being rejected by about fifteen journals. Still a junior faculty member at Boston University at the time, her theory that cell organelles such as mitochondria and chloroplasts were once independent bacteria was largely ignored for another decade, becoming widely accepted only after it was powerfully substantiated through genetic evidence. Margulis was elected a member of the US National Academy of Sciences in 1983. President Bill Clinton presented her the National Medal of Science in 1999. The Linnean Society of London awarded her the Darwin-Wallace Medal in 2008.
Margulis was a strong critic of neo-Darwinism. Her position sparked lifelong debate with leading neo-Darwinian biologists, including Richard Dawkins, George C. Williams, and John Maynard Smith. Margulis' work on symbiosis and her endosymbiotic theory had important predecessors, going back to the mid-19th century – notably Andreas Franz Wilhelm Schimper, Konstantin Mereschkowski, Boris Kozo-Polyansky, and Ivan Wallin – and Margulis not only promoted greater recognition for their contributions, but personally oversaw the first English translation of Kozo-Polyansky's Symbiogenesis: A New Principle of Evolution, which appeared the year before her death. Many of her major works, particularly those intended for a general readership, were collaboratively written with her son Dorion Sagan.
In 2002, Discover magazine recognized Margulis as one of the 50 most important women in science.
Early life and education
Lynn Petra Alexander was born on March 5, 1938 in Chicago, to a Jewish family. Her parents were Morris Alexander and Leona Wise Alexander. She was the eldest of four daughters. Her father was an attorney who also ran a company that made road paints. Her mother operated a travel agency. She entered the Hyde Park Academy High School in 1952, describing herself as a bad student who frequently had to stand in the corner.
A precocious child, she was accepted at the University of Chicago Laboratory Schools at the age of fifteen. In 1957, at age 19, she earned a BA from the University of Chicago in Liberal Arts. She joined the University of Wisconsin to study biology under Hans Ris and Walter Plaut, her supervisor, and graduated in 1960 with an MS in genetics and zoology. (Her first publication, published with Plaut in 1958 in the Journal of Protozoology, was on the genetics of Euglena, flagellates which have features of both animals and plants.) She then pursued research at the University of California, Berkeley, under the zoologist Max Alfert. Before she could complete her dissertation, she was offered a research associateship and then a lectureship at Brandeis University in Massachusetts in 1964. It was while working there that she obtained her PhD from the University of California, Berkeley in 1965. Her thesis was An Unusual Pattern of Thymidine Incorporation in Euglena.
Career
In 1966 she moved to Boston University, where she taught biology for twenty-two years. She was initially an Adjunct Assistant Professor, then was appointed to Assistant Professor in 1967. She was promoted to Associate Professor in 1971, to full Professor in 1977, and to University Professor in 1986. In 1988 she was appointed Distinguished Professor of Botany at the University of Massachusetts at Amherst. She was Distinguished Professor of Biology in 1993. In 1997 she transferred to the Department of Geosciences at UMass Amherst to become Distinguished Professor of Geosciences "with great delight", the post which she held until her death.
Endosymbiosis theory
In 1966, as a young faculty member at Boston University, Margulis wrote a theoretical paper titled "On the Origin of Mitosing Cells". The paper, however, was "rejected by about fifteen scientific journals," she recalled. It was finally accepted by the Journal of Theoretical Biology and is considered today a landmark in modern endosymbiotic theory. Weathering constant criticism of her ideas for decades, Margulis was famous for her tenacity in pushing her theory forward, despite the opposition she faced at the time. The descent of mitochondria from bacteria and of chloroplasts from cyanobacteria was experimentally demonstrated in 1978 by Robert Schwartz and Margaret Dayhoff. This formed the first experimental evidence for the symbiogenesis theory. The endosymbiotic theory of organelle genesis became widely accepted in the early 1980s, after the genetic material of mitochondria and chloroplasts had been found to be significantly different from that of the cell's nuclear DNA.
In 1995, English evolutionary biologist Richard Dawkins had this to say about Lynn Margulis and her work:
I greatly admire Lynn Margulis's sheer courage and stamina in sticking by the endosymbiosis theory, and carrying it through from being an unorthodoxy to an orthodoxy. I'm referring to the theory that the eukaryotic cell is a symbiotic union of primitive prokaryotic cells. This is one of the great achievements of twentieth-century evolutionary biology, and I greatly admire her for it.
Symbiosis as evolutionary force
Margulis opposed competition-oriented views of evolution, stressing the importance of symbiotic or cooperative relationships between species.
She later formulated a theory that proposed symbiotic relationships between organisms of different phyla, or kingdoms, as the driving force of evolution, and explained genetic variation as occurring mainly through transfer of nuclear information between bacterial cells or viruses and eukaryotic cells. Her organelle genesis ideas are now widely accepted, but the proposal that symbiotic relationships explain most genetic variation is still something of a fringe idea.
Margulis also held a negative view of certain interpretations of Neo-Darwinism that she felt were excessively focused on competition between organisms, as she believed that history will ultimately judge them as comprising "a minor twentieth-century religious sect within the sprawling religious persuasion of Anglo-Saxon Biology."
She wrote that proponents of the standard theory "wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him ... Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk."
Gaia hypothesis
Margulis initially sought out the advice of James Lovelock for her own research: she explained that, "In the early seventies, I was trying to align bacteria by their metabolic pathways. I noticed that all kinds of bacteria produced gases. Oxygen, hydrogen sulfide, carbon dioxide, nitrogen, ammonia—more than thirty different gases are given off by the bacteria whose evolutionary history I was keen to reconstruct. Why did every scientist I asked believe that atmospheric oxygen was a biological product but the other atmospheric gases—nitrogen, methane, sulfur, and so on—were not? 'Go talk to Lovelock,' at least four different scientists suggested. Lovelock believed that the gases in the atmosphere were biological."
Margulis met with Lovelock, who explained his Gaia hypothesis to her, and very soon they began an intense collaborative effort on the concept. One of the earliest significant publications on Gaia was a 1974 paper co-authored by Lovelock and Margulis, which succinctly defined the hypothesis as follows: "The notion of the biosphere as an active adaptive control system able to maintain the Earth in homeostasis we are calling the 'Gaia hypothesis.'"
Like other early presentations of Lovelock's idea, the 1974 Lovelock–Margulis paper seemed to give living organisms complete agency in creating planetary self-regulation; later, as the idea matured, this planetary-scale self-regulation was recognized as an emergent property of the Earth system, life and its physical environment taken together. When climatologist Stephen Schneider convened the 1989 American Geophysical Union Chapman Conference around the issue of Gaia, James Kirchner introduced the distinction between "strong Gaia" and "weak Gaia", after which Margulis was sometimes incorrectly associated with "weak Gaia". Her 1995 essay "Gaia is a Tough Bitch" stated her own distinction from Lovelock as she saw it: primarily, she did not like the metaphor of Earth as a single organism because, she said, "No organism eats its own waste." In her 1998 book Symbiotic Planet, Margulis explored the relationship between Gaia and her work on symbiosis.
Five kingdoms of life
In 1969, Robert Whittaker introduced a classification of life on Earth into five kingdoms. Margulis became both the scheme's most important supporter and one of its critics: while endorsing its broad outlines, she was the first to recognize the limitations of Whittaker's classification of microbes. Later discoveries of new organisms, such as archaea, and the emergence of molecular taxonomy challenged the concept, and by the mid-2000s most scientists had come to agree that there are more than five kingdoms. Margulis nevertheless remained the five-kingdom classification's most important defender. She rejected the three-domain system introduced by Carl Woese in 1990, despite its wide acceptance, and introduced a modified classification by which all life forms, including newly discovered ones, could be integrated into the classical five kingdoms. According to Margulis, the main point of contention, the archaea, fall under the kingdom Prokaryotae alongside bacteria (in contrast to the three-domain system, which treats archaea as a higher taxon than kingdom, or the six-kingdom system, which holds them to be a separate kingdom). Margulis's concept is given in detail in her book Five Kingdoms, written with Karlene V. Schwartz. It has been suggested that it is mainly because of Margulis that the five-kingdom system survives.
Metamorphosis theory
In 2009, via a then-standard publication-process known as "communicated submission" (which bypassed traditional peer review), she was instrumental in getting the Proceedings of the National Academy of Sciences (PNAS) to publish a paper by Donald I. Williamson rejecting "the Darwinian assumption that larvae and their adults evolved from a single common ancestor." Williamson's paper provoked immediate response from the scientific community, including a countering paper in PNAS. Conrad Labandeira of the Smithsonian National Museum of Natural History said, "If I was reviewing [Williamson's paper] I would probably opt to reject it," he says, "but I'm not saying it's a bad thing that this is published. What it may do is broaden the discussion on how metamorphosis works and [...] [on] the origin of these very radical life cycles." But Duke University insect developmental biologist Fred Nijhout said that the paper was better suited for the "National Enquirer than the National Academy." In September it was announced that PNAS would eliminate communicated submissions in July 2010. PNAS stated that the decision had nothing to do with the Williamson controversy.
AIDS/HIV theory
In 2009 Margulis and seven others authored a position paper concerning research on the viability of round body forms of some spirochetes, "Syphilis, Lyme disease, & AIDS: Resurgence of 'the great imitator'?" which states that, "Detailed research that correlates life histories of symbiotic spirochetes to changes in the immune system of associated vertebrates is sorely needed", and urging the "reinvestigation of the natural history of mammalian, tick-borne, and venereal transmission of spirochetes in relation to impairment of the human immune system". The paper went on to suggest "that the possible direct causal involvement of spirochetes and their round bodies to symptoms of immune deficiency be carefully and vigorously investigated".
In a Discover Magazine interview, Margulis explained her reason for interest in the topic of the 2009 "AIDS" paper: "I'm interested in spirochetes only because of our ancestry. I'm not interested in the diseases", and stated that she had called them "symbionts" because both the spirochete which causes syphilis (Treponema) and the spirochete which causes Lyme disease (Borrelia) only retain about 20% of the genes they would need to live freely, outside of their human hosts.
However, in the Discover Magazine interview Margulis said that "the set of symptoms, or syndrome, presented by syphilitics overlaps completely with another syndrome: AIDS", and also noted that Kary Mullis said that "he went looking for a reference substantiating that HIV causes AIDS and discovered, 'There is no such document' ".
This provoked a widespread supposition that Margulis had been an "AIDS denialist". Jerry Coyne, on his Why Evolution is True blog, reacted against what he interpreted as Margulis's belief "that AIDS is really syphilis, not viral in origin at all." Seth Kalichman, a social psychologist who studies behavioral and social aspects of AIDS, cited her 2009 paper as an example of AIDS denialism "flourishing", and asserted that her "endorsement of HIV/AIDS denialism defies understanding".
Reception
Historian Jan Sapp has said that "Lynn Margulis's name is as synonymous with symbiosis as Charles Darwin's is with evolution." She has been called "science's unruly earth mother", a "vindicated heretic", or a scientific "rebel". It has been suggested that the initial rejection of Margulis's work on the endosymbiotic theory, and the controversial nature of that theory as well as of Gaia theory, made her identify throughout her career with scientific mavericks, outsiders, and unaccepted theories generally.
In the last decade of her life, while key components of her life's work began to be understood as fundamental to a modern scientific viewpoint – the widespread adoption of Earth System Science and the incorporation of key parts of endosymbiotic theory into biology curricula worldwide – Margulis if anything became more embroiled in controversy, not less. Journalist John Wilson explained this by saying that Lynn Margulis "defined herself by oppositional science," and in the commemorative collection of essays Lynn Margulis: The Life and Legacy of a Scientific Rebel, commentators again and again depict her as a modern embodiment of the "scientific rebel", akin to Freeman Dyson's 1995 essay The Scientist as Rebel, a tradition Dyson saw embodied in Benjamin Franklin, and which Dyson believed to be essential to good science.
Awards and recognitions
1975, Elected Fellow of the American Association for the Advancement of Science.
1978, Guggenheim Fellowship.
1983, Elected to the National Academy of Sciences.
1985, Guest Hagey Lecturer, University of Waterloo.
1986, Miescher-Ishida Prize.
1989, conferred the Commandeur de l'Ordre des Palmes Académiques de France.
1992, recipient of Chancellor's Medal for Distinguished Faculty of the University of Massachusetts at Amherst.
1995, elected Fellow of the World Academy of Art and Science.
1997, elected to the Russian Academy of Natural Sciences.
1998, papers permanently archived in the Library of Congress, Washington, D.C.
1998, recipient of the Distinguished Service Award of the American Institute of Biological Sciences.
1998, elected Fellow of the American Academy of Arts and Sciences.
1999, recipient of the William Procter Prize for Scientific Achievement.
1999, recipient of the National Medal of Science, awarded by President William J. Clinton.
2001, Golden Plate Award of the American Academy of Achievement
2002–05, Alexander von Humboldt Prize.
2005, elected President of Sigma Xi, The Scientific Research Society.
2006, Founded Sciencewriters Books with her son Dorion.
2008, one of thirteen recipients in 2008 of the Darwin-Wallace Medal, heretofore bestowed every 50 years, by the Linnean Society of London.
2010, inductee into the Leonardo da Vinci Society of Thinking at the University of Advancing Technology in Tempe, Arizona.
2010, NASA Public Service Award for Astrobiology.
2012, Lynn Margulis Symposium: Celebrating a Life in Science, University of Massachusetts, Amherst, March 23–25, 2012.
2017, the Journal of Theoretical Biology 434, 1–114 commemorated the 50th anniversary of "The origin of mitosing cells" with a special issue.
Honorary doctorate from 15 universities.
Personal life
Margulis married astronomer Carl Sagan in 1957 soon after she got her bachelor's degree. Sagan was then a graduate student in physics at the University of Chicago. Their marriage ended in 1964, just before she completed her PhD. They had two sons, Dorion Sagan, who later became a popular science writer and her collaborator, and Jeremy Sagan, software developer and founder of Sagan Technology.
In 1967 she married Thomas N. Margulis, a crystallographer. They had a son named Zachary Margulis-Ohnuma, a New York City criminal defense lawyer, and a daughter Jennifer Margulis, teacher and author. They divorced in 1980.
She commented, "I quit my job as a wife twice," and, "it's not humanly possible to be a good wife, a good mother, and a first-class scientist. No one can do it — something has to go."
In the 2000s she had a relationship with fellow biologist Ricardo Guerrero.
Margulis argued that the September 11 attacks were a "false-flag operation, which has been used to justify the wars in Afghanistan and Iraq as well as unprecedented assaults on [...] civil liberties." She wrote that there was "overwhelming evidence that the three buildings [of the World Trade Center] collapsed by controlled demolition."
She was a religious agnostic, and a staunch evolutionist, but rejected the modern evolutionary synthesis, and said: "I remember waking up one day with an epiphanous revelation: I am not a neo-Darwinist! I recalled an earlier experience, when I realized that I wasn't a humanistic Jew. Although I greatly admire Darwin's contributions and agree with most of his theoretical analysis and I am a Darwinist, I am not a neo-Darwinist." She argued that "Natural selection eliminates and maybe maintains, but it doesn't create", and maintained that symbiosis was the major driver of evolutionary change.
Margulis died on November 22, 2011, at home in Amherst, Massachusetts, five days after suffering a hemorrhagic stroke. In accordance with her wishes, she was cremated and her ashes were scattered in her favorite research areas, near her home.
Works
Books
Margulis, Lynn (1970). Origin of Eukaryotic Cells. Yale University Press.
Margulis, Lynn (1982). Early Life. Science Books International.
Margulis, Lynn, and Dorion Sagan (1986). Origins of Sex: Three Billion Years of Genetic Recombination. Yale University Press.
Margulis, Lynn, and Dorion Sagan (1987). Microcosmos: Four Billion Years of Evolution from Our Microbial Ancestors. HarperCollins.
Margulis, Lynn, and Dorion Sagan (1991). Mystery Dance: On the Evolution of Human Sexuality. Summit Books.
Margulis, Lynn, ed. (1991). Symbiosis as a Source of Evolutionary Innovation: Speciation and Morphogenesis. The MIT Press.
Margulis, Lynn (1992). Symbiosis in Cell Evolution: Microbial Communities in the Archean and Proterozoic Eons. W.H. Freeman.
Sagan, Dorion, and Margulis, Lynn (1993). The Garden of Microbial Delights: A Practical Guide to the Subvisible World. Kendall/Hunt.
Margulis, Lynn, Dorion Sagan and Niles Eldredge (1995). What Is Life?. Simon and Schuster.
Margulis, Lynn, and Dorion Sagan (1997). Slanted Truths: Essays on Gaia, Symbiosis, and Evolution. Copernicus Books.
Margulis, Lynn, and Dorion Sagan (1997). What Is Sex?. Simon and Schuster.
Margulis, Lynn, and Karlene V. Schwartz (1997). Five Kingdoms: An Illustrated Guide to the Phyla of Life on Earth. W.H. Freeman & Company.
Margulis, Lynn (1998). Symbiotic Planet: A New Look at Evolution. Basic Books.
Margulis, Lynn, et al. (2002). The Ice Chronicles: The Quest to Understand Global Climate Change. University of New Hampshire.
Margulis, Lynn, and Dorion Sagan (2002). Acquiring Genomes: A Theory of the Origins of Species. Perseus Books Group.
Margulis, Lynn (2007). Luminous Fish: Tales of Science and Love. Sciencewriters Books.
Margulis, Lynn, and Eduardo Punset, eds. (2007). Mind, Life and Universe: Conversations with Great Scientists of Our Time. Sciencewriters Books.
Margulis, Lynn, and Dorion Sagan (2007). Dazzle Gradually: Reflections on the Nature of Nature. Sciencewriters Books.
Free good
In economics, a free good is a good that is not scarce, and therefore is available without limit. A free good is available in as great a quantity as desired with zero opportunity cost to society.
A good that is made available at zero price is not necessarily a free good. For example, a shop might give away its stock in its promotion, but producing these goods would still have required the use of scarce resources.
Examples of free goods are ideas and works that are reproducible at zero cost, or almost zero cost. For example, if someone invents a new device, many people could copy this invention, with no danger of this "resource" running out.
Earlier schools of economic thought stated that resources that are enough for everyone to have as much as they want are free goods. Examples in textbooks included seawater and air.
Intellectual property laws such as copyrights and patents have the effect of converting some intangible goods to scarce goods. Even though these works are free goods by definition and can be reproduced at minimal cost, the production of these works does require scarce resources, such as skilled labour. Thus these laws are used to give exclusive rights to the creators, in order to encourage resources to be appropriately allocated to these activities.
Futurists like Jeremy Rifkin theorize that advanced nanotechnology with the ability to turn any kind of material automatically into any other combination of equal mass will make all goods essentially free goods, since all raw materials and manufacturing time will become perfectly interchangeable.
See also
Open source
Open data
Artificial scarcity
Linear congruential generator
A linear congruential generator (LCG) is an algorithm that yields a sequence of pseudo-randomized numbers calculated with a discontinuous piecewise linear equation. The method represents one of the oldest and best-known pseudorandom number generator algorithms. The theory behind them is relatively easy to understand, and they are easily implemented and fast, especially on computer hardware which can provide modular arithmetic by storage-bit truncation.
The generator is defined by the recurrence relation:

Xn+1 = (aXn + c) mod m

where Xn is the sequence of pseudo-random values, and
m, 0 < m — the "modulus"
a, 0 < a < m — the "multiplier"
c, 0 ≤ c < m — the "increment"
X0, 0 ≤ X0 < m — the "seed" or "start value"
When c ≠ 0, a mathematician would call the recurrence an affine transformation, not a linear one, but the misnomer is well-established in computer science.
History
The Lehmer generator was published in 1951, and the linear congruential generator was published in 1958 by W. E. Thomson and A. Rotenberg.
Period length
A benefit of LCGs is that an appropriate choice of parameters results in a period which is both known and long. Although not the only criterion, too short a period is a fatal flaw in a pseudorandom number generator.
While LCGs are capable of producing pseudorandom numbers which can pass formal tests for randomness, the quality of the output is extremely sensitive to the choice of the parameters m and a. For example, a = 1 and c = 1 produces a simple modulo-m counter, which has a long period, but is obviously non-random. Other values of c coprime to m produce a Weyl sequence, which is better distributed but still obviously non-random.
Historically, poor choices for a have led to ineffective implementations of LCGs. A particularly illustrative example of this is RANDU, which was widely used in the early 1970s and led to many results which are currently being questioned because of the use of this poor LCG.
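RANDU's flaw can be seen directly from its parameters (a = 65539, c = 0, m = 2^31): since 65539 = 2^16 + 3, squaring gives a^2 ≡ 6a − 9 (mod 2^31), so every output satisfies an exact linear relation with its two predecessors. A short sketch verifying this:

```python
# RANDU: a = 65539, c = 0, m = 2^31. Because 65539 = 2^16 + 3,
# a^2 = 2^32 + 6*2^16 + 9 ≡ 6a - 9 (mod 2^31), so consecutive values
# satisfy x[k+2] = 6*x[k+1] - 9*x[k] (mod 2^31): all triples fall on
# a small family of parallel planes in three dimensions.
def randu(seed, n):
    out, x = [], seed
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x)
    return out

xs = randu(1, 100)
for k in range(len(xs) - 2):
    assert (6 * xs[k + 1] - 9 * xs[k] - xs[k + 2]) % 2**31 == 0
```

The assertion holds for every triple, which is exactly why RANDU's points in 3D fall on only 15 planes.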
There are three common families of parameter choice:
m prime, c = 0
This is the original Lehmer RNG construction. The period is m−1 if the multiplier a is chosen to be a primitive element of the integers modulo m. The initial state must be chosen between 1 and m−1.
One disadvantage of a prime modulus is that the modular reduction requires a double-width product and an explicit reduction step. Often a prime just less than a power of 2 is used (the Mersenne primes 2^31−1 and 2^61−1 are popular), so that the reduction modulo m = 2^e − d can be computed as (ax mod 2^e) + d⌊ax/2^e⌋. This must be followed by a conditional subtraction of m if the result is too large, but the number of subtractions is limited to ad/m, which can be easily limited to one if d is small.
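A minimal sketch of this shift-and-add reduction (the helper name and the while loop to absorb any leftover multiple of m are illustrative, not from the source):

```python
def mul_mod_near_pow2(a, x, e=31, d=1):
    # Reduce a*x modulo m = 2^e - d using shifts and masks:
    # write a*x = hi*2^e + lo; since 2^e ≡ d (mod m), a*x ≡ lo + d*hi.
    m = (1 << e) - d
    p = a * x
    r = (p & ((1 << e) - 1)) + d * (p >> e)
    while r >= m:          # a conditional subtraction or two finishes the job
        r -= m
    return r

m = 2**31 - 1              # the Mersenne prime mentioned above
assert mul_mod_near_pow2(16807, 123456789) == (16807 * 123456789) % m
```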
If a double-width product is unavailable, and the multiplier is chosen carefully, Schrage's method may be used. To do this, factor m = qa + r, i.e. q = ⌊m/a⌋ and r = m mod a. Then compute ax mod m = a(x mod q) − r⌊x/q⌋. Since x mod q < q ≤ m/a, the first term is strictly less than am/a = m. If a is chosen so that r ≤ q (and thus r/q ≤ 1), then the second term is also less than m: r⌊x/q⌋ ≤ rx/q = x(r/q) ≤ x < m. Thus, both products can be computed with a single-width product, and the difference between them lies in the range [1−m, m−1], so can be reduced to [0, m−1] with a single conditional add.
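As an illustration, here is Schrage's method applied to the well-known "minimal standard" parameters m = 2^31 − 1, a = 16807 (the function name is ours):

```python
# Schrage's method: m = qa + r with q = m // a and r = m % a.
# For m = 2^31 - 1, a = 16807: q = 127773, r = 2836, and r <= q as required.
M, A = 2**31 - 1, 16807
Q, R = M // A, M % A

def schrage_next(x):
    # Computes (A * x) % M without ever forming a double-width product.
    t = A * (x % Q) - R * (x // Q)
    return t if t >= 0 else t + M     # single conditional add

x = 1
for _ in range(10000):
    x = schrage_next(x)
assert x == pow(A, 10000, M)          # MCG identity: x_n = a^n * x_0 mod m
```

The closing assertion checks the whole trajectory against the closed form x_n = a^n·x_0 mod m, which holds for any multiplicative congruential generator.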
A second disadvantage is that it is awkward to convert the value 1 ≤ x < m to uniform random bits. If a prime just less than a power of 2 is used, sometimes the missing values are simply ignored.
m a power of 2, c = 0
Choosing m to be a power of two, most often m = 2^32 or m = 2^64, produces a particularly efficient LCG, because this allows the modulus operation to be computed by simply truncating the binary representation. In fact, the most significant bits are usually not computed at all. There are, however, disadvantages.
This form has maximal period m/4, achieved if a ≡ ±3 (mod 8) and the initial state X0 is odd. Even in this best case, the low three bits of X alternate between two values and thus only contribute one bit to the state. X is always odd (the lowest-order bit never changes), and only one of the next two bits ever changes. If a ≡ +3, X alternates ±1↔±3, while if a ≡ −3, X alternates ±1↔∓3 (all modulo 8).
It can be shown that this form is equivalent to a generator with modulus m/4 and c ≠ 0.
A more serious issue with the use of a power-of-two modulus is that the low bits have a shorter period than the high bits. Its simplicity of implementation comes from the fact that bits are never affected by higher-order bits, so the low b bits of such a generator form a modulo-2^b LCG by themselves, repeating with a period of 2^(b−2). Only the most significant bit of X achieves the full period.
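The short period of the low bits is easy to observe empirically. A small sketch (multiplier a = 3 and the helper name are illustrative choices):

```python
def period_of_low_bits(a, b, seed=1):
    # The low b bits evolve independently of all higher bits, so we can
    # track them alone and measure how long until they repeat.
    mask = (1 << b) - 1
    start = seed & mask
    x, n = (a * seed) & mask, 1
    while x != start:
        x = (a * x) & mask
        n += 1
    return n

a = 3                      # a ≡ 3 (mod 8), odd seed: maximal-period MCG
for b in range(3, 12):
    assert period_of_low_bits(a, b) == 2**(b - 2)
```

For every width b, the low b bits repeat with period 2^(b−2), exactly as stated above.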
m a power of 2, c ≠ 0
When c ≠ 0, correctly chosen parameters allow a period equal to m, for all seed values. This will occur if and only if:
and are coprime,
is divisible by all prime factors of ,
is divisible by 4 if is divisible by 4.
These three requirements are referred to as the Hull–Dobell Theorem.
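The three Hull–Dobell conditions are mechanical to check. A sketch (the checker and its trial-division factorization are our own, with an empirical full-period confirmation on a toy generator):

```python
from math import gcd

def prime_factors(n):
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def hull_dobell(m, a, c):
    # Full period m for every seed iff all three conditions hold.
    return (gcd(c, m) == 1
            and all((a - 1) % p == 0 for p in prime_factors(m))
            and ((a - 1) % 4 == 0 if m % 4 == 0 else True))

# Empirical check on a toy generator: m = 16, a = 5, c = 1.
assert hull_dobell(16, 5, 1)
seen, x = set(), 0
for _ in range(16):
    x = (5 * x + 1) % 16
    seen.add(x)
assert len(seen) == 16     # every residue appears: full period
```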
This form may be used with any m, but only works well for m with many repeated prime factors, such as a power of 2; using a computer's word size is the most common choice. If m were a square-free integer, this would only allow a ≡ 1 (mod m), which makes a very poor PRNG; a selection of possible full-period multipliers is only available when m has repeated prime factors.
Although the Hull–Dobell theorem provides maximum period, it is not sufficient to guarantee a good generator. For example, it is desirable for a − 1 to not be any more divisible by prime factors of m than necessary. If m is a power of 2, then a − 1 should be divisible by 4 but not divisible by 8, i.e. a ≡ 5 (mod 8).
Indeed, most multipliers produce a sequence which fails one test for non-randomness or another, and finding a multiplier which is satisfactory to all applicable criteria is quite challenging. The spectral test is one of the most important tests.
Note that a power-of-2 modulus shares the problem as described above for c = 0: the low k bits form a generator with modulus 2^k and thus repeat with a period of 2^k; only the most significant bit achieves the full period. If a pseudorandom number less than r is desired, ⌊rX/m⌋ is a much higher-quality result than X mod r. Unfortunately, most programming languages make the latter much easier to write (X % r), so it is very commonly used.
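The difference is starkest for r = 2: with a power-of-2 modulus, X mod 2 strictly alternates, while ⌊2X/m⌋ is derived from the top bit. A small sketch (the parameters m = 2^16, a = 4093, c = 1 are illustrative choices satisfying Hull–Dobell):

```python
# Full-period LCG with a power-of-2 modulus: m = 2^16, a = 4093 (≡ 5 mod 8), c = 1.
m, a, c = 2**16, 4093, 1
x, low = 1, []
for _ in range(16):
    x = (a * x + c) % m
    low.append(x % 2)      # low bit: strictly alternates 0, 1, 0, 1, ...
    # By contrast, (2 * x) // m -- the top bit -- shows no such lockstep.
assert low == [0, 1] * 8
```

Since a is odd and c is odd, each step flips the parity of x, so the low bit is a perfect square wave.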
The generator is not sensitive to the choice of c, as long as it is relatively prime to the modulus (e.g. if m is a power of 2, then c must be odd), so the value c=1 is commonly chosen.
The sequence produced by other choices of c can be written as a simple function of the sequence when c=1. Specifically, if Y is the prototypical sequence defined by Y0 = 0 and Yn+1 = aYn + 1 mod m, then a general sequence Xn+1 = aXn + c mod m can be written as an affine function of Y:

Xn = (X1 − X0)Yn + X0 (mod m)

More generally, any two sequences X and Z with the same multiplier and modulus are related by

(Xn − X0)/(X1 − X0) ≡ Yn ≡ (Zn − Z0)/(Z1 − Z0) (mod m)
In the common case where m is a power of 2 and a ≡ 5 (mod 8) (a desirable property for other reasons), it is always possible to find an initial value X0 so that the denominator X1 − X0 ≡ ±1 (mod m), producing an even simpler relationship. With this choice of X0, Xn = X0 ± Yn will remain true for all n. The sign is determined by c ≡ ±1 (mod 4), and the constant X0 is determined by 1 ∓ c ≡ (1 − a)X0 (mod m).
As a simple example, consider the generators Xn+1 = 157Xn + 3 mod 256 and Yn+1 = 157Yn + 1 mod 256; i.e. m = 256, a = 157, and c = 3. Because 3 ≡ −1 (mod 4), we are searching for a solution to 1 + 3 ≡ (1 − 157)X0 (mod 256). This is satisfied by X0 ≡ 41 (mod 64), so if we start with that, then Xn ≡ X0 − Yn (mod 256) for all n.
For example, using X0 = 233 = 3×64 + 41:
X = 233, 232, 75, 2, 61, 108, ...
Y = 0, 1, 158, 231, 172, 125, ...
X + Y mod 256 = 233, 233, 233, 233, 233, 233, ...
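The worked example above can be checked mechanically:

```python
# Verify X_n ≡ X_0 - Y_n (mod 256) for the example m = 256, a = 157, c = 3,
# with X_0 = 233 and the prototypical c = 1 sequence starting at Y_0 = 0.
m, a = 256, 157
x, y = 233, 0
for _ in range(256):
    assert (x + y) % m == 233   # the invariant X_n + Y_n ≡ X_0 (mod 256)
    x = (a * x + 3) % m         # the c = 3 generator
    y = (a * y + 1) % m         # the c = 1 prototype
```

The invariant holds at every step of the full period, matching the constant row of 233s in the table.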
Parameters in common use
The following table lists the parameters of LCGs in common use, including built-in rand() functions in runtime libraries of various compilers. This table is to show popularity, not examples to emulate; many of these parameters are poor. Tables of good parameters are available.
As shown above, LCGs do not always use all of the bits in the values they produce. In general, they return the most significant bits. For example, the Java implementation operates with 48-bit values at each iteration but returns only their 32 most significant bits. This is because the higher-order bits have longer periods than the lower-order bits (see below). LCGs that use this truncation technique produce statistically better values than those that do not. This is especially noticeable in scripts that use the mod operation to reduce range; taking the random number mod 2 will lead to alternating 0 and 1 without truncation.
Contrarily, some libraries use an implicit power-of-two modulus but never output or otherwise use the most significant bit, in order to limit the output to positive two's complement integers. The output is as if the modulus were one bit less than the internal word size, and such generators are described as such in the table above.
Advantages and disadvantages
LCGs are fast and require minimal memory (one modulo-m number, often 32 or 64 bits) to retain state. This makes them valuable for simulating multiple independent streams. LCGs are not intended, and must not be used, for cryptographic applications; use a cryptographically secure pseudorandom number generator for such applications.
Although LCGs have a few specific weaknesses, many of their flaws come from having too small a state. The fact that people have been lulled for so many years into using them with such small moduli can be seen as a testament to the strength of the technique. An LCG with large enough state can pass even stringent statistical tests; a modulo-2^64 LCG which returns the high 32 bits passes TestU01's SmallCrush suite, and a 96-bit LCG passes the most stringent BigCrush suite.
For a specific example, an ideal random number generator with 32 bits of output is expected (by the Birthday theorem) to begin duplicating earlier outputs after approximately √(2^32) = 2^16 results. Any PRNG whose output is its full, untruncated state will not produce duplicates until its full period elapses, an easily detectable statistical flaw.
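This duplicate-free behavior is easy to demonstrate on a small scale (the parameters m = 2^16, a = 4093, c = 1 are illustrative choices satisfying Hull–Dobell):

```python
# A full-period 16-bit LCG emits every residue exactly once per period, so
# the first duplicate appears only after 2^16 outputs; an ideal 16-bit
# generator would be expected to repeat after roughly sqrt(2^16) = 256 draws.
m, a, c = 2**16, 4093, 1
x, outputs = 0, []
for _ in range(m):
    x = (a * x + c) % m
    outputs.append(x)
assert len(set(outputs)) == m   # no duplicates within one full period
```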
For related reasons, any PRNG should have a period longer than the square of the number of outputs required. Given modern computer speeds, this means a period of 2^64 for all but the least demanding applications, and longer for demanding simulations.
One flaw specific to LCGs is that, if used to choose points in an n-dimensional space, the points will lie on, at most, (n!·m)^(1/n) hyperplanes (Marsaglia's theorem, developed by George Marsaglia). This is due to serial correlation between successive values of the sequence Xn. Carelessly chosen multipliers will usually have far fewer, widely spaced planes, which can lead to problems. The spectral test, which is a simple test of an LCG's quality, measures this spacing and allows a good multiplier to be chosen.
The plane spacing depends both on the modulus and the multiplier. A large enough modulus can reduce this distance below the resolution of double precision numbers. The choice of the multiplier becomes less important when the modulus is large. It is still necessary to calculate the spectral index and make sure that the multiplier is not a bad one, but purely probabilistically it becomes extremely unlikely to encounter a bad multiplier when the modulus is larger than about 2^64.
Another flaw specific to LCGs is the short period of the low-order bits when m is chosen to be a power of 2. This can be mitigated by using a modulus larger than the required output, and using the most significant bits of the state.
Nevertheless, for some applications LCGs may be a good option. For instance, in an embedded system, the amount of memory available is often severely limited. Similarly, in an environment such as a video game console taking a small number of high-order bits of an LCG may well suffice. (The low-order bits of LCGs when m is a power of 2 should never be relied on for any degree of randomness whatsoever.) The low order bits go through very short cycles. In particular, any full-cycle LCG, when m is a power of 2, will produce alternately odd and even results.
LCGs should be evaluated very carefully for suitability in non-cryptographic applications where high-quality randomness is critical. For Monte Carlo simulations, an LCG must use a modulus greater and preferably much greater than the cube of the number of random samples which are required. This means, for example, that a (good) 32-bit LCG can be used to obtain about a thousand random numbers; a 64-bit LCG is good for about 2^21 random samples (a little over two million), etc. For this reason, in practice LCGs are not suitable for large-scale Monte Carlo simulations.
Sample code
Python code
The following is an implementation of an LCG in Python, in the form of a generator:
from collections.abc import Generator
def lcg(modulus: int, a: int, c: int, seed: int) -> Generator[int, None, None]:
"""Linear congruential generator."""
while True:
seed = (a * seed + c) % modulus
yield seed
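A brief usage sketch of the generator above, restated here so the snippet is self-contained (the MMIX parameters are those quoted in the Haskell example below; the seed is an arbitrary illustrative choice):

```python
from itertools import islice
from collections.abc import Generator

def lcg(modulus: int, a: int, c: int, seed: int) -> Generator[int, None, None]:
    """Linear congruential generator (same definition as above)."""
    while True:
        seed = (a * seed + c) % modulus
        yield seed

# Knuth's MMIX parameters: m = 2^64.
rng = lcg(2**64, 6364136223846793005, 1442695040888963407, 1234)
values = list(islice(rng, 5))
# First output is (a * seed + c) mod m by construction:
assert values[0] == (6364136223846793005 * 1234 + 1442695040888963407) % 2**64
assert all(0 <= v < 2**64 for v in values)
```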
Haskell code
The following is an implementation of an LCG in Haskell utilizing a lazy evaluation strategy to generate an infinite stream of output values in a list:
-- Allowing a generic choice for a, c, m and x_0
linearCongruentialGenerator :: Integer -> Integer -> Integer -> Integer -> [Integer]
linearCongruentialGenerator a c modulus seed = lcgacmx0
  where lcgacmx0 = seed : map (\x -> (a*x + c) `mod` modulus) lcgacmx0
-- Specific parameters can be easily specified (e.g. Knuth's MMIX parameters):
mmixLCG :: Integer -> [Integer]
mmixLCG = linearCongruentialGenerator 6364136223846793005 1442695040888963407 (2^(64 :: Integer))
Free Pascal
Free Pascal uses a Mersenne Twister as its default pseudo random number generator whereas Delphi uses a LCG. Here is a Delphi compatible example in Free Pascal based on the information in the table above. Given the same RandSeed value it generates the same sequence of random numbers as Delphi.
unit lcg_random;
{$ifdef fpc}{$mode delphi}{$endif}
interface
function LCGRandom: extended; overload; inline;
function LCGRandom(const range:longint): longint; overload; inline;
implementation
function IM: cardinal; inline;
begin
RandSeed := RandSeed * 134775813 + 1;
Result := RandSeed;
end;
function LCGRandom: extended; overload; inline;
begin
Result := IM * 2.32830643653870e-10;
end;
function LCGRandom(const range: longint): longint; overload; inline;
begin
Result := IM * range shr 32;
end;
Like all pseudorandom number generators, an LCG needs to store state and alter it each time it generates a new number. Multiple threads may access this state simultaneously, causing a race condition. Implementations should give each thread its own state, each with a unique initialization, to avoid equal sequences of random numbers on simultaneously executing threads.
LCG derivatives
There are several generators which are linear congruential generators in a different form, and thus the techniques used to analyze LCGs can be applied to them.
One method of producing a longer period is to sum the outputs of several LCGs of different periods having a large least common multiple; the Wichmann–Hill generator is an example of this form. (We would prefer them to be completely coprime, but a prime modulus implies an even period, so there must be a common factor of 2, at least.) This can be shown to be equivalent to a single LCG with a modulus equal to the product of the component LCG moduli.
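As a concrete sketch of this summing construction, the classic Wichmann–Hill generator (shown here with its published 1982 constants) combines three small LCGs and sums their scaled outputs modulo 1:

```python
def wichmann_hill(s1: int, s2: int, s3: int):
    """Three small LCGs whose scaled outputs are summed modulo 1.
    The combined period is the least common multiple of the three periods."""
    while True:
        s1 = (171 * s1) % 30269
        s2 = (172 * s2) % 30307
        s3 = (170 * s3) % 30323
        # Summing the scaled outputs modulo 1 yields a uniform value in [0, 1).
        yield (s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0

rng = wichmann_hill(1, 2, 3)
u = next(rng)  # a float in [0, 1)
```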
Marsaglia's add-with-carry and subtract-with-borrow PRNGs with a word size of b = 2^w and lags r and s (r > s) are equivalent to LCGs with a modulus of b^r ± b^s ± 1.
Multiply-with-carry PRNGs with a multiplier of a are equivalent to LCGs with a large prime modulus of ab^r − 1 and a power-of-2 multiplier b.
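For illustration, a minimal lag-1 (r = 1) multiply-with-carry sketch, using Marsaglia's classic small parameters a = 36969 with base b = 2^16 (for r = 1 the corresponding LCG modulus is a*b − 1; its primality is not checked here):

```python
def mwc(a: int, b: int, x: int, c: int):
    """Lag-1 multiply-with-carry: the low part of a*x + c is the next
    output, and the high part becomes the next carry."""
    while True:
        t = a * x + c
        x, c = t % b, t // b  # low part = output, high part = carry
        yield x

g = mwc(36969, 2**16, 1, 0)  # first outputs: 36969, 19217
```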
A permuted congruential generator begins with a power-of-2-modulus LCG and applies an output transformation to eliminate the short period problem in the low-order bits.
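The idea can be sketched as follows; the multiplier is Knuth's MMIX constant used elsewhere in this article, and any odd increment works (the MMIX increment is used here). The seeding is simplified relative to the reference pcg32 design, so this stream will not match the reference implementation's output:

```python
def pcg32_sketch(state: int, inc: int = 1442695040888963407):
    """Power-of-2-modulus 64-bit LCG, plus an xorshift-and-rotate output
    permutation driven by the high-order bits (which have the longest period),
    hiding the short-period low-order bits."""
    mask64, mask32 = (1 << 64) - 1, (1 << 32) - 1
    multiplier = 6364136223846793005
    while True:
        old = state
        state = (old * multiplier + inc) & mask64
        xorshifted = (((old >> 18) ^ old) >> 27) & mask32
        rot = old >> 59  # top 5 bits of the old state choose the rotation
        yield ((xorshifted >> rot) | (xorshifted << (32 - rot))) & mask32

rng = pcg32_sketch(42)
sample = [next(rng) for _ in range(5)]
```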
Comparison with other PRNGs
The other widely used primitive for obtaining long-period pseudorandom sequences is the linear-feedback shift register construction, which is based on arithmetic in GF(2)[x], the polynomial ring over GF(2). Rather than integer addition and multiplication, the basic operations are exclusive-or and carry-less multiplication, which is usually implemented as a sequence of logical shifts. These have the advantage that all of their bits are full-period; they do not suffer from the weakness in the low-order bits that plagues arithmetic modulo 2k.
Examples of this family include xorshift generators and the Mersenne twister. The latter provides a very long period (2^19937 − 1) and variate uniformity, but it fails some statistical tests. Lagged Fibonacci generators also fall into this category; although they use arithmetic addition, their period is ensured by an LFSR among the least-significant bits.
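A minimal xorshift sketch using Marsaglia's (13, 7, 17) shift triple for 64-bit words; each step is a shift and an exclusive-or, i.e. carry-less arithmetic over GF(2):

```python
def xorshift64(x: int):
    """Marsaglia xorshift64. The update is an invertible linear map over
    GF(2), so a nonzero state never reaches the all-zero state."""
    mask = (1 << 64) - 1
    while True:
        x ^= (x << 13) & mask
        x ^= x >> 7
        x ^= (x << 17) & mask
        yield x

g = xorshift64(1)  # state must be nonzero
first = next(g)    # 1082269761
```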
It is easy to detect the structure of a linear-feedback shift register with appropriate tests such as the linear complexity test implemented in the TestU01 suite; a Boolean circulant matrix initialized from consecutive bits of an LFSR will never have rank greater than the degree of the polynomial. Adding a non-linear output mixing function (as in the xoshiro256** and permuted congruential generator constructions) can greatly improve the performance on statistical tests.
Another structure for a PRNG is a very simple recurrence function combined with a powerful output mixing function. This includes counter mode block ciphers and non-cryptographic generators such as SplitMix64.
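SplitMix64 illustrates this split cleanly: the recurrence is just addition of an odd constant (a Weyl sequence), and all of the statistical quality comes from the output mixing function. The constants below are those of the widely circulated reference version:

```python
def splitmix64(seed: int):
    """Trivial counter recurrence plus a strong multiply/xorshift mixer."""
    mask = (1 << 64) - 1
    state = seed & mask
    while True:
        state = (state + 0x9E3779B97F4A7C15) & mask  # the entire "recurrence"
        z = state
        z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & mask
        z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & mask
        yield z ^ (z >> 31)

g = splitmix64(7)
out = [next(g) for _ in range(4)]
```

Because every mixing step is a bijection on 64-bit words, distinct counter values always produce distinct outputs.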
A structure similar to LCGs, but not equivalent, is the multiple-recursive generator: X_n = (a_1·X_(n−1) + a_2·X_(n−2) + ··· + a_k·X_(n−k)) mod m for k ≥ 2. With a prime modulus, this can generate periods up to m^k − 1, so it is a useful extension of the LCG structure to larger periods.
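A sketch of the recurrence with small, purely illustrative parameters (k = 2, a1 = 2, a2 = 3, m = 7, chosen for readability rather than quality):

```python
def mrg(coeffs, m, seed_state):
    """Multiple-recursive generator:
    X_n = (a1*X_{n-1} + ... + ak*X_{n-k}) mod m."""
    state = list(seed_state)  # most recent value first
    while True:
        x = sum(a * s for a, s in zip(coeffs, state)) % m
        state = [x] + state[:-1]  # shift the new value into the window
        yield x

g = mrg([2, 3], 7, [1, 1])  # first outputs: 5, 6, 6, 2
```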
A powerful technique for generating high-quality pseudorandom numbers is to combine two or more PRNGs of different structure; the sum of an LFSR and an LCG (as in the KISS or xorwow constructions) can do very well at some cost in speed.
See also
List of random number generators – other PRNGs including some with better statistical qualities
ACORN generator – not to be confused with ACG, a term that appears to have been used for variants of LCG and LFSR generators
Permuted congruential generator
Full cycle
Inversive congruential generator
Multiply-with-carry
Lehmer RNG (sometimes called the Park–Miller RNG)
Combined linear congruential generator
Notes
References
Gentle, James E. (2003). Random Number Generation and Monte Carlo Methods, 2nd edition, Springer.
(in this paper, efficient algorithms are given for inferring sequences produced by certain pseudo-random number generators).
External links
The simulation Linear Congruential Generator visualizes the correlations between the pseudo-random numbers when manipulating the parameters.
Security of Random Number Generation: An Annotated Bibliography
Linear Congruential Generators post to sci.math
The "Death of Art" computer art project at Goldstein Technologies LLC, uses an LCG to generate 33,554,432 images
P. L'Ecuyer and R. Simard, "TestU01: A C Library for Empirical Testing of Random Number Generators", May 2006, revised November 2006, ACM Transactions on Mathematical Software, 33, 4, Article 22, August 2007.
Article about another way of cracking LCG
Pseudorandom number generators
Modular arithmetic
Articles with example Python (programming language) code

Opportunity cost
In microeconomic theory, the opportunity cost of a choice is the value of the best alternative forgone where, given limited resources, a choice needs to be made between several mutually exclusive alternatives. Assuming the best choice is made, it is the "cost" incurred by not enjoying the benefit that would have been had if the second best available choice had been taken instead. The New Oxford American Dictionary defines it as "the loss of potential gain from other alternatives when one alternative is chosen". As a representation of the relationship between scarcity and choice, the objective of opportunity cost is to ensure efficient use of scarce resources. It incorporates all associated costs of a decision, both explicit and implicit. Thus, opportunity costs are not restricted to monetary or financial costs: the real cost of output forgone, lost time, pleasure, or any other benefit that provides utility should also be considered an opportunity cost.
Types
Explicit costs
Explicit costs are the direct costs of an action (business operating costs or expenses), executed through either a cash transaction or a physical transfer of resources. In other words, explicit opportunity costs are the out-of-pocket costs of a firm, which are easily identifiable. This means explicit costs will always have a dollar value and involve a transfer of money, e.g. paying employees. These particular costs can easily be identified under the expenses of a firm's income statement and balance sheet, representing all the cash outflows of a firm.
Examples are as follows:
Land and infrastructure costs
Operation and maintenance costs—wages, rent, overhead, materials
Scenarios are as follows:
If a person leaves work for an hour and spends $200 on office supplies, then the explicit costs for the individual equates to the total expenses for the office supplies of $200.
If a printer of a company malfunctions, then the explicit costs for the company equates to the total amount to be paid to the repair technician.
Implicit costs
Implicit costs (also referred to as implied, imputed or notional costs) are the opportunity costs of utilising resources owned by the firm that could be used for other purposes. These costs are often hidden to the naked eye and are not made known. Unlike explicit costs, implicit opportunity costs correspond to intangibles. Hence, they cannot be clearly identified, defined or reported. This means that they are costs that have already occurred within a project, without exchanging cash. This could include a small business owner not taking any salary in the beginning of their tenure as a way for the business to be more profitable. As implicit costs are the result of assets, they are also not recorded for the use of accounting purposes because they do not represent any monetary losses or gains. In terms of factors of production, implicit opportunity costs allow for depreciation of goods, materials and equipment that ensure the operations of a company.
Examples of implicit costs regarding production are mainly resources contributed by a business owner which includes:
Human labour
Infrastructure
Risk
Time spent: also involves considering other valuable activities that could have been undertaken in order to maximize the return on time invested
Scenarios are as follows:
If a person leaves work for an hour to spend $200 on office supplies, and has an hourly rate of $25, then the implicit costs for the individual equates to the $25 that he/she could have earned instead.
If a printer of a company malfunctions, the implicit cost equates to the total production time that could have been utilized if the machine did not break down.
Excluded from opportunity cost
Sunk costs
Sunk costs (also referred to as historical costs) are costs that have been incurred already and cannot be recovered. As sunk costs have already been incurred, they remain unchanged and should not influence present or future actions or decisions regarding benefits and costs. Decision makers who recognise the insignificance of sunk costs then understand that the "consequences of choices cannot influence choice itself".
From the traceability source of costs, sunk costs can be direct costs or indirect costs. If the sunk cost can be summarized as a single component, it is a direct cost; if it is caused by several products or departments, it is an indirect cost.
Analyzing from the composition of costs, sunk costs can be either fixed costs or variable costs. When a company abandons a certain component or stops processing a certain product, the sunk cost usually includes fixed costs such as rent for equipment and wages, but it also includes variable costs due to changes in time or materials. Usually, fixed costs are more likely to constitute sunk costs.
Generally speaking, the stronger the liquidity, versatility, and compatibility of the asset, the less its sunk cost will be.
A scenario is given below:
A company used $5,000 for marketing and advertising on its music streaming service to increase exposure to the target market and potential consumers. In the end, the campaign proved unsuccessful. The sunk cost for the company equates to the $5,000 that was spent on the market and advertising means. This expense is to be ignored by the company in its future decisions and highlights that no additional investment should be made.
Despite the fact that sunk costs should be ignored when making future decisions, people sometimes make the mistake of thinking sunk cost matters. This is sunk cost fallacy.
Example: Steven bought a game for $100, but when he started to play it, he found it was boring rather than interesting. But Steven thinks he paid $100 for the game, so he has to play it through.
Sunk cost: $100 and the cost of the time spent playing the game. Analysis: Steven spent $100 hoping to complete the whole game experience, and the game is an entertainment activity, but there is no pleasure during the game, which is already low efficiency, but Steven also chose to waste time. So it is adding more cost.
Marginal cost
The concept of marginal cost in economics is the incremental cost of each additional unit produced across a product line. For example, building the first plane costs a great deal, but by the time the 100th plane is built, the incremental cost is much lower, because materials and production processes are used more efficiently, increasing the margin of profit. Marginal cost is abbreviated MC or MPC.
Marginal cost: The increase in cost caused by an additional unit of production is called marginal cost. By definition, marginal cost (MC) is equal to the change in total cost (ΔTC) divided by the corresponding change in output (ΔQ): MC(Q) = ΔTC(Q)/ΔQ or, taking the limit as ΔQ goes to zero,
MC(Q) = lim(ΔQ→0) ΔTC(Q)/ΔQ = dTC/dQ.
In theory marginal costs represent the increase in total costs (which include both constant and variable costs) as output increases by 1 unit.
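As a worked numerical sketch (the cost function here is hypothetical, chosen only to illustrate the definition), take TC(Q) = 500 + 20Q + 0.5Q², so that dTC/dQ = 20 + Q:

```python
def total_cost(q: float) -> float:
    """Hypothetical total cost: 500 fixed + 20*Q variable + 0.5*Q^2."""
    return 500 + 20 * q + 0.5 * q ** 2

# Discrete marginal cost of the 11th unit: ΔTC with ΔQ = 1.
mc_discrete = total_cost(11) - total_cost(10)  # 780.5 - 750.0 = 30.5
# Calculus definition: dTC/dQ = 20 + Q, evaluated at Q = 10.
mc_derivative = 20 + 10  # 30
```

The small gap between 30.5 and 30 is exactly the difference between the unit-step ratio and its limit as ΔQ goes to zero.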
Adjustment cost
The phrase "adjustment costs" gained significance in macroeconomic studies, referring to the expenses a company bears when altering its production levels in response to fluctuations in demand and/or input costs. These costs may encompass those related to acquiring, setting up, and mastering new capital equipment, as well as costs tied to hiring, dismissing, and training employees to modify production. We use "adjustment costs" to describe shifts in the firm's product nature rather than merely changes in output volume. We expand the notion of adjustment costs in this manner because, to reposition itself in the market relative to rivals, a company usually needs to alter crucial features of its goods or services to enhance competition based on differentiation or cost. In line with the conventional concept, the adjustment costs experienced during repositioning may involve expenses linked to the reassignment of capital and/or labor resources. However, they might also include costs from other areas, such as changes in organizational abilities, assets, and expertise.
Uses
Economic profit versus accounting profit
The main objective of accounting profits is to give an account of a company's fiscal performance, typically reported on in quarters and annually. As such, accounting principles focus on tangible and measurable factors associated with operating a business such as wages and rent, and thus, do not "infer anything about relative economic profitability". Opportunity costs are not considered in accounting profits as they have no purpose in this regard.
The purpose of calculating economic profits (and thus, opportunity costs) is to aid in better business decision-making through the inclusion of opportunity costs. In this way, a business can evaluate whether its decision and the allocation of its resources is cost-effective or not and whether resources should be reallocated.
Economic profit does not indicate whether or not a business decision will make money. It signifies if it is prudent to undertake a specific decision against the opportunity of undertaking a different decision. As shown in the simplified example in the image, choosing to start a business would provide $10,000 in terms of accounting profits. However, the decision to start a business would provide −$30,000 in terms of economic profits, indicating that the decision to start a business may not be prudent as the opportunity costs outweigh the profit from starting a business. In this case, where the revenue is not enough to cover the opportunity costs, the chosen option may not be the best course of action. When economic profit is zero, all the explicit and implicit costs (opportunity costs) are covered by the total revenue and there is no incentive for reallocation of the resources. This condition is known as normal profit.
Several performance measures of economic profit have been derived to further improve business decision-making such as risk-adjusted return on capital (RAROC) and economic value added (EVA), which directly include a quantified opportunity cost to aid businesses in risk management and optimal allocation of resources. Opportunity cost, as such, is an economic concept in economic theory which is used to maximise value through better decision-making.
In accounting, collecting, processing, and reporting information on activities and events that occur within an organization is referred to as the accounting cycle. To encourage decision-makers to efficiently allocate the resources they have (or those who have trusted them), this information is being shared with them. As a result, the role of accounting has evolved in tandem with the rise of economic activity and the increasing complexity of economic structure. Accounting is not only the gathering and calculation of data that impacts a choice, but it also delves deeply into the decision-making activities of businesses through the measurement and computation of such data. In accounting, it is common practice to refer to the opportunity cost of a decision (option) as a cost. The discounted cash flow method has surpassed all others as the primary method of making investment decisions, and opportunity cost has surpassed all others as an essential metric of cash outflow in making investment decisions. For various reasons, the opportunity cost is critical in this form of estimation.
First and foremost, the discounted rate applied in DCF analysis is influenced by an opportunity cost, which impacts project selection and the choice of a discounting rate. Using the firm's original assets in the investment means there is no need for the enterprise to utilize funds to purchase the assets, so there is no cash outflow. However, the cost of the assets must be included in the cash outflow at the current market price. Even though the asset does not result in a cash outflow, it can be sold or leased in the market to generate income and be employed in the project's cash flow. The money earned in the market represents the opportunity cost of the asset utilized in the business venture. As a result, opportunity costs must be incorporated into project planning to avoid erroneous project evaluations. Only those costs directly relevant to the project will be considered in making the investment choice, and all other costs will be excluded from consideration. Modern accounting also incorporates the concept of opportunity cost into the determination of capital costs and capital structure of businesses, which must compute the cost of capital invested by the owner as a function of the ratio of human capital. In addition, opportunity costs are employed to determine to price for asset transfers between industries.
Comparative advantage versus absolute advantage
When a nation, organisation or individual can produce a product or service at a relatively lower opportunity cost compared to its competitors, it is said to have a comparative advantage. In other words, a country has comparative advantage if it gives up less of a resource to make the same number of products as the other country that has to give up more.
Using the simple example in the image, to make 100 tonnes of tea, Country A has to give up the production of 20 tonnes of wool which means for every 1 tonne of tea produced, 0.2 tonnes of wool has to be forgone. Meanwhile, to make 30 tonnes of tea, Country B needs to sacrifice the production of 100 tonnes of wool, so for each tonne of tea, 3.3 tonnes of wool is forgone. In this case, Country A has a comparative advantage over Country B for the production of tea because it has a lower opportunity cost. On the other hand, to make 1 tonne of wool, Country A has to give up 5 tonnes of tea, while Country B would need to give up 0.3 tonnes of tea, so Country B has a comparative advantage over the production of wool.
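The arithmetic above can be restated as a short computation (the numbers are taken directly from the example):

```python
# Tonnes forgone of one good per tonne produced of the other.
oc_tea_a = 20 / 100    # Country A: wool forgone per tonne of tea -> 0.2
oc_tea_b = 100 / 30    # Country B: -> ~3.33
oc_wool_a = 100 / 20   # Country A: tea forgone per tonne of wool -> 5.0
oc_wool_b = 30 / 100   # Country B: -> 0.3

# Comparative advantage lies with the lower opportunity cost.
tea_advantage = "A" if oc_tea_a < oc_tea_b else "B"
wool_advantage = "A" if oc_wool_a < oc_wool_b else "B"
```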
Absolute advantage on the other hand refers to how efficiently a party can use its resources to produce goods and services compared to others, regardless of its opportunity costs. For example, if Country A can produce 1 tonne of wool using less manpower compared to Country B, then it is more efficient and has an absolute advantage over wool production, even if it does not have a comparative advantage because it has a higher opportunity cost (5 tonnes of tea).
Absolute advantage refers to how efficiently resources are used whereas comparative advantage refers to how little is sacrificed in terms of opportunity cost. When a country produces what it has the comparative advantage of, even if it does not have an absolute advantage, and trades for those products it does not have a comparative advantage over, it maximises its output since the opportunity cost of its production is lower than its competitors. By focusing on specialising this way, it also maximises its level of consumption.
Governmental level
Similar to the way people make decisions, governments frequently have to take opportunity cost into account when passing legislation. The potential cost at the government level is fairly evident when we look at, for instance, government spending on war. Assume that entering a war would cost the government $840 billion. They are thereby prevented from using $840 billion to fund healthcare, education, or tax cuts, or to reduce any budget deficit by that sum. In regard to this situation, the explicit costs are the wages and materials needed to fund soldiers and required equipment, whilst an implicit cost is the time that personnel, who could otherwise be productively employed, spend engaged in war.
Another example of opportunity cost at government level is the effects of the Covid-19 pandemic. Governmental responses to the COVID-19 epidemic have resulted in considerable economic and social consequences, both implicit and explicit. Explicit costs are the expenses that the government incurred directly as a result of the pandemic, which included $4.5 billion on medical bills, over $17 billion on vaccine distribution, and economic stimulus plans that cost $189 billion. These costs, which are often simpler to measure, resulted in greater public debt, decreased tax income, and increased expenditure by the government. The opportunity costs associated with the epidemic, including lost productivity, slower economic growth, and weakened social cohesiveness, are known as implicit costs. Even while these costs might be more challenging to estimate, they are nevertheless crucial to comprehending the entire scope of the pandemic's effects. For instance, the implementation of lockdowns and other limitations to stop the spread of the virus resulted in a $158 billion loss due to decreased economic activity, job losses, and a rise in mental health issues.
The impact of the Covid-19 pandemic that broke out in recent years on economic operations is unavoidable, the economic risks are not symmetrical, and the impact of Covid-19 is distributed differently in the global economy. Some industries have benefited from the pandemic, while others have almost gone bankrupt. One of the sectors most impacted by the COVID-19 pandemic is the public and private health system. Opportunity cost is the concept of ensuring efficient use of scarce resources, a concept that is central to health economics. The massive increase in the need for intensive care has largely limited and exacerbated the department's ability to address routine health problems. The sector must consider opportunity costs in decisions related to the allocation of scarce resources, premised on improving the health of the population.
However, the opportunity cost of implementing policies to the sector has limited impact in the health sector. Patients with severe symptoms of COVID-19 require close monitoring in the ICU and therapeutic ventilator support, which is key to treating the disease. In this case, scarce resources include bed days, ventilation time, and therapeutic equipment. Temporary excess demand for hospital beds from patients exceeds the number of bed days provided by the health system. The increased demand for days in bed is due to the fact that infected hospitalized patients stay in bed longer, shifting the demand curve to the right (see curve D2 in Graph 1.11). The number of bed days provided by the health system may be temporarily reduced as there may be a shortage of beds due to the widespread transmission of the virus. If this situation becomes unmanageable, supply decreases and the supply curve shifts to the left (curve S2 in Graph 1.11). A perfect competition model can be used to express the concept of opportunity cost in the health sector. In perfect competition, market equilibrium is understood as the point where supply and demand are exactly the same (points P and Q in Graph 1.11). The equilibrium is Pareto optimal, where price equals marginal opportunity cost. Medical allocation may result in some people being better off and others worse off. At this point, it is assumed that the market has produced the maximum outcome associated with the Pareto partial order. As a result, the opportunity cost increases when other patients cannot be admitted to the ICU due to a shortage of beds.
See also
Austrian School
Best alternative to a negotiated agreement
Budget constraint
Dead-end job
Economies of scale
Econometrics
Fear of missing out
Lost sales
No such thing as a free lunch
Production–possibility frontier
Reduced cost
Time management
Time sink
Trade-off
Transaction cost
You can't have your cake and eat it
Perverse subsidies
References
External links
The Opportunity Cost of Economics Education by Robert H. Frank
Costs
Capital management
Economics and time

Alessi (Italian company)
Alessi is a housewares and kitchen utensil company in Italy, manufacturing and marketing everyday items authored by a wide range of designers, architects and industrial designers — including Achille Castiglioni, Richard Sapper, Marco Zanuso, Alessandro Mendini, Ettore Sottsass, Wiel Arets, Zaha Hadid, Toyo Ito, Hani Rashid, Tom Kovac, Greg Lynn, MVRDV, Jean Nouvel, UN Studio, Michael Graves, and Philippe Starck. The Alessi company in the UK is worth around £2.4 million.
History
Alessi was founded in 1921 by Giovanni Alessi, who was born in Italy and raised in Switzerland. A few years after World War I, Alessi started by producing a wide range of tableware items in nickel, chromium and silver-plated brass. A new phase for the company began when Carlo Alessi (born 1916), the son of Giovanni, was named chief designer. Between 1935 and 1945 he developed most of the products Alessi released.
1950s and 1960s
In 1969 the company was under the leadership of Carlo Alessi. It was his brother Luigi who introduced collaboration with external designers in 1955. With some architects, he designed several items created for hotel needs, and he helped introduce many best-sellers, such as the historical series of wire baskets from 1957 by Luigi Massaroni and Carlo Mazzeri, designed as a series with an ice bucket and ice tongs as part of Program 4 for the 11th Triennale in Milan. This was the first time that Alessi products were shown alongside manufactured goods. The 1950s were a difficult time: only a few years after World War II, many people could not afford to buy designer objects.
1970s and 1980s
In 1970, Alberto Alessi was responsible for the third transformation of the company. Alessi was considered one of the "Italian Design Factories". In this decade under the leadership of Alberto Alessi the company collaborated with some design maestros like Achille Castiglioni, Richard Sapper, Alessandro Mendini, and Ettore Sottsass. In the 1970s, Alessi produced the Condiment set (salt, pepper and toothpicks) by Ettore Sottsass, the Espressomaker by Sapper.
The 1980s marked a period in which Italian design factories had to compete with mass production. These two movements had different views on design: for the Italian design factories, the design, and therefore the designer, was the most important part of the process, while for mass production the design had to be functional and easy to reproduce. Also in the 1980s, they changed their marketing image from factory to industrial research lab, a place for research and production. For Alessi, the 1980s are marked by designs like the two-tone kettle by Sapper and their first cutlery set, Dry, by Castiglioni. Alessi collaborated with new designers, including Aldo Rossi, Michael Graves, and Philippe Starck, who have been responsible for some of Alessi's all-time bestsellers, like the kettle with a bird whistle by Graves.
Alessi faced increasing competition from other international manufacturers, especially in lower-cost products mass-produced for retailers such as Target Corporation and J. C. Penney.
1990s
In the 1990s, Alessi started to work more with plastics, at the request of designers who found it an easier material to work with than metal, offering more design freedom and innovative possibilities. The 1990s were marked by the theme "Family Follows Fiction", with playful and imaginative objects. Artists designing for this theme included Stefano Giovannoni and Alessandro Mendini, who designed Fruit Mama and the bestseller Anna G. Metal still remained a popular material, for example the Girotondo family by King Kong.
2000s
During the 2000s, Alessi collaborated with several architects for its "coffee and tea towers", with a new generation of architects, including Wiel Arets, Zaha Hadid, Toyo Ito, Tom Kovac, Greg Lynn, MVRDV, Jean Nouvel, and UN Studio. These sets had a limited production of 99 copies. Another design in the 2000s was the Blow Up series by Fratelli Campana. The brothers played with form and shape to create baskets and other objects that look like they would fall apart when touched.
In 2006, the company reclassified its products under three lines: "A di Alessi", "Alessi", and "Officina Alessi". A di Alessi is more "democratic" and more "pop", the lower price range of Alessi. Officina Alessi is more exclusive, innovative, and experimental, marked by small-batch production series and limited series.
In 2007, Hani Rashid and Lise Anne Couture of Asymptote Architecture (New York) designed the New York City flagship store for Alessi in the SoHo neighborhood on Greene Street. The space featured an Alessi gallery, espresso bar and retail in a renovated historic loft building. Asymptote was responsible for not only the interior design of the space but also branding and the graphic identity, updating Alessi's image from its 1980s Postmodern style to a contemporary architectural ethos.
Alessi products are on display in museums worldwide, including the Museum of Modern Art (New York), the Metropolitan Museum of Art, the Victoria and Albert Museum, the Pompidou Centre, the Design Museum Holon, and the Stedelijk Museum Italy. A collaboration with the National Palace Museum of Taiwan produced a collection of various kitchenware products with Asian themes.
Designers and their designs
From 1945 until today, Alessi has collaborated with designers and even other brands or companies for their products. Some key designs and their designers:
1945 Bombé tea and coffee service by Carlo Alessi Anghini
1978 Condiment set (salt, pepper and toothpicks) by Ettore Sottsass
1979 Programma 5 barware by Ettore Sottsass
1979 9090 espresso maker by Richard Sapper. This design won the first Compasso d'Oro award for Alessi.
1982 Dry flatware/cutlery by Achille Castiglioni the first cutlery produced by Alessi
1983 Bollitore whistling water kettle by Richard Sapper. Alessi's first "designer kettle", with a whistle whose pipe sings two tones, mi and si, in harmony
1984 La Conica by Aldo Rossi. This was Rossi's first mass-production design and its image earned immediate success for the then-new Officina Alessi brand, as well as becoming a 1980s design symbol. Therefore, it was soon to be followed by more Rossi designs in this theme.
1985 Kettle by Michael Graves. The Graves' kettle is known for its singing bird when the water boils. Followed by matching products like a sugar bowl and a creamer, also designed by Graves. This product has sold the greatest number of units in the history of the company.
1987 Nuovo Milano cutlery set designed by Ettore Sottsass with assistance of Alberto Gozzi. Won XVIth Compasso d'oro award in 1991.
1988 Pito kettle by Frank Gehry
1989 Girotondo family by King-Kong
1989 Hot Bertaa kettle by Philippe Starck
1990 Juicy Salif by Philippe Starck
1993 Fruit Mama by Stefano Giovannoni. The first design for the Family Follows Fiction series; noteworthy for its aesthetic juxtaposition of plastic and stainless steel.
1994 Anna G corkscrew by Alessandro Mendini. A best seller since it was first produced, with its smiling face, it engendered the Anna family of related objects for the table and kitchen.
1995 Mary Biscuit by Stefano Giovannoni. Biscuit or cookie container.
1997 Nono di Antonio garlic crusher by Guido Venturini
2004 Blow Up range of baskets and citrus holders by Fratelli Campana
2006 Kaj watch by Karim Rashid
2013 Dressed dinnerware and flatware by Marcel Wanders
2015 Collo Alto cutlery set by Inga Sempé
2017 Forma cheese grater by Zaha Hadid
Critical reception
Although Alessi is credited with introducing many iconic design objects, critics have also complained about items designed for everyday household use which do not perform their basic functions well.
See also
Neapolitan flip coffee pot
Chiara Luzzana
References
Further reading
External links
Alessi website
Interviews with Alessi Designers website
History of Alessi website
Design companies of Italy
Kitchenware brands
Italian brands
Companies based in Omegna
Design companies established in 1921
Manufacturing companies established in 1921
Italian companies established in 1921
Home appliance brands
B Lab-certified corporations
Compasso d'Oro Award recipients
Industrial design
Product design | Alessi (Italian company) | [
"Engineering"
] | 1,781 | [
"Industrial design",
"Design engineering",
"Design",
"Product design"
] |
45,562 | https://en.wikipedia.org/wiki/Cartagena%20Protocol%20on%20Biosafety | The Cartagena Protocol on Biosafety to the Convention on Biological Diversity is an international agreement on biosafety as a supplement to the Convention on Biological Diversity (CBD) effective since 2003. The Biosafety Protocol seeks to protect biological diversity from the potential risks posed by genetically modified organisms resulting from modern biotechnology.
The Biosafety Protocol makes clear that products from new technologies must be based on the precautionary principle and allows developing nations to balance public health against economic benefits. It lets countries, for example, ban imports of genetically modified organisms if they feel there is not enough scientific evidence that the product is safe, and it requires exporters to label shipments containing genetically altered commodities such as corn or cotton.
The required number of 50 instruments of ratification/accession/approval/acceptance by countries was reached in May 2003. In accordance with the provisions of its Article 37, the Protocol entered into force on 11 September 2003. As of July 2020, the Protocol had 173 parties, which includes 170 United Nations member states, the State of Palestine, Niue, and the European Union.
Background
The Cartagena Protocol on Biosafety, also known as the Biosafety Protocol, was adopted in January 2000, after a CBD Open-ended Ad Hoc Working Group on Biosafety had met six times between July 1996 and February 1999. The Working Group submitted a draft text of the Protocol for consideration by the Conference of the Parties at its first extraordinary meeting, which was convened for the express purpose of adopting a protocol on biosafety to the CBD. After a few delays, the Cartagena Protocol was eventually adopted on 29 January 2000. The Biosafety Protocol seeks to protect biological diversity from the potential risks posed by living modified organisms resulting from modern biotechnology.
Objective
In accordance with the precautionary approach, contained in Principle 15 of the Rio Declaration on Environment and Development, the objective of the Protocol is to contribute to ensuring an adequate level of protection in the field of the safe transfer, handling and use of 'living modified organisms resulting from modern biotechnology' that may have adverse effects on the conservation and sustainable use of biological diversity, taking also into account risks to human health, and specifically focusing on transboundary movements (Article 1 of the Protocol, SCBD 2000).
Living modified organisms (LMOs)
The protocol defines a 'living modified organism' as any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology, and 'living organism' means any biological entity capable of transferring or replicating genetic material, including sterile organisms, viruses and viroids. 'Modern biotechnology' is defined in the Protocol to mean the application of in vitro nucleic acid techniques, or fusion of cells beyond the taxonomic family, that overcome natural physiological reproductive or recombination barriers and are not techniques used in traditional breeding and selection. 'Living modified organism (LMO) Products' are defined as processed material that are of living modified organism origin, containing detectable novel combinations of replicable genetic material obtained through the use of modern biotechnology. Common LMOs include agricultural crops that have been genetically modified for greater productivity or for resistance to pests or diseases. Examples of modified crops include tomatoes, cassava, corn, cotton and soybeans. 'Living modified organism intended for direct use as food or feed, or for processing (LMO-FFP)' are agricultural commodities from GM crops. Overall the term 'living modified organisms' is equivalent to genetically modified organism – the Protocol did not make any distinction between these terms and did not use the term 'genetically modified organism.'
Precautionary approach
One of the outcomes of the United Nations Conference on Environment and Development (also known as the Earth Summit) held in Rio de Janeiro, Brazil, in June 1992, was the adoption of the Rio Declaration on Environment and Development, which contains 27 principles to underpin sustainable development. Commonly known as the precautionary principle, Principle 15 states that "In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation."
Elements of the precautionary approach are reflected in a number of the provisions of the Protocol, such as:
The preamble, reaffirming "the precautionary approach contained in Principle 15 of the Rio Declaration on environment and Development";
Article 1, indicating that the objective of the Protocol is "in accordance with the precautionary approach contained in Principle 15 of the Rio Declaration on Environment and Development";
Article 10.6 and 11.8, which states "Lack of scientific certainty due to insufficient relevant scientific information and knowledge regarding the extent of the potential adverse effects of an LMO on biodiversity, taking into account risks to human health, shall not prevent a Party of import from taking a decision, as appropriate, with regard to the import of the LMO in question, in order to avoid or minimize such potential adverse effects."; and
Annex III on risk assessment, which notes that "Lack of scientific knowledge or scientific consensus should not necessarily be interpreted as indicating a particular level of risk, an absence of risk, or an acceptable risk."
Application
The Protocol applies to the transboundary movement, transit, handling and use of all living modified organisms that may have adverse effects on the conservation and sustainable use of biological diversity, taking also into account risks to human health (Article 4 of the Protocol, SCBD 2000).
Parties and non-parties
The governing body of the Protocol is called the Conference of the Parties to the Convention serving as the meeting of the Parties to the Protocol (also the COP-MOP). The main function of this body is to review the implementation of the Protocol and make decisions necessary to promote its effective operation. Decisions under the Protocol can only be taken by Parties to the Protocol. Parties to the Convention that are not Parties to the Protocol may only participate as observers in the proceedings of meetings of the COP-MOP.
The Protocol addresses the obligations of Parties in relation to the transboundary movements of LMOs to and from non-Parties to the Protocol. The transboundary movements between Parties and non-Parties must be carried out in a manner that is consistent with the objective of the Protocol. Parties are required to encourage non-Parties to adhere to the Protocol and to contribute information to the Biosafety Clearing-House.
Relationship with the WTO
A number of agreements under the World Trade Organization (WTO), such as the Agreement on the Application of Sanitary and Phytosanitary Measures (SPS Agreement) and the Agreement on Technical Barriers to Trade (TBT Agreement), and the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs), contain provisions that are relevant to the Protocol. This Protocol states in its preamble that parties:
Recognize that trade and environment agreements should be mutually supportive;
Emphasize that the Protocol is not interpreted as implying a change in the rights and obligations under any existing agreements; and
Understand that the above recital is not intended to subordinate the Protocol to other international agreements.
Main features
Overview of features
The Protocol promotes biosafety by establishing rules and procedures for the safe transfer, handling, and use of LMOs, with specific focus on transboundary movements of LMOs. It features a set of procedures, including one for LMOs that are to be intentionally introduced into the environment (the advance informed agreement procedure) and one for LMOs that are intended to be used directly as food or feed or for processing. Parties to the Protocol must ensure that LMOs are handled, packaged and transported under conditions of safety. Furthermore, the shipment of LMOs subject to transboundary movement must be accompanied by appropriate documentation specifying, among other things, the identity of the LMOs and a contact point for further information. These procedures and requirements are designed to provide importing Parties with the necessary information needed for making informed decisions about whether or not to accept LMO imports and for handling them in a safe manner.
The Party of import makes its decisions in accordance with scientifically sound risk assessments. The Protocol sets out principles and methodologies on how to conduct a risk assessment. In case of insufficient relevant scientific information and knowledge, the Party of import may use precaution in making their decisions on import. Parties may also take into account, consistent with their international obligations, socio-economic considerations in reaching decisions on import of LMOs.
Parties must also adopt measures for managing any risks identified by the risk assessment, and they must take necessary steps in the event of accidental release of LMOs.
To facilitate its implementation, the Protocol establishes a Biosafety Clearing-House for Parties to exchange information, and contains a number of important provisions, including capacity-building, a financial mechanism, compliance procedures, and requirements for public awareness and participation.
Procedures for moving LMOs across borders
Advance Informed Agreement
The "Advance Informed Agreement" (AIA) procedure applies to the first intentional transboundary movement of LMOs for intentional introduction into the environment of the Party of import. It includes four components: notification by the Party of export or the exporter, acknowledgment of receipt of notification by the Party of import, the decision procedure, and opportunity for review of decisions. The purpose of this procedure is to ensure that importing countries have both the opportunity and the capacity to assess risks that may be associated with the LMO before agreeing to its import. The Party of import must indicate the reasons on which its decisions are based (unless consent is unconditional). A Party of import may, at any time, in light of new scientific information, review and change a decision. A Party of export or a notifier may also request the Party of import to review its decisions.
However, the Protocol's AIA procedure does not apply to certain categories of LMOs:
LMOs in transit;
LMOs destined for contained use;
LMOs intended for direct use as food or feed or for processing
While the Protocol's AIA procedure does not apply to certain categories of LMOs, Parties have the right to regulate the importation on the basis of domestic legislation. There are also allowances in the Protocol to declare certain LMOs exempt from application of the AIA procedure.
LMOs intended for food or feed, or for processing
LMOs intended for direct use as food or feed, or processing (LMOs-FFP) represent a large category of agricultural commodities. The Protocol, instead of using the AIA procedure, establishes a more simplified procedure for the transboundary movement of LMOs-FFP. Under this procedure, a Party must inform other Parties through the Biosafety Clearing-House, within 15 days, of its decision regarding domestic use of LMOs that may be subject to transboundary movement.
Decisions by the Party of import on whether or not to accept the import of LMOs-FFP are taken under its domestic regulatory framework that is consistent with the objective of the Protocol. A developing country Party or a Party with an economy in transition may, in the absence of a domestic regulatory framework, declare through the Biosafety Clearing-House that its decisions on the first import of LMOs-FFP will be taken in accordance with risk assessment as set out in the Protocol and time frame for decision-making.
Handling, transport, packaging and identification
The Protocol provides for practical requirements that are deemed to contribute to the safe movement of LMOs. Parties are required to take measures for the safe handling, packaging and transportation of LMOs that are subject to transboundary movement. The Protocol specifies requirements on identification by setting out what information must be provided in documentation that should accompany transboundary shipments of LMOs. It also leaves room for possible future development of standards for handling, packaging, transport and identification of LMOs by the meeting of the Parties to the Protocol.
Each Party is required to take measures ensuring that LMOs subject to intentional transboundary movement are accompanied by documentation identifying the LMOs and providing contact details of persons responsible for such movement. The details of these requirements vary according to the intended use of the LMOs, and, in the case of LMOs for food, feed or for processing, they should be further addressed by the governing body of the Protocol. (Article 18 of the Protocol, SCBD 2000).
The first meeting of the Parties adopted decisions outlining identification requirements for different categories of LMOs (Decision BS-I/6, SCBD 2004). However, the second meeting of the Parties failed to reach agreement on the detailed requirements to identify LMOs intended for direct use as food, feed or for processing and will need to reconsider this issue at its third meeting in March 2006.
Biosafety Clearing-House
The Protocol established a Biosafety Clearing-House (BCH), in order to facilitate the exchange of scientific, technical, environmental and legal information on, and experience with, living modified organisms; and to assist Parties to implement the Protocol (Article 20 of the Protocol, SCBD 2000). It was established in a phased manner, and the first meeting of the Parties approved the transition from the pilot phase to the fully operational phase, and adopted modalities for its operations (Decision BS-I/3, SCBD 2004).
See also
Biosafety Clearing-House
Substantial equivalence
Nagoya Protocol, another supplementary protocol adopted by the CBD
References
Secretariat of the Convention on Biological Diversity (2000) Cartagena Protocol on Biosafety to the Convention on Biological Diversity: text and annexes. Montreal, Quebec, Canada.
Secretariat of the Convention on Biological Diversity (2004) Global Biosafety – From concepts to action: Decisions adopted by the first meeting of the Conference of the Parties to the Convention on Biological Diversity serving as the meeting of the Parties to the Cartagena Protocol on Biosafety. Montreal, Quebec, Canada.
External links
Biosafety Protocol Homepage
Ratifications at depositary
Biosafety Clearing-House Central Portal
Text of the Protocol
Map showing the state of the ratification of the Cartagena Protocol on Biosafety.
Introductory note by Laurence Boisson de Chazournes, procedural history note and audiovisual material on the Cartagena Protocol on Biosafety to the Convention on Biological Diversity in the Historic Archives of the United Nations Audiovisual Library of International Law
Health risk
Biodiversity
Environmental treaties
United Nations treaties
Treaties concluded in 2000
Treaties entered into force in 2003
2003 in the environment
Treaties of Afghanistan
Treaties of Albania
Treaties of Algeria
Treaties of Angola
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Armenia
Treaties of Austria
Treaties of Azerbaijan
Treaties of the Bahamas
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belarus
Treaties of Belgium
Treaties of Belize
Treaties of Benin
Treaties of Bhutan
Treaties of Bolivia
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of Brazil
Treaties of Bulgaria
Treaties of Burkina Faso
Treaties of Burundi
Treaties of Cambodia
Treaties of Cameroon
Treaties of Cape Verde
Treaties of the Central African Republic
Treaties of Chad
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of the Comoros
Treaties of the Republic of the Congo
Treaties of Costa Rica
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of North Korea
Treaties of the Democratic Republic of the Congo
Treaties of Denmark
Treaties of Djibouti
Treaties of Dominica
Treaties of the Dominican Republic
Treaties of Ecuador
Treaties of Egypt
Treaties of El Salvador
Treaties of Eritrea
Treaties of Estonia
Treaties of the Transitional Government of Ethiopia
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of the Gambia
Treaties of Georgia (country)
Treaties of Germany
Treaties of Ghana
Treaties of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea
Treaties of Guinea-Bissau
Treaties of Guyana
Treaties of Honduras
Treaties of Hungary
Treaties of India
Treaties of Indonesia
Treaties of Iran
Treaties of Iraq
Treaties of Ireland
Treaties of Italy
Treaties of Jamaica
Treaties of Japan
Treaties of Jordan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of Kiribati
Treaties of Kyrgyzstan
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Jamahiriya
Treaties of Lithuania
Treaties of Luxembourg
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of the Maldives
Treaties of Mali
Treaties of Malta
Treaties of the Marshall Islands
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of Mongolia
Treaties of Montenegro
Treaties of Morocco
Treaties of Mozambique
Treaties of Myanmar
Treaties of Namibia
Treaties of Nauru
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Niger
Treaties of Nigeria
Treaties of Norway
Treaties of Oman
Treaties of Pakistan
Treaties of Palau
Treaties of the State of Palestine
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of Poland
Treaties of Portugal
Treaties of Qatar
Treaties of South Korea
Treaties of Moldova
Treaties of Romania
Treaties of Rwanda
Treaties of Samoa
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia and Montenegro
Treaties of Seychelles
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of the Transitional Federal Government of Somalia
Treaties of South Africa
Treaties of Spain
Treaties of Sri Lanka
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of the Republic of the Sudan (1985–2011)
Treaties of Suriname
Treaties of Eswatini
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tajikistan
Treaties of Thailand
Treaties of North Macedonia
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Turkmenistan
Treaties of Uganda
Treaties of Ukraine
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of Tanzania
Treaties of Uruguay
Treaties of Venezuela
Treaties of Vietnam
Treaties of Yemen
Treaties of Zambia
Treaties of Zimbabwe
Treaties entered into by the European Union
Treaties of Niue
2000 in Canada
Treaties extended to Hong Kong
Treaties extended to Gibraltar
Convention on Biological Diversity
Treaties of Kuwait | Cartagena Protocol on Biosafety | [
"Biology"
] | 3,636 | [
"Convention on Biological Diversity",
"Biodiversity"
] |
45,569 | https://en.wikipedia.org/wiki/Dedekind%20cut | In mathematics, Dedekind cuts, named after German mathematician Richard Dedekind (but previously considered by Joseph Bertrand), are a method of construction of the real numbers from the rational numbers. A Dedekind cut is a partition of the rational numbers into two sets A and B, such that each element of A is less than every element of B, and A contains no greatest element. The set B may or may not have a smallest element among the rationals. If B has a smallest element among the rationals, the cut corresponds to that rational. Otherwise, that cut defines a unique irrational number which, loosely speaking, fills the "gap" between A and B. In other words, A contains every rational number less than the cut, and B contains every rational number greater than or equal to the cut. An irrational cut is equated to an irrational number which is in neither set. Every real number, rational or not, is equated to one and only one cut of rationals.
Dedekind cuts can be generalized from the rational numbers to any totally ordered set by defining a Dedekind cut as a partition of a totally ordered set into two non-empty parts A and B, such that A is closed downwards (meaning that for all a in A, x ≤ a implies that x is in A as well) and B is closed upwards, and A contains no greatest element. See also completeness (order theory).
It is straightforward to show that a Dedekind cut among the real numbers is uniquely defined by the corresponding cut among the rational numbers. Similarly, every cut of reals is identical to the cut produced by a specific real number (which can be identified as the smallest element of the B set). In other words, the number line where every real number is defined as a Dedekind cut of rationals is a complete continuum without any further gaps.
Definition
A Dedekind cut is a partition of the rationals Q into two subsets A and B such that
A is nonempty.
A ≠ Q (equivalently, B is nonempty).
If x ∈ A, y ∈ Q, and y < x, then y ∈ A. (A is "closed downwards".)
If x ∈ A, then there exists a y ∈ A such that y > x. (A does not contain a greatest element.)
By omitting the first two requirements, we formally obtain the extended real number line.
Representations
It is more symmetrical to use the (A, B) notation for Dedekind cuts, but each of A and B does determine the other. It can be a simplification, in terms of notation if nothing more, to concentrate on one "half" — say, the lower one — and call any downward-closed set A without greatest element a "Dedekind cut".
If the ordered set S is complete, then, for every Dedekind cut (A, B) of S, the set B must have a minimal element b,
hence we must have that A is the interval (−∞, b), and B the interval [b, +∞).
In this case, we say that b is represented by the cut (A, B).
The important purpose of the Dedekind cut is to work with number sets that are not complete. The cut itself can represent a number not in the original collection of numbers (most often rational numbers). The cut can represent a number b, even though the numbers contained in the two sets A and B do not actually include the number b that their cut represents.
For example if A and B only contain rational numbers, they can still be cut at by putting every negative rational number in A, along with every non-negative rational number whose square is less than 2; similarly B would contain every positive rational number whose square is greater than or equal to 2. Even though there is no rational value for , if the rational numbers are partitioned into A and B this way, the partition itself represents an irrational number.
Ordering of cuts
Regard one Dedekind cut (A, B) as less than another Dedekind cut (C, D) (of the same superset) if A is a proper subset of C. Equivalently, if D is a proper subset of B, the cut (A, B) is again less than (C, D). In this way, set inclusion can be used to represent the ordering of numbers, and all other relations (greater than, less than or equal to, equal to, and so on) can be similarly created from set relations.
The set of all Dedekind cuts is itself a linearly ordered set (of sets). Moreover, the set of Dedekind cuts has the least-upper-bound property, i.e., every nonempty subset of it that has any upper bound has a least upper bound. Thus, constructing the set of Dedekind cuts serves the purpose of embedding the original ordered set S, which might not have had the least-upper-bound property, within a (usually larger) linearly ordered set that does have this useful property.
Construction of the real numbers
A typical Dedekind cut of the rational numbers is given by the partition with
A = {a ∈ Q : a < 0 or a² < 2},  B = {b ∈ Q : b > 0 and b² ≥ 2}.
This cut represents the irrational number √2 in Dedekind's construction. The essential idea is that we use the set A, which consists of the negative rationals together with all rational numbers whose squares are less than 2, to "represent" the number √2, and further, by defining arithmetic operators properly over these sets (addition, subtraction, multiplication, and division), these sets (together with these arithmetic operations) form the familiar real numbers.
To establish this, one must show that A really is a cut (according to the definition) and that the square of A, that is A × A (please refer to the link above for the precise definition of how the multiplication of cuts is defined), is 2 (note that, rigorously speaking, the number 2 is itself represented by the cut {x ∈ Q : x < 2}). To show the first part, we show that for any positive rational x with x² < 2, there is a rational y with x < y and y² < 2. The choice y = (2x + 2)/(x + 2) works, thus A is indeed a cut. Now armed with the multiplication between cuts, it is easy to check that A × A ≤ 2 (essentially, this is because xy ≤ 2 for all x, y ∈ A with x, y ≥ 0). Therefore to show that A × A = 2, we show that A × A ≥ 2, and it suffices to show that for any rational r < 2, there exists x ∈ A with x² > r. For this we notice that if x ≥ 0 and 2 − x² = ε > 0, then for the y constructed above, 2 − y² = 2(2 − x²)/(x + 2)² ≤ ε/2; this means that we have a sequence in A whose square can become arbitrarily close to 2, which finishes the proof.
Note that the equality y² = 2 cannot hold, since √2 is not rational.
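The lower half of this cut, and the fact that it has no greatest element, can be checked mechanically with exact rational arithmetic. A minimal sketch in Python (function names are ad hoc), using the standard choice y = (2x + 2)/(x + 2) to climb within the lower set:

```python
from fractions import Fraction

def in_lower_cut(q: Fraction) -> bool:
    """Membership in the lower set A of the cut for sqrt(2):
    every negative rational plus every rational whose square is below 2."""
    return q < 0 or q * q < 2

def next_larger(x: Fraction) -> Fraction:
    """For rational x >= 0 with x^2 < 2, return y = (2x + 2)/(x + 2);
    then x < y and y^2 < 2, so A has no greatest element."""
    return (2 * x + 2) / (x + 2)

x = Fraction(1)                        # 1 lies in A, since 1^2 < 2
for _ in range(5):
    y = next_larger(x)
    assert in_lower_cut(y) and y > x   # stays in A and strictly increases
    x = y

print(x, float(x))                     # a rational in A approaching sqrt(2) from below
```

Each step at least halves the gap 2 − x², since 2 − y² = 2(2 − x²)/(x + 2)² and (x + 2)² ≥ 4 for x ≥ 0, matching the convergence argument above.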
Relation to interval arithmetic
Given a Dedekind cut representing the real number r by splitting the rationals into
(A, B), where rationals in A are less than r and rationals in B are greater than r, it can be equivalently represented as the set of pairs (a, b) with a ∈ A and b ∈ B, with the lower cut and the upper cut being given by projections. This corresponds exactly to the set of intervals approximating r.
This allows the basic arithmetic operations on the real numbers to be defined in terms of interval arithmetic. This property, and its relation with real numbers given only in terms of A and B, is particularly important in weaker foundations such as constructive analysis.
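The interval view can be made concrete: bisection against the lower-set membership test produces ever-tighter rational pairs (a, b) with a in A and b in B. A small illustration (the helper name is ad hoc, not from any library):

```python
from fractions import Fraction

def refine(lo: Fraction, hi: Fraction, in_lower, steps: int = 20):
    """Shrink a rational interval around the cut whose lower set is
    tested by in_lower; the pair (lo, hi) always satisfies lo in A, hi in B."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if in_lower(mid):
            lo = mid          # midpoint is still below the cut
        else:
            hi = mid          # midpoint is at or above the cut
    return lo, hi

in_A = lambda q: q < 0 or q * q < 2     # lower set of the sqrt(2) cut
lo, hi = refine(Fraction(1), Fraction(2), in_A)
assert lo * lo < 2 <= hi * hi           # the pair brackets sqrt(2)
print(float(lo), float(hi))
```

Interval arithmetic then defines, for example, the sum of two such reals by adding the interval endpoints pairwise.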
Generalizations
Arbitrary linearly ordered sets
In the general case of an arbitrary linearly ordered set X, a cut is a pair (A, B) such that A ∪ B = X, and a ∈ A, b ∈ B imply a < b. Some authors add the requirement that both A and B are nonempty.
If neither A has a maximum, nor B has a minimum, the cut is called a gap. A linearly ordered set endowed with the order topology is compact if and only if it has no gap.
Surreal numbers
A construction resembling Dedekind cuts is used for (one among many possible) constructions of surreal numbers. The relevant notion in this case is a Cuesta-Dutari cut, named after the Spanish mathematician Norberto Cuesta Dutari.
Partially ordered sets
More generally, if S is a partially ordered set, a completion of S means a complete lattice L with an order-embedding of S into L. The notion of complete lattice generalizes the least-upper-bound property of the reals.
One completion of S is the set of its downwardly closed subsets, ordered by inclusion. A related completion that preserves all existing sups and infs of S is obtained by the following construction: For each subset A of S, let Au denote the set of upper bounds of A, and let Al denote the set of lower bounds of A. (These operators form a Galois connection.) Then the Dedekind–MacNeille completion of S consists of all subsets A for which (Au)l = A; it is ordered by inclusion. The Dedekind-MacNeille completion is the smallest complete lattice with S embedded in it.
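The closure condition (Au)l = A can be computed directly for a small finite poset. A brute-force sketch (exponential in |S|, for illustration only; not an efficient algorithm):

```python
from itertools import chain, combinations

def dm_completion(elements, leq):
    """All subsets A of a finite poset satisfying lower(upper(A)) == A,
    i.e. the Dedekind-MacNeille completion, ordered by set inclusion."""
    S = list(elements)
    def upper(A):   # upper bounds of A
        return frozenset(u for u in S if all(leq(a, u) for a in A))
    def lower(B):   # lower bounds of B
        return frozenset(x for x in S if all(leq(x, b) for b in B))
    subsets = chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))
    return {A for A in map(frozenset, subsets) if lower(upper(A)) == A}

# Two incomparable elements: the completion adds a bottom (the empty set)
# and a top (the whole set), giving a four-element lattice.
cuts = dm_completion({"a", "b"}, lambda x, y: x == y)
print(len(cuts))   # 4 cuts: {}, {a}, {b}, {a, b}
```

For a chain such as 1 < 2 < 3, the same function returns only the three principal down-sets, since the chain is already complete as a lattice.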
Notes
References
Dedekind, Richard, Essays on the Theory of Numbers, "Continuity and Irrational Numbers," Dover Publications: New York. Also available at Project Gutenberg.
External links
Order theory
Rational numbers
Real numbers | Dedekind cut | [
"Mathematics"
] | 1,801 | [
"Real numbers",
"Order theory",
"Mathematical objects",
"Numbers"
] |
45,570 | https://en.wikipedia.org/wiki/DNA%20vaccine | A DNA vaccine is a type of vaccine that transfects a specific antigen-coding DNA sequence into the cells of an organism as a mechanism to induce an immune response.
DNA vaccines work by injecting genetically engineered plasmid containing the DNA sequence encoding the antigen(s) against which an immune response is sought, so the cells directly produce the antigen, thus causing a protective immunological response. DNA vaccines have theoretical advantages over conventional vaccines, including the "ability to induce a wider range of types of immune response". Several DNA vaccines have been tested for veterinary use. In some cases, protection from disease in animals has been obtained, in others not. Research is ongoing over the approach for viral, bacterial and parasitic diseases in humans, as well as for cancers. In August 2021, Indian authorities gave emergency approval to ZyCoV-D. Developed by Cadila Healthcare, it is the first DNA vaccine approved for humans.
History
Conventional vaccines contain either specific antigens from a pathogen, or attenuated viruses which stimulate an immune response in the vaccinated organism. DNA vaccines are a class of genetic vaccines, because they contain genetic information (DNA or RNA) that codes for the cellular production (protein biosynthesis) of an antigen. DNA vaccines contain DNA that codes for specific antigens from a pathogen. The DNA is injected into the body and taken up by cells, whose normal metabolic processes synthesize proteins based on the genetic code in the plasmid that they have taken up. Because these proteins contain regions of amino acid sequences that are characteristic of bacteria or viruses, they are recognized as foreign, and when they are processed by the host cells and displayed on their surface, the immune system is alerted, which then triggers immune responses. Alternatively, the DNA may be encapsulated in protein to facilitate cell entry. If this capsid protein is included in the DNA, the resulting vaccine can combine the potency of a live vaccine without reversion risks.
In 1983, Enzo Paoletti and Dennis Panicali at the New York Department of Health devised a strategy to produce recombinant DNA vaccines by using genetic engineering to transform ordinary smallpox vaccine into vaccines that may be able to prevent other diseases. They altered the DNA of cowpox virus by inserting a gene from other viruses (namely Herpes simplex virus, hepatitis B and influenza). In 1993, Jeffrey Ulmer and co-workers at Merck Research Laboratories demonstrated that direct injection of mice with plasmid DNA encoding a flu antigen protected the animals against subsequent experimental infection with influenza virus. In 2016 a DNA vaccine for the Zika virus began testing in humans at the National Institutes of Health. The study was planned to involve up to 120 subjects aged between 18 and 35. Separately, Inovio Pharmaceuticals and GeneOne Life Science began tests of a different DNA vaccine against Zika in Miami. The NIH vaccine is injected into the upper arm under high pressure. Manufacturing the vaccines in volume remained unsolved as of August 2016. Clinical trials for DNA vaccines to prevent HIV are underway.
In August 2021, Indian authorities gave emergency approval to ZyCoV-D. Developed by Cadila Healthcare, it is the first DNA vaccine against COVID-19.
Applications
No DNA vaccines have been approved for human use in the United States. Few experimental trials have evoked a response strong enough to protect against disease, and the technique's usefulness remains to be proven in humans.
A veterinary DNA vaccine to protect horses from West Nile virus has been approved. Another West Nile virus vaccine has been tested successfully on American robins.
DNA immunization is also being investigated as a means of developing antivenom sera. DNA immunization can be used as a technology platform for monoclonal antibody induction.
Advantages
No risk for infections
Antigen presentation by both MHC class I and class II molecules
Polarise T-cell response toward type 1 or type 2
Immune response focused on the antigen of interest
Ease of development and production
Stability for storage and shipping
Cost-effectiveness
Obviates need for peptide synthesis, expression and purification of recombinant proteins and use of toxic adjuvants
Long-term persistence of immunogen
In vivo expression ensures protein more closely resembles normal eukaryotic structure, with accompanying post-translational modifications
Disadvantages
Limited to protein immunogens (not useful for non-protein based antigens such as bacterial polysaccharides)
Potential for atypical processing of bacterial and parasite proteins
Potential when using nasal spray administration of plasmid DNA nanoparticles to transfect non-target cells, such as brain cells
Cross-contamination when manufacturing different types of live vaccines in same facility
Plasmid vectors
Vector design
DNA vaccines elicit the best immune response when high-expression vectors are used. These are plasmids that usually consist of a strong viral promoter to drive the in vivo transcription and translation of the gene (or complementary DNA) of interest. Intron A may sometimes be included to improve mRNA stability and hence increase protein expression. Plasmids also include a strong polyadenylation/transcriptional termination signal, such as bovine growth hormone or rabbit beta-globin polyadenylation sequences. Polycistronic vectors (with multiple genes of interest) are sometimes constructed to express more than one immunogen, or to express an immunogen and an immunostimulatory protein.
Because the plasmid, carrying a relatively small amount of genetic code (up to about 200 kbp), is the "vehicle" from which the immunogen is expressed, optimising vector design for maximal protein expression is essential. One way of enhancing protein expression is by optimising the codon usage of pathogenic mRNAs for eukaryotic cells. Pathogens often have different AT-contents than the target species, so altering the gene sequence of the immunogen to reflect the codons more commonly used in the target species may improve its expression.
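The codon-optimisation idea can be sketched in a few lines. The codon table below is a tiny hypothetical excerpt (real optimisation uses measured species-specific codon-usage tables), so the function is illustrative only:

```python
# Illustrative sketch of codon optimisation: recoding a pathogen gene with the
# codons most frequently used by the target (e.g. mammalian) host.
# PREFERRED_CODON is a made-up excerpt, not real codon-usage data.

PREFERRED_CODON = {  # amino acid -> codon favoured by the (hypothetical) host
    "K": "AAG", "N": "AAC", "I": "ATC", "F": "TTC", "L": "CTG",
}

STANDARD_CODE = {  # codon -> amino acid (excerpt of the standard genetic code)
    "AAA": "K", "AAG": "K", "AAT": "N", "AAC": "N",
    "ATT": "I", "ATC": "I", "ATA": "I",
    "TTT": "F", "TTC": "F", "CTG": "L", "TTA": "L",
}

def codon_optimise(cds: str) -> str:
    """Rewrite each codon as the host-preferred synonym, preserving the protein."""
    out = []
    for i in range(0, len(cds), 3):
        codon = cds[i:i + 3]
        aa = STANDARD_CODE[codon]
        out.append(PREFERRED_CODON.get(aa, codon))
    return "".join(out)

# An AT-rich pathogen sequence is recoded without changing the encoded peptide:
print(codon_optimise("AAAATTTTTTTA"))  # -> AAGATCTTCCTG
```

The recoded sequence has a higher GC-content but translates to the same peptide (KIFL), which is the point of the technique described above.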
Another consideration is the choice of promoter. The SV40 promoter was conventionally used until research showed that vectors driven by the Rous sarcoma virus (RSV) promoter had much higher expression rates. More recently, expression and immunogenicity have been further increased in model systems by the use of the cytomegalovirus (CMV) immediate early promoter, and a retroviral cis-acting transcriptional element. Additional modifications to improve expression rates include the insertion of enhancer sequences, synthetic introns, adenovirus tripartite leader (TPL) sequences and modifications to the polyadenylation and transcriptional termination sequences. An example of a DNA vaccine plasmid is pVAC, which uses the SV40 promoter.
Structural instability phenomena are of particular concern for plasmid manufacture, DNA vaccination and gene therapy. Accessory regions pertaining to the plasmid backbone may engage in a wide range of structural instability phenomena. Well-known catalysts of genetic instability include direct, inverted and tandem repeats, which are conspicuous in many commercially available cloning and expression vectors. Therefore, the reduction or complete elimination of extraneous noncoding backbone sequences would pointedly reduce the propensity for such events to take place and consequently the overall plasmid's recombinogenic potential.
Mechanism of plasmids
Once the plasmid inserts itself into the transfected cell nucleus, it codes for a peptide string of a foreign antigen. On its surface the cell displays the foreign antigen with both major histocompatibility complex (MHC) class I and class II molecules. The antigen-presenting cell then travels to the lymph nodes and presents the antigen peptide and costimulatory molecule signalling to T-cells, initiating an immune response.
Vaccine insert design
Immunogens can be targeted to various cellular compartments to improve antibody or cytotoxic T-cell responses. Secreted or plasma membrane-bound antigens are more effective at inducing antibody responses than cytosolic antigens, while cytotoxic T-cell responses can be improved by targeting antigens for cytoplasmic degradation and subsequent entry into the major histocompatibility complex (MHC) class I pathway. This is usually accomplished by the addition of N-terminal ubiquitin signals.
The conformation of the protein can also affect antibody responses. "Ordered" structures (such as viral particles) are more effective than unordered structures. Strings of minigenes (or MHC class I epitopes) from different pathogens raise cytotoxic T-cell responses to some pathogens, especially if a TH epitope is also included.
Delivery
DNA vaccines have been introduced into animal tissues by multiple methods. In 1999, the two most popular approaches were injection of DNA in saline, using a standard hypodermic needle, and gene gun delivery. Several other techniques have been documented in the intervening years.
Saline injection
Injection in saline is normally conducted intramuscularly (IM) in skeletal muscle, or intradermally (ID), delivering DNA to extracellular spaces. This can be assisted either 1) by electroporation; 2) by temporarily damaging muscle fibres with myotoxins such as bupivacaine; or 3) by using hypertonic solutions of saline or sucrose. Immune responses to this method can be affected by factors including needle type, needle alignment, speed of injection, volume of injection, muscle type, and age, sex and physiological condition of the recipient.
Gene gun
Gene gun delivery ballistically accelerates plasmid DNA (pDNA) that has been adsorbed onto gold or tungsten microparticles into the target cells, using compressed helium as an accelerant.
Mucosal surface delivery
Alternatives included aerosol instillation of naked DNA on mucosal surfaces, such as the nasal and lung mucosa, and topical administration of pDNA to the eye and vaginal mucosa. Mucosal surface delivery has also been achieved using cationic liposome-DNA preparations, biodegradable microspheres, attenuated Salmonella, Shigella or Listeria vectors for oral administration to the intestinal mucosa and recombinant adenovirus vectors.
Polymer vehicle
A hybrid vehicle composed of a bacterial cell and synthetic polymers has been employed for DNA vaccine delivery. An E. coli inner core and poly(beta-amino ester) outer coat function synergistically to increase efficiency by addressing barriers associated with antigen-presenting cell gene delivery, which include cellular uptake and internalization, phagosomal escape and intracellular cargo concentration. Tested in mice, the hybrid vector was found to induce an immune response.
ELI immunization
Another approach to DNA vaccination is expression library immunization (ELI). Using this technique, potentially all the genes from a pathogen can be delivered at one time, which may be useful for pathogens that are difficult to attenuate or culture. ELI can be used to identify which genes induce a protective response. This has been tested with Mycoplasma pulmonis, a murine lung pathogen with a relatively small genome. Even partial expression libraries can induce protection from subsequent challenge.
Dosage
The delivery method determines the dose required to raise an effective immune response. Saline injections require variable amounts of DNA, from 10 μg to 1 mg, whereas gene gun deliveries require 100 to 1000 times less. Generally, 0.2 μg – 20 μg are required, although quantities as low as 16 ng have been reported. These quantities vary by species. Mice, for example, require approximately 10 times less DNA than primates. Saline injections require more DNA because the DNA is delivered to the extracellular spaces of the target tissue (normally muscle), where it has to overcome physical barriers (such as the basal lamina and large amounts of connective tissue) before it is taken up by the cells, while gene gun deliveries drive/force DNA directly into the cells, resulting in less "wastage".
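As a back-of-the-envelope illustration, the gene-gun dose range implied by the figures above can be computed directly (the numbers are from the text; the helper itself is purely illustrative and is in no way a dosing guide):

```python
# Dose figures taken from the text above; this snippet only does the arithmetic.

SALINE_RANGE_UG = (10.0, 1000.0)   # 10 ug - 1 mg for saline injection
GENE_GUN_REDUCTION = (100, 1000)   # gene gun needs 100-1000x less DNA

def gene_gun_range_ug(saline_low: float, saline_high: float) -> tuple:
    """Dose range implied by applying the stated reduction factors."""
    return (saline_low / GENE_GUN_REDUCTION[1],
            saline_high / GENE_GUN_REDUCTION[0])

low, high = gene_gun_range_ug(*SALINE_RANGE_UG)
print(f"gene gun: {low:g}-{high:g} ug")  # -> gene gun: 0.01-10 ug
```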
Immune response
Helper T cell responses
DNA immunization can raise multiple TH responses, including lymphoproliferation and the generation of a variety of cytokine profiles. A major advantage of DNA vaccines is the ease with which they can be manipulated to bias the type of T-cell help towards a TH1 or TH2 response. Each type has distinctive patterns of lymphokine and chemokine expression, specific types of immunoglobulins, patterns of lymphocyte trafficking and types of innate immune responses.
Other types of T-cell help
The type of T-cell help raised is influenced by the delivery method and the type of immunogen expressed, as well as the targeting of different lymphoid compartments. Generally, saline needle injections (either IM or ID) tend to induce TH1 responses, while gene gun delivery raises TH2 responses. This is true for intracellular and plasma membrane-bound antigens, but not for secreted antigens, which seem to generate TH2 responses, regardless of the method of delivery.
Generally the type of T-cell help raised is stable over time, and does not change when challenged or after subsequent immunizations that would normally have raised the opposite type of response in a naïve specimen. However, Mor et al. (1995) immunized and boosted mice with pDNA encoding the circumsporozoite protein of the mouse malarial parasite Plasmodium yoelii (PyCSP) and found that the initial TH2 response changed, after boosting, to a TH1 response.
Basis for different types of T-cell help
How these different delivery methods and forms of antigen expressed lead to different profiles of T-cell help is not understood. It was thought that the relatively large amounts of DNA used in IM injection were responsible for the induction of TH1 responses. However, evidence shows no dose-related differences in TH type. The type of T-cell help raised is determined by the differentiated state of antigen presenting cells. Dendritic cells can differentiate to secrete IL-12 (which supports TH1 cell development) or IL-4 (which supports TH2 responses). pDNA injected by needle is endocytosed into the dendritic cell, which is then stimulated to differentiate for TH1 cytokine (IL-12) production, while the gene gun bombards the DNA directly into the cell, thus bypassing TH1 stimulation.
Practical uses of polarised T-cell help
Polarisation in T-cell help is useful in influencing allergic responses and autoimmune diseases. In autoimmune diseases, the goal is to shift the self-destructive TH1 response (with its associated cytotoxic T cell activity) to a non-destructive TH2 response. This has been successfully applied in predisease priming for the desired type of response in preclinical models and is somewhat successful in shifting the response for an established disease.
Cytotoxic T-cell responses
One of the advantages of DNA vaccines is that they are able to induce cytotoxic T lymphocytes (CTL) without the inherent risk associated with live vaccines. CTL responses can be raised against immunodominant and immunorecessive CTL epitopes, as well as subdominant CTL epitopes, in a manner that appears to mimic natural infection. This may prove to be a useful tool in assessing CTL epitopes and their role in providing immunity.
Cytotoxic T-cells recognise small peptides (8-10 amino acids) complexed to MHC class I molecules. These peptides are derived from cytosolic proteins that are degraded and delivered to the nascent MHC class I molecule within the endoplasmic reticulum (ER). Targeting gene products directly to the ER (by the addition of an ER insertion signal sequence at the N-terminus) should thus enhance CTL responses. This was successfully demonstrated using recombinant vaccinia viruses expressing influenza proteins, but the principle should also be applicable to DNA vaccines. Targeting antigens for intracellular degradation (and thus entry into the MHC class I pathway) by the addition of ubiquitin signal sequences, or mutation of other signal sequences, was shown to be effective at increasing CTL responses.
CTL responses can be enhanced by co-inoculation with co-stimulatory molecules such as B7-1 or B7-2 for DNA vaccines against influenza nucleoprotein, or GM-CSF for DNA vaccines against the murine malaria model P. yoelii. Co-inoculation with plasmids encoding co-stimulatory molecules IL-12 and TCA3 were shown to increase CTL activity against HIV-1 and influenza nucleoprotein antigens.
Humoral (antibody) response
Antibody responses elicited by DNA vaccinations are influenced by multiple variables, including antigen type; antigen location (i.e. intracellular vs. secreted); number, frequency and dose of immunizations; and site and method of antigen delivery.
Kinetics of antibody response
Humoral responses after a single DNA injection can be much longer-lived than after a single injection with a recombinant protein. Antibody responses against hepatitis B virus (HBV) envelope protein (HBsAg) have been sustained for up to 74 weeks without boost, while lifelong maintenance of protective response to influenza haemagglutinin was demonstrated in mice after gene gun delivery. Antibody-secreting cells (ASC) migrate to the bone marrow and spleen for long-term antibody production, and generally localise there after one year.
Comparisons of antibody responses generated by natural (viral) infection, immunization with recombinant protein and immunization with pDNA are summarised in Table 4. DNA-raised antibody responses rise much more slowly than when natural infection or recombinant protein immunization occurs. As many as 12 weeks may be required to reach peak titres in mice, although boosting can decrease the interval. This response is probably due to the low levels of antigen expressed over several weeks, which supports both the primary and secondary phases of the antibody response. A DNA vaccine expressing HBV small and middle envelope proteins was injected into adults with chronic hepatitis. The vaccine resulted in specific interferon gamma cell production, and specific T-cells for middle envelope protein antigens also developed. However, the immune response of the patients was not robust enough to control HBV infection.
Additionally, the titres of specific antibodies raised by DNA vaccination are lower than those obtained after vaccination with a recombinant protein. However, DNA immunization-induced antibodies show greater affinity to native epitopes than recombinant protein-induced antibodies. In other words, DNA immunization induces a qualitatively superior response. Antibodies can be induced after one vaccination with DNA, whereas recombinant protein vaccinations generally require a boost. DNA immunization can be used to bias the TH profile of the immune response and thus the antibody isotype, which is not possible with either natural infection or recombinant protein immunization. Antibody responses generated by DNA are useful as a preparative tool. For example, polyclonal and monoclonal antibodies can be generated for use as reagents.
Mechanistic basis for DNA-raised immune responses
DNA uptake mechanism
When DNA uptake and subsequent expression was first demonstrated in vivo in muscle cells, these cells were thought to be unique because of their extensive network of T-tubules. Using electron microscopy, it was proposed that DNA uptake was facilitated by caveolae (or, non-clathrin coated pits). However, subsequent research revealed that other cells (such as keratinocytes, fibroblasts and epithelial Langerhans cells) could also internalize DNA. The mechanism of DNA uptake is not known.
Two theories dominate: that in vivo uptake of DNA occurs non-specifically, in a method similar to phago- or pinocytosis, or through specific receptors. These might include a 30 kDa surface receptor or macrophage scavenger receptors. The 30 kDa surface receptor binds specifically to 4500-bp DNA fragments (which are then internalised) and is found on professional APCs and T-cells. Macrophage scavenger receptors bind to a variety of macromolecules, including polyribonucleotides, and are thus also candidates for DNA uptake. Gene gun delivery, cationic liposome packaging, and other delivery methods bypass this entry method, but understanding it may be useful in reducing costs (e.g. by reducing the requirement for cytofectins), which could be important in animal husbandry.
Antigen presentation by bone marrow-derived cells
Studies using chimeric mice have shown that antigen is presented by bone-marrow derived cells, which include dendritic cells, macrophages and specialised B-cells called professional antigen presenting cells (APC). After gene gun inoculation to the skin, transfected Langerhans cells migrate to the draining lymph node to present antigens. After IM and ID injections, dendritic cells present antigen in the draining lymph node and transfected macrophages have been found in the peripheral blood.
Besides direct transfection of dendritic cells or macrophages, cross priming occurs following IM, ID and gene gun DNA deliveries. Cross-priming occurs when a bone marrow-derived cell presents peptides from proteins synthesised in another cell in the context of MHC class 1. This can prime cytotoxic T-cell responses and seems to be important for a full primary immune response.
Target site role
IM and ID DNA delivery initiate immune responses differently. In the skin, keratinocytes, fibroblasts and Langerhans cells take up and express antigens and are responsible for inducing a primary antibody response. Transfected Langerhans cells migrate out of the skin (within 12 hours) to the draining lymph node where they prime secondary B- and T-cell responses. In skeletal muscle, striated muscle cells are most frequently transfected, but seem to be unimportant in immune response. Instead, IM inoculated DNA "washes" into the draining lymph node within minutes, where distal dendritic cells are transfected and then initiate an immune response. Transfected myocytes seem to act as a "reservoir" of antigen for trafficking professional APCs.
Maintenance of immune response
DNA vaccination generates an effective immune memory via the display of antigen-antibody complexes on follicular dendritic cells (FDC), which are potent B-cell stimulators. T-cells can be stimulated by similar, germinal centre dendritic cells. FDC are able to generate an immune memory because antibody production "overlaps" long-term expression of antigen, allowing antigen-antibody immunocomplexes to form and be displayed by FDC.
Interferons
Both helper and cytotoxic T-cells can control viral infections by secreting interferons. Cytotoxic T cells usually kill virally infected cells. However, they can also be stimulated to secrete antiviral cytokines such as IFN-γ and TNF-α, which do not kill the cell, but limit viral infection by down-regulating the expression of viral components. DNA vaccinations can be used to curb viral infections by non-destructive IFN-mediated control. This was demonstrated for hepatitis B. IFN-γ is critically important in controlling malaria infections and is a consideration for anti-malarial DNA vaccines.
Immune response modulation
Cytokine modulation
An effective vaccine must induce an appropriate immune response for a given pathogen. DNA vaccines can polarise T-cell help towards TH1 or TH2 profiles and generate CTL and/or antibody when required. This can be accomplished by modifications to the form of antigen expressed (i.e. intracellular vs. secreted), the method and route of delivery or the dose. It can also be accomplished by the co-administration of plasmid DNA encoding immune regulatory molecules, i.e. cytokines, lymphokines or co-stimulatory molecules. These "genetic adjuvants" can be administered as a:
mixture of 2 plasmids, one encoding the immunogen and the other encoding the cytokine
single bi- or polycistronic vector, separated by spacer regions
plasmid-encoded chimera, or fusion protein
In general, co-administration of pro-inflammatory agents (such as various interleukins, tumor necrosis factor, and GM-CSF) plus TH2-inducing cytokines increase antibody responses, whereas pro-inflammatory agents and TH1-inducing cytokines decrease humoral responses and increase cytotoxic responses (more important in viral protection). Co-stimulatory molecules such as B7-1, B7-2 and CD40L are sometimes used.
This concept was applied in topical administration of pDNA encoding IL-10. Plasmid encoding B7-1 (a ligand on APCs) successfully enhanced the immune response in tumour models. Mixing plasmids encoding GM-CSF and the circumsporozoite protein of P. yoelii (PyCSP) enhanced protection against subsequent challenge (whereas plasmid-encoded PyCSP alone did not). It was proposed that GM-CSF caused dendritic cells to present antigen more efficiently and enhance IL-2 production and TH cell activation, thus driving the increased immune response. This can be further enhanced by first priming with a pPyCSP and pGM-CSF mixture, followed by boosting with a recombinant poxvirus expressing PyCSP. However, co-injection of plasmids encoding GM-CSF (or IFN-γ, or IL-2) and a fusion protein of P. chabaudi merozoite surface protein 1 (C-terminus)-hepatitis B virus surface protein (PcMSP1-HBs) abolished protection against challenge, compared to protection acquired by delivery of pPcMSP1-HBs alone.
The advantages of genetic adjuvants are their low cost and simple administration, as well as avoidance of unstable recombinant cytokines and potentially toxic, "conventional" adjuvants (such as alum, calcium phosphate, monophosphoryl lipid A, cholera toxin, cationic and mannan-coated liposomes, QS21, carboxymethyl cellulose and ubenimex). However, the potential toxicity of prolonged cytokine expression is not established. In many commercially important animal species, cytokine genes have not been identified and isolated. In addition, various plasmid-encoded cytokines modulate the immune system differently according to the delivery time. For example, some cytokine plasmid DNAs are best delivered after immunogen pDNA, because pre- or co-delivery can decrease specific responses and increase non-specific responses.
Immunostimulatory CpG motifs
Plasmid DNA itself appears to have an adjuvant effect on the immune system. Bacterially derived DNA can trigger innate immune defence mechanisms, the activation of dendritic cells and the production of TH1 cytokines. This is due to recognition of certain CpG dinucleotide sequences that are immunostimulatory. CpG stimulatory (CpG-S) sequences occur twenty times more frequently in bacterially-derived DNA than in eukaryotes. This is because eukaryotes exhibit "CpG suppression" – i.e. CpG dinucleotide pairs occur much less frequently than expected. Additionally, CpG-S sequences are hypomethylated. Such hypomethylation occurs frequently in bacterial DNA, while CpG motifs occurring in eukaryotes are methylated at the cytosine nucleotide. In contrast, nucleotide sequences that inhibit the activation of an immune response (termed CpG neutralising, or CpG-N) are over-represented in eukaryotic genomes. The optimal immunostimulatory sequence is an unmethylated CpG dinucleotide flanked by two 5’ purines and two 3’ pyrimidines. Additionally, flanking regions outside this immunostimulatory hexamer must be guanine-rich to ensure binding and uptake into target cells.
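The hexamer rule is easy to express as a pattern scan. A minimal sketch, assuming a made-up input sequence: the regex encodes purine–purine–CG–pyrimidine–pyrimidine, and the observed/expected CpG ratio is a crude proxy for the "CpG suppression" described above:

```python
import re

# Scan for CpG-S hexamers (unmethylated CpG flanked by two 5' purines and two
# 3' pyrimidines) and compute observed/expected CpG frequency as a rough
# measure of CpG suppression. The example sequence is invented.

CPG_S = re.compile(r"(?=([AG]{2}CG[CT]{2}))")  # lookahead -> overlapping hits

def cpg_s_motifs(seq: str):
    """All CpG-S hexamers in seq (may overlap)."""
    return [m.group(1) for m in CPG_S.finditer(seq)]

def cpg_obs_exp(seq: str) -> float:
    """Observed CpG dinucleotide count over the count expected from base composition."""
    n = len(seq)
    obs = sum(1 for i in range(n - 1) if seq[i:i + 2] == "CG")
    expected = seq.count("C") * seq.count("G") / n
    return obs / expected if expected else 0.0

seq = "TTAGCGTTAACGTTGGCGCC"
print(cpg_s_motifs(seq))              # -> ['AGCGTT', 'AACGTT', 'GGCGCC']
print(round(cpg_obs_exp(seq), 2))     # -> 2.0 (ratios well below 1 indicate suppression)
```

A vertebrate genome would typically score well below 1 on this ratio, while bacterial DNA sits near or above it, which is the asymmetry the innate immune system exploits.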
The innate system works with the adaptive immune system to mount a response against the DNA encoded protein. CpG-S sequences induce polyclonal B-cell activation and the upregulation of cytokine expression and secretion. Stimulated macrophages secrete IL-12, IL-18, TNF-α, IFN-α, IFN-β and IFN-γ, while stimulated B-cells secrete IL-6 and some IL-12.
Manipulation of CpG-S and CpG-N sequences in the plasmid backbone of DNA vaccines can ensure the success of the immune response to the encoded antigen and drive the immune response toward a TH1 phenotype. This is useful if a pathogen requires a TH1 response for protection. CpG-S sequences have also been used as external adjuvants for both DNA and recombinant protein vaccination with variable success rates. Other organisms with hypomethylated CpG motifs have demonstrated the stimulation of polyclonal B-cell expansion. The mechanism behind this may be more complicated than simple methylation – hypomethylated murine DNA has not been found to mount an immune response.
Most of the evidence for immunostimulatory CpG sequences comes from murine studies. Extrapolation of this data to other species requires caution – individual species may require different flanking sequences, as binding specificities of scavenger receptors vary across species. Additionally, species such as ruminants may be insensitive to immunostimulatory sequences due to their large gastrointestinal load.
Alternative boosts
DNA-primed immune responses can be boosted by the administration of recombinant protein or recombinant poxviruses. "Prime-boost" strategies with recombinant protein have successfully increased both neutralising antibody titre, and antibody avidity and persistence, for weak immunogens, such as HIV-1 envelope protein. Recombinant virus boosts have been shown to be very efficient at boosting DNA-primed CTL responses. Priming with DNA focuses the immune response on the required immunogen, while boosting with the recombinant virus provides a larger amount of expressed antigen, leading to a large increase in specific CTL responses.
Prime-boost strategies have been successful in inducing protection against malarial challenge in a number of studies. Mice primed with plasmid DNA encoding Plasmodium yoelii circumsporozoite surface protein (PyCSP), then boosted with a recombinant vaccinia virus expressing the same protein, had significantly higher levels of antibody, CTL activity and IFN-γ, and hence higher levels of protection, than mice immunized and boosted with plasmid DNA alone. This can be further enhanced by priming with a mixture of plasmids encoding PyCSP and murine GM-CSF, before boosting with recombinant vaccinia virus. An effective prime-boost strategy for the simian malarial model P. knowlesi has also been demonstrated. Rhesus monkeys were primed with a multicomponent, multistage DNA vaccine encoding two liver-stage antigens – the circumsporozoite surface protein (PkCSP) and sporozoite surface protein 2 (PkSSP2) – and two blood stage antigens – the apical merozoite surface protein 1 (PkAMA1) and merozoite surface protein 1 (PkMSP1p42). They were then boosted with a recombinant canarypox virus encoding all four antigens (ALVAC-4). Immunized monkeys developed antibodies against sporozoites and infected erythrocytes, and IFN-γ-secreting T-cell responses against peptides from PkCSP. Partial protection against sporozoite challenge was achieved, and mean parasitemia was significantly reduced, compared to control monkeys. These models, while not ideal for extrapolation to P. falciparum in humans, will be important in pre-clinical trials.
Enhancing immune responses
DNA
The efficiency of DNA immunization can be improved by stabilising DNA against degradation, and increasing the efficiency of delivery of DNA into antigen-presenting cells. This has been demonstrated by coating biodegradable cationic microparticles (such as poly(lactide-co-glycolide) formulated with cetyltrimethylammonium bromide) with DNA. Such DNA-coated microparticles can be as effective at raising CTL as recombinant viruses, especially when mixed with alum. Particles 300 nm in diameter appear to be most efficient for uptake by antigen presenting cells.
Alphavirus vectors
Recombinant alphavirus-based vectors have been used to improve DNA vaccination efficiency. The gene encoding the antigen of interest is inserted into the alphavirus replicon, replacing structural genes but leaving non-structural replicase genes intact. The Sindbis virus and Semliki Forest virus have been used to build recombinant alphavirus replicons. Unlike conventional DNA vaccines, alphavirus vectors kill transfected cells and are only transiently expressed. Alphavirus replicase genes are expressed in addition to the vaccine insert. It is not clear how alphavirus replicons raise an immune response, but it may be due to the high levels of protein expressed by this vector, replicon-induced cytokine responses, or replicon-induced apoptosis leading to enhanced antigen uptake by dendritic cells.
See also
Vector DNA
HIV vaccine
Gene therapy
mRNA vaccine
References
Further reading
DNA
Gene delivery
Virology
21st-century inventions
Biodefense refers to measures to counter biological threats, reduce biological risks, and prepare for, respond to, and recover from bioincidents, whether naturally occurring, accidental, or deliberate in origin and whether impacting human, animal, plant, or environmental health. Biodefense measures often aim to improve biosecurity or biosafety. Biodefense is frequently discussed in the context of biological warfare or bioterrorism, and is generally considered a military or emergency response term.
Biodefense applies to two distinct target populations: civilian non-combatants and military combatants (troops in the field). Protection of water supplies and food supplies are often a critical part of biodefense.
Military
Troops in the field
Military biodefense in the United States began with the United States Army Medical Unit (USAMU) at Fort Detrick, Maryland, in 1956. (In contrast to the U.S. Army Biological Warfare Laboratories [1943–1969], also at Fort Detrick, the USAMU's mission was purely to develop defensive measures against bio-agents, as opposed to weapons development.) The USAMU was disestablished in 1969 and succeeded by today's United States Army Medical Research Institute of Infectious Diseases (USAMRIID).
The U.S. Department of Defense (DoD) has focused since at least 1998 on the development and application of vaccine-based biodefenses. In a July 2001 report commissioned by the DoD, the "DoD-critical products" were stated as vaccines against anthrax (AVA and Next Generation), smallpox, plague, tularemia, botulinum, ricin, and equine encephalitis. Note that two of these targets are toxins (botulinum and ricin) while the remainder are infectious agents.
Civilian
Role of public health and disease surveillance
Notably, all of the classical and modern biological weapons organisms cause animal diseases, the only exception being smallpox. Thus, in any use of biological weapons, it is highly likely that animals will become ill either simultaneously with, or perhaps earlier than, humans.
Indeed, in the largest biological weapons accident known–the anthrax outbreak in Sverdlovsk (now Yekaterinburg) in the Soviet Union in 1979, sheep became ill with anthrax as far as 200 kilometers from the release point of the organism from a military facility in the southeastern portion of the city (known as Compound 19 and still off limits to visitors today, see Sverdlovsk anthrax leak).
Thus, a robust surveillance system involving human clinicians and veterinarians may identify a bioweapons attack early in the course of an epidemic, permitting the prophylaxis of disease in the vast majority of people (and/or animals) exposed but not yet ill.
For example, in the case of anthrax, it is likely that by 24–36 hours after an attack, some small percentage of individuals (those with compromised immune systems or who received a large dose of the organism due to proximity to the release point) will become ill with classical symptoms and signs (including a virtually unique chest X-ray finding, often recognized by public health officials if they receive timely reports). By making these data available to local public health officials in real time, most models of anthrax epidemics indicate that more than 80% of an exposed population can receive antibiotic treatment before becoming symptomatic, and thus avoid the moderately high mortality of the disease.
Identification of bioweapons
The goal of biodefense is to integrate the sustained efforts of the national and homeland security, medical, public health, intelligence, diplomatic, and police communities. Health care providers and public health officers are among the first lines of defense. In some countries private, local, and provincial (state) capabilities are being augmented by and coordinated with federal assets, to provide layered defenses against biological weapons attacks. During the first Gulf War the United Nations activated a biological and chemical response team, Task Force Scorpio, to respond to any potential use of weapons of mass destruction on civilians.
The traditional approach toward protecting agriculture, food, and water, which focuses on the natural or unintentional introduction of disease, is being strengthened by targeted efforts to address current and anticipated biological weapons threats that may be deliberate, multiple, and repetitive.
The growing threat of biowarfare agents and bioterrorism has led to the development of specific field tools that perform on-the-spot analysis and identification of encountered suspect materials. One such technology, being developed by researchers from the Lawrence Livermore National Laboratory (LLNL), employs a "sandwich immunoassay", in which fluorescent dye-labeled antibodies aimed at specific pathogens are attached to silver and gold nanowires.
The U.S. National Institute of Allergy and Infectious Diseases (NIAID) also participates in the identification and prevention of biowarfare and first released a strategy for biodefense in 2002, periodically releasing updates as new pathogens are becoming topics of discussion. Within this list of strategies, responses for specific infectious agents are provided, along with the classification of these agents. NIAID provides countermeasures after the U.S. Department of Homeland Security details which pathogens hold the most threat.
Planning and response
Planning may involve the training of specialized personnel and the development of biological identification systems. Until recently in the United States, most biological defense strategies have been geared to protecting soldiers on the battlefield rather than ordinary people in cities. Financial cutbacks have limited the tracking of disease outbreaks. Some outbreaks, such as food poisoning due to E. coli or Salmonella, could be of either natural or deliberate origin.
Human Resource Training Programs
To date, several at-risk countries have designed training programs at their universities to prepare specialized personnel to deal with biological threats (for example, the George Mason University Biodefense PhD program (USA) or the Biodefense Strategic Studies PhD program designed by Dr Reza Aghanouri (Iran)). These programs are designed to prepare students and officers to serve as scholars and professionals in the fields of biodefense and biosecurity. They integrate knowledge of natural and man-made biological threats with the skills to develop and analyze policies and strategies for enhancing biosecurity. Other areas of biodefense, including nonproliferation, intelligence and threat assessment, and medical and public health preparedness, are integral parts of these programs.
Preparedness
Biological agents are relatively easy for terrorists to obtain and are becoming more threatening in the U.S., and laboratories are working on advanced detection systems to provide early warning, identify contaminated areas and populations at risk, and facilitate prompt treatment. Methods for predicting the use of biological agents in urban areas, as well as for assessing an area for the hazards associated with a biological attack, are being established in major cities. In addition, forensic technologies are being developed to identify biological agents, their geographical origins, and/or their initial source. Efforts include decontamination technologies to restore facilities without causing additional environmental concerns.
Early detection and rapid response to bioterrorism depend on close cooperation between public health authorities and law enforcement; however, such cooperation is currently lacking. National detection assets and vaccine stockpiles are not useful if local and state officials do not have access to them.
United States strategy
In October 2022, the Biden Administration published the "National Biodefense Strategy and Implementation Plan for Countering Biological Threats, Enhancing Pandemic Preparedness, and Achieving Global Health." It updates the Presidency of Donald Trump's 2018 National Biodefense Strategy.
The U.S. government has had a comprehensive defense strategy against bioterror attacks since 2004, when then-President George W. Bush signed Homeland Security Presidential Directive 10. The directive laid out the country's 21st-century biodefense system and assigned various tasks to federal agencies to prevent, protect against, and mitigate biological attacks against the U.S. homeland and global interests. Until 2018, however, the federal government did not have a comprehensive biodefense strategy.
Biosurveillance
In 1999, the University of Pittsburgh's Center for Biomedical Informatics deployed the first automated bioterrorism detection system, called RODS (Real-Time Outbreak Disease Surveillance). RODS is designed to collect data from many data sources and use them to perform signal detection, that is, to detect a possible bioterrorism event at the earliest possible moment. RODS, and other systems like it, collect data from sources including clinic data, laboratory data, and data from over-the-counter drug sales. In 2000, Michael Wagner, the co-director of the RODS laboratory, and Ron Aryel, a subcontractor, conceived the idea of obtaining live data feeds from "non-traditional" (non-health-care) data sources. The RODS laboratory's first efforts eventually led to the establishment of the National Retail Data Monitor, a system which collects data from 20,000 retail locations nationwide.
On February 5, 2002, George W. Bush visited the RODS laboratory and used it as a model for a $300 million spending proposal to equip all 50 states with biosurveillance systems. In a speech delivered at the nearby Masonic temple, Bush compared the RODS system to a modern "DEW" line (referring to the Cold War-era Distant Early Warning radar line).
The principles and practices of biosurveillance, a new interdisciplinary science, were defined and described in the Handbook of Biosurveillance, edited by Michael Wagner, Andrew Moore and Ron Aryel, and published in 2006. Biosurveillance is the science of real-time disease outbreak detection. Its principles apply to both natural and man-made epidemics (bioterrorism).
Data which potentially could assist in early detection of a bioterrorism event include many categories of information. Health-related data such as that from hospital computer systems, clinical laboratories, electronic health record systems, medical examiner record-keeping systems, 911 call center computers, and veterinary medical record systems could be of help; researchers are also considering the utility of data generated by ranching and feedlot operations, food processors, drinking water systems, school attendance recording, and physiologic monitors, among others. Intuitively, one would expect systems which collect more than one type of data to be more useful than systems which collect only one type of information (such as single-purpose laboratory or 911 call-center based systems), and be less prone to false alarms, and this appears to be the case.
In Europe, disease surveillance is beginning to be organized on the continent-wide scale needed to track a biological emergency. The system not only monitors infected persons, but attempts to discern the origin of the outbreak.
Researchers are experimenting with devices to detect the existence of a threat:
Tiny electronic chips that would contain living nerve cells to warn of the presence of bacterial toxins (identification of broad range toxins)
Fiber-optic tubes lined with antibodies coupled to light-emitting molecules (identification of specific pathogens, such as anthrax, botulinum, ricin)
New research shows that ultraviolet avalanche photodiodes offer the high gain, reliability and robustness needed to detect anthrax and other bioterrorism agents in the air. The fabrication methods and device characteristics were described at the 50th Electronic Materials Conference in Santa Barbara on June 25, 2008. Details of the photodiodes were also published in the February 14, 2008 issue of the journal Electronics Letters and the November 2007 issue of the journal IEEE Photonics Technology Letters.
The United States Department of Defense conducts global biosurveillance through several programs, including the Global Emerging Infections Surveillance and Response System.
Response to bioterrorism incident or threat
Government agencies which would be called on to respond to a bioterrorism incident would include law enforcement, hazardous materials/decontamination units and emergency medical units. The US military has specialized units, which can respond to a bioterrorism event; among them are the United States Marine Corps' Chemical Biological Incident Response Force and the U.S. Army's 20th Support Command (CBRNE), which can detect, identify, and neutralize threats, and decontaminate victims exposed to bioterror agents.
There are four hospitals capable of caring for anyone exposed to a BSL-3 or BSL-4 pathogen; the special clinical studies unit at the National Institutes of Health is one of them. The National Institutes of Health built the facility in April 2010. The unit has state-of-the-art isolation capabilities with a unique airflow system. Its staff are also trained to care for patients who are ill due to a highly infectious pathogen outbreak, such as Ebola. The doctors work closely with USAMRIID, NBACC and IRF. Special trainings take place regularly in order to maintain a high level of confidence in caring for these patients.
Biodefense market
In 2015, the global biodefense market was estimated at $9.8 billion. Experts attributed the large marketplace to an increase in government attention and support as a result of rising bioterrorism threats worldwide. Governments' heightened interest is anticipated to expand the industry into the foreseeable future. According to Medgadget.com, "Many government legislations like Project Bioshield offers nations with counter measures against chemical, radiological, nuclear and biological attack."
Project Bioshield offers accessible biological countermeasures targeting various strains of smallpox and anthrax. "Main goal of the project is creating funding authority to build next generation counter measures, make innovative research & development programs and create a body like FDA (Food & Drug Administration) that can effectively use treatments in case of emergencies." Increased funding, in addition to public health organizations' elevated consideration in biodefense technology investments, could trigger growth in the global biodefense market.
The global biodefense market is divided into geographical regions: APAC, Latin America, Europe, MEA, and North America. The biodefense industry in North America led the global industry by a large margin, accounting for the highest regional revenue share in 2015 with approximately $8.91 billion, due to immense funding and government reinforcement. The biodefense market in Europe is predicted to register a CAGR of 11.41% over the forecast period. The United Kingdom's Ministry of Defence granted $75.67 million for defense and civilian research, the highest regional industry share for 2012.
In 2016, Global Market Insights released a report covering the new trends in the biodefense market backed by detailed, scientific data. Industry leaders in biodefense market include the following corporations: Emergent Biosolutions, SIGA Technologies, Ichor Medical Systems Incorporation, PharmaAthene, Cleveland BioLabs Incorporation, Achaogen (bankrupt in 2019), Alnylam Pharmaceuticals, Avertis, Xoma Corporation, Dynavax Technologies Incorporation, Elusys Therapeutics, DynPort Vaccine Company LLC, Bavarian Nordic and Nanotherapeutics Incorporation.
Legislation
During the 115th Congress in July 2018, four Members of Congress, both Republican and Democrat (Anna Eshoo, Susan Brooks, Frank Pallone and Greg Walden), introduced biodefense legislation called the Pandemic and All Hazards Preparedness and Advancing Innovation Act (PAHPA) (H.R. 6378). The bill strengthens the federal government's preparedness to deal with a wide range of public health emergencies, whether created through an act of bioterrorism or occurring through a natural disaster. The bill reauthorizes funding to improve bioterrorism and other public health emergency preparedness and response activities such as the Hospital Preparedness Program, the Public Health Emergency Preparedness Cooperative Agreement, Project BioShield, and BARDA for the advanced research and development of medical countermeasures (MCMs).
H.R. 6378 has 24 cosponsors from both political parties. On September 25, 2018, the House of Representatives passed the bill.
See also
Fluctuation-enhanced sensing of biological and chemical agents
National Biodefense Analysis and Countermeasures Center (NBACC)
Sensing of phage-triggered ion cascades
United States Army Medical Research Institute of Infectious Diseases (USAMRIID)
United States biological defense program
References
Citations
Other sources
Department of Defense (2001). Report on Biological Warfare Defense Vaccine Research & Development Programs. Retrieved 2005-02-25.
Institute of Medicine and National Research Council of the National Academies (2004). Giving Full Measure to Countermeasures: Addressing Problems in the DoD Program to Develop Medical Countermeasures Against Biological Warfare Agents. National Academy Press (Washington, D.C.). (paperback).
External links
BiodefenseEducation.org - A biodefense digital library and learning collaboratory
NIAID Biodefense Research
The Biodefense Field
Bioethics
Biological warfare | Biodefense | [
"Technology",
"Biology"
] | 3,491 | [
"Bioethics",
"Biological warfare",
"Ethics of science and technology"
] |
45,600 | https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon%20error%20correction | In information theory and coding theory, Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960.
They have many applications, including consumer technologies such as MiniDiscs, CDs, DVDs, Blu-ray discs, QR codes, Data Matrix, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6.
Reed–Solomon codes operate on a block of data treated as a set of finite-field elements called symbols. Reed–Solomon codes are able to detect and correct multiple symbol errors. By adding t = n − k check symbols to the data, a Reed–Solomon code can detect (but not correct) any combination of up to t erroneous symbols, or locate and correct up to ⌊t/2⌋ erroneous symbols at unknown locations. As an erasure code, it can correct up to t erasures at locations that are known and provided to the algorithm, or it can detect and correct combinations of errors and erasures. Reed–Solomon codes are also suitable as multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b. The choice of t is up to the designer of the code and may be selected within wide limits.
There are two basic types of Reed–Solomon codes, the original view and the BCH view, with the BCH view being the most common, as BCH view decoders are faster and require less working storage than original view decoders.
History
Reed–Solomon codes were developed in 1960 by Irving S. Reed and Gustave Solomon, who were then staff members of MIT Lincoln Laboratory. Their seminal article was titled "Polynomial Codes over Certain Finite Fields". The original encoding scheme described in the Reed and Solomon article used a variable polynomial based on the message to be encoded where only a fixed set of values (evaluation points) to be encoded are known to encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets of k (unencoded message length) out of n (encoded message length) values of a received message, choosing the most popular polynomial as the correct one, which was impractical for all but the simplest of cases. This was initially resolved by changing the original scheme to a BCH-code-like scheme based on a fixed polynomial known to both encoder and decoder, but later, practical decoders based on the original scheme were developed, although slower than the BCH schemes. The result of this is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme and ones that use the BCH encoding scheme.
Also in 1960, a practical fixed polynomial decoder for BCH codes developed by Daniel Gorenstein and Neal Zierler was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in an article in June 1961. The Gorenstein–Zierler decoder and the related work on BCH codes are described in a book "Error-Correcting Codes" by W. Wesley Peterson (1961). By 1963 (or possibly earlier), J. J. Stone (and others) recognized that Reed–Solomon codes could use the BCH scheme of using a fixed generator polynomial, making such codes a special class of BCH codes, but Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes, and depending on the set of evaluation points, they are not even cyclic codes.
In 1969, an improved BCH scheme decoder was developed by Elwyn Berlekamp and James Massey and has since been known as the Berlekamp–Massey decoding algorithm.
In 1975, another improved BCH scheme decoder was developed by Yasuo Sugiyama, based on the extended Euclidean algorithm.
In 1977, Reed–Solomon codes were implemented in the Voyager program in the form of concatenated error correction codes. The first commercial application in mass-produced consumer products appeared in 1982 with the compact disc, where two interleaved Reed–Solomon codes are used. Today, Reed–Solomon codes are widely implemented in digital storage devices and digital communication standards, though they are being slowly replaced by Bose–Chaudhuri–Hocquenghem (BCH) codes. For example, Reed–Solomon codes are used in the Digital Video Broadcasting (DVB) standard DVB-S, in conjunction with a convolutional inner code, but BCH codes are used with LDPC in its successor, DVB-S2.
In 1986, an original scheme decoder known as the Berlekamp–Welch algorithm was developed.
In 1996, variations of original scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders (see Guruswami–Sudan list decoding algorithm).
In 2002, another original scheme decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.
Applications
Data storage
Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors associated with media defects.
Reed–Solomon coding is a key component of the compact disc. It was the first use of strong error correction coding in a mass-produced consumer product, and DAT and DVD use similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional interleaver yield a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block. More importantly, it flags as erasures any uncorrectable blocks, i.e., blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks. The outer code easily corrects this, since it can handle up to 4 such erasures per block.
The result is a CIRC that can completely correct error bursts up to 4000 bits, or about 2.5 mm on the disc surface. This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts.
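The burst-spreading effect of the interleaver can be illustrated with a toy Python sketch. CIRC actually uses a convolutional interleaver; the simple block interleaver below (with an illustrative depth of 28 and made-up block contents) only demonstrates the general principle that a contiguous channel burst becomes isolated, easily corrected erasures after deinterleaving:

```python
# Toy block interleaver sketch (CIRC uses a convolutional interleaver,
# but the burst-spreading effect demonstrated here is the same idea).
DEPTH = 28  # number of outer-code blocks interleaved together

def interleave(blocks):
    """Write DEPTH blocks as rows, read the array out column by column."""
    return [blocks[r][c] for c in range(len(blocks[0])) for r in range(DEPTH)]

def deinterleave(stream, block_len):
    """Inverse: redistribute the channel stream back into DEPTH rows."""
    blocks = [[None] * block_len for _ in range(DEPTH)]
    for i, sym in enumerate(stream):
        blocks[i % DEPTH][i // DEPTH] = sym
    return blocks

blocks = [[(r, c) for c in range(28)] for r in range(DEPTH)]
stream = interleave(blocks)

# Erase a burst of 28 consecutive symbols in the channel stream.
burst = set(range(40, 68))
received = [None if i in burst else s for i, s in enumerate(stream)]
rows = deinterleave(received, 28)

# After deinterleaving, each outer block sees at most one erasure.
erasures_per_row = [row.count(None) for row in rows]
print(max(erasures_per_row))  # 1
```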
DVDs use a similar scheme, but with much larger blocks, a (208,192) inner code, and a (182,172) outer code.
Reed–Solomon error correction is also used in parchive files which are commonly posted accompanying multimedia files on USENET. The distributed online storage service Wuala (discontinued in 2015) also used Reed–Solomon when breaking up files.
Bar code
Almost all two-dimensional bar codes such as PDF-417, MaxiCode, Datamatrix, QR Code, Aztec Code and Han Xin code use Reed–Solomon error correction to allow correct reading even if a portion of the bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure.
Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar symbology.
Data transmission
Specialized forms of Reed–Solomon codes, specifically Cauchy-RS and Vandermonde-RS, can be used to overcome the unreliable nature of data transmission over erasure channels. The encoding process assumes an RS(N, K) code, which generates N codewords of N symbols each, together storing K symbols of data; the codewords are then sent over an erasure channel.
Any combination of K codewords received at the other end is enough to reconstruct all of the N codewords. The code rate is generally set to 1/2 unless the channel's erasure likelihood can be adequately modelled and is seen to be less. In other words, N is usually 2K, meaning that at least half of all the codewords sent must be received in order to reconstruct all of the codewords sent.
Reed–Solomon codes are also used in xDSL systems and CCSDS's Space Communications Protocol Specifications as a form of forward error correction.
Space transmission
One significant application of Reed–Solomon coding was to encode the digital pictures sent back by the Voyager program.
Voyager introduced Reed–Solomon coding concatenated with convolutional codes, a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications.
Viterbi decoders tend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes.
Modern versions of concatenated Reed–Solomon/Viterbi-decoded convolutional coding were and are used on the Mars Pathfinder, Galileo, Mars Exploration Rover and Cassini missions, where they perform within about 1–1.5 dB of the ultimate limit, the Shannon capacity.
These concatenated codes are now being replaced by more powerful turbo codes.
Constructions (encoding)
The Reed–Solomon code is actually a family of codes, where every code is characterised by three parameters: an alphabet size q, a block length n, and a message length k, with k ≤ n ≤ q. The set of alphabet symbols is interpreted as the finite field of order q, and thus, q must be a prime power. In the most useful parameterizations of the Reed–Solomon code, the block length is usually some constant multiple of the message length, that is, the rate R = k/n is some constant, and furthermore, the block length is either equal to the alphabet size or one less than it, i.e., n = q or n = q − 1.
Reed & Solomon's original view: The codeword as a sequence of values
There are different encoding procedures for the Reed–Solomon code, and thus, there are different ways to describe the set of all codewords.
In the original view of Reed & Solomon (1960), every codeword of the Reed–Solomon code is a sequence of function values of a polynomial of degree less than k. In order to obtain a codeword of the Reed–Solomon code, the message symbols (each within the q-sized alphabet) are treated as the coefficients of a polynomial p of degree less than k, over the finite field F with q elements.
In turn, the polynomial p is evaluated at n ≤ q distinct points of the field F, and the sequence of values is the corresponding codeword. Common choices for a set of evaluation points include {0, 1, 2, ..., n − 1}, {0, 1, α, α2, ..., αn−2}, or for n < q, {1, α, α2, ..., αn−1}, ... , where α is a primitive element of F.
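As a minimal sketch of this encoding view, the following Python toy works over a small prime field GF(p) rather than the GF(2^8) fields used in practice; the field size, message, and evaluation points are all illustrative assumptions:

```python
# Original-view Reed-Solomon encoding over the prime field GF(p):
# treat the k message symbols as polynomial coefficients and evaluate
# that polynomial at n distinct field elements.
p = 7  # field order; any prime works for this sketch

def rs_encode(msg, n):
    """msg: k coefficients (lowest degree first), with k <= n <= p."""
    assert len(msg) <= n <= p
    return [sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p
            for a in range(n)]  # evaluation points 0, 1, ..., n-1

print(rs_encode([1, 2], 4))  # p(a) = 1 + 2a  ->  [1, 3, 5, 0]
```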
Formally, the set C of codewords of the Reed–Solomon code is defined as follows:

C = { (p(a_1), p(a_2), ..., p(a_n)) | p is a polynomial over F of degree < k }
Since any two distinct polynomials of degree less than k agree in at most k − 1 points, any two codewords of the Reed–Solomon code disagree in at least n − (k − 1) = n − k + 1 positions. Furthermore, there are two polynomials that do agree in k − 1 points but are not equal, and thus, the distance of the Reed–Solomon code is exactly d = n − k + 1. Then the relative distance is δ = d/n = 1 − k/n + 1/n = 1 − R + 1/n ≈ 1 − R, where R = k/n is the rate. This trade-off between the relative distance and the rate is asymptotically optimal since, by the Singleton bound, every code satisfies δ + R ≤ 1 + 1/n.
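For a code small enough to enumerate, this distance claim can be checked by brute force; the parameters below (GF(5), n = 4, k = 2) are illustrative choices:

```python
# Exhaustively verify d = n - k + 1 for a tiny Reed-Solomon code
# over GF(5), by encoding every possible message and taking the
# minimum pairwise Hamming distance.
from itertools import product

p, n, k = 5, 4, 2

def enc(msg):
    return tuple(sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p
                 for a in range(n))

words = [enc(m) for m in product(range(p), repeat=k)]
dist = lambda u, v: sum(x != y for x, y in zip(u, v))
dmin = min(dist(u, v) for i, u in enumerate(words) for v in words[i+1:])
print(dmin, n - k + 1)  # both are 3: the code meets the Singleton bound
```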
Being a code that achieves this optimal trade-off, the Reed–Solomon code belongs to the class of maximum distance separable codes.
While the number of different polynomials of degree less than k and the number of different messages are both equal to q^k, and thus every message can be uniquely mapped to such a polynomial, there are different ways of doing this encoding. The original construction of Reed & Solomon (1960) interprets the message x as the coefficients of the polynomial p, whereas subsequent constructions interpret the message as the values of the polynomial at the first k points a_1, ..., a_k and obtain the polynomial p by interpolating these values with a polynomial of degree less than k. The latter encoding procedure, while being slightly less efficient, has the advantage that it gives rise to a systematic code, that is, the original message is always contained as a subsequence of the codeword.
Simple encoding procedure: The message as a sequence of coefficients
In the original construction of Reed & Solomon (1960), the message x = (x_1, ..., x_k) ∈ F^k is mapped to the polynomial p_x with

p_x(a) = Σ_{i=1}^{k} x_i a^(i−1)
The codeword of x is obtained by evaluating p_x at n different points a_1, ..., a_n of the field F. Thus the classical encoding function C : F^k → F^n for the Reed–Solomon code is defined as follows:

C(x) = (p_x(a_1), ..., p_x(a_n))
This function C is a linear mapping, that is, it satisfies C(x) = Ax for the following n × k matrix A with elements from F:

A = [ 1  a_1  a_1^2  ...  a_1^(k−1)
      1  a_2  a_2^2  ...  a_2^(k−1)
      ...
      1  a_n  a_n^2  ...  a_n^(k−1) ]

This matrix is a Vandermonde matrix over F. In other words, the Reed–Solomon code is a linear code, and in the classical encoding procedure, its generator matrix is A.
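The equivalence between multiplying by the Vandermonde matrix and evaluating the message polynomial can be sketched as follows (toy prime-field parameters, chosen only for illustration):

```python
# The classical encoder is multiplication by an n x k Vandermonde
# matrix whose rows are (1, a, a^2, ..., a^(k-1)); sketch over GF(7).
p, n, k = 7, 5, 3
pts = list(range(n))  # evaluation points (an illustrative choice)
A = [[pow(a, j, p) for j in range(k)] for a in pts]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) % p for row in M]

msg = [2, 0, 3]  # coefficients of 2 + 3a^2
codeword = matvec(A, msg)

# Matrix multiplication agrees with direct polynomial evaluation:
direct = [sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p for a in pts]
print(codeword == direct)  # True
```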
Systematic encoding procedure: The message as an initial sequence of values
There are alternative encoding procedures that produce a systematic Reed–Solomon code. One method uses Lagrange interpolation to compute the polynomial p_x such that p_x(a_i) = x_i for all i ∈ {1, ..., k}. Then p_x is evaluated at the other points a_(k+1), ..., a_n.
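A minimal sketch of this systematic procedure, assuming a small prime field and evaluation points 0, 1, ..., n − 1 (both illustrative assumptions):

```python
# Systematic original-view encoding over GF(p): interpret the message
# as the first k codeword values, interpolate the unique degree-<k
# polynomial through them, then evaluate it at the remaining points.
p = 7

def interp_eval(pts, x):
    """Evaluate the Lagrange interpolant of the (xi, yi) pairs at x."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num = den = 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def rs_encode_systematic(msg, n):
    k = len(msg)
    pts = list(zip(range(k), msg))           # p(i) = msg[i] for i < k
    tail = [interp_eval(pts, x) for x in range(k, n)]
    return msg + tail                        # the message is a prefix

print(rs_encode_systematic([1, 3], 4))  # -> [1, 3, 5, 0]
```

Note that `pow(den, -1, p)` (modular inverse via three-argument `pow`) requires Python 3.8 or later.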
This function is a linear mapping, C(x) = Gx, for an n × k systematic generator matrix G with elements from F. To generate G, multiply the Vandermonde matrix A by the inverse of A's left square (top k × k) submatrix; the top k rows of G then form the k × k identity matrix.
Discrete Fourier transform and its inverse
A discrete Fourier transform is essentially the same as the encoding procedure; it uses the generator polynomial to map a set of evaluation points into the message values as shown above.

The inverse Fourier transform could be used to convert an error-free set of n < q message values back into the encoding polynomial of k coefficients, with the constraint that, in order for this to work, the set of evaluation points used to encode the message must be a set of increasing powers of α, that is, a_i = α^(i−1) for i = 1, ..., n.
However, Lagrange interpolation performs the same conversion without the constraint on the set of evaluation points or the requirement of an error free set of message values and is used for systematic encoding, and in one of the steps of the Gao decoder.
The BCH view: The codeword as a sequence of coefficients
In this view, the message is interpreted as the coefficients of a polynomial p(x). The sender computes a related polynomial s(x) of degree less than n, where n ≤ q − 1, and sends the polynomial s(x). The polynomial s(x) is constructed by multiplying the message polynomial p(x), which has degree less than k, with a generator polynomial g(x) of degree t = n − k that is known to both the sender and the receiver. The generator polynomial g(x) is defined as the polynomial whose roots are t sequential powers of the Galois field primitive element α:

g(x) = (x − α^i)(x − α^(i+1)) ⋯ (x − α^(i+t−1))
For a "narrow sense code", i = 1.
Systematic encoding procedure
The encoding procedure for the BCH view of Reed–Solomon codes can be modified to yield a systematic encoding procedure, in which each codeword contains the message as a prefix, and simply appends error correcting symbols as a suffix. Here, instead of sending s(x) = p(x)g(x), the encoder constructs the transmitted polynomial s(x) such that the coefficients of the k largest monomials are equal to the corresponding coefficients of p(x), and the lower-order coefficients of s(x) are chosen exactly in such a way that s(x) becomes divisible by g(x). Then the coefficients of p(x) are a subsequence of the coefficients of s(x). To get a code that is overall systematic, we construct the message polynomial p(x) by interpreting the message as the sequence of its coefficients.
Formally, the construction is done by multiplying p(x) by x^t to make room for the t = n − k check symbols, dividing that product by g(x) to find the remainder, and then compensating for that remainder by subtracting it. The t check symbols are created by computing the remainder s_r(x):

s_r(x) = p(x) · x^t  mod  g(x)
The remainder has degree at most t − 1, whereas the coefficients of x^(t−1), x^(t−2), ..., x^1, x^0 in the polynomial p(x) · x^t are zero. Therefore, the following definition of the codeword s(x) has the property that the first k coefficients are identical to the coefficients of p(x):

s(x) = p(x) · x^t − s_r(x)
As a result, the codewords s(x) are indeed elements of ⟨g(x)⟩, that is, they are divisible by the generator polynomial g(x):

s(x) ≡ p(x) · x^t − s_r(x) ≡ 0   (mod g(x))
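A toy sketch of this check-symbol computation, using the small prime field GF(7) and a hand-picked degree-2 generator (real deployments use GF(2^m); the field, generator, and message here are illustrative assumptions):

```python
# BCH-view systematic Reed-Solomon encoding sketch over GF(7).
p = 7
g = [1, 2, 6]  # g(x) = (x - 3)(x - 2) = x^2 + 2x + 6 over GF(7);
               # 3 is primitive in GF(7) and 3^2 = 2, so the roots
               # are the sequential powers alpha, alpha^2

def poly_mod(num, den):
    """Remainder of num(x) / den(x) over GF(p); high degree first."""
    num = num[:]
    for i in range(len(num) - len(den) + 1):
        f = num[i] * pow(den[0], -1, p) % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - f * d) % p
    return num[len(num) - (len(den) - 1):]

def bch_encode(msg):
    """msg: coefficients of p(x), high degree first."""
    t = len(g) - 1
    shifted = msg + [0] * t              # p(x) * x^t
    r = poly_mod(shifted, g)             # check symbols s_r(x)
    return msg + [(-c) % p for c in r]   # s(x) = p(x)x^t - s_r(x)

cw = bch_encode([1, 2])  # message polynomial x + 2
print(cw, poly_mod(cw, g))  # [1, 2, 6, 0] [0, 0] -- divisible by g
```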
This function is a linear mapping. To generate the corresponding systematic n × k encoding matrix G with elements from F, set G's left square (top k × k) submatrix to the identity matrix and then encode each of its rows as above; ignoring leading zeroes, the last row equals the coefficients of the generator polynomial g(x).
Properties
The Reed–Solomon code is an [n, k, n − k + 1] code; in other words, it is a linear block code of length n (over F) with dimension k and minimum Hamming distance d = n − k + 1. The Reed–Solomon code is optimal in the sense that the minimum distance has the maximum value possible for a linear code of size (n, k); this is known as the Singleton bound. Such a code is also called a maximum distance separable (MDS) code.
The error-correcting ability of a Reed–Solomon code is determined by its minimum distance, or equivalently, by n − k, the measure of redundancy in the block. If the locations of the error symbols are not known in advance, then a Reed–Solomon code can correct up to ⌊(n − k)/2⌋ erroneous symbols, i.e., it can correct half as many errors as there are redundant symbols added to the block. Sometimes error locations are known in advance (e.g., "side information" in demodulator signal-to-noise ratios); these are called erasures. A Reed–Solomon code (like any MDS code) is able to correct twice as many erasures as errors, and any combination of errors and erasures can be corrected as long as the relation 2E + S ≤ n − k is satisfied, where E is the number of errors and S is the number of erasures in the block.
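The erasure-correcting half of this trade-off is easy to demonstrate for the original view: any k intact symbols determine the underlying polynomial, so up to n − k erasures at known positions are recoverable. A sketch with illustrative prime-field parameters (practical decoders work over GF(2^m)):

```python
# Erasure-correction sketch over GF(7): erase n - k symbols at known
# positions and recover them by Lagrange interpolation through the
# k symbols that survived.
p, n, k = 7, 6, 2

def interp_eval(pts, x):
    """Evaluate the Lagrange interpolant of the (xi, yi) pairs at x."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num = den = 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

codeword = [(1 + 2 * a) % p for a in range(n)]  # p(a) = 1 + 2a, k = 2
received = codeword[:]
for pos in (1, 3, 4, 5):            # erase n - k = 4 known positions
    received[pos] = None

known = [(x, y) for x, y in enumerate(received) if y is not None]
repaired = [y if y is not None else interp_eval(known, x)
            for x, y in enumerate(received)]
print(repaired == codeword)  # True
```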
The theoretical error bound can be described via the following formula for the AWGN channel for FSK:
Pb ≈ (2^(m−1)/(2^m − 1)) · (1/n) · Σ for ℓ = t+1 to n of ℓ · C(n, ℓ) · Ps^ℓ · (1 − Ps)^(n−ℓ)
and for other modulation schemes:
Pb ≈ (1/m) · (1/n) · Σ for ℓ = t+1 to n of ℓ · C(n, ℓ) · Ps^ℓ · (1 − Ps)^(n−ℓ)
where t = (1/2)(d − 1), Ps = 1 − (1 − s)^h, h = m/log2(M), s is the symbol error rate in the uncoded AWGN case and M is the modulation order.
For practical uses of Reed–Solomon codes, it is common to use a finite field F with 2^m elements. In this case, each symbol can be represented as an m-bit value.
The sender sends the data points as encoded blocks, and the number of symbols in the encoded block is n = 2^m − 1. Thus a Reed–Solomon code operating on 8-bit symbols has n = 2^8 − 1 = 255 symbols per block. (This is a very popular value because of the prevalence of byte-oriented computer systems.) The number k, with k < n, of data symbols in the block is a design parameter. A commonly used code encodes k = 223 eight-bit data symbols plus 32 eight-bit parity symbols in an n = 255-symbol block; this is denoted as a (255, 223) code, and is capable of correcting up to 16 symbol errors per block.
The Reed–Solomon code properties discussed above make them especially well-suited to applications where errors occur in bursts. This is because it does not matter to the code how many bits in a symbol are in error — if multiple bits in a symbol are corrupted it only counts as a single error. Conversely, if a data stream is not characterized by error bursts or drop-outs but by random single bit errors, a Reed–Solomon code is usually a poor choice compared to a binary code.
The Reed–Solomon code, like the convolutional code, is a transparent code. This means that if the channel symbols have been inverted somewhere along the line, the decoders will still operate. The result will be the inversion of the original data. However, the Reed–Solomon code loses its transparency when the code is shortened (see 'Remarks' at the end of this section). The "missing" bits in a shortened code need to be filled by either zeros or ones, depending on whether the data is complemented or not. (To put it another way, if the symbols are inverted, then the zero-fill needs to be inverted to a one-fill.) For this reason it is mandatory that the sense of the data (i.e., true or complemented) be resolved before Reed–Solomon decoding.
Whether the Reed–Solomon code is cyclic or not depends on subtle details of the construction. In the original view of Reed and Solomon, where the codewords are the values of a polynomial, one can choose the sequence of evaluation points in such a way as to make the code cyclic. In particular, if α is a primitive root of the field F, then by definition all non-zero elements of F take the form α^i for i = 0, 1, …, q − 2, where q = |F|. Each polynomial p over F gives rise to a codeword (p(α^0), …, p(α^(q−2))). Since the function a ↦ p(αa) is also a polynomial of the same degree, this function gives rise to a codeword (p(α^1), …, p(α^(q−1))); since α^(q−1) = α^0 holds, this codeword is the cyclic left-shift of the original codeword derived from p. So choosing a sequence of primitive root powers as the evaluation points makes the original view Reed–Solomon code cyclic. Reed–Solomon codes in the BCH view are always cyclic because BCH codes are cyclic.
Remarks
Designers are not required to use the "natural" sizes of Reed–Solomon code blocks. A technique known as "shortening" can produce a smaller code of any desired size from a larger code. For example, the widely used (255,223) code can be converted to a (160,128) code by padding the unused portion of the source block with 95 binary zeroes and not transmitting them. At the decoder, the same portion of the block is loaded locally with binary zeroes.
The QR code, Ver 3 (29×29), uses interleaved blocks. The message has 26 data bytes and is encoded using two Reed–Solomon code blocks. Each block is a (255, 233) Reed–Solomon code shortened to a (35, 13) code.
The Delsarte–Goethals–Seidel theorem illustrates an example of an application of shortened Reed–Solomon codes. In parallel to shortening, a technique known as puncturing allows omitting some of the encoded parity symbols.
BCH view decoders
The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder.
Peterson–Gorenstein–Zierler decoder
Daniel Gorenstein and Neal Zierler developed a decoder that was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961. The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book Error-Correcting Codes by W. Wesley Peterson (1961).
Formulation
The transmitted message, (c0, …, c(n−1)), is viewed as the coefficients of a polynomial s(x):
s(x) = c0 + c1 x + ⋯ + c(n−1) x^(n−1).
As a result of the Reed–Solomon encoding procedure, s(x) is divisible by the generator polynomial
g(x) = (x − α)(x − α^2)⋯(x − α^(n−k)),
where α is a primitive element.
Since s(x) is a multiple of the generator g(x), it follows that it "inherits" all its roots:
s(x) mod (x − α^j) = g(x) mod (x − α^j) = 0.
Therefore,
s(α^j) = 0, for j = 1, 2, …, n − k.
The transmitted polynomial is corrupted in transit by an error polynomial
e(x) = e0 + e1 x + ⋯ + e(n−1) x^(n−1)
to produce the received polynomial
r(x) = s(x) + e(x).
Coefficient ei will be zero if there is no error at that power of x, and nonzero if there is an error. If there are ν errors at distinct powers ik of x, then
e(x) = e(i1) x^(i1) + e(i2) x^(i2) + ⋯ + e(iν) x^(iν).
The goal of the decoder is to find the number of errors (ν), the positions of the errors (ik), and the error values at those positions (eik). From those, e(x) can be calculated and subtracted from r(x) to get the originally sent message s(x).
Syndrome decoding
The decoder starts by evaluating the polynomial as received at points α^1, α^2, …, α^(n−k). We call the results of that evaluation the "syndromes" Sj. They are defined as
Sj = r(α^j) = s(α^j) + e(α^j) = 0 + e(α^j) = e(α^j), for j = 1, 2, …, n − k.
Note that r(α^j) = e(α^j) because s(x) has roots at α^j, as shown in the previous section.
The advantage of looking at the syndromes is that the message polynomial drops out. In other words, the syndromes only relate to the error and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit.
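To illustrate this property, the sketch below assumes the prime field GF(929) with α = 3 (the field of the worked example later in this article), builds an arbitrary codeword as a multiple of g(x), corrupts it, and checks that the syndromes equal e(α^j); the particular multiplier and error polynomial are made-up values.

```python
# Syndromes depend only on the error polynomial e(x), not on the codeword s(x).
P, ALPHA = 929, 3

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_eval(p, x):
    acc = 0
    for c in reversed(p):
        acc = (acc * x + c) % P
    return acc

# g(x) = (x - 3)(x - 3^2)(x - 3^3)(x - 3^4); every codeword is a multiple of g(x)
g = [1]
for j in range(1, 5):
    g = poly_mul(g, [(-pow(ALPHA, j, P)) % P, 1])

s = poly_mul(g, [5, 0, 7])               # an arbitrary multiple of g(x)
e = [0, 0, 0, 74, 122]                   # illustrative errors: e(x) = 74x^3 + 122x^4
r = [(c + (e[i] if i < len(e) else 0)) % P for i, c in enumerate(s)]

syndromes = [poly_eval(r, pow(ALPHA, j, P)) for j in range(1, 5)]
err_evals = [poly_eval(e, pow(ALPHA, j, P)) for j in range(1, 5)]
print(syndromes == err_evals, syndromes)  # True [732, 637, 762, 925]
```

The message content cancels out entirely: only the error pattern determines the syndrome values.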
Error locators and error values
For convenience, define the error locators Xk and error values Yk as
Xk = α^(ik),  Yk = e(ik).
Then the syndromes can be written in terms of these error locators and error values as
Sj = Y1 X1^j + Y2 X2^j + ⋯ + Yν Xν^j.
This definition of the syndrome values is equivalent to the previous since (α^j)^(ik) = (α^(ik))^j = Xk^j.
The syndromes give a system of n − k ≥ 2ν equations in 2ν unknowns, but that system of equations is nonlinear in the Xk and does not have an obvious solution. However, if the Xk were known (see below), then the syndrome equations provide a linear system of equations
Sj = Y1 X1^j + Y2 X2^j + ⋯ + Yν Xν^j, for j = 1, 2, …, ν,
which can easily be solved for the Yk error values.
Consequently, the problem is finding the Xk, because then the coefficient matrix of this linear system (the matrix of locator powers Xk^j) would be known, and both sides of the equation could be multiplied by its inverse, yielding the Yk.
In the variant of this algorithm where the locations of the errors are already known (when it is being used as an erasure code), this is the end. The error locations (Xk) are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up to n − k errors can be corrected.
The rest of the algorithm serves to locate the errors and will require syndrome values up to S(2ν), instead of just the S1, …, Sν used thus far. This is why twice as many error-correcting symbols need to be added as can be corrected without knowing their locations.
Error locator polynomial
There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies those error locations Xk.
Define the error locator polynomial Λ(x) as
Λ(x) = (1 − x X1)(1 − x X2)⋯(1 − x Xν) = 1 + Λ1 x + Λ2 x^2 + ⋯ + Λν x^ν.
The zeros of Λ(x) are the reciprocals Xk^(−1). This follows from the above product notation construction, since if x = Xk^(−1), then one of the multiplied terms will be zero, (1 − Xk^(−1) · Xk) = 0, making the whole polynomial evaluate to zero:
Λ(Xk^(−1)) = 0.
Let j be any integer such that 1 ≤ j ≤ ν. Multiply both sides by Yk Xk^(j+ν), and it will still be zero:
Yk Xk^(j+ν) Λ(Xk^(−1)) = 0.
Sum for k = 1 to ν, and it will still be zero:
Σ for k = 1 to ν of Yk Xk^(j+ν) Λ(Xk^(−1)) = 0.
Expand Λ(Xk^(−1)) and collect each term into its own sum:
Σk Yk Xk^(j+ν) + Σk Λ1 Yk Xk^(j+ν−1) + ⋯ + Σk Λν Yk Xk^j = 0.
Extract the constant values of Λ that are unaffected by the summation:
(Σk Yk Xk^(j+ν)) + Λ1 (Σk Yk Xk^(j+ν−1)) + ⋯ + Λν (Σk Yk Xk^j) = 0.
These summations are now equivalent to the syndrome values, which we know and can substitute in. This therefore reduces to
S(j+ν) + Λ1 S(j+ν−1) + ⋯ + Λ(ν−1) S(j+1) + Λν Sj = 0.
Subtracting S(j+ν) from both sides yields
Λ1 S(j+ν−1) + Λ2 S(j+ν−2) + ⋯ + Λν Sj = −S(j+ν).
Recall that j was chosen to be any integer between 1 and ν inclusive, and this equivalence is true for all such values. Therefore, we have ν linear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λi of the error location polynomial:
Λ1 Sν + Λ2 S(ν−1) + ⋯ + Λν S1 = −S(ν+1)
Λ1 S(ν+1) + Λ2 Sν + ⋯ + Λν S2 = −S(ν+2)
⋮
Λ1 S(2ν−1) + Λ2 S(2ν−2) + ⋯ + Λν Sν = −S(2ν).
The above assumes that the decoder knows the number of errors ν, but that number has not been determined yet. The PGZ decoder does not determine ν directly but rather searches for it by trying successive values. The decoder first assumes the largest trial value for ν and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial ν is reduced by one and the next smaller system is examined.
Find the roots of the error locator polynomial
Use the coefficients Λi found in the last step to build the error location polynomial. The roots of the error location polynomial can be found by exhaustive search. The error locators Xk are the reciprocals of those roots. The order of coefficients of the error location polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators (not their reciprocals). Chien search is an efficient implementation of this step.
Calculate the error values
Once the error locators Xk are known, the error values can be determined. This can be done by direct solution for Yk in the error equations matrix given above, or using the Forney algorithm.
Calculate the error locations
Calculate ik by taking the log base α of Xk. This is generally done using a precomputed lookup table.
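As a sketch, assuming the prime field GF(929) with α = 3 from the worked example, the lookup table is just the discrete logarithm of every nonzero field element:

```python
# Precompute i = log_alpha(x) for every nonzero x in GF(929), with alpha = 3.
P, ALPHA = 929, 3
log_table = {}
x = 1
for i in range(P - 1):        # alpha is primitive, so this walk hits every nonzero element
    log_table[x] = i
    x = (x * ALPHA) % P

# Error locators 27 = 3^3 and 81 = 3^4 map back to error positions 3 and 4.
print(log_table[27], log_table[81])   # 3 4
```

Building the table costs one pass over the field; each subsequent locator-to-position lookup is then constant time.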
Fix the errors
Finally, e(x) is generated from ik and eik and then is subtracted from r(x) to get the originally sent message s(x), with errors corrected.
Example
Consider the Reed–Solomon code defined in GF(929) with α = 3 and t = n − k = 4 (this is used in PDF417 barcodes) for an RS(7,3) code. The generator polynomial is
g(x) = (x − 3)(x − 3^2)(x − 3^3)(x − 3^4) = x^4 + 809x^3 + 723x^2 + 568x + 522.
If the message polynomial is p(x) = 3x^2 + 2x + 1, then a systematic codeword is encoded as follows:
sr(x) = p(x) x^4 mod g(x) = 547x^3 + 738x^2 + 442x + 455
s(x) = p(x) x^4 − sr(x) = 3x^6 + 2x^5 + 1x^4 + 382x^3 + 191x^2 + 487x + 474.
Errors in transmission might cause this to be received instead:
r(x) = s(x) + e(x) = 3x^6 + 2x^5 + 123x^4 + 456x^3 + 191x^2 + 487x + 474.
The syndromes are calculated by evaluating r at powers of α:
S1 = r(3^1) = 732, S2 = r(3^2) = 637, S3 = r(3^3) = 762, S4 = r(3^4) = 925,
yielding the system
732 Λ2 + 637 Λ1 = −S3 = 167
637 Λ2 + 762 Λ1 = −S4 = 4.
Using Gaussian elimination,
Λ2 = 329, Λ1 = 821,
so
Λ(x) = 329x^2 + 821x + 1,
with roots x1 = 757 = 3^(−3) and x2 = 562 = 3^(−4).
The coefficients can be reversed:
Λrev(x) = x^2 + 821x + 329
to produce roots 27 = 3^3 and 81 = 3^4 with positive exponents, but typically this isn't used. The logarithm of the inverted roots corresponds to the error locations (right to left, location 0 is the last term in the codeword).
To calculate the error values, apply the Forney algorithm:
Ω(x) = S(x) Λ(x) mod x^4 = 546x + 732
Λ'(x) = 658x + 821
e1 = −Ω(x1)/Λ'(x1) = 74
e2 = −Ω(x2)/Λ'(x2) = 122.
Subtracting e(x) = 74x^3 + 122x^4 from the received polynomial r(x) reproduces the original codeword s.
Berlekamp–Massey decoder
The Berlekamp–Massey algorithm is an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ(x) with an assumed number of errors e:
Δ = Si + Λ1 S(i−1) + ⋯ + Λe S(i−e)
and then adjusts Λ(x) and e so that a recalculated Δ would be zero. The article Berlekamp–Massey algorithm has a detailed description of the procedure. In the following example, C(x) is used to represent Λ(x).
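A compact sketch of the iteration, written over the prime field GF(929) used in the worked example rather than the usual GF(2^m); the two error locators (3^3 = 27 and 3^4 = 81) and the error values 100 and 200 are made-up inputs for the demonstration.

```python
# Berlekamp-Massey over GF(p): find the shortest LFSR (the error locator
# polynomial C, lowest degree first) that generates the syndrome sequence S.
P = 929

def berlekamp_massey(S, p):
    C, B = [1], [1]        # current and previous locator candidates
    L, m, b = 0, 1, 1      # LFSR length, shift since last length change, last discrepancy
    for i in range(len(S)):
        # discrepancy between S[i] and what the current C predicts
        d = S[i] % p
        for j in range(1, L + 1):
            d = (d + C[j] * S[i - j]) % p
        if d == 0:
            m += 1
            continue
        T = list(C)                        # remember C before adjusting it
        coef = d * pow(b, p - 2, p) % p    # d / b in GF(p)
        while len(C) < len(B) + m:
            C.append(0)
        for j, bj in enumerate(B):
            C[j + m] = (C[j + m] - coef * bj) % p
        if 2 * L <= i:                     # a length change is needed
            L, B, b, m = i + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

def poly_eval(poly, x, p):
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % p
    return acc

# Illustrative: two errors with locators X = (27, 81) and values Y = (100, 200)
X, Y = [27, 81], [100, 200]
S = [sum(y * pow(x, j, P) for x, y in zip(X, Y)) % P for j in range(1, 5)]
C, L = berlekamp_massey(S, P)
print(C, L)                                # [1, 821, 329] 2
print(poly_eval(C, pow(27, P - 2, P), P),  # 0: C vanishes at the reciprocal locators
      poly_eval(C, pow(81, P - 2, P), P))  # 0
```

Because the assumed error positions coincide with those of the worked example, the recovered locator polynomial matches Λ(x) = 1 + 821x + 329x^2 found there.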
Example
Using the same data as the Peterson–Gorenstein–Zierler example above:
The final value of C is the error locator polynomial, Λ(x).
Euclidean decoder
Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Sugiyama's adaptation of the extended Euclidean algorithm.
Define S(x), Λ(x), and Ω(x) for t syndromes and e errors:
S(x) = S1 + S2 x + ⋯ + St x^(t−1)
Λ(x) = 1 + Λ1 x + ⋯ + Λe x^e.
The key equation is:
Λ(x) S(x) = Q(x) x^t + Ω(x), with deg Ω(x) ≤ e − 1.
For t = 6 and e = 3:
The middle terms are zero due to the relationship between Λ and syndromes.
The extended Euclidean algorithm can find a series of polynomials of the form
Ai(x) S(x) + Bi(x) x^t = Ri(x),
where the degree of R decreases as i increases. Once the degree of Ri(x) < t/2, then Ai(x) and Ri(x) give Λ(x) and Ω(x), up to a common scalar factor.
B(x) and Q(x) don't need to be saved, so the algorithm becomes:
R−1 := x^t
R0 := S(x)
A−1 := 0
A0 := 1
i := 0
while degree of Ri ≥ t/2
i := i + 1
Q := Ri-2 / Ri-1
Ri := Ri-2 - Q Ri-1
Ai := Ai-2 - Q Ai-1
To set the low-order term of Λ(x) to 1, divide Λ(x) and Ω(x) by Ai(0):
Ai(0) is the constant (low order) term of Ai.
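The loop above can be sketched in runnable form as follows, again over the prime field GF(929) with illustrative error locators 3^3 = 27 and 3^4 = 81 and made-up error values; Λ(x) comes out of Ai up to the final division by Ai(0).

```python
# Sugiyama-style extended Euclidean algorithm over GF(p).
# Polynomials are coefficient lists, lowest order first.
P, t = 929, 4

def poly_deg(a):
    return max((i for i, c in enumerate(a) if c), default=-1)

def poly_divmod(a, b, p):
    a, db = list(a), poly_deg(b)
    inv = pow(b[db], p - 2, p)             # inverse of the leading coefficient
    q = [0] * (max(poly_deg(a) - db, 0) + 1)
    for i in range(poly_deg(a) - db, -1, -1):
        coef = a[i + db] * inv % p
        q[i] = coef
        for j in range(db + 1):
            a[i + j] = (a[i + j] - coef * b[j]) % p
    return q, a

def poly_sub_mul(a, q, b, p):              # a - q*b
    out = list(a) + [0] * max(0, len(q) + len(b) - 1 - len(a))
    for i, qi in enumerate(q):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] - qi * bj) % p
    return out

# Syndromes S_j for two illustrative errors (locators 27 and 81, values 100 and 200)
X, Y = [27, 81], [100, 200]
S = [sum(y * pow(x, j, P) for x, y in zip(X, Y)) % P for j in range(1, t + 1)]

r_prev, r = [0] * t + [1], list(S)         # R_-1 = x^t, R_0 = S(x)
a_prev, a = [0], [1]                       # A_-1 = 0,   A_0 = 1
while poly_deg(r) >= t // 2:
    q, rem = poly_divmod(r_prev, r, P)
    r_prev, r = r, rem
    a_prev, a = a, poly_sub_mul(a_prev, q, a, P)

inv = pow(a[0], P - 2, P)                  # normalize so the low-order term is 1
Lam = [c * inv % P for c in a[:poly_deg(a) + 1]]
print(Lam)                                 # [1, 821, 329] = Lambda(x)
```

B(x) and Q(x) never appear: only the (R, A) pair is carried through the loop, which is exactly the simplification the pseudocode above exploits.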
Example
Using the same data as the Peterson–Gorenstein–Zierler example above:
Decoder using discrete Fourier transform
A discrete Fourier transform can be used for decoding. To avoid conflict with syndrome names, let c(x) = s(x) denote the encoded codeword. r(x) and e(x) are the same as above. Define C(x), E(x), and R(x) as the discrete Fourier transforms of c(x), e(x), and r(x). Since r(x) = c(x) + e(x), and since a discrete Fourier transform is a linear operator, R(x) = C(x) + E(x).
Transform r(x) to R(x) using the discrete Fourier transform. Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes, t coefficients of R(x) and E(x) are the same as the syndromes:
Rj = Ej = Sj, for 1 ≤ j ≤ t.
Use R1 through Rt as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders.
Let v = number of errors. Generate E(x) using the known coefficients E1 to Et, the error locator polynomial, and the recurrence
Ej = −(Λ1 E(j−1) + Λ2 E(j−2) + ⋯ + Λv E(j−v)), for j = t + 1, …, n.
Then calculate C(x) = R(x) − E(x) and take the inverse transform (polynomial interpolation) of C(x) to produce c(x).
Decoding beyond the error-correction bound
The Singleton bound states that the minimum distance d of a linear block code of size (n, k) is upper-bounded by n − k + 1. The distance d was usually understood to limit the error-correction capability to ⌊(d − 1)/2⌋. The Reed–Solomon code achieves this bound with equality, and can thus correct up to ⌊(n − k)/2⌋ errors. However, this error-correction bound is not exact.
In 1999, Madhu Sudan and Venkatesan Guruswami at MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes" introducing an algorithm that allowed for the correction of errors beyond half the minimum distance of the code. It applies to Reed–Solomon codes and more generally to algebraic geometric codes. This algorithm produces a list of codewords (it is a list-decoding algorithm) and is based on interpolation and factorization of polynomials over GF(2^m) and its extensions.
In 2023, building on three exciting works, coding theorists showed that Reed–Solomon codes defined over random evaluation points can actually achieve list-decoding capacity over linear-size alphabets with high probability. However, this result is combinatorial rather than algorithmic.
Soft-decoding
The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. In contrast, soft-decision methods associate with each symbol an additional value corresponding to the channel demodulator's confidence in the correctness of that symbol. The advent of LDPC and turbo codes, which employ iterated soft-decision belief propagation decoding methods to achieve error-correction performance close to the theoretical limit, has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter and Alexander Vardy presented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami.
In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder.
MATLAB example
Encoder
Here we present a simple MATLAB implementation for an encoder.
function encoded = rsEncoder(msg, m, prim_poly, n, k)
% RSENCODER Encode message with the Reed-Solomon algorithm
% m is the number of bits per symbol
% prim_poly: Primitive polynomial p(x) of the field, e.g. 301 for GF(2^8)
% k is the size of the message
% n is the total size (k+redundant)
% Example: msg = uint8('Test')
% enc_msg = rsEncoder(msg, 8, 301, 12, numel(msg));
% Get the alpha
alpha = gf(2, m, prim_poly);
% Get the Reed-Solomon generating polynomial g(x)
g_x = genpoly(k, n, alpha);
% Multiply the information by X^(n-k), or just pad with zeros at the end to
% get space to add the redundant information
msg_padded = gf([msg zeros(1, n - k)], m, prim_poly);
% Get the remainder of the division of the extended message by the
% Reed-Solomon generating polynomial g(x)
[~, remainder] = deconv(msg_padded, g_x);
% Now return the message with the redundant information
encoded = msg_padded - remainder;
end
% Find the Reed-Solomon generating polynomial g(x); this is the same as the
% rsgenpoly function in MATLAB
function g = genpoly(k, n, alpha)
g = 1;
% A multiplication on the galois field is just a convolution
for i = 1 : n - k
g = conv(g, [1 alpha .^ i]);
end
end
Decoder
Now the decoding part:
function [decoded, error_pos, error_mag, g, S] = rsDecoder(encoded, m, prim_poly, n, k)
% RSDECODER Decode a Reed-Solomon encoded message
% Example:
% [dec, ~, ~, ~, ~] = rsDecoder(enc_msg, 8, 301, 12, numel(msg))
max_errors = floor((n - k) / 2);
orig_vals = encoded.x;
% Initialize the error vector
errors = zeros(1, n);
g = [];
S = [];
% Get the alpha
alpha = gf(2, m, prim_poly);
% Find the syndromes (check whether dividing the message by the generator
% polynomial leaves a remainder of zero)
Synd = polyval(encoded, alpha .^ (1:n - k));
Syndromes = trim(Synd);
% If all syndromes are zeros (perfectly divisible) there are no errors
if isempty(Syndromes.x)
decoded = orig_vals(1:k);
error_pos = [];
error_mag = [];
g = [];
S = Synd;
return;
end
% Prepare for the euclidean algorithm (Used to find the error locating
% polynomials)
r0 = [1, zeros(1, 2 * max_errors)]; r0 = gf(r0, m, prim_poly); r0 = trim(r0);
size_r0 = length(r0);
r1 = Syndromes;
f0 = gf([zeros(1, size_r0 - 1) 1], m, prim_poly);
f1 = gf(zeros(1, size_r0), m, prim_poly);
g0 = f1; g1 = f0;
% Do the euclidean algorithm on the polynomials r0(x) and Syndromes(x) in
% order to find the error locating polynomial
while true
% Do a long division
[quotient, remainder] = deconv(r0, r1);
% Add some zeros
quotient = pad(quotient, length(g1));
% Find quotient*g1 and pad
c = conv(quotient, g1);
c = trim(c);
c = pad(c, length(g0));
% Update g as g0-quotient*g1
g = g0 - c;
% Check if the degree of remainder(x) is less than max_errors
if all(remainder(1:end - max_errors) == 0)
break;
end
% Update r0, r1, g0, g1 and remove leading zeros
r0 = trim(r1); r1 = trim(remainder);
g0 = g1; g1 = g;
end
% Remove leading zeros
g = trim(g);
% Find the zeros of the error polynomial on this galois field
evalPoly = polyval(g, alpha .^ (n - 1 : - 1 : 0));
error_pos = gf(find(evalPoly == 0), m);
% If no error position is found, return the received message, because there
% is basically nothing more we can do
if isempty(error_pos)
decoded = orig_vals(1:k);
error_mag = [];
return;
end
% Prepare a linear system to solve the error polynomial and find the error
% magnitudes
size_error = length(error_pos);
Syndrome_Vals = Syndromes.x;
b(:, 1) = Syndrome_Vals(1:size_error);
for idx = 1 : size_error
e = alpha .^ (idx * (n - error_pos.x));
err = e.x;
er(idx, :) = err;
end
% Solve the linear system
error_mag = (gf(er, m, prim_poly) \ gf(b, m, prim_poly))';
% Put the error magnitude on the error vector
errors(error_pos.x) = error_mag.x;
% Bring this vector to the galois field
errors_gf = gf(errors, m, prim_poly);
% Now to fix the errors just add with the encoded code
decoded_gf = encoded(1:k) + errors_gf(1:k);
decoded = decoded_gf.x;
end
% Remove leading zeros from Galois array
function gt = trim(g)
gx = g.x;
gt = gf(gx(find(gx, 1) : end), g.m, g.prim_poly);
end
% Add leading zeros
function xpad = pad(x, k)
len = length(x);
if len < k
xpad = [zeros(1, k - len) x];
else
xpad = x;
end
end
Reed–Solomon original view decoders
The decoders described in this section use the Reed–Solomon original view of a codeword as a sequence of polynomial values where the polynomial is based on the message to be encoded. The same set of fixed values are used by the encoder and decoder, and the decoder recovers the encoding polynomial (and optionally an error locating polynomial) from the received message.
Theoretical decoder
Reed and Solomon (1960) described a theoretical decoder that corrected errors by finding the most popular message polynomial. The decoder only knows the set of values a1 to an and which encoding method was used to generate the codeword's sequence of values. The original message, the polynomial, and any errors are unknown. A decoding procedure could use a method like Lagrange interpolation on various subsets of n codeword values taken k at a time to repeatedly produce potential polynomials, until a sufficient number of matching polynomials are produced to reasonably eliminate any errors in the received codeword. Once a polynomial is determined, then any errors in the codeword can be corrected, by recalculating the corresponding codeword values. Unfortunately, in all but the simplest of cases, there are too many subsets, so the algorithm is impractical. The number of subsets is the binomial coefficient C(n, k), which is infeasible for even modest codes. For a (255, 249) code that can correct 3 errors, the naïve theoretical decoder would examine 359 billion subsets.
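That count can be checked directly with the standard-library binomial coefficient; choosing which 249 of the 255 positions to interpolate is the same as choosing the 6 left out.

```python
import math

# Subsets a naive decoder would have to try for a (255, 249) code correcting 3 errors
n, k = 255, 249
subsets = math.comb(n, k)       # equal to math.comb(255, 6)
print(subsets)                  # 359895314625, i.e. roughly 360 billion
```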
Berlekamp–Welch decoder
In 1986, a decoder known as the Berlekamp–Welch algorithm was developed as a decoder that is able to recover the original message polynomial as well as an error "locator" polynomial that produces zeroes for the input values that correspond to errors, with time complexity O(n^3), where n is the number of values in a message. The recovered polynomial is then used to recover (recalculate as needed) the original message.
Example
Using RS(7,3), GF(929), and the set of evaluation points ai = i − 1
If the message polynomial is
p(x) = 3x^2 + 2x + 1,
the codeword is
c = {1, 6, 17, 34, 57, 86, 121}.
Errors in transmission might cause this to be received instead:
b = {1, 6, 123, 456, 57, 86, 121}.
The key equations are:
bi E(ai) − Q(ai) = 0, for i = 1 to n,
where E(x) is a monic error locator polynomial of degree e with zeros at the error locations, and Q(x) = P(x) E(x) has degree at most k + e − 1.
Assume maximum number of errors: e = 2. The key equations become:
Using Gaussian elimination:
Recalculate bi = P(ai) at the points where E(ai) = 0 to correct b, resulting in the corrected codeword:
c = {1, 6, 17, 34, 57, 86, 121}.
Gao decoder
In 2002, an improved decoder was developed by Shuhong Gao, based on the extended Euclid algorithm.
Example
Using the same data as the Berlekamp–Welch example above:
R(x) = Lagrange interpolation of (ai, bi) for i = 1 to n
Divide Q(x) and E(x) by the most significant coefficient of E(x) = 708. (Optional)
Recalculate bi = P(ai) at the points where E(ai) = 0, resulting in the corrected codeword:
c = {1, 6, 17, 34, 57, 86, 121}.
See also
BCH code
Berlekamp–Massey algorithm
Berlekamp–Welch algorithm
Chien search
Cyclic code
Folded Reed–Solomon code
Forward error correction
Notes
References
Further reading
External links
Information and tutorials
Introduction to Reed–Solomon codes: principles, architecture and implementation (CMU)
A Tutorial on Reed–Solomon Coding for Fault-Tolerance in RAID-like Systems
Algebraic soft-decoding of Reed–Solomon codes
Wikiversity:Reed–Solomon codes for coders
BBC R&D White Paper WHP031
Concatenated codes by Dr. Dave Forney (scholarpedia.org).
Implementations
FEC library in C by Phil Karn (aka KA9Q) includes Reed–Solomon codec, both arbitrary and optimized (223,255) version
Schifra Open Source C++ Reed–Solomon Codec
Henry Minsky's RSCode library, Reed–Solomon encoder/decoder
Open Source C++ Reed–Solomon Soft Decoding library
Matlab implementation of errors and-erasures Reed–Solomon decoding
Octave implementation in communications package
Pure-Python implementation of a Reed–Solomon codec
Error detection and correction
Coding theory | Reed–Solomon error correction | [
"Mathematics",
"Engineering"
] | 9,498 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Error detection and correction"
] |
45,619 | https://en.wikipedia.org/wiki/Lagged%20Fibonacci%20generator | A Lagged Fibonacci generator (LFG or sometimes LFib) is an example of a pseudorandom number generator. This class of random number generator is aimed at being an improvement on the 'standard' linear congruential generator. These are based on a generalisation of the Fibonacci sequence.
The Fibonacci sequence may be described by the recurrence relation:
Sn = S(n−1) + S(n−2).
Hence, the new term is the sum of the last two terms in the sequence. This can be generalised to the sequence:
Sn ≡ S(n−j) ⋆ S(n−k) (mod m), 0 < j < k.
In which case, the new term is some combination of any two previous terms. m is usually a power of 2 (m = 2^M), often 2^32 or 2^64. The ⋆ operator denotes a general binary operation. This may be either addition, subtraction, multiplication, or the bitwise exclusive-or operator (XOR). The theory of this type of generator is rather complex, and it may not be sufficient simply to choose random values for j and k. These generators also tend to be very sensitive to initialisation.
Generators of this type employ k words of state (they 'remember' the last k values).
If the operation used is addition, then the generator is described as an Additive Lagged Fibonacci Generator or ALFG, if multiplication is used, it is a Multiplicative Lagged Fibonacci Generator or MLFG, and if the XOR operation is used, it is called a Two-tap generalised feedback shift register or GFSR. The Mersenne Twister algorithm is a variation on a GFSR. The GFSR is also related to the linear-feedback shift register, or LFSR.
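As a sketch, an additive LFG with the lag pair (j, k) = (24, 55) modulo 2^32 looks like this; the seed values are made up for the demonstration, and the requirement that at least one seed word be odd (discussed below) is only asserted, not derived.

```python
# Additive lagged Fibonacci generator: S_n = (S_(n-24) + S_(n-55)) mod 2^32
M = 2 ** 32

def alfg(seed, j=24, k=55):
    state = list(seed)                      # the k most recent values
    assert len(state) == k and any(s % 2 for s in state)   # need an odd seed word
    while True:
        new = (state[-j] + state[-k]) % M
        state.append(new)
        state.pop(0)                        # keep only the last k values
        yield new

gen = alfg([(i * 2654435761 + 1) % M for i in range(55)])  # arbitrary seed
out = [next(gen) for _ in range(3)]
print(all(0 <= v < M for v in out))         # True
```

The k words of state are exactly the "memory" described above: each output depends only on the values j and k steps back.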
Properties of lagged Fibonacci generators
The maximum period of lagged Fibonacci generators depends on the binary operation ⋆. If addition or subtraction is used, the maximum period is (2^k − 1) × 2^(M−1). If multiplication is used, the maximum period is (2^k − 1) × 2^(M−3), or 1/4 of the period of the additive case. If bitwise xor is used, the maximum period is 2^k − 1.
For the generator to achieve this maximum period, the polynomial:
y = x^k + x^j + 1
must be primitive over the integers mod 2. Values of j and k satisfying this constraint have been published in the literature.
Another list of possible values for j and k is on page 29 of volume 2 of The Art of Computer Programming:
(24, 55), (38, 89), (37, 100), (30, 127), (83, 258), (107, 378), (273, 607), (1029, 2281), (576, 3217), (4187, 9689), (7083, 19937), (9739, 23209)
Note that the smaller numbers have short periods (only a few "random" numbers are generated before the first "random" number is repeated and the sequence restarts).
If addition is used, it is required that at least one of the first k values chosen to initialise the generator be odd. If multiplication is used, instead, it is required that all the first k values be odd, and further that at least one of them is ±3 mod 8.
It has been suggested that good ratios between j and k are approximately the golden ratio.
Problems with LFGs
In a paper on four-tap shift registers, Robert M. Ziff, referring to LFGs that use the XOR operator, states that "It is now widely known that such generators, in particular with the two-tap rules such as R(103, 250), have serious deficiencies. Marsaglia observed very poor behavior with R(24, 55) and smaller generators, and advised against using generators of this type altogether. ... The basic problem of two-tap generators R(a, b) is that they have a built-in three-point correlation between x(n), x(n−a), and x(n−b), simply given by the generator itself ... While these correlations are spread over the size of the generator itself, they can evidently still lead to significant errors.". This only refers to the standard LFG where each new number in the sequence depends on two previous numbers. A three-tap LFG has been shown to eliminate some statistical problems such as failing the Birthday Spacings and Generalized Triple tests.
Usage
Freeciv uses a lagged Fibonacci generator with {j = 24, k = 55} for its random number generator.
The Boost library includes an implementation of a lagged Fibonacci generator.
Subtract with carry, a lagged Fibonacci generator engine, is included in the C++11 standard library as std::subtract_with_carry_engine.
The Oracle Database implements this generator in its DBMS_RANDOM package (available in Oracle 8 and newer versions).
See also
The Wikipedia page 'List of random number generators' lists other PRNGs, including some with better statistical qualities:
Linear congruential generator
ACORN generator
Mersenne Twister
Xoroshiro128+
FISH (cipher)
Pike
VIC cipher
References
Toward a universal random number generator, G.Marsaglia, A.Zaman
Pseudorandom number generators
Fibonacci numbers | Lagged Fibonacci generator | [
"Mathematics"
] | 1,097 | [
"Fibonacci numbers",
"Mathematical relations",
"Golden ratio",
"Recurrence relations"
] |
45,621 | https://en.wikipedia.org/wiki/Psychopharmacology | Psychopharmacology (from Greek ; ; and ) is the scientific study of the effects drugs have on mood, sensation, thinking, behavior, judgment and evaluation, and memory. It is distinguished from neuropsychopharmacology, which emphasizes the correlation between drug-induced changes in the functioning of cells in the nervous system and changes in consciousness and behavior.
The field of psychopharmacology studies a wide range of substances with various types of psychoactive properties, focusing primarily on the chemical interactions with the brain. The term "psychopharmacology" was likely first coined by David Macht in 1920. Psychoactive drugs interact with particular target sites or receptors found in the nervous system to induce widespread changes in physiological or psychological functions. The specific interaction between drugs and their receptors is referred to as "drug action", and the widespread changes in physiological or psychological function is referred to as "drug effect". These drugs may originate from natural sources such as plants and animals, or from artificial sources such as chemical synthesis in the laboratory.
Historical overview
Early psychopharmacology
Not often mentioned or included in the field of psychopharmacology today are psychoactive substances not identified as useful in modern mental health settings or references. These substances are naturally occurring, but nonetheless psychoactive, and are compounds identified through the work of ethnobotanists and ethnomycologists (and others who study the native use of naturally occurring psychoactive drugs). However, although these substances have been used throughout history by various cultures, and have a profound effect on mentality and brain function, they have not always attained the degree of scrutinous evaluation that lab-made compounds have. Nevertheless, some, such as psilocybin and mescaline, have provided a basis of study for the compounds that are used and examined in the field today. Hunter-gatherer societies tended to favor hallucinogens, and today their use can still be observed in many surviving tribal cultures. The exact drug used depends on what the particular ecosystem a given tribe lives in can support, and are typically found growing wild. Such drugs include various psychoactive mushrooms containing psilocybin or muscimol and cacti containing mescaline and other chemicals, along with myriad other plants containing psychoactive chemicals. These societies generally attach spiritual significance to such drug use, and often incorporate it into their religious practices.
With the dawn of the Neolithic and the proliferation of agriculture, new psychoactives came into use as a natural by-product of farming. Among them were opium, cannabis, and alcohol derived from the fermentation of cereals and fruits. Most societies began developing herblores, lists of herbs which were good for treating various physical and mental ailments. For example, St. John's wort was traditionally prescribed in parts of Europe for depression (in addition to use as a general-purpose tea), and Chinese medicine developed elaborate lists of herbs and preparations. These and various other substances that have an effect on the brain are still used as remedies in many cultures.
Modern psychopharmacology
The dawn of contemporary psychopharmacology marked the beginning of the use of psychiatric drugs to treat psychological illnesses. It brought with it the use of opiates and barbiturates for the management of acute behavioral issues in patients. In the early stages, psychopharmacology was primarily used for sedation. With the 1950s came the establishment of lithium for mania, chlorpromazine for psychoses, and then in rapid succession, the development of tricyclic antidepressants, monoamine oxidase inhibitors, and benzodiazepines, among other antipsychotics and antidepressants. A defining feature of this era includes an evolution of research methods, with the establishment of placebo-controlled, double-blind studies, and the development of methods for analyzing blood levels with respect to clinical outcome and increased sophistication in clinical trials. The early 1960s revealed a revolutionary model by Julius Axelrod describing nerve signals and synaptic transmission, which was followed by a drastic increase of biochemical brain research into the effects of psychotropic agents on brain chemistry. After the 1960s, the field of psychiatry shifted to incorporate the indications for and efficacy of pharmacological treatments, and began to focus on the use and toxicities of these medications. The 1970s and 1980s were further marked by a better understanding of the synaptic aspects of the action mechanisms of drugs. However, the model has its critics, too – notably Joanna Moncrieff and the Critical Psychiatry Network.
Chemical signaling
Neurotransmitters
Psychoactive drugs exert their sensory and behavioral effects almost entirely by acting on neurotransmitters and by modifying one or more aspects of synaptic transmission. Neurotransmitters can be viewed as chemicals through which neurons primarily communicate; psychoactive drugs affect the mind by altering this communication. Drugs may act by 1) serving as a precursor to a neurotransmitter; 2) inhibiting neurotransmitter synthesis; 3) preventing storage of neurotransmitters in the presynaptic vesicle; 4) stimulating or inhibiting neurotransmitter release; 5) stimulating or blocking post-synaptic receptors; 6) stimulating autoreceptors, inhibiting neurotransmitter release; 7) blocking autoreceptors, increasing neurotransmitter release; 8) inhibiting neurotransmission breakdown; or 9) blocking neurotransmitter reuptake by the presynaptic neuron.
Hormones
The other central method through which drugs act is by affecting communications between cells through hormones. Neurotransmitters can usually only travel a microscopic distance before reaching their target at the other side of the synaptic cleft, while hormones can travel long distances before reaching target cells anywhere in the body. Thus, the endocrine system is a critical focus of psychopharmacology because 1) drugs can alter the secretion of many hormones; 2) hormones may alter the behavioral responses to drugs; 3) hormones themselves sometimes have psychoactive properties; and 4) the secretion of some hormones, especially those dependent on the pituitary gland, is controlled by neurotransmitter systems in the brain.
Psychopharmacological substances
Alcohol
Alcohol is a depressant, the effects of which may vary according to dosage amount, frequency, and chronicity. As a member of the sedative-hypnotic class, at the lowest doses, the individual feels relaxed and less anxious. In quiet settings, the user may feel drowsy, but in settings with increased sensory stimulation, individuals may feel uninhibited and more confident. High doses of alcohol rapidly consumed may produce amnesia for the events that occur during intoxication. Other effects include reduced coordination, which leads to slurred speech, impaired fine-motor skills, and delayed reaction time. The effects of alcohol on the body's neurochemistry are more difficult to examine than some other drugs. This is because the chemical nature of the substance makes it easy to penetrate into the brain, and it also influences the phospholipid bilayer of neurons. This allows alcohol to have a widespread impact on many normal cell functions and modifies the actions of several neurotransmitter systems. Alcohol inhibits glutamate (a major excitatory neurotransmitter in the nervous system) neurotransmission by reducing the effectiveness at the NMDA receptor, which is related to memory loss associated with intoxication. It also modulates the function of GABA, a major inhibitory amino acid neurotransmitter. Abuse of alcohol has also been correlated with thiamine deficiencies within the brain, leading to lasting neurological conditions that affect primarily the ability of the brain to effectively store memories. One such neurological condition is called Korsakoff's syndrome, for which very few effective treatment modalities have been found. The reinforcing qualities of alcohol leading to repeated use – and thus also the mechanisms of withdrawal from chronic alcohol use – are partially due to the substance's action on the dopamine system. 
This is also due to alcohol's effect on the opioid systems, or endorphins, that have opiate-like effects, such as modulating pain, mood, feeding, reinforcement, and response to stress.
Antidepressants
Antidepressants reduce symptoms of mood disorders primarily through the regulation of norepinephrine and serotonin (particularly the 5-HT receptors). After chronic use, neurons adapt to the change in biochemistry, resulting in a change in pre- and postsynaptic receptor density and second messenger function. According to the monoamine theory of depression and anxiety, the disruption of the activity of nitrogen-containing neurotransmitters (i.e. serotonin, norepinephrine, and dopamine) is strongly correlated with the presence of depressive symptoms. Despite its longstanding prominence in pharmaceutical advertising, the myth that low serotonin levels cause depression is not supported by scientific evidence.
Monoamine oxidase inhibitors (MAOIs) are the oldest class of antidepressants. They inhibit monoamine oxidase, the enzyme that metabolizes the monoamine neurotransmitters in the presynaptic terminals that are not contained in protective synaptic vesicles. The inhibition of the enzyme increases the amount of neurotransmitter available for release. It increases norepinephrine, dopamine, and 5-HT, thus increasing the action of the transmitters at their receptors. MAOIs have been somewhat disfavored because of their reputation for more serious side effects.
Tricyclic antidepressants (TCAs) work through binding to the presynaptic transporter proteins and blocking the reuptake of norepinephrine or 5-HT into the presynaptic terminal, prolonging the duration of transmitter action at the synapse.
Selective serotonin reuptake inhibitors (SSRIs) selectively block the reuptake of serotonin (5-HT) through their inhibiting effects on the sodium/potassium ATP-dependent serotonin transporter in presynaptic neurons. This increases the availability of 5-HT in the synaptic cleft. The main parameters to consider in choosing an antidepressant are side effects and safety. Most SSRIs are available generically and are relatively inexpensive. Older antidepressants such as TCAs and MAOIs usually require more visits and monitoring, which may offset the low expense of the drugs. SSRIs are relatively safe in overdoses and better tolerated than TCAs and MAOIs for most patients.
Antipsychotics
All proven antipsychotics are postsynaptic dopamine receptor blockers (dopamine antagonists). For an antipsychotic to be effective, it generally requires a dopamine antagonism of 60%–80% of dopamine D2 receptors.
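For illustration, the relationship between drug concentration and receptor occupancy can be sketched with the standard one-site mass-action binding equation; this formula and the Kd and concentration values below are a general pharmacological illustration, not taken from the text, and the numbers are chosen only so that the results land in the 60%–80% range cited above.

```cpp
// Fractional receptor occupancy under simple one-site, mass-action binding:
//   occupancy = C / (C + Kd)
// where C is the free drug concentration and Kd is the equilibrium
// dissociation constant (in the same units as C).
double occupancy(double c, double kd) {
    return c / (c + kd);
}

// With a hypothetical Kd of 1.0 (arbitrary units), C = 1.5 gives 60%
// occupancy and C = 4.0 gives 80%, spanning the D2 antagonism range
// described above.
```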
First generation (typical) antipsychotics: Traditional neuroleptics modify several neurotransmitter systems, but their clinical effectiveness is most likely due to their ability to antagonize dopamine transmission by competitively blocking the receptors or by inhibiting dopamine release. The most serious and troublesome side effects of these classical antipsychotics are movement disorders that resemble the symptoms of Parkinson's disease, because the neuroleptics antagonize dopamine receptors broadly, also reducing the normal dopamine-mediated inhibition of cholinergic cells in the striatum.
Second-generation (atypical) antipsychotics: The concept of "atypicality" is from the finding that second generation antipsychotics (SGAs) have a greater serotonin/dopamine ratio than earlier drugs, and might be associated with improved efficacy (particularly for the negative symptoms of psychosis) and reduced extrapyramidal side effects. Some of the efficacy of atypical antipsychotics may be due to 5-HT2 antagonism or the blockade of other dopamine receptors. Agents that purely block 5-HT2 or dopamine receptors other than D2 have often failed as effective antipsychotics.
Benzodiazepines
Benzodiazepines are often used to reduce anxiety symptoms, muscle tension, seizure disorders, insomnia, symptoms of alcohol withdrawal, and panic attack symptoms. Their action is primarily on specific benzodiazepine sites on the GABAA receptor. This receptor complex is thought to mediate the anxiolytic, sedative, and anticonvulsant actions of the benzodiazepines. Use of benzodiazepines carries the risk of tolerance (necessitating increased dosage), dependence, and abuse. Taking these drugs for a long period of time can lead to severe withdrawal symptoms upon abrupt discontinuation.
Hallucinogens
Classical serotonergic psychedelics
Psychedelics cause perceptual and cognitive distortions without delirium. The state of intoxication is often called a "trip". Onset is the first stage after an individual ingests (LSD, psilocybin, ayahuasca, and mescaline) or smokes (dimethyltryptamine) the substance. This stage may consist of visual effects, with an intensification of colors and the appearance of geometric patterns that can be seen with one's eyes closed. This is followed by a plateau phase, where the subjective sense of time begins to slow and the visual effects increase in intensity. The user may experience synesthesia, a crossing-over of sensations (for example, one may "see" sounds and "hear" colors). These outward sensory effects have been referred to as the "mystical experience", and current research suggests that this state could be beneficial to the treatment of some mental illnesses, such as depression and possibly addiction. In instances where some patients have seen a lack of improvement from the use of antidepressants, serotonergic hallucinogens have been observed to be rather effective in treatment. In addition to the sensory-perceptual effects, hallucinogenic substances may induce feelings of depersonalization, emotional shifts to a euphoric or anxious/fearful state, and a disruption of logical thought. Hallucinogens are classified chemically as either indolamines (specifically tryptamines), sharing a common structure with serotonin, or as phenethylamines, which share a common structure with norepinephrine. Both classes of these drugs are agonists at the 5-HT2 receptors; this is thought to be the central component of their hallucinogenic properties. Activation of 5-HT2A may be particularly important for hallucinogenic activity. However, repeated exposure to hallucinogens leads to rapid tolerance, likely through down-regulation of these receptors in specific target cells. 
Research suggests that hallucinogens affect many of these receptor sites around the brain and that through these interactions, hallucinogenic substances may be capable of inducing positive introspective experiences. The current research implies that many of the effects that can be observed occur in the occipital lobe and the frontomedial cortex; however, they also present many secondary global effects in the brain that have not yet been connected to the substance's biochemical mechanism of action.
Dissociative hallucinogens
Another class of hallucinogens, known as dissociatives, includes drugs such as ketamine, phencyclidine (PCP), and Salvia divinorum. Drugs such as these are thought to interact predominantly with glutamate receptors within the brain. Specifically, ketamine is thought to block NMDA receptors that are responsible for signalling in the glutamate pathways. Ketamine's more tranquilizing effects can be seen in the central nervous system through interactions with parts of the thalamus by inhibition of certain functions. Ketamine has become a major drug of research for the treatment of depression. These antidepressant effects are thought to be related to the drug's action on the glutamate receptor system and the relative spike in glutamate levels, as well as its interaction with mTOR, which is an enzymatic protein involved in catabolic processes in the human body. Phencyclidine's biochemical properties are still mostly unknown; however, its use has been associated with dissociation, hallucinations, and in some cases seizures and death. Salvia divinorum, a plant native to Mexico, has strong dissociative and hallucinogenic properties when the dry leaves are smoked or chewed. The qualitative value of these effects, whether negative or positive, has been observed to vary between individuals with many other factors to consider.
Hypnotics
Hypnotics are often used to treat the symptoms of insomnia or other sleep disorders. Benzodiazepines are still among the most widely prescribed sedative-hypnotics in the United States today. Certain non-benzodiazepine drugs are used as hypnotics as well. Although they lack the chemical structure of the benzodiazepines, their sedative effect is similarly produced through action on the GABAA receptor. They also have a reputation of being less addictive than benzodiazepines. Melatonin, a naturally occurring hormone, is often used over the counter (OTC) to treat insomnia and jet lag. This hormone appears to be excreted by the pineal gland early during the sleep cycle and may contribute to human circadian rhythms. Because OTC melatonin supplements are not subject to careful and consistent manufacturing, more specific melatonin agonists are sometimes preferred. They are used for their action on melatonin receptors in the suprachiasmatic nucleus, responsible for sleep-wake cycles. Many barbiturates have or had an FDA-approved indication for use as sedative-hypnotics, but have become less widely used because of their limited safety margin in overdose, their potential for dependence, and the degree of central nervous system depression they induce. The amino acid L-tryptophan is also available OTC, and seems to be free of dependence or abuse liability. However, it is not as powerful as the traditional hypnotics. Because of the possible role of serotonin in sleep patterns, a new generation of 5-HT2 antagonists is currently in development as hypnotics.
Cannabis and the cannabinoids
Cannabis consumption produces a dose-dependent state of intoxication in humans. There is commonly increased blood flow to the skin, which leads to an increased heart rate and sensations of warmth or flushing. It also frequently induces increased hunger. Iversen (2000) categorized the subjective and behavioral effects often associated with cannabis into three stages. The first is the "buzz", a brief period of initial responding where the main effects are lightheadedness or slight dizziness, in addition to possible tingling sensations in the extremities or other parts of the body. The "high" is characterized by feelings of euphoria and exhilaration, accompanied by mild psychedelia as well as a sense of disinhibition. If the individual has taken a sufficiently large dose of cannabis, the level of intoxication progresses to the stage of being "stoned", and the user may feel calm, relaxed, and possibly in a dreamlike state. Sensory reactions may include the feeling of floating, enhanced visual and auditory perception, visual illusions, or the perception of the slowing of time passage, which are somewhat psychedelic in nature.
There exist two primary CNS cannabinoid receptors, on which marijuana and the cannabinoids act. Both the CB1 and CB2 receptor are found in the brain. The CB2 receptor is also found in the immune system. CB1 is expressed at high densities in the basal ganglia, cerebellum, hippocampus, and cerebral cortex. Receptor activation can inhibit cAMP formation, inhibit voltage-sensitive calcium ion channels, and activate potassium ion channels. Many CB1 receptors are located on axon terminals, where they act to inhibit the release of various neurotransmitters. In combination, these chemical actions work to alter various functions of the central nervous system, including the motor system, memory, and various cognitive processes.
Opioids
The opioid category of drugs – including drugs such as heroin, morphine, and oxycodone – belongs to the class of narcotic analgesics, which reduce pain without producing unconsciousness but do produce a sense of relaxation and sleep, and at high doses may result in coma and death. The ability of opioids (both endogenous and exogenous) to relieve pain depends on a complex set of neuronal pathways at the spinal cord level, as well as various locations above the spinal cord. Small endorphin neurons in the spinal cord act on receptors to decrease the conduction of pain signals from the spinal cord to higher brain centers. Descending neurons originating in the periaqueductal gray give rise to two pathways that further block pain signals in the spinal cord. The pathways begin in the locus coeruleus (noradrenaline) and the nucleus of raphe (serotonin). Similar to other abused substances, opioid drugs increase dopamine release in the nucleus accumbens. Opioids produce more severe physical dependence than other classes of psychoactive drugs, and can lead to painful withdrawal symptoms if discontinued abruptly after regular use.
Stimulants
Cocaine is one of the more common stimulants and is a complex drug that interacts with various neurotransmitter systems. It commonly causes heightened alertness, increased confidence, feelings of exhilaration, reduced fatigue, and a generalized sense of well-being. The effects of cocaine are similar to those of amphetamines, though cocaine tends to have a shorter duration of effect. In high doses or with prolonged use, cocaine can result in a number of negative effects, including irritability, anxiety, exhaustion, total insomnia, and even psychotic symptomatology. Most of the behavioral and physiological actions of cocaine can be explained by its ability to block the reuptake of the two catecholamines, dopamine and norepinephrine, as well as serotonin. Cocaine binds to transporters that normally clear these transmitters from the synaptic cleft, inhibiting their function. This leads to increased levels of neurotransmitter in the cleft and transmission at the synapses. Based on in-vitro studies using rat brain tissue, cocaine binds most strongly to the serotonin transporter, followed by the dopamine transporter, and then the norepinephrine transporter.
Amphetamines tend to cause the same behavioral and subjective effects as cocaine. Various forms of amphetamine are commonly used to treat the symptoms of attention deficit hyperactivity disorder (ADHD) and narcolepsy, or are used recreationally. Amphetamine and methamphetamine are indirect agonists of the catecholaminergic systems. They block catecholamine reuptake, in addition to releasing catecholamines from nerve terminals. There is evidence that dopamine receptors play a central role in the behavioral responses of animals to cocaine, amphetamines, and other psychostimulant drugs. One action causes dopamine molecules to be released from inside the vesicles into the cytoplasm of the nerve terminal, from which they are then transported into the synaptic cleft by the dopamine transporter operating in reverse. The resulting dopamine release in the mesolimbic pathway, which projects to the nucleus accumbens, plays a key role in the rewarding and reinforcing effects of cocaine and amphetamine in animals, and is the primary mechanism for amphetamine dependence.
Psychopharmacological research
In psychopharmacology, researchers are interested in any substance that crosses the blood–brain barrier and thus has an effect on behavior, mood, or cognition. Drugs are researched for their physiochemical properties, physical side effects, and psychological side effects. Researchers in psychopharmacology study a variety of different psychoactive substances, including alcohol, cannabinoids, club drugs, psychedelics, opiates, nicotine, caffeine, psychomotor stimulants, inhalants, and anabolic–androgenic steroids. They also study drugs used in the treatment of affective and anxiety disorders, as well as schizophrenia.
Clinical studies are often very specific, typically beginning with animal testing and ending with human testing. In the human testing phase, subjects are typically divided into groups: one group is given a placebo, and another is administered a carefully measured therapeutic dose of the drug in question. After all of the testing is completed, the drug is proposed to the concerned regulatory authority (e.g. the U.S. FDA), and is either commercially introduced to the public via prescription, or deemed safe enough for over-the-counter sale.
Though particular drugs are prescribed for specific symptoms or syndromes, they are usually not specific to the treatment of any single mental disorder.
A somewhat controversial application of psychopharmacology is "cosmetic psychiatry": persons who do not meet criteria for any psychiatric disorder are nevertheless prescribed psychotropic medication. The antidepressant bupropion is then prescribed to increase perceived energy levels and assertiveness while diminishing the need for sleep. The antihypertensive compound propranolol is sometimes chosen to eliminate the discomfort of day-to-day anxiety. Fluoxetine in nondepressed people can produce a feeling of generalized well-being. Pramipexole, a treatment for restless leg syndrome, can dramatically increase libido in women. These and other off-label lifestyle applications of medications are not uncommon. Although occasionally reported in the medical literature, no guidelines for such usage have been developed. There is also a potential for the misuse of prescription psychoactive drugs by elderly persons, who may have multiple drug prescriptions.
See also
Pharmacology
Neuropharmacology
Neuropsychopharmacology
Psychiatry
History of pharmacy
Mental health
Recreational drug use
Nathan S. Kline
Prescriptive authority for psychologists movement
References
Further reading
Peer-reviewed journals
Experimental and Clinical Psychopharmacology, American Psychological Association
Journal of Clinical Psychopharmacology, Lippincott Williams & Wilkins
Journal of Psychopharmacology, British Association for Psychopharmacology, SAGE Publications
Psychopharmacology, Springer Berlin/Heidelberg
External links
Psychopharmacology: The Fourth Generation of Progress — American College of Neuropsychopharmacology (ACNP)
Bibliographical history of Psychopharmacology and Pharmacopsychology — Advances in the History of Psychology, York University
Monograph Psychopharmacology Today
British Association for Psychopharmacology (BAP)
Psychopharmacology Institute: Video lectures and tutorials on psychotropic medications.
Neuropharmacology
Thread safety

In multi-threaded computer programming, a function is thread-safe when it can be invoked or accessed concurrently by multiple threads without causing unexpected behavior, race conditions, or data corruption. In the multi-threaded context, where a program executes several threads simultaneously in a shared address space and each thread has access to every other thread's memory, thread-safe functions must ensure that all those threads behave properly and fulfill their design specifications without unintended interaction.
There are various strategies for making thread-safe data structures.
Levels of thread safety
Different vendors use slightly different terminology for thread-safety, but the most commonly used thread-safety levels are:
Not thread safe: Data structures should not be accessed simultaneously by different threads.
Thread safe, serialization: A single mutex is used for all resources, guaranteeing freedom from race conditions when those resources are accessed by multiple threads simultaneously.
Thread safe, MT-safe: Each resource is protected by its own mutex, guaranteeing freedom from race conditions even when different resources are accessed by multiple threads simultaneously.
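The difference between the two "thread safe" levels can be sketched as follows; the resource and method names here are hypothetical, chosen only to illustrate the two locking layouts.

```cpp
#include <mutex>

// "Serialization": one mutex guards every resource, so at most one thread
// can be inside any operation of the library at a time.
struct SerializedLibrary {
    std::mutex global_mutex;
    int resource_a = 0, resource_b = 0;
    void use_a() { std::lock_guard<std::mutex> lock(global_mutex); ++resource_a; }
    void use_b() { std::lock_guard<std::mutex> lock(global_mutex); ++resource_b; }
};

// "MT-safe": each resource is guarded by its own mutex, so threads working
// on different resources can proceed concurrently.
struct MtSafeLibrary {
    std::mutex mutex_a, mutex_b;
    int resource_a = 0, resource_b = 0;
    void use_a() { std::lock_guard<std::mutex> lock(mutex_a); ++resource_a; }
    void use_b() { std::lock_guard<std::mutex> lock(mutex_b); ++resource_b; }
};
```

Both layouts prevent races on each resource; the per-resource layout simply admits more concurrency at the cost of managing more locks.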
Thread safety guarantees usually also include design steps to prevent or limit the risk of different forms of deadlocks, as well as optimizations to maximize concurrent performance. However, deadlock-free guarantees cannot always be given, since deadlocks can be caused by callbacks and violation of architectural layering independent of the library itself.
Software libraries can provide certain thread-safety guarantees. For example, concurrent reads might be guaranteed to be thread-safe, but concurrent writes might not be. Whether a program using such a library is thread-safe depends on whether it uses the library in a manner consistent with those guarantees.
Implementation approaches
Below we discuss two classes of approaches for avoiding race conditions to achieve thread-safety.
The first class of approaches focuses on avoiding shared state and includes:
Re-entrancy Writing code in such a way that it can be partially executed by a thread, re-executed by the same thread, or simultaneously executed by another thread, and still correctly complete the original execution. This requires the saving of state information in variables local to each execution, usually on a stack, instead of in static or global variables or other non-local state. All non-local state must be accessed through atomic operations, and the data structures must also be reentrant.
Thread-local storage Variables are localized so that each thread has its own private copy. These variables retain their values across subroutine and other code boundaries and are thread-safe since they are local to each thread, even though the code which accesses them might be executed simultaneously by another thread.
Immutable objects The state of an object cannot be changed after construction. This implies both that only read-only data is shared and that inherent thread safety is attained. Mutable (non-const) operations can then be implemented in such a way that they create new objects instead of modifying the existing ones. This approach is characteristic of functional programming and is also used by the string implementations in Java, C#, and Python. (See Immutable object.)
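Two of these shared-state-avoidance strategies can be sketched directly in C++, which provides language-level thread-local storage via the thread_local keyword; the variable and class names below are illustrative, not from any particular library.

```cpp
#include <string>

// Thread-local storage: each thread gets its own copy of `counter`, so
// concurrent increments never interfere and need no locking.
thread_local int counter = 0;

int next_id() {
    return ++counter;  // touches only the calling thread's copy
}

// Immutable object: all fields are fixed at construction. A "mutating"
// operation returns a new object instead of modifying this one, so
// instances can be shared between threads without synchronization.
class Label {
public:
    explicit Label(std::string text) : text_(std::move(text)) {}
    const std::string& text() const { return text_; }
    Label with_suffix(const std::string& s) const { return Label(text_ + s); }
private:
    const std::string text_;
};
```

Each thread calling next_id() sees its own sequence 1, 2, 3, ..., and a Label can be handed to any number of threads because no operation ever changes it in place.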
The second class of approaches are synchronization-related, and are used in situations where shared state cannot be avoided:
Mutual exclusion Access to shared data is serialized using mechanisms that ensure only one thread reads or writes to the shared data at any time. Incorporation of mutual exclusion needs to be well thought out, since improper usage can lead to side-effects like deadlocks, livelocks, and resource starvation.
Atomic operations Shared data is accessed by using atomic operations which cannot be interrupted by other threads. This usually requires using special machine language instructions, which might be available in a runtime library. Since the operations are atomic, the shared data is always kept in a valid state, no matter how other threads access it. Atomic operations form the basis of many thread locking mechanisms, and are used to implement mutual exclusion primitives.
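As an illustration of how a locking primitive can rest on a single atomic operation, a minimal (deliberately naive) spinlock can be built from C++'s std::atomic_flag; this is a sketch of the idea, not a production-quality lock.

```cpp
#include <atomic>

// A minimal spinlock: lock() atomically sets the flag and retries until it
// observes that it was the thread that flipped the flag from clear to set.
// Busy-waiting like this burns CPU; it is shown only to demonstrate that
// mutual exclusion can rest on one atomic test-and-set operation.
class SpinLock {
public:
    void lock() {
        while (flag_.test_and_set(std::memory_order_acquire)) {
            // spin until the current holder calls unlock()
        }
    }
    void unlock() { flag_.clear(std::memory_order_release); }
private:
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};
```

Real mutex implementations add fairness and blocking on top of the same atomic core.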
Examples
In the following piece of Java code, the Java keyword synchronized makes the method thread-safe:
class Counter {
    private int i = 0;

    public synchronized void inc() {
        i++;
    }
}
In the C programming language, each thread has its own stack. However, a static variable is not kept on the stack; all threads share simultaneous access to it. If multiple threads overlap while running the same function, it is possible that a static variable might be changed by one thread while another is midway through checking it. This difficult-to-diagnose logic error, which may compile and run properly most of the time, is called a race condition. One common way to avoid this is to use another shared variable as a "lock" or "mutex" (from mutual exclusion).
In the following piece of C code, the function is thread-safe, but not reentrant:
#include <pthread.h>

int increment_counter()
{
    static int counter = 0;
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    // only allow one thread to increment at a time
    pthread_mutex_lock(&mutex);
    ++counter;
    // store value before any other threads increment it further
    int result = counter;
    pthread_mutex_unlock(&mutex);

    return result;
}
In the above, increment_counter can be called by different threads without any problem since a mutex is used to synchronize all access to the shared counter variable. But if the function is used in a reentrant interrupt handler and a second interrupt arises while the mutex is locked, the second routine will hang forever. As interrupt servicing can disable other interrupts, the whole system could suffer.
The same function can be implemented to be both thread-safe and reentrant using the lock-free atomics in C++11:
#include <atomic>

int increment_counter()
{
    static std::atomic<int> counter(0);

    // increment is guaranteed to be done atomically
    int result = ++counter;
    return result;
}
See also
Concurrency control
Concurrent data structure
Exception safety
Priority inversion
ThreadSafe
References
External links
Threads (computing)
Programming language topics
Bottom–up and top–down design

Bottom–up and top–down are both strategies of information processing and ordering knowledge, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice they can be seen as a style of thinking, teaching, or leadership.
A top–down approach (also known as stepwise design and stepwise refinement and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional subsystems in a reverse engineering fashion. In a top–down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top–down model is often specified with the assistance of black boxes, which makes it easier to manipulate. However, black boxes may fail to clarify elementary mechanisms or be detailed enough to realistically validate the model. A top–down approach starts with the big picture, then breaks down into smaller segments.
A bottom–up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems subsystems of the emergent system. Bottom–up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. But "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose.
Product design and development
During the development of new products, designers and engineers rely on both bottom–up and top–down approaches. The bottom–up approach is being used when off-the-shelf or existing components are selected and integrated into the product. An example includes selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top–down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, for a product with more restrictive requirements (such as weight, geometry, safety, environment), such as a spacesuit, a more top–down approach is taken and almost everything is custom designed.
Computer science
Software development
Part of this section is from the Perl Design Patterns Book.
In the software development process, the top–down and bottom–up approaches play a key role.
Top–down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top–down approaches are implemented by attaching stubs in place of yet-unwritten modules, but this delays testing of the ultimate functional units of a system until significant design is complete.
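The use of stubs in a top–down build might look like the following sketch; the module names and the canned value are hypothetical, standing in for real components that would be written later.

```cpp
// Top-down development: the top-level control flow is written and compiled
// first, while the modules it calls exist only as stubs returning canned
// values. Each stub is later replaced by a real implementation without
// changing the top-level design.
int read_input() { return 42; }            // stub: real input module comes later
int process(int value) { return value; }   // stub: real processing comes later

int run_pipeline() {
    // The high-level design can be exercised end to end even though no
    // functional unit has been implemented yet.
    int raw = read_input();
    return process(raw);
}
```

This is exactly why testing of the real functional units is deferred: the pipeline "works" long before any module does.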
Bottom–up emphasizes coding and early testing, which can begin as soon as the first module has been specified. But this approach runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of a bottom–up approach.
Top–down design was promoted in the 1970s by IBM researchers Harlan Mills and Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top–down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top–down programming was not strictly what he promoted. Top–down methods were favored in software engineering until the late 1980s, and object-oriented programming helped demonstrate that both top–down and bottom–up approaches could be used together.
Modern software design approaches usually combine top–down and bottom–up approaches. Although an understanding of the complete system is usually considered necessary for good design—leading theoretically to a top-down approach—most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom–up flavor.
Programming
Top–down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top–down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized subroutines eventually will perform actions so simple they can be easily and concisely coded. When all the various subroutines have been coded the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained.
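As a sketch of this style, a hypothetical Python program (the names here are illustrative assumptions, not taken from the source) might begin with a main procedure that names its major functions, each initially a stub to be refined in later passes:

```python
# Top-down sketch: main() is written first and names the major functions;
# each function starts as a stub and is refined in later passes.
def load_input():
    # stub: later refined into real file or keyboard handling
    return "raw data"

def process(data):
    # stub: later divided into smaller, concisely coded subroutines
    return data.upper()

def report(result):
    # stub: later refined into formatting and output
    print(result)

def main():
    data = load_input()
    result = process(data)
    report(result)

main()  # prints "RAW DATA"
```

Only once every stub has been refined down to simple, easily coded subroutines is the program as a whole ready for testing.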
In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes at many levels, until a complete top–level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small, but eventually grow in complexity and completeness. Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, SolidWorks, and Autodesk Inventor, users can design products as individual pieces rather than as parts of a whole, and later combine those pieces to form assemblies, much like building with Lego bricks. Engineers call this "piece part design".
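The same idea can be sketched in Python (a hypothetical illustration, not from the source): small, independently testable base elements are written and verified first, then composed into a larger unit:

```python
# Bottom-up sketch: base elements (mean, variance) are specified and tested
# in detail first, then linked into a larger subsystem (summarize).
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def summarize(xs):
    # the top-level unit is assembled from the reusable pieces above
    return {"mean": mean(xs), "variance": variance(xs)}

print(summarize([1, 2, 3]))
```

Because each base element stands on its own, the pieces are reusable in other programs, which is one of the main benefits claimed for the bottom–up approach.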
Parsing
Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler. Bottom–up parsing is a parsing strategy that recognizes the text's lowest-level small details first, before its mid-level structures, and leaves the highest-level overall structure to last. In top–down parsing, on the other hand, one first looks at the highest level of the parse tree and works down the parse tree by using the rewriting rules of a formal grammar.
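A minimal top–down (recursive-descent) parser can be sketched in Python; the toy grammar and function names here are illustrative assumptions, not from the source. It starts from the highest-level rule (expr) and descends to the lowest-level tokens (digits):

```python
# Top-down parsing sketch for the toy grammar:  expr -> digit ('+' digit)*
# The parser begins at the top-level rule and descends to individual tokens.
def parse_expr(s):
    pos = 0

    def digit():
        nonlocal pos
        ch = s[pos]
        if not ch.isdigit():
            raise SyntaxError(f"expected digit at position {pos}")
        pos += 1
        return int(ch)

    total = digit()                     # expr must start with a digit
    while pos < len(s) and s[pos] == "+":
        pos += 1                        # consume '+'
        total += digit()
    if pos != len(s):
        raise SyntaxError("unexpected trailing input")
    return total

print(parse_expr("1+2+3"))  # prints 6
```

A bottom–up parser would work in the opposite direction: recognizing digit tokens first and successively reducing them into expressions, as shift-reduce parsers generated by LR tools do.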
Nanotechnology
Top–down and bottom–up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom–up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top–down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications.
A top–down approach often uses the traditional workshop or microfabrication methods, in which externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a new top–down secondary approach to engineering nanostructures.
Bottom–up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches use the concepts of molecular self-assembly and/or molecular recognition. See also Supramolecular chemistry. Such bottom–up approaches should, broadly speaking, be able to produce devices in parallel and much cheaper than top–down methods but could potentially be overwhelmed as the size and complexity of the desired assembly increases.
Neuroscience and psychology
These terms are also employed in cognitive sciences, including neuroscience, cognitive neuroscience, and cognitive psychology, to discuss the flow of information in processing. Typically, sensory input is considered bottom–up, and higher cognitive processes, which have more information from other sources, are considered top–down. A bottom–up process is characterized by an absence of higher-level direction in sensory processing, whereas a top–down process is characterized by a high degree of direction of sensory processing by higher cognition, such as goals or targets (Biederman, 19).
According to college teaching notes written by Charles Ramskov, Irvin Rock, Neisser, and Richard Gregory claim that the top–down approach involves perception that is an active and constructive process. Additionally, it is an approach not directly given by stimulus input, but one that results from the interaction of the stimulus, internal hypotheses, and expectations. According to theoretical synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach."
Conversely, psychology defines bottom–up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, Gibson, one proponent of the bottom–up approach, claims that visual perception is a process that relies on the information available from the proximal stimulus produced by the distal stimulus. Theoretical synthesis also claims that bottom–up processing occurs "when a stimulus is presented long and clearly enough."
Certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom–up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top–down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1 mostly have bottom–up connections. Other areas, such as the fusiform gyrus have inputs from higher brain areas and are considered to have top–down influence.
The study of visual attention is an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower is visually salient. The information that caused you to attend to the flower came to you in a bottom–up fashion—your attention was not contingent on knowledge of the flower: the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object you are looking for, it is salient. This is an example of the use of top–down information.
In cognition, two thinking approaches are distinguished. "Top–down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom–up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition.
Studies in task switching and response selection show that there are differences between the two types of processing. Top–down processing primarily focuses on the attention side, such as task repetition, while bottom–up processing focuses on item-based learning, such as finding the same object over and over again. These findings have implications for understanding the attentional control of response selection in conflict situations.
This also applies to how such processing is structured neurologically, for example in the information interfaces involved in procedural learning, and it has been applied to interface design. Although top–down principles have been effective in guiding interface design, they are not sufficient on their own; they can be combined with iterative bottom–up methods to produce usable interfaces.
Schooling
Undergraduate (bachelor's) students are typically taught the basics of top–down and bottom–up processing around their third year in the program, going through four main parts of the processing when viewing it from a learning perspective. The central definitional contrast is that bottom–up processing is determined directly by environmental stimuli, whereas top–down processing is shaped by the individual's knowledge and expectations.
Management and organization
In the fields of management and organization, the terms "top–down" and "bottom–up" are used to describe how decisions are made and/or how change is implemented.
A "top–down" approach is one in which an executive decision maker or other top person decides how something should be done. These decisions are disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then use a planned approach to drive the changes down to the frontline staff.
A bottom–up approach to changes is one that works from the grassroots, and originates in a flat structure with people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom–up" decision. A bottom–up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers".
Positive aspects of top–down approaches include their efficiency, the broad overview they afford at higher levels, and the fact that external effects can be internalized. On the negative side, if reforms are perceived to be imposed "from above", it can be difficult for lower levels to accept them. Evidence suggests this to be true regardless of the content of reforms. A bottom–up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third approach to change that combines the two.
Public health
Both top–down and bottom–up approaches are used in public health. There are many examples of top–down programs, often run by governments or large inter-governmental organizations; many of these are disease- or issue-specific, such as HIV control or smallpox eradication. Examples of bottom–up programs include many small NGOs set up to improve local access to healthcare. But many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting bottom–up capacity, as have international programs for hygiene, sanitation, and access to primary healthcare.
Architecture
Often the École des Beaux-Arts school of design is said to have primarily promoted top–down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project.
By contrast, the Bauhaus focused on bottom–up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with the wood panel carving and furniture design).
Ecology
In ecology top–down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey are what influences lower trophic levels. Changes in the top level of trophic levels have an inverse effect on the lower trophic levels. Top–down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms. In other words, such ecosystems are not controlled by productivity of the kelp, but rather, a top predator. One can see the inverse effect that top–down control has in this example; when the population of otters decreased, the population of the urchins increased.
Bottom–up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface.
There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates going on as to which type of control affects food webs in certain ecosystems.
Philosophy and ethics
Top–down reasoning in ethics occurs when the reasoner starts from abstract universalizable principles and then reasons down from them to particular situations. Bottom–up reasoning occurs when the reasoner starts from intuitive judgements about particular situations and then reasons up to principles. Reflective equilibrium occurs when there is interaction between top–down and bottom–up reasoning until both are in harmony: that is, when universalizable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process occurs when cognitive dissonance arises as reasoners try to reconcile top–down with bottom–up reasoning; they adjust one or the other until they are satisfied that they have found the best combination of principles and situational judgements.
See also
Formal concept analysis
Pseudocode
The Cathedral and the Bazaar
Citations and notes
Sources
Further reading
https://philpapers.org/rec/COHTNO
Corpeño, E (2021). "The Top-Down Approach to Problem Solving: How to Stop Struggling in Class and Start Learning".
J. A. Estes, M. T. Tinker, T. M. Williams, D. F. Doak, "Killer Whale Predation on Sea Otters Linking Oceanic and Nearshore Ecosystems", Science, October 16, 1998: Vol. 282, no. 5388, pp. 473–476
Galotti, K. (2008). Cognitive Psychology: In and out of the laboratory. USA: Wadsworth.
Goldstein, E.B. (2010). Sensation and Perception. USA: Wadsworth.
External links
"Program Development by Stepwise Refinement", Communications of the ACM, Vol. 14, No. 4, April (1971)
Integrated Parallel Bottom-up and Top-down Approach. In Proceedings of the International Emergency Management Society's Fifth Annual Conference (TIEMS 98), May 19–22, Washington DC, USA (1998).
Changing Your Mind: On the Contributions of Top-Down and Bottom-Up Guidance in Visual Search for Feature Singletons, Journal of Experimental Psychology: Human Perception and Performance, Vol. 29, No. 2, 483–502, 2003.
K. Eric Drexler and Christine Peterson, Nanotechnology and Enabling Technologies, Foresight Briefing No. 2, 1989.
Empowering sustained patient safety: the benefits of combining top-down and bottom-up approaches
Dichotomies
Information science
Neuropsychology
Software design
Hierarchy
Undocumented feature
An undocumented feature is an unintended or undocumented hardware operation (for example, an undocumented instruction) or software feature found in computer hardware and software that is considered beneficial or useful. Sometimes the documentation is omitted through oversight; in other cases, undocumented features are not intended for use by end users but are left available for use by the vendor for software support and development. Some unintended operation of hardware or software that ends up being of utility to users is simply a bug, flaw, or quirk.
Since the suppliers of the software usually consider the software documentation to constitute a contract for the behavior of the software, undocumented features are generally left unsupported and may be removed or changed at will and without notice to the users.
Undocumented or unsupported features are sometimes also called "not manufacturer supported" (NOMAS), a term coined by PPC Journal in the early 1980s.
Some user-reported defects are viewed by software developers as working as expected, leading to the catchphrase "it's not a bug, it's a feature" (INABIAF) and its variations.
Hardware
Undocumented instructions, known as illegal opcodes, on the MOS Technology 6502 and its variants are sometimes used by programmers. These were removed in the WDC 65C02.
Video game and demoscene programmers have taken advantage of the unintended operation of computers' hardware to produce new effects or optimizations.
In 2019, researchers discovered that a manufacturer debugging mode, known as VISA, had an undocumented feature on Intel Platform Controller Hubs (PCHs), chipsets included on most Intel-based motherboards, which makes the mode accessible with a normal motherboard. Since the chipset has direct memory access this is problematic for security reasons.
Software
Undocumented features (for example, the ability to change the switch character in MS-DOS, usually to a hyphen) can be included for compatibility purposes (in this case with Unix utilities) or for future-expansion reasons. However, if the software provider changes its software strategy to better align with the business, the absence of documentation makes it easier to justify the feature's removal.
New versions of software might omit mention of old (possibly superseded) features in documentation but keep them implemented for users who've grown accustomed to them.
In some cases, software bugs are referred to by developers either jokingly or conveniently as undocumented features. This usage may have been popularised in some of Microsoft's responses to bug reports for its first Word for Windows product, but does not originate there. The oldest surviving reference on Usenet dates to 5 March 1984. Between 1969 and 1972, Sandy Mathes, a systems programmer for PDP-8 software at Digital Equipment Corporation (DEC) in Maynard, MA, used the terms "bug" and "feature" in her reporting of test results to distinguish between undocumented actions of delivered software products that were unacceptable and tolerable, respectively. This usage may have been perpetuated.
Undocumented features themselves have become a major feature of computer games. Developers often include various cheats and other special features ("easter eggs") that are not explained in the packaged material, but have become part of the "buzz" about the game on the Internet and among gamers. The undocumented features of foreign games are often elements that were not localized from their native language.
Closed source APIs can also have undocumented functions that are not generally known. These are sometimes used to gain a commercial advantage over third-party software by providing additional information or better performance to the application provider.
See also
Backdoor (computing)
Easter egg (media)
References
Software anomalies
Technical communication
Demography
Demography is the statistical study of human populations: their size, composition (e.g., ethnic group, age), and how they change through the interplay of fertility (births), mortality (deaths), and migration.
Demographic analysis examines and measures the dimensions and dynamics of populations; it can cover whole societies or groups defined by criteria such as education, nationality, religion, and ethnicity. Educational institutions usually treat demography as a field of sociology, though there are a number of independent demography departments. These methods have primarily been developed to study human populations, but are extended to a variety of areas where researchers want to know how populations of social actors can change across time through processes of birth, death, and migration. In the context of human biological populations, demographic analysis uses administrative records to develop an independent estimate of the population. Demographic analysis estimates are often considered a reliable standard for judging the accuracy of the census information gathered at any time. In the labor force, demographic analysis is used to estimate sizes and flows of populations of workers; in population ecology the focus is on the birth, death, migration and immigration of individuals in a population of living organisms; alternatively, in the social sciences it could involve the movement of firms and institutional forms.

Demographic analysis is used in a wide variety of contexts. For example, it is often used in business plans to describe the population connected to the geographic location of the business. Demographic analysis is usually abbreviated as DA. For the 2010 U.S. Census, the U.S. Census Bureau expanded its DA categories. Also as part of the 2010 U.S. Census, DA now includes comparative analysis between independent housing estimates and census address lists at different key time points.
Patient demographics form the core of the data for any medical institution, such as patient and emergency contact information and patient medical record data. They allow for the identification of a patient and their categorization into categories for the purpose of statistical analysis. Patient demographics include: date of birth, gender, date of death, postal code, ethnicity, blood type, emergency contact information, family doctor, insurance provider data, allergies, major diagnoses and major medical history.
Formal demography limits its object of study to the measurement of population processes, while the broader field of social demography or population studies also analyses the relationships between economic, social, institutional, cultural, and biological processes influencing a population.
History
Demographic thought can be traced back to antiquity, and was present in many civilisations and cultures, such as Ancient Greece, Ancient Rome, China, and India. Made up of the prefix demo- and the suffix -graphy, the term demography refers to the overall study of population.
In ancient Greece, this can be found in the writings of Herodotus, Thucydides, Hippocrates, Epicurus, Protagoras, Polus, Plato and Aristotle. In Rome, writers and philosophers like Cicero, Seneca, Pliny the Elder, Marcus Aurelius, Epictetus, Cato, and Columella also expressed important ideas on this ground.
In the Middle Ages, Christian thinkers devoted much time in refuting the Classical ideas on demography. Important contributors to the field were William of Conches, Bartholomew of Lucca, William of Auvergne, William of Pagula, and Muslim sociologists like Ibn Khaldun.
One of the earliest demographic studies in the modern period was Natural and Political Observations Made upon the Bills of Mortality (1662) by John Graunt, which contains a primitive form of life table. Among the study's findings were that one-third of the children in London died before their sixteenth birthday. Mathematicians, such as Edmond Halley, developed the life table as the basis for life insurance mathematics. Richard Price was credited with the first textbook on life contingencies published in 1771, followed later by Augustus De Morgan, On the Application of Probabilities to Life Contingencies (1838).
In 1755, Benjamin Franklin published his essay Observations Concerning the Increase of Mankind, Peopling of Countries, etc., projecting exponential growth in British colonies. His work influenced Thomas Robert Malthus, who, writing at the end of the 18th century, feared that, if unchecked, population growth would tend to outstrip growth in food production, leading to ever-increasing famine and poverty (see Malthusian catastrophe). Malthus is seen as the intellectual father of ideas of overpopulation and the limits to growth. Later, more sophisticated and realistic models were presented by Benjamin Gompertz and Verhulst.
In 1855, a Belgian scholar Achille Guillard defined demography as the natural and social history of human species or the mathematical knowledge of populations, of their general changes, and of their physical, civil, intellectual, and moral condition.
The period 1860–1910 can be characterized as a period of transition, in which demography emerged from statistics as a separate field of interest. This period included a panoply of international 'great demographers': Adolphe Quetelet (1796–1874), William Farr (1807–1883), Louis-Adolphe Bertillon (1821–1883) and his son Jacques (1851–1922), Joseph Körösi (1844–1906), Anders Nicolas Kaier (1838–1919), Richard Böckh (1824–1907), Émile Durkheim (1858–1917), Wilhelm Lexis (1837–1914), and Luigi Bodio (1840–1920), all of whom contributed to the development of demography and to the toolkit of methods and techniques of demographic analysis.
Methods
Demography is the statistical and mathematical study of the size, composition, and spatial distribution of human populations and how these features change over time. Data are obtained from a census of the population and from registries: records of events like births, deaths, migrations, marriages, divorces, diseases, and employment. Using these data requires an understanding of how they are calculated and of the questions they answer, which are captured in four concepts: population change, standardization of population numbers, the demographic bookkeeping equation, and population composition.
There are two types of data collection—direct and indirect—with several methods of each type.
Direct methods
Direct data comes from vital statistics registries that track all births and deaths as well as certain changes in legal status such as marriage, divorce, and migration (registration of place of residence). In developed countries with good registration systems (such as the United States and much of Europe), registry statistics are the best method for estimating the number of births and deaths.
A census is the other common direct method of collecting demographic data. A census is usually conducted by a national government and attempts to enumerate every person in a country. In contrast to vital statistics data, which are typically collected continuously and summarized on an annual basis, censuses typically occur only every 10 years or so, and thus are not usually the best source of data on births and deaths. Analyses are conducted after a census to estimate how much over or undercounting took place. These compare the sex ratios from the census data to those estimated from natural values and mortality data.
Censuses do more than just count people. They typically collect information about families or households in addition to individual characteristics such as age, sex, marital status, literacy/education, employment status, and occupation, and geographical location. They may also collect data on migration (or place of birth or of previous residence), language, religion, nationality (or ethnicity or race), and citizenship. In countries in which the vital registration system may be incomplete, the censuses are also used as a direct source of information about fertility and mortality; for example, the censuses of the People's Republic of China gather information on births and deaths that occurred in the 18 months immediately preceding the census.
Indirect methods
Indirect methods of collecting data are required in countries and periods where full data are not available, such as is the case in much of the developing world, and most of historical demography. One of these techniques in contemporary demography is the sister method, where survey researchers ask women how many of their sisters have died or had children and at what age. With these surveys, researchers can then indirectly estimate birth or death rates for the entire population. Other indirect methods in contemporary demography include asking people about siblings, parents, and children. Other indirect methods are necessary in historical demography.
There are a variety of demographic methods for modelling population processes. They include models of mortality (including the life table, Gompertz models, hazards models, Cox proportional hazards models, multiple decrement life tables, Brass relational logits), fertility (Hermes model, Coale-Trussell models, parity progression ratios), marriage (Singulate Mean at Marriage, Page model), disability (Sullivan's method, multistate life tables), population projections (Lee-Carter model, the Leslie Matrix), and population momentum (Keyfitz).
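As a toy illustration of the first of these models, here is a minimal cohort life-table sketch in Python (a simplified illustration under stated assumptions, not a full demographic life table): given single-year probabilities of dying q(x), it computes survivorship l(x) from a radix of 100,000 and a crude life expectancy, assuming deaths occur on average at mid-interval:

```python
# Minimal life-table sketch. Assumptions: single-year age intervals, deaths
# uniformly distributed within each interval, and q(x) ending with 1.0 so
# the cohort is fully extinguished.
def life_table(qx, radix=100_000):
    lx = [radix]                      # survivors at each exact age
    for q in qx:
        lx.append(lx[-1] * (1 - q))
    # person-years lived in each interval: average of survivors at its ends
    person_years = sum((lx[i] + lx[i + 1]) / 2 for i in range(len(qx)))
    e0 = person_years / radix         # life expectancy at birth
    return lx, e0
```

For example, with qx = [0.5, 1.0] the survivorship column is [100000, 50000, 0] and the life expectancy at birth works out to 1.0 years. Real life tables add further columns (d(x), L(x), T(x)) and more careful treatment of the first year of life and the open-ended final age group.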
The United Kingdom has a series of four national birth cohort studies, the first three spaced apart by 12 years: the 1946 National Survey of Health and Development, the 1958 National Child Development Study, the 1970 British Cohort Study, and the Millennium Cohort Study, begun much more recently in 2000. These have followed the lives of samples of people (typically beginning with around 17,000 in each study) for many years, and are still continuing. As the samples have been drawn in a nationally representative way, inferences can be drawn from these studies about the differences between four distinct generations of British people in terms of their health, education, attitudes, childbearing and employment patterns.
Indirect standardization is used when a population is small enough that the number of events (births, deaths, etc.) are also small. In this case, methods must be used to produce a standardized mortality rate (SMR) or standardized incidence rate (SIR).
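Indirect standardization can be sketched as follows (a hypothetical Python illustration; the function and variable names are assumptions): expected events are obtained by applying a standard population's age-specific rates to the study population's age-group sizes, and the SMR is the ratio of observed to expected events (often multiplied by 100):

```python
# Indirect standardization sketch: SMR = observed events / expected events,
# where expected events come from standard age-specific rates applied to
# the study population's age-group sizes.
def smr(observed, group_sizes, standard_rates):
    expected = sum(n * r for n, r in zip(group_sizes, standard_rates))
    return observed / expected

# e.g. 12 observed deaths; two age groups of 1,000 and 500 people with
# standard death rates of 0.005 and 0.010 per person-year:
print(smr(12, [1000, 500], [0.005, 0.010]))  # prints 1.2 (expected = 10)
```

An SMR above 1 (or 100, if scaled) indicates more events than the standard rates would predict for a population of that size and age structure.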
Population change
Population change is analyzed by measuring the change from one population size to another. Global population continues to rise, which makes population change an essential component of demographics. It is calculated by subtracting the population size in an earlier census from the current population size. The best way of measuring population change is the intercensal percentage change: the absolute change in population between the censuses divided by the population size in the earlier census, multiplied by 100 to give a percentage. With this statistic, the population growth of two or more nations that differ in size can be accurately measured and compared.
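The calculation described above can be sketched in Python (the function name is illustrative):

```python
# Intercensal percentage change: absolute change between two censuses
# divided by the earlier census count, multiplied by 100.
def intercensal_pct_change(earlier, later):
    return (later - earlier) / earlier * 100

print(intercensal_pct_change(200_000, 250_000))  # prints 25.0
```

Because the change is expressed relative to the earlier census, populations of very different absolute sizes can be compared directly.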
Standardization of population numbers
For there to be a meaningful comparison, numbers must be adjusted for the size of the population under study. For example, the fertility rate is calculated as the ratio of the number of births to women of childbearing age to the total number of women in that age range. Without such adjustments, we could not tell whether a nation with a higher rate of births or deaths simply has more women of childbearing age, or genuinely has more births per eligible woman.
Within the category of standardization, there are two major approaches: direct standardization and indirect standardization.
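A minimal sketch of direct standardization, with made-up age groups, rates, and standard population: each country's age-specific death rates are applied to the same standard age structure, so the resulting summary rates are comparable despite different age compositions.

```python
# Direct standardization: apply each population's age-specific death
# rates to a shared standard age distribution, so the summary rates are
# comparable regardless of each population's own age structure.
# All age groups, rates, and the standard population are illustrative.

STANDARD_POP = {"0-14": 25_000, "15-64": 65_000, "65+": 10_000}

def directly_standardized_rate(asdr):
    """asdr: age-specific death rates, deaths per 1,000 person-years."""
    total = sum(STANDARD_POP.values())
    expected_deaths = sum(asdr[g] / 1000 * STANDARD_POP[g] for g in STANDARD_POP)
    return expected_deaths / total * 1000  # deaths per 1,000 standard population

country_a = {"0-14": 1.0, "15-64": 3.0, "65+": 50.0}
country_b = {"0-14": 2.0, "15-64": 4.0, "65+": 45.0}

print(round(directly_standardized_rate(country_a), 2))  # 7.2
print(round(directly_standardized_rate(country_b), 2))  # 7.6
```

Indirect standardization works the other way around, applying a standard set of rates to the study population's age structure to obtain expected events for an SMR or SIR.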
Common rates and ratios
The crude birth rate, the annual number of live births per 1,000 people.
The general fertility rate, the annual number of live births per 1,000 women of childbearing age (often taken to be from 15 to 49 years old, but sometimes from 15 to 44).
The age-specific fertility rates, the annual number of live births per 1,000 women in particular age groups (usually age 15–19, 20–24 etc.)
The crude death rate, the annual number of deaths per 1,000 people.
The infant mortality rate, the annual number of deaths of children less than 1 year old per 1,000 live births.
The expectation of life (or life expectancy), the number of years that an individual at a given age could expect to live at present mortality levels.
The total fertility rate, the number of live births per woman completing her reproductive life, if her childbearing at each age reflected current age-specific fertility rates.
The replacement level fertility, the average number of children women must have in order to replace the population for the next generation. For example, the replacement level fertility in the US is 2.11.
The gross reproduction rate, the number of daughters who would be born to a woman completing her reproductive life at current age-specific fertility rates.
The net reproduction ratio is the expected number of daughters, per newborn prospective mother, who may or may not survive to and through the ages of childbearing.
A stable population, one that has had constant crude birth and death rates for such a long period of time that the percentage of people in every age class remains constant, or equivalently, the population pyramid has an unchanging structure.
A stationary population, one that is both stable and unchanging in size (the difference between crude birth rate and crude death rate is zero).
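As an illustration of one of these measures, the total fertility rate can be computed from age-specific fertility rates (the rates below are invented); with 5-year age groups, each rate is weighted by the 5 years a woman spends in that group:

```python
# Total fertility rate from (hypothetical) age-specific fertility rates
# in 5-year age groups, expressed as births per 1,000 women per year.
asfr = {
    "15-19": 40, "20-24": 90, "25-29": 100,
    "30-34": 70, "35-39": 35, "40-44": 10, "45-49": 2,
}

# A woman passing through her reproductive life spends 5 years in each
# group, so each rate is weighted by the width of its age interval.
tfr = 5 * sum(asfr.values()) / 1000
print(tfr)  # 1.735 births per woman
```

A value below roughly 2.1 (the replacement level cited above for the US) would indicate sub-replacement fertility in this hypothetical population.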
Measures of centralisation are concerned with the extent to which an area's population is concentrated in its urban centres.
A stable population does not necessarily remain fixed in size. It can be expanding or shrinking.
The crude death rate as defined above and applied to a whole population can give a misleading impression. For example, the number of deaths per 1,000 people can be higher in developed nations than in less-developed countries, despite standards of health being better in developed countries. This is because developed countries have proportionally more older people, who are more likely to die in a given year, so that the overall mortality rate can be higher even if the mortality rate at any given age is lower. A more complete picture of mortality is given by a life table, which summarizes mortality separately at each age. A life table is necessary to give a good estimate of life expectancy.
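A toy life table illustrates this (the death probabilities qx below are invented): from age-specific probabilities of dying one can compute survivors at each exact age and, from the person-years lived, a life expectancy at birth.

```python
# A toy life table with five one-year age intervals: qx[i] is the
# (invented) probability of dying between exact ages i and i+1;
# everyone dies by the end of the last interval.
qx = [0.01, 0.002, 0.005, 0.02, 1.0]

lx = [100_000.0]                  # survivors at each exact age (radix 100,000)
for q in qx:
    lx.append(lx[-1] * (1 - q))

# Person-years lived in each interval, assuming deaths occur mid-year.
Lx = [(a + b) / 2 for a, b in zip(lx, lx[1:])]

e0 = sum(Lx) / lx[0]              # life expectancy at birth, in years
print(round(e0, 2))  # 4.42
```

Real life tables use many more age groups, but the mechanics are the same: mortality is summarized separately at each age rather than as a single crude rate.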
Basic equation regarding development of a population
Suppose that a country (or other entity) contains Population(t) persons at time t.
What is the size of the population at time t + 1?

Population(t + 1) = Population(t) + NaturalIncrease(t, t + 1) + NetMigration(t, t + 1)

Natural increase from time t to t + 1:

NaturalIncrease(t, t + 1) = Births(t, t + 1) − Deaths(t, t + 1)

Net migration from time t to t + 1:

NetMigration(t, t + 1) = Immigration(t, t + 1) − Emigration(t, t + 1)
These basic equations can also be applied to subpopulations. For example, the population size of ethnic groups or nationalities within a given society or country is subject to the same sources of change. When dealing with ethnic groups, however, "net migration" might have to be subdivided into physical migration and ethnic reidentification (assimilation). Individuals who change their ethnic self-labels or whose ethnic classification in government statistics changes over time may be thought of as migrating or moving from one population subcategory to another.
More generally, while the basic demographic equation holds true by definition, in practice the recording and counting of events (births, deaths, immigration, emigration) and the enumeration of the total population size are subject to error. So allowance needs to be made for error in the underlying statistics when any accounting of population size or change is made.
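The basic demographic equation can be sketched with illustrative counts: the population at t + 1 equals the population at t, plus natural increase (births minus deaths), plus net migration (immigration minus emigration).

```python
def project(pop, births, deaths, immigration, emigration):
    """One step of the demographic balancing equation."""
    natural_increase = births - deaths
    net_migration = immigration - emigration
    return pop + natural_increase + net_migration

# Illustrative one-year step for a population of one million.
print(project(1_000_000, 14_000, 9_000, 6_000, 4_000))  # 1007000
```

In practice each input count carries measurement error, so a real accounting of population change must carry error terms alongside these components.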
The figure in this section shows the latest (2004) UN (United Nations) WHO projections of world population out to the year 2150 (red = high, orange = medium, green = low). The UN "medium" projection shows world population reaching an approximate equilibrium at 9 billion by 2075. Working independently, demographers at the International Institute for Applied Systems Analysis in Austria expect world population to peak at 9 billion by 2070. Throughout the 21st century, the average age of the population is likely to continue to rise.
Science of population
Populations can change through three processes: fertility, mortality, and migration. Fertility involves the number of children that women have and is to be contrasted with fecundity (a woman's childbearing potential). Mortality is the study of the causes, consequences, and measurement of processes affecting death to members of the population. Demographers most commonly study mortality using the life table, a statistical device that provides information about the mortality conditions (most notably the life expectancy) in the population.
Migration refers to the movement of persons from a locality of origin to a destination place across some predefined, political boundary. Migration researchers do not designate movements 'migrations' unless they are somewhat permanent. Thus, demographers do not consider tourists and travellers to be migrating. While demographers who study migration typically do so through census data on place of residence, indirect sources of data including tax forms and labour force surveys are also important.
Demography is today widely taught in many universities across the world, attracting students with initial training in social sciences, statistics or health studies. Being at the crossroads of several disciplines such as sociology, economics, epidemiology, geography, anthropology and history, demography offers tools to approach a large range of population issues by combining a more technical quantitative approach that represents the core of the discipline with many other methods borrowed from social or other sciences. Demographic research is conducted in universities, in research institutes, as well as in statistical departments and in several international agencies. Population institutions are part of the CICRED (International Committee for Coordination of Demographic Research) network while most individual scientists engaged in demographic research are members of the International Union for the Scientific Study of Population, or a national association such as the Population Association of America in the United States, or affiliates of the Federation of Canadian Demographers in Canada.
Population composition
Population composition is the description of population defined by characteristics such as age, race, sex or marital status. These descriptions can be necessary for understanding the social dynamics from historical and comparative research. This data is often compared using a population pyramid.
Population composition is also a very important part of historical research. Information ranging back hundreds of years is not always worthwhile, because the numbers of people for which data are available may not provide the information that is important (such as population size). Lack of information on the original data-collection procedures may prevent accurate evaluation of data quality.
Demographic analysis in institutions and organizations
Labor market
The demographic analysis of labor markets can be used to show slow population growth, population aging, and the increased importance of immigration. The U.S. Census Bureau projects that in the next 100 years, the United States will face some dramatic demographic changes. The population is expected to grow more slowly and age more rapidly than ever before and the nation will become a nation of immigrants. This influx is projected to rise over the next century as new immigrants and their children will account for over half the U.S. population. These demographic shifts could ignite major adjustments in the economy, more specifically, in labor markets.
Turnover in internal labor markets
People decide to exit organizations for many reasons, such as better jobs, dissatisfaction, and family concerns. The causes of turnover can be split into two factors: one linked to the culture of the organization, and the other covering all other considerations. People who do not fully accept a culture might leave voluntarily, and some individuals might leave because they fail to fit in and fail to change within a particular organization.
Population ecology of organizations
A basic definition of population ecology is the study of the distribution and abundance of organisms. As it relates to organizations and demography, organizations face various liabilities to their continued survival. Hospitals, like all other large and complex organizations, are affected by the environment in which they operate. For example, one study examined the closure of acute care hospitals in Florida over a particular period, considering the effects of size, age, and niche density on those hospitals. Population ecology theory holds that organizational outcomes are mostly determined by environmental factors; among its several factors, four apply to the hospital-closure example: size, age, the density of the niches in which organizations operate, and the density of the niches in which organizations are established.
Business organizations
Demographers may be called upon to assist business organizations with problems such as determining the best prospective location for a branch store or service outlet, predicting the demand for a new product, and analyzing the dynamics of a company's workforce. Typical examples include choosing a new location for a branch of a bank, selecting the area in which to open a new supermarket, advising a bank loan officer on whether a particular location would be a profitable site for a car wash, and determining which shopping area in a metropolitan region would be best to buy and redevelop.
Standardization is a useful demographic technique used in the analysis of a business. It can be used as an interpretive and analytic tool for the comparison of different markets.
Nonprofit organizations
These organizations are interested in the number and characteristics of their clients so that they can maximize the uptake of their products and services, the reach of their influence, and the effectiveness of their beneficial works.
See also
Biodemography
Biodemography of human longevity
Demographics of the world
Demographic economics
Gompertz–Makeham law of mortality
Linguistic demography
List of demographics articles
Medieval demography
National Security Study Memorandum 200 of 1974
NRS social grade
Political demography
Population biology
Population dynamics
Population geography
Population reconstruction
Population statistics
Religious demography
Replacement migration
Reproductive health
Social surveys
Current Population Survey (CPS)
Demographic and Health Surveys (DHS)
European Social Survey (ESS)
General Social Survey (GSS)
German General Social Survey (ALLBUS)
Multiple Indicator Cluster Surveys (MICS)
National Longitudinal Survey (NLS)
Panel Study of Income Dynamics (PSID)
Performance Monitoring and Accountability 2020 (PMA2020)
Socio-Economic Panel (SOEP, German)
World Values Survey (WVS)
Organizations
Global Social Change Research Project (United States)
Institut national d'études démographiques (INED) (France)
Max Planck Institute for Demographic Research (Germany)
Office of Population Research (Princeton University) (United States)
Population Council (United States)
Population Studies Center at the University of Michigan (United States)
Vienna Institute of Demography (VID) (Austria)
Wittgenstein Centre for Demography and Global Human Capital (Austria)
Scientific journals
Brazilian Journal of Population Studies
Cahiers québécois de démographie
Demography
Population and Development Review
References
Further reading
Josef Ehmer, Jens Ehrhardt, Martin Kohli (Eds.): Fertility in the History of the 20th Century: Trends, Theories, Policies, Discourses. Historical Social Research 36 (2), 2011.
Glad, John. 2008. Future Human Evolution: Eugenics in the Twenty-First Century. Hermitage Publishers,
Gavrilova N.S., Gavrilov L.A. 2011. Ageing and Longevity: Mortality Laws and Mortality Forecasts for Ageing Populations [In Czech: Stárnutí a dlouhověkost: Zákony a prognózy úmrtnosti pro stárnoucí populace]. Demografie, 53(2): 109–128.
Preston, Samuel, Patrick Heuveline, and Michel Guillot. 2000. Demography: Measuring and Modeling Population Processes. Blackwell Publishing.
Gavrilov L.A., Gavrilova N.S. 2010. Demographic Consequences of Defeating Aging. Rejuvenation Research, 13(2-3): 329–334.
Paul R. Ehrlich (1968), The Population Bomb Controversial Neo-Malthusianist pamphlet
Leonid A. Gavrilov & Natalia S. Gavrilova (1991), The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher,
Andrey Korotayev & Daria Khaltourina (2006). Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS
Uhlenberg P. (Editor), (2009) International Handbook of the Demography of Aging, New York: Springer-Verlag, pp. 113–131.
Paul Demeny and Geoffrey McNicoll (Eds.). 2003. The Encyclopedia of Population. New York, Macmillan Reference USA, vol.1, 32-37
Phillip Longman (2004), The Empty Cradle: how falling birth rates threaten global prosperity and what to do about it
Sven Kunisch, Stephan A. Boehm, Michael Boppel (eds) (2011). From Grey to Silver: Managing the Demographic Change Successfully, Springer-Verlag, Berlin Heidelberg,
Joe McFalls (2007), Population: A Lively Introduction, Population Reference Bureau
Ben J. Wattenberg (2004), How the New Demography of Depopulation Will Shape Our Future. Chicago: R. Dee,
Perry, Marc J. & Mackun, Paul J. Population Change & Distribution: Census 2000 Brief. (2001)
Schutt, Russell K. 2006. "Investigating the Social World: The Process and Practice of Research". SAGE Publications.
Siegal, Jacob S. (2002), Applied Demography: Applications to Business, Government, Law, and Public Policy. San Diego: Academic Press.
External links
Quick demography data lookup (archived 4 March 2016)
Historicalstatistics.org Links to historical demographic and economic statistics
United Nations Population Division: Homepage
World Population Prospects, the 2012 Revision, Population estimates and projections for 230 countries and areas (archived 6 May 2011)
World Urbanization Prospects, the 2011 Revision, Estimates and projections of urban and rural populations and urban agglomerations
Probabilistic Population Projections, the 2nd Revision, Probabilistic Population Projections, based on the 2010 Revision of the World Population Prospects (archived 13 December 2012)
Java Simulation of Population Dynamics.
Basic Guide to the World: Population changes and trends, 1960–2003
Brief review of world basic demographic trends
Family and Fertility Surveys (FFS)
Actuarial science
Environmental social science
Interdisciplinary subfields of sociology
Human geography
Market segmentation
Human populations
https://en.wikipedia.org/wiki/Inverse%20transform%20sampling

Inverse transform sampling (also known as inversion sampling, the inverse probability integral transform, the inverse transformation method, or the Smirnov transform) is a basic method for pseudo-random number sampling, i.e., for generating sample numbers at random from any probability distribution given its cumulative distribution function.
Inverse transformation sampling takes uniform samples of a number u between 0 and 1, interpreted as a probability, and then returns the smallest number x such that F(x) ≥ u, where F is the cumulative distribution function of the random variable. For example, imagine that F is the cumulative distribution function of the standard normal distribution with mean zero and standard deviation one. The table below shows samples taken from the uniform distribution and their representation on the standard normal distribution.
We are randomly choosing a proportion of the area under the curve and returning the number in the domain such that exactly this proportion of the area occurs to the left of that number. Intuitively, we are unlikely to choose a number in the far end of tails because there is very little area in them which would require choosing a number very close to zero or one.
Computationally, this method involves computing the quantile function of the distribution — in other words, computing the cumulative distribution function (CDF) of the distribution (which maps a number in the domain to a probability between 0 and 1) and then inverting that function. This is the source of the term "inverse" or "inversion" in most of the names for this method. Note that for a discrete distribution, computing the CDF is not in general too difficult: we simply add up the individual probabilities for the various points of the distribution. For a continuous distribution, however, we need to integrate the probability density function (PDF) of the distribution, which is impossible to do analytically for most distributions (including the normal distribution). As a result, this method may be computationally inefficient for many distributions and other methods are preferred; however, it is a useful method for building more generally applicable samplers such as those based on rejection sampling.
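For the discrete case, the summed-probabilities approach described above can be sketched as follows (the three-outcome distribution is made up):

```python
import random

def sample_discrete(values, probs, u):
    """Inverse transform for a discrete distribution: accumulate the
    probabilities (the CDF) and return the first value whose cumulative
    probability reaches u."""
    cum = 0.0
    for v, p in zip(values, probs):
        cum += p
        if u <= cum:
            return v
    return values[-1]  # guard against floating-point round-off

random.seed(0)
counts = {"a": 0, "b": 0, "c": 0}
for _ in range(10_000):
    counts[sample_discrete(["a", "b", "c"], [0.2, 0.5, 0.3], random.random())] += 1
print(counts)  # frequencies roughly in the ratio 0.2 : 0.5 : 0.3
```

Each uniform draw selects the outcome whose slice of the cumulative probability it falls into, so the empirical frequencies converge to the target probabilities.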
For the normal distribution, the lack of an analytical expression for the corresponding quantile function means that other methods (e.g. the Box–Muller transform) may be preferred computationally. It is often the case that, even for simple distributions, the inverse transform sampling method can be improved on: see, for example, the ziggurat algorithm and rejection sampling. On the other hand, it is possible to approximate the quantile function of the normal distribution extremely accurately using moderate-degree polynomials, and in fact the method of doing this is fast enough that inversion sampling is now the default method for sampling from a normal distribution in the statistical package R.
Formal statement
For any random variable X, the random variable F_X⁻¹(U) has the same distribution as X, where F_X⁻¹ is the generalized inverse of the cumulative distribution function F_X of X and U is uniform on [0, 1].
For continuous random variables, the inverse probability integral transform is indeed the inverse of the probability integral transform, which states that for a continuous random variable X with cumulative distribution function F_X, the random variable U = F_X(X) is uniform on [0, 1].
Intuition
From U ~ Unif[0, 1], we want to generate X with CDF F_X. We assume F_X to be a continuous, strictly increasing function, which provides good intuition.

We want to see if we can find some strictly monotone transformation T : [0, 1] → R such that T(U) has the same distribution as X. We will have

Pr(X ≤ x) = Pr(T(U) ≤ x) = Pr(U ≤ T⁻¹(x)) = T⁻¹(x), for x ∈ R,

where the last step used that Pr(U ≤ y) = y when U is uniform on [0, 1].

So we got F_X to be the inverse function of T, or, equivalently, T(u) = F_X⁻¹(u) for u ∈ [0, 1].

Therefore, we can generate X from F_X⁻¹(U).
The method
The problem that the inverse transform sampling method solves is as follows:
Let X be a random variable whose distribution can be described by the cumulative distribution function F_X.
We want to generate values of X which are distributed according to this distribution.
The inverse transform sampling method works as follows:
Generate a random number u from the standard uniform distribution in the interval [0, 1], i.e. from U ~ Unif[0, 1].
Find the generalized inverse of the desired CDF, i.e. F_X⁻¹.
Compute X' = F_X⁻¹(u). The computed random variable X' has distribution F_X and thereby the same law as X.
Expressed differently, given a cumulative distribution function F_X and a uniform variable u ∈ [0, 1], the random variable X = F_X⁻¹(u) has the distribution F_X.
In the continuous case, a treatment of such inverse functions as objects satisfying differential equations can be given. Some such differential equations admit explicit power series solutions, despite their non-linearity.
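When the generalized inverse has no closed form, step 2 can be carried out numerically; below is a minimal sketch using bisection on a continuous, strictly increasing CDF (the logistic CDF here is just an illustration):

```python
import math

def invert_cdf(cdf, u, lo=-50.0, hi=50.0, tol=1e-12):
    """Numerically invert a continuous, strictly increasing CDF by
    bisection: find x with cdf(x) close to u inside [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if cdf(mid) < u:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Illustrative CDF: the logistic distribution, F(x) = 1 / (1 + e^(-x)),
# whose true inverse is F⁻¹(u) = ln(u / (1 - u)).
F = lambda x: 1 / (1 + math.exp(-x))

print(round(invert_cdf(F, 0.75), 6))  # ln(3) ≈ 1.098612
```

Bisection is slow but robust; production samplers typically use polynomial or Hermite approximations of the inverse, as in the UNU.RAN-based implementations mentioned later.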
Examples
As an example, suppose we have a random variable and a cumulative distribution function
In order to perform an inversion we want to solve for
From here we would perform steps one, two and three.
As another example, we use the exponential distribution with CDF F(x) = 1 − e^(−λx) for x ≥ 0 (and 0 otherwise). By solving y = F(x) we obtain the inverse function

x = F⁻¹(y) = −(1/λ) ln(1 − y).

It means that if we draw some y₀ from U ~ Unif(0, 1) and compute x₀ = F⁻¹(y₀) = −(1/λ) ln(1 − y₀), then this x₀ has exponential distribution.
The idea is illustrated in the following graph:
Note that the distribution does not change if we start with 1 − y instead of y, since both are uniform on [0, 1]. For computational purposes, it therefore suffices to generate random numbers y in [0, 1] and then simply calculate x = −(1/λ) ln(y).
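A minimal sketch of this exponential sampler (λ = 2 chosen arbitrarily); the sample mean should approach 1/λ:

```python
import math
import random

def sample_exponential(lam: float, rng: random.Random) -> float:
    # F(x) = 1 - exp(-lam * x), so F⁻¹(y) = -ln(1 - y) / lam; since U and
    # 1 - U are both uniform on (0, 1), -ln(U) / lam works equally well.
    return -math.log(1.0 - rng.random()) / lam

rng = random.Random(42)
lam = 2.0
samples = [sample_exponential(lam, rng) for _ in range(100_000)]
print(round(sum(samples) / len(samples), 3))  # close to 1/lam = 0.5
```

Using `1.0 - rng.random()` avoids taking the logarithm of zero, because `random()` draws from the half-open interval [0, 1).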
Proof of correctness
Let F be a cumulative distribution function, and let F⁻¹ be its generalized inverse function (using the infimum because CDFs are weakly monotonic and right-continuous):

F⁻¹(u) = inf { x | F(x) ≥ u },  0 < u < 1.
Claim: If U is a uniform random variable on (0, 1), then F⁻¹(U) has F as its CDF.
Proof:
Truncated distribution
Inverse transform sampling can be simply extended to cases of truncated distributions on the interval (a, b] without the cost of rejection sampling: the same algorithm can be followed, but instead of generating a random number u uniformly distributed between 0 and 1, generate u uniformly distributed between F(a) and F(b), and then again take F⁻¹(u).
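A sketch of this truncation trick for an exponential distribution (the interval [0.5, 2.0] and the rate are chosen arbitrarily):

```python
import math
import random

def sample_truncated_exp(lam, a, b, rng):
    """Inverse transform restricted to [a, b]: draw u uniformly between
    F(a) and F(b) instead of between 0 and 1, then invert as usual."""
    F = lambda x: 1.0 - math.exp(-lam * x)
    u = rng.uniform(F(a), F(b))
    return -math.log(1.0 - u) / lam

rng = random.Random(1)
xs = [sample_truncated_exp(1.0, 0.5, 2.0, rng) for _ in range(10_000)]
print(round(min(xs), 3), round(max(xs), 3))  # samples stay within [0.5, 2.0]
```

No draws are discarded, which is the advantage over rejection sampling for truncated targets.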
Reduction of the number of inversions
In order to obtain a large number of samples, one needs to perform the same number of inversions of the distribution.
One possible way to reduce the number of inversions while obtaining a large number of samples is the application of the so-called Stochastic Collocation Monte Carlo sampler (SCMC sampler) within a polynomial chaos expansion framework. This allows us to generate any number of Monte Carlo samples with only a few inversions of the original distribution with independent samples of a variable for which the inversions are analytically available, for example the standard normal variable.
Software implementations
There are software implementations available for applying the inverse sampling method by using numerical approximations of the inverse in the case that it is not available in closed form. For example, an approximation of the inverse can be computed if the user provides some information about the distributions such as the PDF or the CDF.
C library UNU.RAN
R library Runuran
Python subpackage sampling in scipy.stats
See also
Probability integral transform
Copula, defined by means of probability integral transform.
Quantile function, for the explicit construction of inverse CDFs.
Inverse distribution function for a precise mathematical definition for distributions with discrete components.
Rejection sampling is another common technique to generate random variates that does not rely on inversion of the CDF.
References
Monte Carlo methods
Non-uniform random numbers
https://en.wikipedia.org/wiki/Coordinate%20covalent%20bond

In coordination chemistry, a coordinate covalent bond, also known as a dative bond, dipolar bond, or coordinate bond, is a kind of two-center, two-electron covalent bond in which the two electrons derive from the same atom. The bonding of metal ions to ligands involves this kind of interaction. This type of interaction is central to Lewis acid–base theory.
Coordinate bonds are commonly found in coordination compounds.
Examples
Coordinate covalent bonding is ubiquitous. In all metal aquo-complexes [M(H2O)n]m+, the bonding between water and the metal cation is described as a coordinate covalent bond. Metal-ligand interactions in most organometallic compounds and most coordination compounds are described similarly.
The term dipolar bond is used in organic chemistry for compounds such as amine oxides for which the electronic structure can be described in terms of the basic amine donating two electrons to an oxygen atom.
R3N → O
The arrow → indicates that both electrons in the bond originate from the amine moiety. In a standard covalent bond each atom contributes one electron. Therefore, an alternative description is that the amine gives away one electron to the oxygen atom, which is then used, with the remaining unpaired electron on the nitrogen atom, to form a standard covalent bond. The process of transferring the electron from nitrogen to oxygen creates formal charges, so the electronic structure may also be depicted as
This electronic structure has an electric dipole, hence the name dipolar bond. In reality, the atoms carry partial charges; the more electronegative atom of the two involved in the bond will usually carry a partial negative charge. One exception to this is carbon monoxide. In this case, the carbon atom carries the partial negative charge although it is less electronegative than oxygen.
An example of a dative covalent bond is provided by the interaction between a molecule of ammonia, a Lewis base with a lone pair of electrons on the nitrogen atom, and boron trifluoride, a Lewis acid by virtue of the boron atom having an incomplete octet of electrons. In forming the adduct, the boron atom attains an octet configuration.
The electronic structure of a coordination complex can be described in terms of the set of ligands each donating a pair of electrons to a metal centre. For example, in hexamminecobalt(III) chloride, each ammonia ligand donates its lone pair of electrons to the cobalt(III) ion. In this case, the bonds formed are described as coordinate bonds. In the Covalent Bond Classification (CBC) method, ligands that form coordinate covalent bonds with a central atom are classed as L-type, while those that form normal covalent bonds are classed as X-type.
Comparison with other electron-sharing modes
In all cases, the bond, whether dative or "normal" electron-sharing, is a covalent bond. In common usage, the prefix dipolar, dative or coordinate merely serves to indicate the origin of the electrons used in creating the bond. For example, F3B ← O(C2H5)2 ("boron trifluoride (diethyl) etherate") is prepared from BF3 and :O(C2H5)2, as opposed to the radical species [•BF3]– and [•O(C2H5)2]+. The dative bond is also a convenience in terms of notation, as formal charges are avoided: we can write D: + []A ⇌ D → A rather than D+–A– (here : and [] represent the lone-pair and empty orbital on the electron-pair donor D and acceptor A, respectively). The notation is sometimes used even when the Lewis acid-base reaction involved is only notional (e.g., the sulfoxide R2S → O is rarely if ever made by reacting the sulfide R2S with atomic oxygen O). Thus, most chemists do not make any claim with respect to the properties of the bond when choosing one notation over the other (formal charges vs. arrow bond).
It is generally true, however, that bonds depicted this way are polar covalent, sometimes strongly so, and some authors claim that there are genuine differences in the properties of a dative bond and electron-sharing bond and suggest that showing a dative bond is more appropriate in particular situations. As far back as 1989, Haaland characterized dative bonds as bonds that are (i) weak and long; (ii) with only a small degree of charge-transfer taking place during bond formation; and (iii) whose preferred mode of dissociation in the gas phase (or low ε inert solvent) is heterolytic rather than homolytic. The ammonia-borane adduct (H3N → BH3) is given as a classic example: the bond is weak, with a dissociation energy of 31 kcal/mol (cf. 90 kcal/mol for ethane), and long, at 166 pm (cf. 153 pm for ethane), and the molecule possesses a dipole moment of 5.2 D that implies a transfer of only 0.2 e– from nitrogen to boron. The heterolytic dissociation of H3N → BH3 is estimated to require 27 kcal/mol, confirming that heterolysis into ammonia and borane is more favorable than homolysis into radical cation and radical anion. However, aside from clear-cut examples, there is considerable dispute as to when a particular compound qualifies and, thus, the overall prevalence of dative bonding (with respect to an author's preferred definition). Computational chemists have suggested quantitative criteria to distinguish between the two "types" of bonding.
Some non-obvious examples where dative bonding is claimed to be important include carbon suboxide (O≡C → C0 ← C≡O), tetraaminoallenes (described using dative bond language as "carbodicarbenes"; (R2N)2C → C0 ← C(NR2)2), the Ramirez carbodiphosphorane (Ph3P → C0 ← PPh3), and bis(triphenylphosphine)iminium cation (Ph3P → N+ ← PPh3), all of which exhibit considerably bent equilibrium geometries, though with a shallow barrier to bending. Simple application of the normal rules for drawing Lewis structures by maximizing bonding (using electron-sharing bonds) and minimizing formal charges would predict heterocumulene structures, and therefore linear geometries, for each of these compounds. Thus, these molecules are claimed to be better modeled as coordination complexes of :C: (carbon(0) or carbone) or :N:+ (mononitrogen cation) with CO, PPh3, or N-heterocycliccarbenes as ligands, the lone-pairs on the central atom accounting for the bent geometry. However, the usefulness of this view is disputed.
References
Chemical bonding
Acid–base chemistry
Coordination chemistry
https://en.wikipedia.org/wiki/Forensic%20science

Forensic science, also known as criminalistics, is the application of scientific principles and methods to support legal decision-making in matters of criminal and civil law.
During criminal investigation in particular, it is governed by the legal standards of admissible evidence and criminal procedure. It is a broad field utilizing numerous practices such as the analysis of DNA, fingerprints, bloodstain patterns, firearms, ballistics, toxicology, microscopy and fire debris analysis.
Forensic scientists collect, preserve, and analyze evidence during the course of an investigation. While some forensic scientists travel to the scene of the crime to collect the evidence themselves, others occupy a laboratory role, performing analysis on objects brought to them by other individuals. Others are involved in analysis of financial, banking, or other numerical data for use in financial crime investigation, and can be employed as consultants from private firms, academia, or as government employees.
In addition to their laboratory role, forensic scientists testify as expert witnesses in both criminal and civil cases and can work for either the prosecution or the defense. While any field could technically be forensic, certain sections have developed over time to encompass the majority of forensically related cases.
Etymology
The term forensic stems from the Latin word, forēnsis (3rd declension, adjective), meaning "of a forum, place of assembly". The history of the term originates in Roman times, when a criminal charge meant presenting the case before a group of public individuals in the forum. Both the person accused of the crime and the accuser would give speeches based on their sides of the story. The case would be decided in favor of the individual with the best argument and delivery. This origin is the source of the two modern usages of the word forensic—as a form of legal evidence; and as a category of public presentation.
In modern use, the term forensics is often used in place of "forensic science."
The word "science" is derived from the Latin word for 'knowledge' and is today closely tied to the scientific method, a systematic way of acquiring knowledge. Taken together, forensic science means the use of scientific methods and processes in solving crimes.
History
Origins of forensic science and early methods
The ancient world lacked standardized forensic practices, which enabled criminals to escape punishment. Criminal investigations and trials relied heavily on forced confessions and witness testimony. However, ancient sources do contain several accounts of techniques that foreshadow concepts in forensic science developed centuries later.
The first written account of using medicine and entomology to solve criminal cases is attributed to the book of Xi Yuan Lu (translated as Washing Away of Wrongs), written in China in 1248 by Song Ci (1186–1249), a director of justice, jail and supervision during the Song dynasty.
Song Ci introduced regulations concerning autopsy reports to court, described how to protect evidence during the examination process, and explained why forensic workers must demonstrate impartiality to the public. He devised methods for making antiseptic, for promoting the reappearance of hidden injuries on dead bodies and bones (using sunlight and vinegar under a red-oil umbrella), and for calculating the time of death (allowing for weather and insect activity), and he described how to wash and examine a dead body to ascertain the reason for death. The book described methods for distinguishing between suicide and faked suicide, and insisted that all wounds and dead bodies should be examined rather than avoided. It became the first form of literature to help determine the cause of death.
In one of Song Ci's accounts (Washing Away of Wrongs), the case of a person murdered with a sickle was solved by an investigator who instructed each suspect to bring his sickle to one location. (He realized it was a sickle by testing various blades on an animal carcass and comparing the wounds.) Flies, attracted by the smell of blood, eventually gathered on a single sickle. In light of this, the owner of that sickle confessed to the murder. The book also described how to distinguish between a drowning (water in the lungs) and strangulation (broken neck cartilage), and described evidence from examining corpses to determine if a death was caused by murder, suicide or accident.
Methods from around the world involved saliva and examination of the mouth and tongue to determine innocence or guilt, as precursors to the polygraph test. In ancient India, some suspects were made to fill their mouths with dried rice and spit it back out. Similarly, in ancient China, those accused of a crime would have rice powder placed in their mouths. In ancient Middle Eastern cultures, the accused were made to lick hot metal rods briefly. It is thought that these tests had some validity, since a guilty person would produce less saliva and thus have a drier mouth; the accused would be considered guilty if rice was sticking to their mouths in abundance or if their tongues were severely burned for lack of shielding saliva.
Education and training
At first glance, forensic intelligence may appear to be a nascent facet of forensic science enabled by advances in information technologies such as computers, databases, and data-flow management software. A closer examination, however, reveals that forensic intelligence represents a genuine and growing inclination among forensic practitioners to participate actively in investigative and policing strategies. This perspective advocates a shift from the prevailing conception of forensic science as a conglomerate of disciplines merely aiding the criminal justice system toward a view of forensic science as a discipline studying the informative potential of traces, the remnants of criminal activity. Embracing this shift poses a significant challenge for education, since it requires learners to adopt the concepts and methodologies of forensic intelligence.
Recent calls for the integration of forensic scientists into the criminal justice system, as well as into policing and intelligence missions, underscore the need for educational and training initiatives in forensic intelligence. It has been argued that a gap exists between the perceived and actual comprehension of forensic intelligence among law enforcement and forensic science managers, and that this asymmetry can be rectified only through educational interventions.
The primary challenge in forensic intelligence education and training is the formulation of programs that heighten awareness, particularly among managers, to reduce the risk of suboptimal decisions in information processing. Two recent European courses have been cited as exemplars of such educational endeavors, offering lessons learned and proposed future directions.
The heightened focus on forensic intelligence has the potential to rejuvenate a proactive approach to forensic science, enhance quantifiable efficiency, and foster greater involvement in investigative and managerial decision-making. This poses a novel educational challenge for forensic science university programs worldwide: a shift in emphasis from fragmented criminal trace analysis to a more comprehensive security problem-solving approach.
Development of forensic science
In 16th-century Europe, medical practitioners in army and university settings began to gather information on the cause and manner of death. Ambroise Paré, a French army surgeon, systematically studied the effects of violent death on internal organs. Two Italian surgeons, Fortunato Fidelis and Paolo Zacchia, laid the foundation of modern pathology by studying changes that occurred in the structure of the body as the result of disease. In the late 18th century, writings on these topics began to appear. These included A Treatise on Forensic Medicine and Public Health by the French physician François-Emmanuel Fodéré and The Complete System of Police Medicine by the German medical expert Johann Peter Frank.
As the rational values of the Enlightenment era increasingly permeated society in the 18th century, criminal investigation became a more evidence-based, rational procedure − the use of torture to force confessions was curtailed, and belief in witchcraft and other powers of the occult largely ceased to influence the court's decisions. Two examples of English forensic science in individual legal proceedings demonstrate the increasing use of logic and procedure in criminal investigations at the time. In 1784, in Lancaster, John Toms was tried and convicted for murdering Edward Culshaw with a pistol. When the dead body of Culshaw was examined, a pistol wad (crushed paper used to secure powder and balls in the muzzle) found in his head wound matched perfectly with a torn newspaper found in Toms's pocket, leading to the conviction.
In Warwick in 1816, a farm labourer was tried and convicted of the murder of a young maidservant. She had been drowned in a shallow pool and bore the marks of violent assault. The police found footprints and an impression from corduroy cloth with a sewn patch in the damp earth near the pool. There were also scattered grains of wheat and chaff. The breeches of a farm labourer who had been threshing wheat nearby were examined and corresponded exactly to the impression in the earth near the pool.
An article appearing in Scientific American in 1885 describes the use of microscopy to distinguish between the blood of two persons in a criminal case in Chicago.
Chromatography
Chromatography, a method of separating the components of a mixture via a mobile phase, is an essential tool in forensic science, helping analysts identify and compare trace amounts of samples such as ignitable liquids, drugs, and biological material. Many laboratories use gas chromatography/mass spectrometry (GC/MS) to examine such samples; this analysis provides rapid and reliable data for identifying the samples in question.
Toxicology
A method for detecting arsenious oxide, simple arsenic, in corpses was devised in 1773 by the Swedish chemist Carl Wilhelm Scheele. His work was expanded upon in 1806 by the German chemist Valentin Ross, who learned to detect the poison in the walls of a victim's stomach. Toxicology, a subfield of forensic chemistry, focuses on detecting and identifying drugs, poisons, and other toxic substances in biological samples. Forensic toxicologists work on cases involving drug overdoses, poisoning, and substance abuse; their work is critical in determining whether harmful substances played a role in a person's death or impairment.
James Marsh was the first to apply this new science to the art of forensics. He was called by the prosecution in a murder trial to give evidence as a chemist in 1832. The defendant, John Bodle, was accused of poisoning his grandfather with arsenic-laced coffee. Marsh performed the standard test by mixing a suspected sample with hydrogen sulfide and hydrochloric acid. While he was able to detect arsenic as yellow arsenic trisulfide, when it was shown to the jury it had deteriorated, allowing the suspect to be acquitted due to reasonable doubt.
Annoyed by that, Marsh developed a much better test. He combined a sample containing arsenic with sulfuric acid and arsenic-free zinc, resulting in arsine gas. The gas was ignited, and it decomposed to pure metallic arsenic, which, when passed to a cold surface, would appear as a silvery-black deposit. So sensitive was the test, known formally as the Marsh test, that it could detect as little as one-fiftieth of a milligram of arsenic. He first described this test in The Edinburgh Philosophical Journal in 1836.
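The underlying reactions of the Marsh test, in a standard textbook formulation (the stoichiometry is not spelled out in the text above), can be written as:

```latex
\mathrm{As_2O_3} + 6\,\mathrm{Zn} + 6\,\mathrm{H_2SO_4} \;\longrightarrow\; 2\,\mathrm{AsH_3}\uparrow + 6\,\mathrm{ZnSO_4} + 3\,\mathrm{H_2O}
```

```latex
2\,\mathrm{AsH_3} \;\xrightarrow{\;\Delta\;}\; 2\,\mathrm{As} + 3\,\mathrm{H_2}
```

The first reaction generates arsine gas from an arsenic-bearing sample; the second is the thermal decomposition that deposits the silvery-black arsenic mirror described above.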
Ballistics and firearms
Ballistics is "the science of the motion of projectiles in flight". In forensic science, analysts examine the patterns left on bullets and cartridge casings after a weapon is fired. A fired bullet bears indentations and markings unique to the barrel and firing pin of the firearm that discharged it. This examination can help scientists identify possible makes and models of weapons connected to a crime.
Henry Goddard at Scotland Yard pioneered the use of bullet comparison in 1835. He noticed a flaw in the bullet that killed the victim and was able to trace this back to the mold that was used in the manufacturing process.
Anthropometry
The French police officer Alphonse Bertillon was the first to apply the anthropological technique of anthropometry to law enforcement, thereby creating an identification system based on physical measurements. Before that time, criminals could be identified only by name or photograph. Dissatisfied with the ad hoc methods used to identify captured criminals in France in the 1870s, he began his work on developing a reliable system of anthropometrics for human classification.
Bertillon created many other forensics techniques, including forensic document examination, the use of galvanoplastic compounds to preserve footprints, ballistics, and the dynamometer, used to determine the degree of force used in breaking and entering. Although his central methods were soon to be supplanted by fingerprinting, "his other contributions like the mug shot and the systematization of crime-scene photography remain in place to this day."
Fingerprints
Sir William Herschel was one of the first to advocate the use of fingerprinting in the identification of criminal suspects. While working for the Indian Civil Service, he began to use thumbprints on documents as a security measure to prevent the then-rampant repudiation of signatures in 1858.
In 1877 at Hooghly (near Kolkata), Herschel instituted the use of fingerprints on contracts and deeds, and he registered government pensioners' fingerprints to prevent the collection of money by relatives after a pensioner's death.
In 1880, Henry Faulds, a Scottish surgeon in a Tokyo hospital, published his first paper on the subject in the scientific journal Nature, discussing the usefulness of fingerprints for identification and proposing a method to record them with printing ink. He established their first classification and was also the first to identify fingerprints left on a vial. Returning to the UK in 1886, he offered the concept to the Metropolitan Police in London, but it was dismissed at that time.
Faulds wrote to Charles Darwin with a description of his method, but, too old and ill to work on it, Darwin gave the information to his cousin, Francis Galton, who was interested in anthropology. Having been thus inspired to study fingerprints for ten years, Galton published a detailed statistical model of fingerprint analysis and identification and encouraged its use in forensic science in his book Finger Prints. He had calculated that the chance of a "false positive" (two different individuals having the same fingerprints) was about 1 in 64 billion.
Juan Vucetich, an Argentine chief police officer, created the first method of recording the fingerprints of individuals on file. In 1892, after studying Galton's pattern types, Vucetich set up the world's first fingerprint bureau. In that same year, Francisca Rojas of Necochea was found in a house with neck injuries whilst her two sons were found dead with their throats cut. Rojas accused a neighbour, but despite brutal interrogation, this neighbour would not confess to the crimes. Inspector Alvarez, a colleague of Vucetich, went to the scene and found a bloody thumb mark on a door. When it was compared with Rojas' prints, it was found to be identical with her right thumb. She then confessed to the murder of her sons.
A Fingerprint Bureau was established in Calcutta (Kolkata), India, in 1897, after the Council of the Governor General approved a committee report that fingerprints should be used for the classification of criminal records. Working in the Calcutta Anthropometric Bureau, before it became the Fingerprint Bureau, were Azizul Haque and Hem Chandra Bose. Haque and Bose were Indian fingerprint experts who have been credited with the primary development of a fingerprint classification system eventually named after their supervisor, Sir Edward Richard Henry. The Henry Classification System, co-devised by Haque and Bose, was accepted in England and Wales when the first United Kingdom Fingerprint Bureau was founded in Scotland Yard, the Metropolitan Police headquarters, London, in 1901. Sir Edward Richard Henry subsequently achieved improvements in dactyloscopy.
In the United States, Henry P. DeForrest used fingerprinting in the New York Civil Service in 1902, and by December 1905, New York City Police Department Deputy Commissioner Joseph A. Faurot, an expert in the Bertillon system and a fingerprint advocate at Police Headquarters, introduced the fingerprinting of criminals to the United States.
Uhlenhuth test
The Uhlenhuth test, or the antigen–antibody precipitin test for species, was invented by Paul Uhlenhuth in 1901 and could distinguish human blood from animal blood, based on the discovery that the blood of different species had one or more characteristic proteins. The test represented a major breakthrough and came to have tremendous importance in forensic science. The test was further refined for forensic use by the Swiss chemist Maurice Müller in the 1960s.
DNA
Forensic DNA analysis was first used in 1984. It was developed by Sir Alec Jeffreys, who realized that variation in the genetic sequence could be used to identify individuals and to tell individuals apart from one another. The first application of DNA profiles was used by Jeffreys in a double murder mystery in the small English town of Narborough, Leicestershire, in 1985. A 15-year-old school girl by the name of Lynda Mann was raped and murdered in Carlton Hayes psychiatric hospital. The police did not find a suspect but were able to obtain a semen sample.
In 1986, Dawn Ashworth, 15 years old, was also raped and strangled in the nearby village of Enderby. Forensic evidence showed that both killers had the same blood type. Richard Buckland became the suspect because he worked at Carlton Hayes psychiatric hospital, had been spotted near Dawn Ashworth's murder scene and knew unreleased details about the body. He later confessed to Dawn's murder but not Lynda's. Jeffreys was brought into the case to analyze the semen samples. He concluded that there was no match between the samples and Buckland, who became the first person to be exonerated using DNA. Jeffreys confirmed that the DNA profiles were identical for the two murder semen samples. To find the perpetrator, DNA samples from the entire male population, more than 4,000 aged from 17 to 34, of the town were collected. They all were compared to semen samples from the crime. A friend of Colin Pitchfork was heard saying that he had given his sample to the police claiming to be Colin. Colin Pitchfork was arrested in 1987 and it was found that his DNA profile matched the semen samples from the murder.
Because of this case, DNA databases were developed. National databases (such as the FBI's) and international ones exist, as do databases coordinated among European countries through ENFSI (the European Network of Forensic Science Institutes). These searchable databases are used to match crime scene DNA profiles to those already on file.
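The core matching operation of such a database can be sketched as a toy example (all subject names, loci and allele values here are hypothetical; real systems such as CODIS compare validated STR profiles under far stricter match rules):

```python
# Toy sketch of matching a crime-scene STR profile against a database.
# A profile maps each locus name to a pair of allele values; a hit
# requires every locus in the query to agree with the candidate.
crime_scene = {"D8S1179": (13, 14), "TH01": (6, 9), "FGA": (21, 24)}

database = {
    "subject_A": {"D8S1179": (12, 14), "TH01": (6, 9), "FGA": (21, 24)},
    "subject_B": {"D8S1179": (13, 14), "TH01": (6, 9), "FGA": (21, 24)},
}

def matches(profile, candidate):
    # Alleles at a locus are unordered, so compare them as sorted pairs.
    return all(
        sorted(candidate.get(locus, ())) == sorted(alleles)
        for locus, alleles in profile.items()
    )

hits = [name for name, profile in database.items() if matches(crime_scene, profile)]
print(hits)  # ['subject_B']
```

A real search would also handle partial profiles and report random-match probabilities; this sketch only illustrates the exact locus-by-locus comparison.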
Maturation
By the turn of the 20th century, the science of forensics had become largely established in the sphere of criminal investigation. Scientific and surgical investigation was widely employed by the Metropolitan Police during their pursuit of the mysterious Jack the Ripper, who had killed a number of women in the 1880s. This case is a watershed in the application of forensic science. Large teams of policemen conducted house-to-house inquiries throughout Whitechapel. Forensic material was collected and examined. Suspects were identified, traced and either examined more closely or eliminated from the inquiry. Police work follows the same pattern today. Over 2000 people were interviewed, "upwards of 300" people were investigated, and 80 people were detained.
The investigation was initially conducted by the Criminal Investigation Department (CID), headed by Detective Inspector Edmund Reid. Later, Detective Inspectors Frederick Abberline, Henry Moore, and Walter Andrews were sent from Central Office at Scotland Yard to assist. Initially, butchers, surgeons and physicians were suspected because of the manner of the mutilations. The alibis of local butchers and slaughterers were investigated, with the result that they were eliminated from the inquiry. Some contemporary figures thought the pattern of the murders indicated that the culprit was a butcher or cattle drover on one of the cattle boats that plied between London and mainland Europe. Whitechapel was close to the London Docks, and usually such boats docked on Thursday or Friday and departed on Saturday or Sunday. The cattle boats were examined, but the dates of the murders did not coincide with a single boat's movements, and the transfer of a crewman between boats was also ruled out.
At the end of October, Robert Anderson asked police surgeon Thomas Bond to give his opinion on the extent of the murderer's surgical skill and knowledge. The opinion offered by Bond on the character of the "Whitechapel murderer" is the earliest surviving offender profile. Bond's assessment was based on his own examination of the most extensively mutilated victim and the post mortem notes from the four previous canonical murders. In his opinion the killer must have been a man of solitary habits, subject to "periodical attacks of homicidal and erotic mania", with the character of the mutilations possibly indicating "satyriasis". Bond also stated that "the homicidal impulse may have developed from a revengeful or brooding condition of the mind, or that religious mania may have been the original disease but I do not think either hypothesis is likely".
Handbook for Coroners, police officials, military policemen was written by the Austrian criminal jurist Hans Gross in 1893, and is generally acknowledged as the birth of the field of criminalistics. The work combined in one system fields of knowledge that had not been previously integrated, such as psychology and physical science, and which could be successfully used against crime. Gross adapted some fields to the needs of criminal investigation, such as crime scene photography. He went on to found the Institute of Criminalistics in 1912, as part of the University of Graz' Law School. This Institute was followed by many similar institutes all over the world.
In 1909, Archibald Reiss founded the Institut de police scientifique of the University of Lausanne (UNIL), the first school of forensic science in the world. Dr. Edmond Locard became known as the "Sherlock Holmes of France". He formulated the basic principle of forensic science: "Every contact leaves a trace", which became known as Locard's exchange principle. In 1910, he founded what may have been the first criminal laboratory in the world, after persuading the Police Department of Lyon (France) to give him two attic rooms and two assistants.
Symbolic of the newfound prestige of forensics and the use of reasoning in detective work was the popularity of the fictional character Sherlock Holmes, written by Arthur Conan Doyle in the late 19th century. He remains a great inspiration for forensic science, especially for the way his acute study of a crime scene yielded small clues as to the precise sequence of events. He made great use of trace evidence such as shoe and tire impressions, as well as fingerprints, ballistics and handwriting analysis, now known as questioned document examination. Such evidence is used to test theories conceived by the police, for example, or by the investigator himself. All of the techniques advocated by Holmes later became reality, but were generally in their infancy at the time Conan Doyle was writing. In many of his reported cases, Holmes frequently complains of the way the crime scene has been contaminated by others, especially by the police, emphasising the critical importance of maintaining its integrity, a now well-known feature of crime scene examination. He used analytical chemistry for blood residue analysis as well as toxicology examination and determination for poisons. He used ballistics by measuring bullet calibres and matching them with a suspected murder weapon.
Late 19th – early 20th century figures
Hans Gross applied scientific methods to crime scenes and was responsible for the birth of criminalistics.
Edmond Locard expanded on Gross' work with Locard's exchange principle, which stated "whenever two objects come into contact with one another, materials are exchanged between them". This means that every contact by a criminal leaves a trace.
Alexandre Lacassagne, who taught Locard, produced autopsy standards on actual forensic cases.
Alphonse Bertillon was a French criminologist and founder of Anthropometry (scientific study of measurements and proportions of the human body). He used anthropometry for identification, stating that, since each individual is unique, by measuring aspects of physical difference there could be a personal identification system. He created the Bertillon System around 1879, a way of identifying criminals and citizens by measuring 20 parts of the body. In 1884, over 240 repeat offenders were caught using the Bertillon system, but the system was largely superseded by fingerprinting.
Joseph Thomas Walker was known for his work at the Massachusetts State Police Chemical Laboratory, for developing many modern forensic techniques, which he frequently published in academic journals, and for teaching at the Department of Legal Medicine at Harvard University.
Frances Glessner Lee, known as "the mother of forensic science", was instrumental in the development of forensic science in the US. She lobbied to have coroners replaced by medical professionals, endowed the Harvard Associates in Police Science, and conducted many seminars to educate homicide investigators. She also created the Nutshell Studies of Unexplained Death, intricate crime scene dioramas used to train investigators, which are still in use today.
20th century
Later in the 20th century several British pathologists, including Bernard Spilsbury, Francis Camps, Sydney Smith and Keith Simpson, pioneered new forensic science methods. Alec Jeffreys pioneered the use of DNA profiling in forensic science in 1984. He realized the scope of DNA fingerprinting, which uses variations in the genetic code to identify individuals. The method has since become important in forensic science to assist police detective work, and it has also proved useful in resolving paternity and immigration disputes. DNA fingerprinting was first used as a police forensic test to identify the rapist and killer of two teenagers, Lynda Mann and Dawn Ashworth, who were both murdered in Narborough, Leicestershire, in 1983 and 1986 respectively. Colin Pitchfork was identified and convicted of murder after samples taken from him matched semen samples taken from the two dead girls.
Forensic science has been fostered by a number of national and international forensic science learned bodies including the American Academy of Forensic Sciences (founded 1948), publishers of the Journal of Forensic Sciences; the Canadian Society of Forensic Science (founded 1953), publishers of the Journal of the Canadian Society of Forensic Science; the Chartered Society of Forensic Sciences, (founded 1959), then known as the Forensic Science Society, publisher of Science & Justice; the British Academy of Forensic Sciences (founded 1960), publishers of Medicine, Science and the Law; the Australian Academy of Forensic Sciences (founded 1967), publishers of the Australian Journal of Forensic Sciences; and the European Network of Forensic Science Institutes (founded 1995).
21st century
In the past decade, documenting forensic scenes has become more efficient. Forensic scientists have started using laser scanners, drones and photogrammetry to obtain 3D point clouds of accident or crime scenes. Reconstructing an accident scene on a highway using drones requires only 10–20 minutes of data acquisition and can be performed without shutting down traffic. The results are not only accurate to the centimetre, suitable for measurements presented in court, but also easy to preserve digitally in the long term.
Now, in the 21st century, much of forensic science's future is up for discussion. The National Institute of Standards and Technology (NIST) has several forensic science-related programs: CSAFE, a NIST Center of Excellence in Forensic Science, the National Commission on Forensic Science (now concluded), and administration of the Organization of Scientific Area Committees for Forensic Science (OSAC). One of the more recent additions by NIST is a document called NISTIR-7941, titled "Forensic Science Laboratories: Handbook for Facility Planning, Design, Construction, and Relocation". The handbook provides a clear blueprint for approaching forensic science. The details even include what type of staff should be hired for certain positions.
Subdivisions
Art forensics concerns the authentication of works of art. Art authentication methods are used to detect and identify forgery, faking and copying of art works, e.g. paintings.
Bloodstain pattern analysis is the scientific examination of blood spatter patterns found at a crime scene to reconstruct the events of the crime.
Comparative forensics is the application of visual comparison techniques to verify similarity of physical evidence. This includes fingerprint analysis, toolmark analysis, and ballistic analysis.
Computational forensics concerns the development of algorithms and software to assist forensic examination.
Criminalistics is the application of various sciences to answer questions relating to examination and comparison of biological evidence, trace evidence, impression evidence (such as fingerprints, footwear impressions, and tire tracks), controlled substances, ballistics, firearm and toolmark examination, and other evidence in criminal investigations. In typical circumstances, evidence is processed in a crime lab.
Digital forensics is the application of proven scientific methods and techniques to recover data from electronic/digital media. Digital forensics specialists work in the field as well as in the lab.
Ear print analysis is used as a means of forensic identification intended as an identification tool similar to fingerprinting. An earprint is a two-dimensional reproduction of the parts of the outer ear that have touched a specific surface (most commonly the helix, antihelix, tragus and antitragus).
Election forensics is the use of statistics to determine whether election results are normal or abnormal. It is also used to investigate and detect cases of gerrymandering.
Forensic accounting is the study and interpretation of accounting evidence and financial statements, namely the balance sheet, income statement, and cash flow statement.
Forensic aerial photography is the study and interpretation of aerial photographic evidence.
Forensic anthropology is the application of physical anthropology in a legal setting, usually for the recovery and identification of skeletonized human remains.
Forensic archaeology is the application of a combination of archaeological techniques and forensic science, typically in law enforcement.
Forensic astronomy uses methods from astronomy to determine past celestial constellations for forensic purposes.
Forensic botany is the study of plant life in order to gain information regarding possible crimes.
Forensic chemistry is the study of detection and identification of illicit drugs, accelerants used in arson cases, explosive and gunshot residue.
Forensic dactyloscopy is the study of fingerprints.
Forensic document examination or questioned document examination answers questions about a disputed document using a variety of scientific processes and methods. Many examinations involve a comparison of the questioned document, or components of the document, with a set of known standards. The most common type of examination involves handwriting, whereby the examiner tries to address concerns about potential authorship.
Forensic DNA analysis takes advantage of the uniqueness of an individual's DNA to answer forensic questions such as paternity/maternity testing and placing a suspect at a crime scene, e.g. in a rape investigation.
Forensic engineering is the scientific examination and analysis of structures and products relating to their failure or cause of damage.
Forensic entomology deals with the examination of insects in, on and around human remains to assist in determination of time or location of death. It is also possible to determine if the body was moved after death using entomology.
Forensic geology deals with trace evidence in the form of soils, minerals and petroleum.
Forensic geomorphology is the study of the ground surface to look for potential location(s) of buried object(s).
Forensic geophysics is the application of geophysical techniques such as radar for detecting objects hidden underground or underwater.
Forensic intelligence is a process that starts with the collection of data and ends with the integration of results into the analysis of crimes under investigation.
Forensic interviews are investigative interviews conducted by trained professionals with victims, witnesses, suspects or other sources to determine the facts regarding suspicions, allegations or specific incidents, in either public or private sector settings.
Forensic histopathology is the application of histological techniques and examination to forensic pathology practice.
Forensic limnology is the analysis of evidence collected from crime scenes in or around fresh-water sources. Examination of biological organisms, in particular diatoms, can be useful in connecting suspects with victims.
Forensic linguistics deals with issues in the legal system that require linguistic expertise.
Forensic meteorology is a site-specific analysis of past weather conditions for a point of loss.
Forensic metrology is the application of metrology to assess the reliability of scientific evidence obtained through measurements.
Forensic microbiology is the study of the necrobiome.
Forensic nursing is the application of nursing science to abusive crimes, such as child abuse or sexual abuse. Categorization of wounds and trauma, collection of bodily fluids and emotional support are some of the duties of forensic nurses.
Forensic odontology is the study of the uniqueness of dentition, better known as the study of teeth.
Forensic optometry is the study of glasses and other eyewear relating to crime scenes and criminal investigations.
Forensic pathology is a field in which the principles of medicine and pathology are applied to determine a cause of death or injury in the context of a legal inquiry.
Forensic podiatry is the application of the study of the foot, footprints or footwear and their traces to analyze a crime scene and to establish personal identity in forensic examinations.
Forensic psychiatry is a specialized branch of psychiatry as applied to and based on scientific criminology.
Forensic psychology is the study of the mind of an individual, using forensic methods. Usually it determines the circumstances behind a criminal's behavior.
Forensic seismology is the study of techniques to distinguish the seismic signals generated by underground nuclear explosions from those generated by earthquakes.
Forensic serology is the study of body fluids.
Forensic social work is the specialist study of social work theories and their applications to a clinical, criminal justice or psychiatric setting. Practitioners of forensic social work connected with the criminal justice system are often termed Social Supervisors, whilst the remaining use the interchangeable titles forensic social worker, approved mental health professional or forensic practitioner and they conduct specialist assessments of risk, care planning and act as an officer of the court.
Forensic toxicology is the study of the effect of drugs and poisons on/in the human body.
Forensic video analysis is the scientific examination, comparison and evaluation of video in legal matters.
Mobile device forensics is the scientific examination and evaluation of evidence found in mobile phones, e.g. Call History and Deleted SMS, and includes SIM Card Forensics.
Trace evidence analysis is the analysis and comparison of trace evidence including glass, paint, fibres and hair (e.g., using micro-spectrophotometry).
Wildlife forensic science applies a range of scientific disciplines to legal cases involving non-human biological evidence, to solve crimes such as poaching, animal abuse, and trade in endangered species.
Questionable techniques
Some forensic techniques, believed to be scientifically sound at the time they were used, have turned out later to have much less scientific merit or none. Some such techniques include:
Comparative bullet-lead analysis was used by the FBI for over four decades, starting with the John F. Kennedy assassination in 1963. The theory was that each batch of ammunition possessed a chemical makeup so distinct that a bullet could be traced back to a particular batch or even a specific box. Internal studies and an outside study by the National Academy of Sciences found that the technique was unreliable due to improper interpretation, and the FBI abandoned the test in 2005.
Forensic dentistry has come under fire: in at least three cases, bite-mark evidence has been used to convict people of murder who were later freed by DNA evidence. The underlying theory is that each person has a unique and distinctive set of teeth that leave a pattern after biting someone; examiners analyze dental characteristics such as size, shape, and arch form. A 1999 study by a member of the American Board of Forensic Odontology found a 63 percent rate of false identifications and is commonly referenced in online news stories and conspiracy websites. The study was, however, based on an informal workshop during an ABFO meeting, which many members did not consider a valid scientific setting.
Police access to genetic genealogy databases raises privacy concerns. Individuals can unwittingly become genetic informants on their own families, or on themselves, simply by participating in genetic genealogy databases. The Combined DNA Index System (CODIS) is a database that the FBI uses to hold genetic profiles of known felons, misdemeanants, and arrestees. Some argue that individuals who use genealogy databases should have an expectation of privacy in their data, one that is or may be violated by genetic searches by law enforcement. These services carry warnings about potential third parties using customers' information, but most individuals do not read the agreements thoroughly.
A study by Christi Guerrini, Jill Robinson, Devan Petersen, and Amy McGuire found that the majority of survey respondents support police searches of genetic websites that identify genetic relatives. Respondents were more supportive of police use of genetic genealogy when its purpose was identifying perpetrators of violent crimes, suspects in crimes against children, or missing people. The survey data suggest that individuals are not concerned about police searches using personal genetic data when those searches are seen as justified. The study also found that offenders are disproportionately low-income and black, whereas the average consumer of genetic testing is wealthy and white.
Other surveys have produced different results. The 2016 National Crime Victimization Survey (NCVS), conducted by the US Bureau of Justice Statistics, found that 1.3% of people aged 12 or older were victims of violent crimes and 8.8% of households were victims of property crimes. Comparisons between the surveys are complicated, however: the NCVS produces only annual estimates of victimization, whereas the survey by Guerrini and colleagues asked participants about incidents of victimization over their lifetimes and did not restrict family members to one household. Around 25% of its respondents said they have had family members employed by law enforcement, including security guards and bailiffs. Across these surveys, there appears to be public support for law enforcement access to genetic genealogy databases.
Litigation science
"Litigation science" describes analysis or data developed or produced expressly for use in a trial versus those produced in the course of independent research. This distinction was made by the U.S. 9th Circuit Court of Appeals when evaluating the admissibility of experts.
Litigation science often relies on demonstrative evidence, which is evidence created in preparation for trial by attorneys or paralegals.
Demographics
In the United States, there were over 17,200 forensic science technicians as of 2019.
Media impact
Real-life crime scene investigators and forensic scientists warn that popular television shows do not give a realistic picture of the work, often wildly distorting its nature, and exaggerating the ease, speed, effectiveness, drama, glamour, influence and comfort level of their jobs—which they describe as far more mundane, tedious and boring.
Some claim these modern TV shows have changed individuals' expectations of forensic science, sometimes unrealistically—an influence termed the "CSI effect".
Further, research has suggested that public misperceptions about criminal forensics can create, in the mind of a juror, unrealistic expectations of forensic evidence—which they expect to see before convicting—implicitly biasing the juror towards the defendant. Citing the "CSI effect," at least one researcher has suggested screening jurors for their level of influence from such TV programs.
Controversies
Questions about certain areas of forensic science, such as fingerprint evidence, and about the assumptions behind these disciplines have been raised in some publications, including the New York Post. The article stated that "No one has proved even the basic assumption: That everyone's fingerprint is unique." The article also stated that "Now such assumptions are being questioned—and with it may come a radical change in how forensic science is used by police departments and prosecutors." Law professor Jessica Gabel said on NOVA that forensic science "lacks the rigors, the standards, the quality controls and procedures that we find, usually, in science".
The National Institute of Standards and Technology (NIST) has reviewed the scientific foundations of bite-mark analysis used in forensic science. Bite-mark analysis is a forensic technique that compares the marks on a victim's skin to the suspect's teeth. NIST reviewed the findings of the 2009 study by the National Academies of Sciences, Engineering, and Medicine, which had conducted research addressing the accuracy and reliability of bite-mark analysis and concluded that there is a lack of sufficient scientific foundation to support the technique. Yet the technique is still legal to use in court as evidence. NIST funded a 2019 meeting of dentists, lawyers, researchers and others to address the gaps in this field.
In the US, on 25 June 2009, the Supreme Court issued a 5-to-4 decision in Melendez-Diaz v. Massachusetts stating that crime laboratory reports may not be used against criminal defendants at trial unless the analysts responsible for creating them give testimony and subject themselves to cross-examination. The Supreme Court cited the National Academies of Sciences report Strengthening Forensic Science in the United States in their decision. Writing for the majority, Justice Antonin Scalia referred to the National Research Council report in his assertion that "Forensic evidence is not uniquely immune from the risk of manipulation."
In the US, another area of forensic science that has come under question in recent years is the lack of laws requiring the accreditation of forensic labs. Some states require accreditation, but others do not. Because of this, many labs have been caught performing very poor work, resulting in false convictions or acquittals. For example, an audit of the Houston Police Department in 2002 discovered that the lab had fabricated evidence, which led to George Rodriguez being convicted of raping a fourteen-year-old girl. The former director of the lab, when asked, said that the total number of cases that could have been contaminated by improper work could be in the range of 5,000 to 10,000.
The Innocence Project database of DNA exonerations shows that many wrongful convictions involved forensic science errors. According to the Innocence Project and the US Department of Justice, forensic science has contributed to about 39 to 46 percent of wrongful convictions. As indicated by the National Academy of Sciences report Strengthening Forensic Science in the United States, part of the problem is that many traditional forensic sciences have never been empirically validated, and part of the problem is that all examiners are subject to forensic confirmation biases and should be shielded from contextual information not relevant to the judgment they make.
Many studies have found differences in the reporting of rape-related injuries based on race, with white victims reporting a higher frequency of injuries than black victims. However, since current forensic examination techniques may not be sensitive to all injuries across the range of skin colors, more research is needed to determine whether this trend is due to darker skin confounding healthcare providers when they examine injuries, or whether darker skin confers some protective element. In clinical practice, one study recommends that for patients with darker skin, attention be paid to the thighs, labia majora, posterior fourchette and fossa navicularis, so that no rape-related injuries are missed on close examination.
Forensic science and humanitarian work
The International Committee of the Red Cross (ICRC) uses forensic science for humanitarian purposes to clarify the fate of missing persons after armed conflict, disasters or migration, and is one of the services related to Restoring Family Links and Missing Persons. Knowing what has happened to a missing relative can often make it easier to proceed with the grieving process and move on with life for families of missing persons.
Forensic science is used by various other organizations to clarify the fate and whereabouts of persons who have gone missing. Examples include the NGO Argentine Forensic Anthropology Team, working to clarify the fate of people who disappeared during the period of the 1976–1983 military dictatorship. The International Commission on Missing Persons (ICMP) used forensic science to find missing persons, for example after the conflicts in the Balkans.
Recognising the role of forensic science for humanitarian purposes, as well as the importance of forensic investigations in fulfilling the state's responsibilities to investigate human rights violations, a group of experts in the late-1980s devised a UN Manual on the Prevention and Investigation of Extra-Legal, Arbitrary and Summary Executions, which became known as the Minnesota Protocol. This document was revised and re-published by the Office of the High Commissioner for Human Rights in 2016.
Bibliography
Anil Aggrawal's Internet Journal of Forensic Medicine and Toxicology.
Forensic Magazine – Forensicmag.com.
Forensic Science Communications, an open access journal of the FBI.
Forensic Science International – An international journal dedicated to the applications of medicine and science in the administration of justice. Elsevier.
"The Real CSI", PBS Frontline documentary, 17 April 2012.
Baden, Michael; Roach, Marion. Dead Reckoning: The New Science of Catching Killers. Simon & Schuster, 2001.
Bartos, Leah, "No Forensic Background? No Problem", ProPublica, 17 April 2012.
Guatelli-Steinberg, Debbie; Mitchell, John C. "RepliSet: High Resolution Impressions of the Teeth of Human Ancestors". Structure Magazine no. 40.
Holt, Cynthia. Guide to Information Sources in the Forensic Sciences. Libraries Unlimited, 2006.
Jamieson, Allan; Moenssens, Andre (eds). Wiley Encyclopedia of Forensic Science. John Wiley & Sons Ltd, 2009. Online version.
Kind, Stuart; Overman, Michael. Science Against Crime. Doubleday, 1972.
Lewis, Peter Rhys; Gagg, Colin; Reynolds, Ken. Forensic Materials Engineering: Case Studies. CRC Press, 2004.
Nickell, Joe; Fischer, John F. Crime Science: Methods of Forensic Detection. University Press of Kentucky, 1999.
Owen, D. Hidden Evidence: The Story of Forensic Science and how it Helped to Solve 40 of the World's Toughest Crimes. Quintet Publishing, London, 2000.
Quinche, Nicolas; Margot, Pierre. "Coulier, Paul-Jean (1824–1890): A precursor in the history of fingermark detection and their potential use for identifying their source (1863)". Journal of Forensic Identification (California), 60 (2), March–April 2010, pp. 129–134.
Silverman, Mike; Thompson, Tony. Written in Blood: A History of Forensic Science. 2014.
In mathematics, a topological vector space (also called a linear topological space and commonly abbreviated TVS or t.v.s.) is one of the basic structures investigated in functional analysis.
A topological vector space is a vector space that is also a topological space with the property that the vector space operations (vector addition and scalar multiplication) are also continuous functions. Such a topology is called a vector topology, and every topological vector space has a uniform topological structure, allowing a notion of uniform convergence and completeness. Some authors also require that the space is a Hausdorff space (although this article does not). One of the most widely studied categories of TVSs are locally convex topological vector spaces. This article focuses on TVSs that are not necessarily locally convex. Other well-known examples of TVSs include Banach spaces, Hilbert spaces and Sobolev spaces.
Many topological vector spaces are spaces of functions, or linear operators acting on topological vector spaces, and the topology is often defined so as to capture a particular notion of convergence of sequences of functions.
In this article, the scalar field of a topological vector space will be assumed to be either the complex numbers or the real numbers unless clearly stated otherwise.
Motivation
Normed spaces
Every normed vector space has a natural topological structure: the norm induces a metric and the metric induces a topology.
This is a topological vector space because:
The vector addition map defined by is (jointly) continuous with respect to this topology. This follows directly from the triangle inequality obeyed by the norm.
The scalar multiplication map defined by where is the underlying scalar field of is (jointly) continuous. This follows from the triangle inequality and homogeneity of the norm.
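The two continuity claims above can be checked with elementary norm estimates; as a sketch:

```latex
% Joint continuity of addition: if x \to x_0 and y \to y_0, then
\|(x + y) - (x_0 + y_0)\| \;\le\; \|x - x_0\| + \|y - y_0\| \;\longrightarrow\; 0.

% Joint continuity of scalar multiplication: if s \to s_0 and x \to x_0, then
\|s x - s_0 x_0\| \;\le\; |s|\,\|x - x_0\| + |s - s_0|\,\|x_0\| \;\longrightarrow\; 0,
% since |s| remains bounded near s_0.
```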
Thus all Banach spaces and Hilbert spaces are examples of topological vector spaces.
Non-normed spaces
There are topological vector spaces whose topology is not induced by a norm, but are still of interest in analysis. Examples of such spaces are spaces of holomorphic functions on an open domain, spaces of infinitely differentiable functions, the Schwartz spaces, and spaces of test functions and the spaces of distributions on them. These are all examples of Montel spaces. An infinite-dimensional Montel space is never normable. The existence of a norm for a given topological vector space is characterized by Kolmogorov's normability criterion.
A topological field is a topological vector space over each of its subfields.
Definition
A topological vector space (TVS) is a vector space over a topological field (most often the real or complex numbers with their standard topologies) that is endowed with a topology such that vector addition and scalar multiplication are continuous functions (where the domains of these functions are endowed with product topologies). Such a topology is called a vector topology or a TVS topology on the space.
Every topological vector space is also a commutative topological group under addition.
Hausdorff assumption
Many authors (for example, Walter Rudin), but not this page, require the topology on the space to be T1; it then follows that the space is Hausdorff, and even Tychonoff. A topological vector space is said to be separated if it is Hausdorff; importantly, "separated" does not mean separable. The topological and linear algebraic structures can be tied together even more closely with additional assumptions, the most common of which are listed below.
Category and morphisms
The category of topological vector spaces over a given topological field is commonly denoted TVS or TVect (often with the field as a subscript). The objects are the topological vector spaces over the field and the morphisms are the continuous linear maps from one object to another.
A topological homomorphism (abbreviated TVS homomorphism) is a continuous linear map between topological vector spaces (TVSs) such that the induced map onto its range is an open mapping when the range or image of the map is given the subspace topology induced by the codomain.
A topological vector space embedding (abbreviated TVS embedding), also called a topological monomorphism, is an injective topological homomorphism. Equivalently, a TVS-embedding is a linear map that is also a topological embedding.
A topological vector space isomorphism (abbreviated TVS isomorphism), also called a topological isomorphism, is a bijective linear homeomorphism. Equivalently, it is a surjective TVS embedding.
Many properties of TVSs that are studied, such as local convexity, metrizability, completeness, and normability, are invariant under TVS isomorphisms.
A necessary condition for a vector topology
A collection of subsets of a vector space is called additive if for every set N in the collection there exists some set U in the collection such that U + U is contained in N.
All of the above conditions are consequently necessary for a topology to form a vector topology.
Defining topologies using neighborhoods of the origin
Since every vector topology is translation invariant (which means that for all the map defined by is a homeomorphism), to define a vector topology it suffices to define a neighborhood basis (or subbasis) for it at the origin.
In general, the set of all balanced and absorbing subsets of a vector space does not satisfy the conditions of this theorem and does not form a neighborhood basis at the origin for any vector topology.
Defining topologies using strings
Let a vector space be given, together with a sequence of its subsets. Each set in the sequence is called a knot, and for every index i, the i-th set is called the i-th knot of the sequence. The first set is called the beginning of the sequence. The sequence is/is a:
Summative if, for every index i, the sum of the (i+1)-th knot with itself is contained in the i-th knot.
Balanced (resp. absorbing, closed, convex, open, symmetric, barrelled, absolutely convex/disked, etc.) if this is true of every knot.
A string if it is summative, absorbing, and balanced.
A topological string or a neighborhood string in a TVS if it is a string and each of its knots is a neighborhood of the origin.
If a vector space contains an absorbing disk D, then D generates a string beginning with D, called the natural string of D. Moreover, if a vector space has countable dimension then every string contains an absolutely convex string.
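The formula for the natural string was lost in this copy of the text; the standard construction is the following (a reconstruction, so treat the notation as illustrative):

```latex
% For an absorbing disk D, the natural string (U_1, U_2, \ldots) of D is
U_i = 2^{\,1-i} D, \qquad \text{so that } U_1 = D,\; U_2 = \tfrac{1}{2}D,\; \ldots

% Summativity follows from the convexity of D:
U_{i+1} + U_{i+1} = 2^{-i} D + 2^{-i} D \subseteq 2^{\,1-i} D = U_i,
% and each knot U_i is balanced and absorbing because D is.
```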
Summative sequences of sets have the particularly nice property that they define non-negative continuous real-valued subadditive functions. These functions can then be used to prove many of the basic properties of topological vector spaces.
A proof of the above theorem is given in the article on metrizable topological vector spaces.
If and are two collections of subsets of a vector space and if is a scalar, then by definition:
Contains: one sequence contains another if and only if, for every index, the corresponding knot of the first contains that of the second.
Set of knots: the set of all knots of the sequence.
Kernel: the intersection of all knots.
Scalar multiple: the sequence obtained by multiplying every knot by the scalar.
Sum: the sequence of knotwise sums.
Intersection: the sequence of knotwise intersections.
If a collection of sequences of subsets is given, then it is said to be directed (downwards) under inclusion, or simply directed downward, if it is not empty and for every pair of its members there exists some member contained in both (said differently, if and only if it is a prefilter with respect to the containment defined above).
Notation: for a collection of strings, consider the set of all knots of all strings in the collection.
Defining vector topologies using collections of strings is particularly useful for defining classes of TVSs that are not necessarily locally convex.
The set of all topological strings in a TVS determines its topology. A Hausdorff TVS is metrizable if and only if its topology can be induced by a single topological string.
Topological structure
A vector space is an abelian group with respect to the operation of addition, and in a topological vector space the inverse operation is always continuous (since it is the same as multiplication by ). Hence, every topological vector space is an abelian topological group. Every TVS is completely regular but a TVS need not be normal.
Let a topological vector space be given. The quotient by a subspace, with the usual quotient topology, is a Hausdorff topological vector space if and only if the subspace is closed. This permits the following construction: given a topological vector space (that is possibly not Hausdorff), form the quotient by the closure of the origin; the result is a Hausdorff topological vector space that can be studied instead of the original.
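In symbols (writing X for the space, notation not fixed elsewhere in this copy), the construction reads:

```latex
% Quotient by the closure of the origin:
X_H \;:=\; X / \overline{\{0\}}.
% X_H is a Hausdorff TVS, and every continuous linear map from X into a
% Hausdorff TVS factors uniquely through the quotient map X \to X_H.
```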
Invariance of vector topologies
One of the most used properties of vector topologies is that every vector topology is translation invariant:
for every vector x, the map sending y to x + y is a homeomorphism of the space onto itself; but if x is nonzero then this map is not linear and so not a TVS-isomorphism.
Scalar multiplication by a non-zero scalar is a TVS-isomorphism. This means that if s is a non-zero scalar, then the map sending x to s x is a linear homeomorphism. Taking s = −1 produces the negation map sending x to −x, which is consequently a linear homeomorphism and thus a TVS-isomorphism.
Translation respects closures: the closure of a translate of a set is the translate of its closure. Moreover, if a subset contains the origin, then its translate by a vector x is a neighborhood (resp. open neighborhood, closed neighborhood) of x if and only if the subset is the corresponding kind of neighborhood of the origin.
Local notions
A subset of a vector space is said to be
absorbing (in ): if for every there exists a real such that for any scalar satisfying
balanced or circled: if for every scalar
convex: if for every real
a disk or absolutely convex: if is convex and balanced.
symmetric: if or equivalently, if
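The defining conditions in the list above lost their formulas in this copy; in standard notation (S a subset of a vector space X over the scalar field, with s, t scalars) they read:

```latex
\text{absorbing: } \forall x \in X\ \exists r > 0 \text{ such that } x \in t S \text{ whenever } |t| \ge r;
\text{balanced: } t S \subseteq S \text{ for every scalar } t \text{ with } |t| \le 1;
\text{convex: } t S + (1 - t) S \subseteq S \text{ for every real } 0 \le t \le 1;
\text{symmetric: } -S = S \quad (\text{equivalently, } -S \subseteq S).
```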
Every neighborhood of the origin is an absorbing set and contains an open balanced neighborhood of the origin, so every topological vector space has a local base of absorbing and balanced sets. The origin even has a neighborhood basis consisting of closed balanced neighborhoods of the origin; if the space is locally convex then it also has a neighborhood basis consisting of closed convex balanced neighborhoods of the origin.
Bounded subsets
A subset S of a topological vector space is bounded if for every neighborhood V of the origin there exists a scalar t such that S is contained in tV.
The definition of boundedness can be weakened a bit; is bounded if and only if every countable subset of it is bounded. A set is bounded if and only if each of its subsequences is a bounded set. Also, is bounded if and only if for every balanced neighborhood of the origin, there exists such that Moreover, when is locally convex, the boundedness can be characterized by seminorms: the subset is bounded if and only if every continuous seminorm is bounded on
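In symbols, the (von Neumann) boundedness condition and its seminorm characterization read (notation reconstructed, since the formulas were stripped from this copy):

```latex
% S is bounded iff it is absorbed by every neighborhood V of the origin:
\forall V \in \mathcal{N}(0)\ \exists t > 0 \text{ such that } S \subseteq t V.

% In a locally convex space whose topology is generated by seminorms (p_\alpha):
S \text{ is bounded} \iff \sup_{x \in S} p_\alpha(x) < \infty \text{ for every } \alpha.
```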
Every totally bounded set is bounded. If is a vector subspace of a TVS then a subset of is bounded in if and only if it is bounded in
Metrizability
A TVS is pseudometrizable if and only if it has a countable neighborhood basis at the origin, or equivalent, if and only if its topology is generated by an F-seminorm. A TVS is metrizable if and only if it is Hausdorff and pseudometrizable.
More strongly: a topological vector space is said to be normable if its topology can be induced by a norm. A topological vector space is normable if and only if it is Hausdorff and has a convex bounded neighborhood of the origin.
Let the scalar field be a non-discrete locally compact topological field, for example the real or complex numbers. A Hausdorff topological vector space over such a field is locally compact if and only if it is finite-dimensional, that is, isomorphic to a finite power of the field.
Completeness and uniform structure
The canonical uniformity on a TVS is the unique translation-invariant uniformity that induces the topology of the space.
Every TVS is assumed to be endowed with this canonical uniformity, which makes all TVSs into uniform spaces. This allows one to talk about related notions such as completeness, uniform convergence, Cauchy nets, and uniform continuity, which are always assumed to be with respect to this uniformity (unless indicated otherwise). This implies that every Hausdorff topological vector space is Tychonoff. A subset of a TVS is compact if and only if it is complete and totally bounded (for Hausdorff TVSs, a set being totally bounded is equivalent to it being precompact). But if the TVS is not Hausdorff then there exist compact subsets that are not closed. However, the closure of a compact subset of a non-Hausdorff TVS is again compact (so compact subsets are relatively compact).
With respect to this uniformity, a net (or sequence) is Cauchy if and only if for every neighborhood of there exists some index such that whenever and
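Written out (reconstructing the stripped condition), the Cauchy condition for a net is:

```latex
% A net (x_i)_{i \in I} is Cauchy for the canonical uniformity iff,
% for every neighborhood V of the origin,
\exists\, i_0 \in I \text{ such that } x_i - x_j \in V
  \text{ whenever } i \ge i_0 \text{ and } j \ge i_0.
```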
Every Cauchy sequence is bounded, although Cauchy nets and Cauchy filters may not be bounded. A topological vector space where every Cauchy sequence converges is called sequentially complete; in general, it may not be complete (in the sense that all Cauchy filters converge).
The vector space operation of addition is uniformly continuous and an open map. Scalar multiplication is Cauchy continuous but in general, it is almost never uniformly continuous. Because of this, every topological vector space can be completed and is thus a dense linear subspace of a complete topological vector space.
Every TVS has a completion and every Hausdorff TVS has a Hausdorff completion. Every TVS (even those that are Hausdorff and/or complete) has infinitely many non-isomorphic non-Hausdorff completions.
A compact subset of a TVS (not necessarily Hausdorff) is complete. A complete subset of a Hausdorff TVS is closed.
If is a complete subset of a TVS then any subset of that is closed in is complete.
A Cauchy sequence in a Hausdorff TVS is not necessarily relatively compact (that is, its closure in the space is not necessarily compact).
If a Cauchy filter in a TVS has an accumulation point, then it converges to that point.
If a series converges in a TVS, then its sequence of terms necessarily converges to the origin.
Examples
Finest and coarsest vector topology
Throughout this subsection, let the underlying space be a real or complex vector space.
Trivial topology
The trivial topology or indiscrete topology is always a TVS topology on any vector space, and it is the coarsest TVS topology possible. An important consequence of this is that the intersection of any collection of TVS topologies on a vector space always contains a TVS topology. Any vector space (including those that are infinite dimensional) endowed with the trivial topology is a compact (and thus locally compact), complete, pseudometrizable, seminormable, locally convex topological vector space. It is Hausdorff if and only if the space consists of the origin alone.
Finest vector topology
There exists a TVS topology on any vector space, called the finest vector topology, that is finer than every other TVS-topology on the space (that is, any TVS-topology on the space is necessarily a subset of it). Every linear map from a space carrying its finest vector topology into another TVS is necessarily continuous. If the space has an uncountable Hamel basis, then the finest vector topology is neither locally convex nor metrizable.
Cartesian products
A Cartesian product of a family of topological vector spaces, when endowed with the product topology, is a topological vector space. Consider for instance the set of all functions from the real line to itself, where the codomain carries its usual Euclidean topology. This set is a real vector space (where addition and scalar multiplication are defined pointwise, as usual) that can be identified with (and indeed, is often defined to be) the Cartesian product of continuum many copies of the real line, which carries the natural product topology. With this product topology, it becomes a topological vector space whose topology is called the topology of pointwise convergence. The reason for this name is the following: a sequence (or more generally, a net) of functions converges in this topology if and only if, at every real number, the corresponding sequence of values converges. This TVS is complete, Hausdorff, and locally convex but not metrizable and consequently not normable; indeed, every neighborhood of the origin in the product topology contains lines, that is, 1-dimensional vector subspaces (the sets of all scalar multiples of a non-zero function).
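Writing the function-space example in symbols, the name "topology of pointwise convergence" comes from the equivalence:

```latex
f_i \to f \ \text{ in } \mathbb{R}^{\mathbb{R}} \text{ (product topology)}
\iff
f_i(x) \to f(x) \ \text{ in } \mathbb{R} \ \text{ for every } x \in \mathbb{R},
% i.e. convergence in the product topology is exactly pointwise convergence.
```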
Finite-dimensional spaces
By F. Riesz's theorem, a Hausdorff topological vector space is finite-dimensional if and only if it is locally compact, which happens if and only if it has a compact neighborhood of the origin.
Let K denote the field of real or complex numbers, endowed with its usual Hausdorff normed Euclidean topology. Let X be a vector space over K of finite dimension n, so that X is vector space isomorphic to K^n (explicitly, this means that there exists a linear isomorphism between the vector spaces X and K^n). This finite-dimensional vector space X always has a unique Hausdorff vector topology, which makes it TVS-isomorphic to K^n, where K^n is endowed with the usual Euclidean topology (which is the same as the product topology). This Hausdorff vector topology is also the (unique) finest vector topology on X. X has a unique vector topology if and only if its dimension is 0. If the dimension is 1, then although X does not have a unique vector topology, it does have a unique Hausdorff vector topology.
If X is 0-dimensional, then X = {0} has exactly one vector topology: the trivial topology, which in this case (and only in this case) is Hausdorff. The trivial topology on a vector space is Hausdorff if and only if the vector space has dimension 0.
If X is 1-dimensional, then X has two vector topologies: the usual Euclidean topology and the (non-Hausdorff) trivial topology.
Since the scalar field is itself a 1-dimensional topological vector space over itself, and since it plays an important role in the definition of topological vector spaces, this dichotomy plays an important role in the definition of an absorbing set and has consequences that reverberate throughout functional analysis.
If X has dimension at least 2, then X has infinitely many distinct vector topologies:
Some of these topologies are now described: Every linear functional on which is vector space isomorphic to induces a seminorm defined by where Every seminorm induces a (pseudometrizable locally convex) vector topology on and seminorms with distinct kernels induce distinct topologies so that in particular, seminorms on that are induced by linear functionals with distinct kernels will induce distinct vector topologies on
However, while there are infinitely many vector topologies on X when its dimension is at least 2, there are, up to TVS-isomorphism, only 1 + dim X vector topologies on X. For instance, if dim X = 2, then the vector topologies on X consist of the trivial topology, the Hausdorff Euclidean topology, and then the infinitely many remaining non-trivial non-Euclidean vector topologies on X, which are all TVS-isomorphic to one another.
Non-vector topologies
Discrete and cofinite topologies
If X is a non-trivial vector space (that is, of non-zero dimension) then the discrete topology on X (which is always metrizable) is not a TVS topology because, despite making addition and negation continuous (which makes it into a topological group under addition), it fails to make scalar multiplication continuous. The cofinite topology on X (where a subset is open if and only if its complement is finite) is also not a TVS topology on X.
Linear maps
A linear operator between two topological vector spaces which is continuous at one point is continuous on the whole domain. Moreover, a linear operator F is continuous if F(U) is bounded (as defined below) for some neighborhood U of the origin.
A hyperplane in a topological vector space is either dense or closed. A linear functional on a topological vector space has either dense or closed kernel. Moreover, is continuous if and only if its kernel is closed.
Types
Depending on the application additional constraints are usually enforced on the topological structure of the space. In fact, several principal results in functional analysis fail to hold in general for topological vector spaces: the closed graph theorem, the open mapping theorem, and the fact that the dual space of the space separates points in the space.
Below are some common topological vector spaces, roughly in order of increasing "niceness."
F-spaces are complete topological vector spaces with a translation-invariant metric. These include the L^p spaces for all p > 0.
Locally convex topological vector spaces: here each point has a local base consisting of convex sets. By a technique known as Minkowski functionals it can be shown that a space is locally convex if and only if its topology can be defined by a family of seminorms. Local convexity is the minimum requirement for "geometrical" arguments like the Hahn–Banach theorem. The L^p spaces are locally convex (in fact, Banach spaces) for all p ≥ 1, but not for 0 < p < 1.
Barrelled spaces: locally convex spaces where the Banach–Steinhaus theorem holds.
Bornological space: a locally convex space where the continuous linear operators to any locally convex space are exactly the bounded linear operators.
Stereotype space: a locally convex space satisfying a variant of reflexivity condition, where the dual space is endowed with the topology of uniform convergence on totally bounded sets.
Montel space: a barrelled space where every closed and bounded set is compact
Fréchet spaces: these are complete locally convex spaces where the topology comes from a translation-invariant metric, or equivalently, from a countable family of seminorms. Many interesting spaces of functions fall into this class; for example, the space of smooth real-valued functions on ℝ is a Fréchet space under the seminorms given by suprema of derivatives on compact sets. A locally convex F-space is a Fréchet space.
LF-spaces are limits of Fréchet spaces. ILH spaces are inverse limits of Hilbert spaces.
Nuclear spaces: these are locally convex spaces with the property that every bounded map from the nuclear space to an arbitrary Banach space is a nuclear operator.
Normed spaces and seminormed spaces: locally convex spaces where the topology can be described by a single norm or seminorm. In normed spaces a linear operator is continuous if and only if it is bounded.
Banach spaces: Complete normed vector spaces. Most of functional analysis is formulated for Banach spaces. This class includes the L^p spaces for 1 ≤ p ≤ ∞, the space of functions of bounded variation, and certain spaces of measures.
Reflexive Banach spaces: Banach spaces naturally isomorphic to their double dual (see below), which ensures that some geometrical arguments can be carried out. An important example of a space that is not reflexive is L^1, whose dual is L^∞, but which is strictly contained in the dual of L^∞.
Hilbert spaces: these have an inner product; even though these spaces may be infinite-dimensional, most geometrical reasoning familiar from finite dimensions can be carried out in them. These include L² spaces, the Sobolev spaces, and Hardy spaces.
Euclidean spaces: or with the topology induced by the standard inner product. As pointed out in the preceding section, for a given finite there is only one -dimensional topological vector space, up to isomorphism. It follows from this that any finite-dimensional subspace of a TVS is closed. A characterization of finite dimensionality is that a Hausdorff TVS is locally compact if and only if it is finite-dimensional (therefore isomorphic to some Euclidean space).
Dual space
Every topological vector space has a continuous dual space—the set of all continuous linear functionals, that is, continuous linear maps from the space into the base field A topology on the dual can be defined to be the coarsest topology such that the dual pairing each point evaluation is continuous. This turns the dual into a locally convex topological vector space. This topology is called the weak-* topology. This may not be the only natural topology on the dual space; for instance, the dual of a normed space has a natural norm defined on it. However, it is very important in applications because of its compactness properties (see Banach–Alaoglu theorem). Caution: Whenever is a non-normable locally convex space, then the pairing map is never continuous, no matter which vector space topology one chooses on A topological vector space has a non-trivial continuous dual space if and only if it has a proper convex neighborhood of the origin.
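Concretely, the coarsest such topology is the one generated by the point-evaluation seminorms; writing X′ for the continuous dual (notation assumed here, not fixed by the text above):

```latex
p_x(f) := |f(x)| \qquad (x \in X,\ f \in X'),
```

and the weak-* topology on X′ is the locally convex topology determined by the family of seminorms {p_x : x ∈ X}.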
Properties
For any subset S of a TVS X, the convex (resp. balanced, disked, closed convex, closed balanced, closed disked) hull of S is the smallest subset of X that has this property and contains S. The closure (respectively, interior, convex hull, balanced hull, disked hull) of a set S is sometimes denoted by cl S (respectively, Int S, co S, bal S, cobal S).
The convex hull of a subset S is equal to the set of all convex combinations of elements of S, that is, finite linear combinations of the form t_1 s_1 + ⋯ + t_n s_n, where n ≥ 1 is an integer, s_1, …, s_n ∈ S, and the non-negative coefficients t_1, …, t_n sum to 1. The intersection of any family of convex sets is convex, and the convex hull of a subset is equal to the intersection of all convex sets that contain it.
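In symbols, the description of the convex hull as the set of finite convex combinations reads (with co(S) denoting the convex hull, a standard notation assumed here):

```latex
\operatorname{co}(S)
= \Bigl\{\, t_1 s_1 + \cdots + t_n s_n \;:\;
  n \geq 1,\ s_i \in S,\ t_i \geq 0,\ t_1 + \cdots + t_n = 1 \,\Bigr\}.
```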
Neighborhoods and open sets
Properties of neighborhoods and open sets
Every TVS is connected and locally connected, and any connected open subset of a TVS is arcwise connected. If S ⊆ X and U is an open subset of X, then S + U is an open set in X, and if S has non-empty interior then S − S is a neighborhood of the origin.
The open convex subsets of a TVS (not necessarily Hausdorff or locally convex) are exactly those that are of the form for some and some positive continuous sublinear functional on
If B is an absorbing disk in a TVS X and if p is the Minkowski functional of B, then Int B ⊆ {x ∈ X : p(x) < 1} ⊆ B ⊆ {x ∈ X : p(x) ≤ 1} ⊆ cl B, where importantly, it was not assumed that B had any topological properties nor that p was continuous (which happens if and only if B is a neighborhood of the origin).
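As a numerical illustration (not part of the source text), the Minkowski functional p_B(x) = inf{t > 0 : x ∈ tB} of a concrete absorbing disk can be approximated by bisection; the function name, the membership-test interface, and the bracketing bound below are all illustrative choices:

```python
# Approximate the Minkowski functional p_B(x) = inf { t > 0 : x in t*B }
# of an absorbing disk B by bisection. `contains` is a membership test
# for B; `hi` is an (illustrative) upper bracket that must exceed p_B(x).
def minkowski_functional(contains, x, hi=1e6, tol=1e-9):
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # x lies in mid*B exactly when x/mid lies in B; since a disk is
        # star-shaped about the origin, this membership is monotone in mid.
        if mid > 0 and contains([xi / mid for xi in x]):
            hi = mid
        else:
            lo = mid
    return hi

# For the closed Euclidean unit ball, the Minkowski functional is the
# Euclidean norm, consistent with the chain of inclusions above.
unit_ball = lambda v: sum(t * t for t in v) <= 1.0
p = minkowski_functional(unit_ball, [3.0, 4.0])  # close to 5.0
```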
Let and be two vector topologies on Then if and only if whenever a net in converges in then in
Let be a neighborhood basis of the origin in let and let Then if and only if there exists a net in (indexed by ) such that in This shows, in particular, that it will often suffice to consider nets indexed by a neighborhood basis of the origin rather than nets on arbitrary directed sets.
If X is a TVS that is of the second category in itself (that is, a nonmeager space) then any closed convex absorbing subset of X is a neighborhood of the origin. This is no longer guaranteed if the set is not convex (a counter-example exists even in dimension 2) or if X is not of the second category in itself.
Interior
If S has non-empty interior then
and
The topological interior of a disk is not empty if and only if this interior contains the origin.
More generally, if C is a balanced set with non-empty interior in a TVS X, then {0} ∪ Int C will necessarily be balanced; consequently, Int C will be balanced if and only if it contains the origin. For this (that is, 0 ∈ Int C) to be true, it suffices for C to also be convex (in addition to being balanced and having non-empty interior).
The conclusion could be false if is not also convex; for example, in the interior of the closed and balanced set is
If is convex and then
Explicitly, this means that if is a convex subset of a TVS (not necessarily Hausdorff or locally convex), and then the open line segment joining and belongs to the interior of that is,
If is any balanced neighborhood of the origin in then where is the set of all scalars such that
If belongs to the interior of a convex set and then the half-open line segment and
If is a balanced neighborhood of in and then by considering intersections of the form (which are convex symmetric neighborhoods of in the real TVS ) it follows that: and furthermore, if then and if then
Non-Hausdorff spaces and the closure of the origin
A topological vector space is Hausdorff if and only if is a closed subset of or equivalently, if and only if Because is a vector subspace of the same is true of its closure which is referred to as in This vector space satisfies so that in particular, every neighborhood of the origin in contains the vector space as a subset.
The subspace topology on cl {0} is always the trivial topology, which in particular implies that the topological vector space cl {0} is a compact space (even if its dimension is non-zero or even infinite) and consequently is also a bounded subset of X. In fact, a vector subspace of a TVS is bounded if and only if it is contained in the closure of {0}.
Every subset of cl {0} also carries the trivial topology and so is itself a compact, and thus also complete, subspace (see footnote for a proof). In particular, if X is not Hausdorff then there exist subsets that are both compact and complete but not closed in X; for instance, this will be true of any non-empty proper subset of cl {0}.
If is compact, then and this set is compact. Thus the closure of a compact subset of a TVS is compact (said differently, all compact sets are relatively compact), which is not guaranteed for arbitrary non-Hausdorff topological spaces.
For every subset S ⊆ X, S + cl {0} ⊆ cl S; consequently, if S is open or closed in X then S + cl {0} = S (so that these open or closed subsets can be described as a "tube" whose vertical side is the vector space cl {0}).
For any subset of this TVS the following are equivalent:
is totally bounded.
is totally bounded.
is totally bounded.
The image of S under the canonical quotient map X → X / cl {0} is totally bounded.
If M is a vector subspace of a TVS X, then X / M is Hausdorff if and only if M is closed in X.
Moreover, the quotient map X → X / cl {0} is always a closed map onto the (necessarily) Hausdorff TVS X / cl {0}.
Every vector subspace of X that is an algebraic complement of cl {0} (that is, a vector subspace H that satisfies H ∩ cl {0} = {0} and H + cl {0} = X) is a topological complement of cl {0}.
Consequently, if H is an algebraic complement of cl {0} in X, then the addition map H × cl {0} → X, defined by (h, n) ↦ h + n, is a TVS-isomorphism, where H is necessarily Hausdorff and cl {0} has the indiscrete topology. Moreover, if C is a Hausdorff completion of H, then C × cl {0} is a completion of X.
Closed and compact sets
Compact and totally bounded sets
A subset of a TVS is compact if and only if it is complete and totally bounded. Thus, in a complete topological vector space, a closed and totally bounded subset is compact.
A subset S of a TVS is totally bounded if and only if cl S is totally bounded, if and only if its image under the canonical quotient map X → X / cl {0} is totally bounded.
Every relatively compact set is totally bounded and the closure of a totally bounded set is totally bounded.
The image of a totally bounded set under a uniformly continuous map (such as a continuous linear map for instance) is totally bounded.
If S is a subset of a TVS such that every sequence in S has a cluster point in S, then S is totally bounded.
If K is a compact subset of a TVS X and Ω is an open subset of X containing K, then there exists a neighborhood N of 0 such that K + N ⊆ Ω.
Closure and closed set
The closure of any convex (respectively, any balanced, any absorbing) subset of any TVS has this same property. In particular, the closure of any convex, balanced, and absorbing subset is a barrel.
The closure of a vector subspace of a TVS is a vector subspace. Every finite dimensional vector subspace of a Hausdorff TVS is closed.
The sum of a closed vector subspace and a finite-dimensional vector subspace is closed.
If is a vector subspace of and is a closed neighborhood of the origin in such that is closed in then is closed in
The sum of a compact set and a closed set is closed. However, the sum of two closed subsets may fail to be closed (see this footnote for examples).
If and is a scalar then where if is Hausdorff, then equality holds: In particular, every non-zero scalar multiple of a closed set is closed.
If and if is a set of scalars such that neither contain zero then
If then is convex.
If then and so consequently, if is closed then so is
If is a real TVS and then where the left hand side is independent of the topology on moreover, if is a convex neighborhood of the origin then equality holds.
For any subset S, cl S = ⋂ {S + N : N ∈ 𝒩}, where 𝒩 is any neighborhood basis at the origin for X.
However, and it is possible for this containment to be proper (for example, if and is the rational numbers). It follows that for every neighborhood of the origin in Closed hullsIn a locally convex space, convex hulls of bounded sets are bounded. This is not true for TVSs in general.
The closed convex hull of a set is equal to the closure of the convex hull of that set; that is, equal to
The closed balanced hull of a set is equal to the closure of the balanced hull of that set; that is, equal to
The closed disked hull of a set is equal to the closure of the disked hull of that set; that is, equal to
If and the closed convex hull of one of the sets or is compact then
If S and T each have a closed convex hull that is compact (that is, cl co S and cl co T are compact), then cl co (S ∪ T) = co (cl co S ∪ cl co T).
Hulls and compactness
In a general TVS, the closed convex hull of a compact set may fail to be compact.
The balanced hull of a compact (respectively, totally bounded) set has that same property.
The convex hull of a finite union of compact sets is again compact and convex.
Other properties
Meager, nowhere dense, and Baire
A disk in a TVS is not nowhere dense if and only if its closure is a neighborhood of the origin.
A vector subspace of a TVS that is closed but not open is nowhere dense.
Suppose is a TVS that does not carry the indiscrete topology. Then is a Baire space if and only if has no balanced absorbing nowhere dense subset.
A TVS is a Baire space if and only if is nonmeager, which happens if and only if there does not exist a nowhere dense set such that
Every nonmeager locally convex TVS is a barrelled space.
Important algebraic facts and common misconceptions
If S ⊆ X then 2S ⊆ S + S; if S is convex then equality holds. For an example where equality does not hold, let x be non-zero and set S = {−x, x}; S = {x, 2x} also works.
A subset S is convex if and only if (s + t)S = sS + tS for all positive real s and t, or equivalently, if and only if tS + (1 − t)S ⊆ S for all real 0 ≤ t ≤ 1.
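The characterization of convexity via positive scalars can be written out as follows (a standard statement, with the elided scalars restored):

```latex
S \ \text{is convex}
\quad \Longleftrightarrow \quad
sS + tS = (s + t)S \ \text{ for all real } s, t > 0.
```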
The convex balanced hull of a set is equal to the convex hull of the balanced hull of that is, it is equal to But in general, where the inclusion might be strict since the balanced hull of a convex set need not be convex (counter-examples exist even in ).
If and is a scalar then
If are convex non-empty disjoint sets and then or
In any non-trivial vector space X, there exist two disjoint non-empty convex subsets whose union is X.
Other properties
Every TVS topology can be generated by a family of F-seminorms.
If is some unary predicate (a true or false statement dependent on ) then for any
So for example, if P denotes such a predicate, then the corresponding identity holds; similarly for scalar multiples. The elements of these sets must range over a vector space (that is, over X) rather than just a subset, or else these equalities are no longer guaranteed; similarly, the translating element must belong to this vector space (that is, to X).
Properties preserved by set operators
The balanced hull of a compact (respectively, totally bounded, open) set has that same property.
The (Minkowski) sum of two compact (respectively, bounded, balanced, convex) sets has that same property. But the sum of two closed sets need not be closed.
The convex hull of a balanced (resp. open) set is balanced (respectively, open). However, the convex hull of a closed set need not be closed. And the convex hull of a bounded set need not be bounded.
In the following table, the color of each cell indicates whether or not a given property of subsets of X (indicated by the column name, "convex" for instance) is preserved under the set operator (indicated by the row's name, "closure" for instance). If, in every TVS, a property is preserved under the indicated set operator, then that cell is colored green; otherwise, it is colored red.
So for instance, since the union of two absorbing sets is again absorbing, the cell in row "" and column "Absorbing" is colored green. But since the arbitrary intersection of absorbing sets need not be absorbing, the cell in row "Arbitrary intersections (of at least 1 set)" and column "Absorbing" is colored red. If a cell is not colored then that information has yet to be filled in.
See also
Notes
Proofs
Citations
Bibliography
Further reading
External links
Articles containing proofs
Topology of function spaces
Topological spaces
Vector spaces
"Mathematics"
] | 7,185 | [
"Mathematical structures",
"Vector spaces",
"Topological vector spaces",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Articles containing proofs"
] |
Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being (hereinafter WMCF) is a book by George Lakoff, a cognitive linguist, and Rafael E. Núñez, a psychologist. Published in 2000, WMCF seeks to found a cognitive science of mathematics, a theory of embodied mathematics based on conceptual metaphor.
WMCF definition of mathematics
Mathematics makes up that part of the human conceptual system that is special in the following way:
It is precise, consistent, stable across time and human communities, symbolizable, calculable, generalizable, universally available, consistent within each of its subject matters, and effective as a general tool for description, explanation, and prediction in a vast number of everyday activities, [ranging from] sports, to building, business, technology, and science. - WMCF, pp. 50, 377
Nikolay Lobachevsky said "There is no branch of mathematics, however abstract, which may not some day be applied to phenomena of the real world." A common type of conceptual blending process would seem to apply to the entire mathematical procession.
Human cognition and mathematics
Lakoff and Núñez's avowed purpose is to begin laying the foundations for a truly scientific understanding of mathematics, one grounded in processes common to all human cognition. They find that four distinct but related processes metaphorically structure basic arithmetic: object collection, object construction, using a measuring stick, and moving along a path.
WMCF builds on earlier books by Lakoff (1987) and Lakoff and Johnson (1980, 1999), which analyze such concepts of metaphor and image schemata from second-generation cognitive science. Some of the concepts in these earlier books, such as the interesting technical ideas in Lakoff (1987), are absent from WMCF.
Lakoff and Núñez hold that mathematics results from the human cognitive apparatus and must therefore be understood in cognitive terms. WMCF advocates (and includes some examples of) a cognitive idea analysis of mathematics which analyzes mathematical ideas in terms of the human experiences, metaphors, generalizations, and other cognitive mechanisms giving rise to them. A standard mathematical education does not develop such idea analysis techniques because it does not pursue considerations of A) what structures of the mind allow it to do mathematics or B) the philosophy of mathematics.
Lakoff and Núñez start by reviewing the psychological literature, concluding that human beings appear to have an innate ability, called subitizing, to count, add, and subtract up to about 4 or 5. They document this conclusion by reviewing the literature, published in recent decades, describing experiments with infant subjects. For example, infants quickly become excited or curious when presented with "impossible" situations, such as having three toys appear when only two were initially present.
The authors argue that mathematics goes far beyond this very elementary level due to a large number of metaphorical constructions. For example, the Pythagorean position that all is number, and the associated crisis of confidence that came about with the discovery of the irrationality of the square root of two, arises solely from a metaphorical relation between the length of the diagonal of a square, and the possible numbers of objects.
Much of WMCF deals with the important concepts of infinity and of limit processes, seeking to explain how finite humans living in a finite world could ultimately conceive of the actual infinite. Thus much of WMCF is, in effect, a study of the epistemological foundations of the calculus. Lakoff and Núñez conclude that while the potential infinite is not metaphorical, the actual infinite is. Moreover, they deem all manifestations of actual infinity to be instances of what they call the "Basic Metaphor of Infinity", as represented by the ever-increasing sequence 1, 2, 3, ...
WMCF emphatically rejects the Platonistic philosophy of mathematics. The authors emphasize that all we know and can ever know is human mathematics, the mathematics arising from the human intellect. The question of whether there is a "transcendent" mathematics independent of human thought is a meaningless one, like asking whether colors are transcendent of human thought: colors are only varying wavelengths of light; it is our interpretation of physical stimuli that makes them colors.
WMCF (p. 81) likewise criticizes the emphasis mathematicians place on the concept of closure. Lakoff and Núñez argue that the expectation of closure is an artifact of the human mind's ability to relate fundamentally different concepts via metaphor.
WMCF concerns itself mainly with proposing and establishing an alternative view of mathematics, one grounding the field in the realities of human biology and experience. It is not a work of technical mathematics or philosophy. Lakoff and Núñez are not the first to argue that conventional approaches to the philosophy of mathematics are flawed. For example, they do not seem all that familiar with the content of Davis and Hersh (1981), even though the book warmly acknowledges Hersh's support.
Lakoff and Núñez cite Saunders Mac Lane (the inventor, with Samuel Eilenberg, of category theory) in support of their position. Mathematics, Form and Function (1986), an overview of mathematics intended for philosophers, proposes that mathematical concepts are ultimately grounded in ordinary human activities, mostly interactions with the physical world.
Examples of mathematical metaphors
Conceptual metaphors described in WMCF, in addition to the Basic Metaphor of Infinity, include:
Arithmetic is motion along a path, object collection/construction;
Change is motion;
Sets are containers, objects;
Continuity is gapless;
Mathematical systems have an "essence," namely their axiomatic algebraic structure;
Functions are sets of ordered pairs, curves in the Cartesian plane;
Geometric figures are objects in space;
Logical independence is geometric orthogonality;
Numbers are sets, object collections, physical segments, points on a line;
Recurrence is circular.
Mathematical reasoning requires variables ranging over some universe of discourse, so that we can reason about generalities rather than merely about particulars. WMCF argues that reasoning with such variables implicitly relies on what it terms the Fundamental Metonymy of Algebra.
Example of metaphorical ambiguity
WMCF (p. 151) includes the following example of what the authors term "metaphorical ambiguity." Take the set A = {{∅}, {∅, {∅}}}. Then recall two bits of standard terminology from elementary set theory:
The recursive construction of the ordinal natural numbers, whereby 0 is ∅, and n + 1 is n ∪ {n};
The ordered pair (a, b), defined as {{a}, {a, b}}.
By (1), A is the set {1,2}. But (1) and (2) together say that A is also the ordered pair (0,1). Both statements cannot be correct; the ordered pair (0,1) and the unordered pair {1,2} are fully distinct concepts. Lakoff and Johnson (1999) term this situation "metaphorically ambiguous." This simple example calls into question any Platonistic foundations for mathematics.
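The collision can be checked mechanically (an illustration, not from the book): encoding hereditarily finite sets as Python frozensets, the von Neumann naturals and the Kuratowski ordered pair literally produce the same set.

```python
# The "metaphorical ambiguity": under the von Neumann construction of the
# naturals and the Kuratowski definition of the ordered pair, the unordered
# pair {1, 2} and the ordered pair (0, 1) are the very same set.
def succ(n):
    # von Neumann successor: n + 1 = n ∪ {n}
    return frozenset(n | {n})

zero = frozenset()          # 0 = {}
one = succ(zero)            # 1 = {0}
two = succ(one)             # 2 = {0, 1}

def kuratowski_pair(a, b):
    # (a, b) = {{a}, {a, b}}
    return frozenset({frozenset({a}), frozenset({a, b})})

A = frozenset({one, two})                 # the unordered pair {1, 2}
same = (A == kuratowski_pair(zero, one))  # A is also the ordered pair (0, 1)
```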
While (1) and (2) above are admittedly canonical, especially within the consensus set theory known as the Zermelo–Fraenkel axiomatization, WMCF does not let on that they are but one of several definitions that have been proposed since the dawning of set theory. For example, Frege, Principia Mathematica, and New Foundations (a body of axiomatic set theory begun by Quine in 1937) define cardinals and ordinals as equivalence classes under the relations of equinumerosity and similarity, so that this conundrum does not arise. In Quinian set theory, A is simply an instance of the number 2. For technical reasons, defining the ordered pair as in (2) above is awkward in Quinian set theory. Two solutions have been proposed:
A variant set-theoretic definition of the ordered pair more complicated than the usual one;
Taking ordered pairs as primitive.
The Romance of Mathematics
The "Romance of Mathematics" is WMCFs light-hearted term for a perennial philosophical viewpoint about mathematics which the authors describe and then dismiss as an intellectual myth:
Mathematics is transcendent, namely it exists independently of human beings, and structures our actual physical universe and any possible universe. Mathematics is the language of nature, and is the primary conceptual structure we would have in common with extraterrestrial aliens, if any such there be.
Mathematical proof is the gateway to a realm of transcendent truth.
Reasoning is logic, and logic is essentially mathematical. Hence mathematics structures all possible reasoning.
Because mathematics exists independently of human beings, and reasoning is essentially mathematical, reason itself is disembodied. Therefore, artificial intelligence is possible, at least in principle.
It is very much an open question whether WMCF will eventually prove to be the start of a new school in the philosophy of mathematics. Hence the main value of WMCF so far may be a critical one: its critique of Platonism and romanticism in mathematics.
Critical response
Many working mathematicians resist the approach and conclusions of Lakoff and Núñez. Reviews of WMCF by mathematicians in professional journals, while often respectful of its focus on conceptual strategies and metaphors as paths for understanding mathematics, have taken exception to some of WMCF's philosophical arguments on the grounds that mathematical statements have lasting 'objective' meanings. For example, Fermat's Last Theorem means exactly what it meant when Fermat initially proposed it around 1637. Other reviewers have pointed out that multiple conceptual strategies can be employed in connection with the same mathematically defined term, often by the same person (a point that is compatible with the view that we routinely understand the 'same' concept with different metaphors). The metaphor and the conceptual strategy are not the same as the formal definition which mathematicians employ. However, WMCF points out that formal definitions are built using words and symbols that have meaning only in terms of human experience.
Critiques of WMCF include the humorous:
and the physically informed:
Lakoff and Núñez tend to dismiss the negative opinions mathematicians have expressed about WMCF, because their critics do not appreciate the insights of cognitive science. Lakoff and Núñez maintain that their argument can only be understood using the discoveries of recent decades about the way human brains process language and meaning. They argue that any arguments or criticisms that are not grounded in this understanding cannot address the content of the book.
It has been pointed out that it is not at all clear that WMCF establishes that the claim "intelligent alien life would have mathematical ability" is a myth. To do this, it would be required to show that intelligence and mathematical ability are separable, and this has not been done. On Earth, intelligence and mathematical ability seem to go hand in hand in all life-forms, as pointed out by Keith Devlin among others. The authors of WMCF have not explained how this situation would (or even could) be different anywhere else.
Lakoff and Núñez also appear not to appreciate the extent to which intuitionists and constructivists have presaged their attack on the Romance of (Platonic) Mathematics. Brouwer, the founder of the intuitionist/constructivist point of view, in his dissertation On the Foundation of Mathematics, argued that mathematics was a mental construction, a free creation of the mind and totally independent of logic and language. He goes on to criticize the formalists for building verbal structures that are studied without intuitive interpretation. Symbolic language should not be confused with mathematics; it reflects, but does not contain, mathematical reality.
Educators have taken some interest in what WMCF suggests about how mathematics is learned, and why students find some elementary concepts more difficult than others.
However, even from an educational perspective, WMCF is still problematic. From the conceptual metaphor theory's point of view, metaphors reside in a different realm, the abstract, from that of the 'real world', the concrete. In other words, despite the claim that mathematics is human, established mathematical knowledge — which is what we learn in school — is assumed to be and treated as abstract, completely detached from its physical origin. It cannot account for the way learners could access such knowledge.
WMCF is also criticized for its monist approach. First, it ignores the fact that the sensori-motor experience upon which our linguistic structure — thus, mathematics — is assumed to be based may vary across cultures and situations. Second, the mathematics with which WMCF is concerned is "almost entirely... standard utterances in textbooks and curricula", the most well-established body of knowledge. It neglects the dynamic and diverse nature of the history of mathematics.
WMCF's logo-centric approach is another target for critics. While it is predominantly interested in the association between language and mathematics, it does not account for how non-linguistic factors contribute to the emergence of mathematical ideas (e.g. See Radford, 2009; Rotman, 2008).
Summing up
WMCF (pp. 378–79) concludes with some key points, a number of which follow. Mathematics arises from our bodies and brains, our everyday experiences, and the concerns of human societies and cultures. It is:
The result of normal adult cognitive capacities, in particular the capacity for conceptual metaphor, and as such is a human universal. The ability to construct conceptual metaphors is neurologically based, and enables humans to reason about one domain using the language and concepts of another domain. Conceptual metaphor is both what enabled mathematics to grow out of everyday activities, and what enables mathematics to grow by a continual process of analogy and abstraction;
Symbolic, thereby enormously facilitating precise calculation;
Not transcendent, but the result of human evolution and culture, to which it owes its effectiveness. During experience of the world a connection to mathematical ideas is going on within the human mind;
A system of human concepts making extraordinary use of the ordinary tools of human cognition;
An open-ended creation of human beings, who remain responsible for maintaining and extending it;
One of the greatest products of the collective human imagination, and a magnificent example of the beauty, richness, complexity, diversity, and importance of human ideas.
The cognitive approach to formal systems, as described and implemented in WMCF, need not be confined to mathematics, but should also prove fruitful when applied to formal logic, and to formal philosophy such as Edward Zalta's theory of abstract objects. Lakoff and Johnson (1999) fruitfully employ the cognitive approach to rethink a good deal of the philosophy of mind, epistemology, metaphysics, and the history of ideas.
See also
Abstract object
Cognitive science
Cognitive science of mathematics
Conceptual metaphor
Embodied philosophy
Foundations of mathematics
From Action to Mathematics per Mac Lane
Metaphor
Philosophy of mathematics
The Unreasonable Effectiveness of Mathematics in the Natural Sciences
Footnotes
References
Davis, Philip J., and Reuben Hersh, 1999 (1981). The Mathematical Experience. Mariner Books. First published by Houghton Mifflin.
George Lakoff, 1987. Women, Fire and Dangerous Things. Univ. of Chicago Press.
------ and Mark Johnson, 1999. Philosophy in the Flesh. Basic Books.
George Lakoff and Rafael Núñez, 2000, Where Mathematics Comes From. Basic Books.
John Randolph Lucas, 2000. The Conceptual Roots of Mathematics. Routledge.
Saunders Mac Lane, 1986. Mathematics: Form and Function. Springer Verlag.
External links
WMCF web site.
Reviews of WMCF.
Joseph Auslander in American Scientist;
Bonnie Gold, MAA Reviews 2001
Lakoff's response to Gold's MAA review.
Books about philosophy of mathematics
Infinity
Linguistics books
2000 non-fiction books
Mathematics books
Books about metaphors
Cognitive science literature
Pyrite

The mineral pyrite, or iron pyrite, also known as fool's gold, is an iron sulfide with the chemical formula FeS2 (iron(II) disulfide). Pyrite is the most abundant sulfide mineral.
Pyrite's metallic luster and pale brass-yellow hue give it a superficial resemblance to gold, hence the well-known nickname of fool's gold. The color has also led to the nicknames brass, brazzle, and brazil, primarily used to refer to pyrite found in coal.
The name pyrite is derived from the Greek πυρίτης λίθος (pyritēs lithos), 'stone or mineral which strikes fire', in turn from πῦρ (pyr), 'fire'. In ancient Roman times, this name was applied to several types of stone that would create sparks when struck against steel; Pliny the Elder described one of them as being brassy, almost certainly a reference to what is now called pyrite.
By Georgius Agricola's time, around 1550, the term had become a generic term for all of the sulfide minerals.
Pyrite is usually found associated with other sulfides or oxides in quartz veins, sedimentary rock, and metamorphic rock, as well as in coal beds and as a replacement mineral in fossils, but has also been identified in the sclerites of scaly-foot gastropods. Despite being nicknamed "fool's gold", pyrite is sometimes found in association with small quantities of gold. A substantial proportion of the gold is "invisible gold" incorporated into the pyrite (see Carlin-type gold deposit). It has been suggested that the presence of both gold and arsenic is a case of coupled substitution but as of 1997 the chemical state of the gold remained controversial.
Uses
Pyrite gained a brief popularity in the 16th and 17th centuries as a source of ignition in early firearms, most notably the wheellock, where a sample of pyrite was placed against a circular file to strike the sparks needed to fire the gun.
Pyrite is used with flintstone and a form of tinder made of stringybark by the Kaurna people of South Australia, as a traditional method of starting fires.
Pyrite has been used since classical times to manufacture copperas (ferrous sulfate). Iron pyrite was heaped up and allowed to weather (an example of an early form of heap leaching). The acidic runoff from the heap was then boiled with iron to produce iron sulfate. In the 15th century, new methods of such leaching began to replace the burning of sulfur as a source of sulfuric acid. By the 19th century, it had become the dominant method.
Pyrite remains in commercial use for the production of sulfur dioxide, for use in such applications as the paper industry, and in the manufacture of sulfuric acid. Thermal decomposition of pyrite into FeS (iron(II) sulfide) and elemental sulfur begins at elevated temperatures.
A newer commercial use for pyrite is as the cathode material in Energizer brand non-rechargeable lithium metal batteries (Energizer Ultimate Lithium).
Pyrite is a semiconductor material with a band gap of 0.95 eV. Pure pyrite is naturally n-type, in both crystal and thin-film forms, potentially due to sulfur vacancies in the pyrite crystal structure acting as n-dopants.
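A quick consequence of the 0.95 eV band gap can be sketched numerically: the longest photon wavelength that can excite an electron across the gap follows from E = hc/λ. This is a minimal illustration, not from the source text; only the 0.95 eV figure comes from the article.

```python
# Sketch: absorption-edge wavelength implied by pyrite's ~0.95 eV band gap.
# Photons with wavelength shorter than this edge can excite carriers across the gap.
PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Wavelength (nm) corresponding to a given band gap energy."""
    return PLANCK_EV_NM / band_gap_ev

edge = absorption_edge_nm(0.95)
print(f"Pyrite absorption edge: {edge:.0f} nm")  # ~1305 nm, in the near-infrared
```

The result, roughly 1.3 µm, shows why pyrite can in principle absorb most of the solar spectrum, which is relevant to the photovoltaic applications discussed below.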
During the early years of the 20th century, pyrite was used as a mineral detector in radio receivers, and is still used by crystal radio hobbyists. Until the vacuum tube matured, the crystal detector was the most sensitive and dependable detector available—with considerable variation between mineral types and even individual samples within a particular type of mineral. Pyrite detectors occupied a midway point between galena detectors and the more mechanically complicated perikon mineral pairs. Pyrite detectors can be as sensitive as a modern 1N34A germanium diode detector.
Pyrite has been proposed as an abundant, non-toxic, inexpensive material in low-cost photovoltaic solar panels. Synthetic iron sulfide was used with copper sulfide to create the photovoltaic material. More recent efforts are working toward thin-film solar cells made entirely of pyrite.
Pyrite is used to make marcasite jewelry. Marcasite jewelry, using small faceted pieces of pyrite, often set in silver, has been made since ancient times and was popular in the Victorian era. At the time when the term became common in jewelry making, "marcasite" referred to all iron sulfides including pyrite, and not to the orthorhombic FeS2 mineral marcasite, which is lighter in color, brittle, and chemically unstable, and thus not suitable for jewelry making. Marcasite jewelry does not actually contain the mineral marcasite. Good-quality pyrite crystals are also used in decoration and are very popular among mineral collectors. Among the sites that provide the best specimens are the Soria and La Rioja provinces of Spain.
In value terms, China ($47 million) constitutes the largest market for imported unroasted iron pyrites worldwide, making up 65% of global imports. China is also the fastest growing in terms of the unroasted iron pyrites imports, with a CAGR of +27.8% from 2007 to 2016.
Research
In July 2020 scientists reported that they have observed a voltage-induced transformation of normally diamagnetic pyrite into a ferromagnetic material, which may lead to applications in devices such as solar cells or magnetic data storage.
Researchers at Trinity College Dublin, Ireland have demonstrated that FeS2 can be exfoliated into few-layer sheets, just like other two-dimensional layered materials such as graphene, via a simple liquid-phase exfoliation route. This is the first study to demonstrate the production of non-layered 2D platelets from 3D bulk FeS2. Furthermore, they used these 2D platelets with 20% single-walled carbon nanotubes as an anode material in lithium-ion batteries, reaching a capacity of 1000 mAh/g, close to the theoretical capacity of FeS2.
In 2021, natural pyrite was crushed and pre-treated, then exfoliated in the liquid phase into two-dimensional nanosheets, which showed capacities of 1200 mAh/g as an anode in lithium-ion batteries.
Formal oxidation states for pyrite, marcasite, molybdenite and arsenopyrite
From the perspective of classical inorganic chemistry, which assigns formal oxidation states to each atom, pyrite and marcasite are probably best described as Fe2+[S2]2−. This formalism recognizes that the sulfur atoms in pyrite occur in pairs with clear S–S bonds. These persulfide [–S–S–] units can be viewed as derived from hydrogen disulfide, H2S2. Thus pyrite would be more descriptively called iron persulfide, not iron disulfide. In contrast, molybdenite, MoS2, features isolated sulfide S2− centers and the oxidation state of molybdenum is Mo4+. The mineral arsenopyrite has the formula FeAsS. Whereas pyrite has [S2]2– units, arsenopyrite has [AsS]3– units, formally derived from deprotonation of arsenothiol (H2AsSH). Analysis of classical oxidation states would recommend the description of arsenopyrite as Fe3+[AsS]3−.
Crystallography
Iron pyrite FeS2 represents the prototype compound of the crystallographic pyrite structure. The structure is cubic and was among the first crystal structures solved by X-ray diffraction. It belongs to the crystallographic space group Pa3 (No. 205) and is denoted by the Strukturbericht notation C2. Under thermodynamic standard conditions the lattice constant of stoichiometric iron pyrite FeS2 is approximately 5.42 Å. The unit cell is composed of an Fe face-centered cubic sublattice into which the disulfide ions are embedded. (Note though that the iron atoms in the faces are not equivalent by translation alone to the iron atoms at the corners.) The pyrite structure is also seen in other MX2 compounds of transition metals M and chalcogens X = O, S, Se and Te. Certain dipnictides, with X standing for P, As and Sb, are also known to adopt the pyrite structure.
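The cubic cell described above fixes pyrite's theoretical density. As a sketch, taking the commonly quoted lattice constant a = 5.417 Å and Z = 4 formula units per cell (standard values for pyrite, assumed here rather than taken from the text), the X-ray density can be computed directly:

```python
# Sketch: theoretical (X-ray) density of pyrite from its cubic unit cell.
# Assumed values (not from the text): a = 5.417 Angstrom, Z = 4 FeS2 per cell.
N_A = 6.02214076e23          # Avogadro's number, 1/mol
M_FE, M_S = 55.845, 32.065   # molar masses, g/mol
M_FES2 = M_FE + 2 * M_S      # ~119.98 g/mol

a_cm = 5.417e-8              # lattice constant converted to cm
Z = 4                        # formula units per unit cell

density = Z * M_FES2 / (N_A * a_cm**3)
print(f"Calculated density: {density:.2f} g/cm^3")  # ~5.01, matching measured pyrite
```

The result, about 5.0 g/cm³, agrees with the measured density of pyrite, a useful consistency check on the assumed cell parameters.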
The Fe atoms are bonded to six S atoms, giving a distorted octahedron. The material is a semiconductor. The Fe ions are usually considered to be low spin divalent state (as shown by Mössbauer spectroscopy as well as XPS). The material as a whole behaves as a Van Vleck paramagnet, despite its low-spin divalency.
The sulfur centers occur in pairs, described as [S2]2−. Reduction of pyrite with potassium gives potassium dithioferrate, KFeS2. This material features ferric ions and isolated sulfide (S2−) centers.
The S atoms are tetrahedral, being bonded to three Fe centers and one other S atom. The site symmetry at Fe and S positions is accounted for by point symmetry groups C3i and C3, respectively. The missing center of inversion at S lattice sites has important consequences for the crystallographic and physical properties of iron pyrite. These consequences derive from the crystal electric field active at the sulfur lattice site, which causes a polarization of S ions in the pyrite lattice. The polarisation can be calculated on the basis of higher-order Madelung constants and has to be included in the calculation of the lattice energy by using a generalised Born–Haber cycle. This reflects the fact that the covalent bond in the sulfur pair is inadequately accounted for by a strictly ionic treatment.
Arsenopyrite has a related structure with heteroatomic As–S pairs rather than S–S pairs. Marcasite also possesses homoatomic anion pairs, but the arrangement of the metal and diatomic anions differs from that of pyrite. Despite its name, chalcopyrite (CuFeS2) does not contain dianion pairs, but single S2− sulfide anions.
Crystal habit
Pyrite usually forms cuboid crystals, sometimes forming in close association to form raspberry-shaped masses called framboids. However, under certain circumstances, it can form anastomosing filaments or T-shaped crystals.
Pyrite can also form shapes almost the same as a regular dodecahedron, known as pyritohedra, and this suggests an explanation for the artificial geometrical models found in Europe as early as the 5th century BC.
Varieties
Cattierite (CoS2), vaesite (NiS2) and hauerite (MnS2), as well as sperrylite (PtAs2) are similar in their structure and belong also to the pyrite group.
Bravoite is a nickel–cobalt-bearing variety of pyrite, with >50% substitution of Ni2+ for Fe2+. It is not a formally recognised mineral, and is named after the Peruvian scientist Jose J. Bravo (1874–1928).
Distinguishing similar minerals
Pyrite is distinguishable from native gold by its hardness, brittleness and crystal form. Pyrite fractures are very uneven, sometimes conchoidal because it does not cleave along a preferential plane. Native gold nuggets, or glitters, do not break but deform in a ductile way. Pyrite is brittle, gold is malleable.
Natural gold tends to be anhedral (irregularly shaped without well-defined faces), whereas pyrite comes as either cubes or multifaceted crystals with well-developed, sharp faces that are easy to recognise. Well-crystallised pyrite crystals are euhedral (i.e., with well-formed faces). Pyrite can often be distinguished by the striations which, in many cases, can be seen on its surface. Chalcopyrite (CuFeS2) is brighter yellow with a greenish hue when wet and is softer (3.5–4 on Mohs' scale). Arsenopyrite (FeAsS) is silver white and does not become more yellow when wet.
Hazards
Iron pyrite is unstable when exposed to the oxidizing conditions prevailing at the Earth's surface: iron pyrite in contact with atmospheric oxygen and water, or damp, ultimately decomposes into iron oxyhydroxides (ferrihydrite, FeO(OH)) and sulfuric acid (H2SO4). This process is accelerated by the action of Acidithiobacillus bacteria, which oxidize pyrite to first produce ferrous ions (Fe2+), sulfate ions (SO42−), and protons (H+, or H3O+). In a second step, the ferrous ions (Fe2+) are oxidized by O2 into ferric ions (Fe3+), which hydrolyze, also releasing H+ ions and producing FeO(OH). These oxidation reactions occur more rapidly when pyrite is finely dispersed (framboidal crystals initially formed by sulfate-reducing bacteria (SRB) in argillaceous sediments, or dust from mining operations).
Pyrite oxidation and acid mine drainage
Pyrite oxidation by atmospheric oxygen (O2) in the presence of moisture (H2O) initially produces ferrous ions (Fe2+) and sulfuric acid, which dissociates into sulfate ions and protons, leading to acid mine drainage (AMD). An example of acid rock drainage caused by pyrite is the 2015 Gold King Mine waste water spill.
2 FeS2(s) + 7 O2(g) + 2 H2O(l) → 2 Fe2+(aq) + 4 SO42−(aq) + 4 H+(aq)
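The acid mine drainage reaction above can be checked mechanically for mass and charge balance. The following is an illustrative sketch (the species and coefficients come from the reaction in the text; the representation as dictionaries is ours):

```python
from collections import Counter

# Sketch: verify element and charge balance of the pyrite oxidation reaction
# 2 FeS2 + 7 O2 + 2 H2O -> 2 Fe(2+) + 4 SO4(2-) + 4 H(+).
# Each species is written as (coefficient, element counts, ionic charge).

def side_totals(species):
    """Sum element counts and total charge over one side of the reaction."""
    elements, charge = Counter(), 0
    for coeff, elems, q in species:
        for el, n in elems.items():
            elements[el] += coeff * n
        charge += coeff * q
    return elements, charge

reactants = [(2, {"Fe": 1, "S": 2}, 0),        # FeS2
             (7, {"O": 2}, 0),                  # O2
             (2, {"H": 2, "O": 1}, 0)]          # H2O
products  = [(2, {"Fe": 1}, +2),                # Fe(2+)
             (4, {"S": 1, "O": 4}, -2),         # SO4(2-)
             (4, {"H": 1}, +1)]                 # H(+)

assert side_totals(reactants) == side_totals(products)
print("Reaction is balanced in mass and charge.")
```

Both sides carry 2 Fe, 4 S, 16 O, 4 H and zero net charge, confirming the stoichiometry.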
Dust explosions
Pyrite oxidation is sufficiently exothermic that underground coal mines in high-sulfur coal seams have occasionally had serious problems with spontaneous combustion. The solution is the use of buffer blasting and the use of various sealing or cladding agents to hermetically seal the mined-out areas to exclude oxygen.
In modern coal mines, limestone dust is sprayed onto the exposed coal surfaces to reduce the hazard of dust explosions. This has the secondary benefit of neutralizing the acid released by pyrite oxidation and therefore slowing the oxidation cycle described above, thus reducing the likelihood of spontaneous combustion. In the long term, however, oxidation continues, and the hydrated sulfates formed may exert crystallization pressure that can expand cracks in the rock and lead eventually to roof fall.
Weakened building materials
Building stone containing pyrite tends to stain brown as pyrite oxidizes. This problem appears to be significantly worse if any marcasite is present. The presence of pyrite in the aggregate used to make concrete can lead to severe deterioration as pyrite oxidizes. In early 2009, problems with Chinese drywall imported into the United States after Hurricane Katrina were attributed to pyrite oxidation, followed by microbial sulfate reduction which released hydrogen sulfide gas (). These problems included a foul odor and corrosion of copper wiring. In the United States, in Canada, and more recently in Ireland, where it was used as underfloor infill, pyrite contamination has caused major structural damage. Concrete exposed to sulfate ions, or sulfuric acid, degrades by sulfate attack: the formation of expansive mineral phases, such as ettringite (small needle crystals exerting a huge crystallization pressure inside the concrete pores) and gypsum creates inner tensile forces in the concrete matrix which destroy the hardened cement paste, form cracks and fissures in concrete, and can lead to the ultimate ruin of the structure. Normalized tests for construction aggregate certify such materials as free of pyrite or marcasite.
Occurrence
Pyrite is the most common of sulfide minerals and is widespread in igneous, metamorphic, and sedimentary rocks. It is a common accessory mineral in igneous rocks, where it also occasionally occurs as larger masses arising from an immiscible sulfide phase in the original magma. It is found in metamorphic rocks as a product of contact metamorphism. It also forms as a high-temperature hydrothermal mineral, though it occasionally forms at lower temperatures.
Pyrite occurs both as a primary mineral, present in the original sediments, and as a secondary mineral, deposited during diagenesis. Pyrite and marcasite commonly occur as replacement pseudomorphs after fossils in black shale and other sedimentary rocks formed under reducing environmental conditions. Pyrite is common as an accessory mineral in shale, where it is formed by precipitation from anoxic seawater, and coal beds often contain significant pyrite.
Notable deposits are found as lenticular masses in Virginia, U.S., and in smaller quantities in many other locations. Large deposits are mined at Rio Tinto in Spain and elsewhere in the Iberian Peninsula.
Cultural beliefs
In the beliefs of the Thai people (especially those in the south), pyrite is known as Khao tok Phra Ruang, Khao khon bat Phra Ruang (ข้าวตอกพระร่วง, ข้าวก้นบาตรพระร่วง) or Phet na tang, Hin na tang (เพชรหน้าทั่ง, หินหน้าทั่ง). It is believed to be a sacred item that has the power to prevent evil, black magic or demons.
Images
See also
Iron–sulfur world hypothesis
Sulfur isotope biogeochemistry
References
Further reading
American Geological Institute, 2003, Dictionary of Mining, Mineral, and Related Terms, 2nd ed., Springer, New York, .
David Rickard, Pyrite: A Natural History of Fool's Gold, Oxford, New York, 2015, .
External links
Pyrite.Virtual Museum of Mineralogy. Universidad de Zaragoza, Spain
Educational article about the famous pyrite crystals from the Navajun Mine
How Minerals Form and Change "Pyrite oxidation under room conditions".
Disulfides
Fire making
Pyrite group
Iron(II) minerals
Cubic minerals
Minerals in space group 205
Sulfide minerals
Alchemical substances
Semiconductor materials
Transition metal dichalcogenides
Blendes
Genotype–phenotype distinction

The genotype–phenotype distinction is drawn in genetics. The "genotype" is an organism's full hereditary information. The "phenotype" is an organism's actual observed properties, such as morphology, development, or behavior. This distinction is fundamental in the study of inheritance of traits and their evolution.
Overview
The terms "genotype" and "phenotype" were created by Wilhelm Johannsen in 1911, although the meaning of the terms and the significance of the distinction have evolved since they were introduced.
It is the organism's physical properties that directly determine its chances of survival and reproductive output, but the inheritance of physical properties is dependent on the inheritance of genes. Therefore, understanding the theory of evolution via natural selection requires understanding the genotype–phenotype distinction. The genes contribute to a trait, and the phenotype is the observable manifestation of the genes (and therefore the genotype that affects the trait). If a white mouse had recessive genes that caused the genes responsible for color to be inactive, its genotype would be responsible for its phenotype (the white color).
The mapping of a set of genotypes to a set of phenotypes is sometimes referred to as the genotype–phenotype map.
An organism's genotype is a major influencing factor (by far the largest for morphology) in the development of its phenotype, but it is not the only one. Even two organisms with identical genotypes may differ in their phenotypes, due to phenotypic plasticity. To what extent a particular genotype influences a phenotype depends on the relative dominance, penetrance, and expressivity of the alleles in question.
One experiences this in everyday life with monozygotic (i.e. identical) twins. Identical twins share the same genotype, since their genomes are identical; but they never have exactly the same phenotype, although their phenotypes may be very similar. This is apparent in the fact that close relations can usually tell them apart, even though others might not be able to see the subtle differences. Further, identical twins can be distinguished by their fingerprints, which are never completely identical.
Phenotypic plasticity
Phenotypic plasticity describes the degree to which an organism's phenotype is shaped by its environment rather than determined by its genotype alone. A high level of plasticity means that environmental factors have a strong influence on the particular phenotype that develops. If there is little plasticity, the phenotype of an organism can be reliably predicted from knowledge of the genotype, regardless of environmental peculiarities during development. An example of high plasticity can be observed in larval newts: when these larvae sense the presence of predators such as dragonflies, they develop larger heads and tails relative to their body size and display darker pigmentation. Larvae with these traits have a higher chance of survival when exposed to the predators, but grow more slowly than other phenotypes.
Genetic canalization
In contrast to phenotypic plasticity, the concept of genetic canalization addresses the extent to which an organism's phenotype allows conclusions about its genotype. A phenotype is said to be canalized if mutations (changes in the genome) do not noticeably affect the physical properties of the organism. This means that a canalized phenotype may form from a large variety of different genotypes, in which case it is not possible to exactly predict the genotype from knowledge of the phenotype (i.e. the genotype–phenotype map is not invertible). If canalization is not present, small changes in the genome have an immediate effect on the phenotype that develops.
Importance to evolutionary biology
According to Lewontin, the theoretical task for population genetics is to model a process in two spaces: a "genotypic space" and a "phenotypic space". The challenge of a complete theory of population genetics is to provide a set of laws that predictably map a population of genotypes (G1) to a phenotype space (P1), where selection takes place, and another set of laws that map the resulting population (P2) back to genotype space (G2), where Mendelian genetics can predict the next generation of genotypes, thus completing the cycle. Even if non-Mendelian aspects of molecular genetics are ignored, this is a gargantuan task. Visualizing the transformation schematically:
G1 --T1--> P1 --T2--> P2 --T3--> G2 --T4--> G1'

(adapted from Lewontin 1974, p. 12). T1 represents the genetic and epigenetic laws, the aspects of functional biology, or development, that transform a genotype into a phenotype. This is the "genotype–phenotype map". T2 is the transformation due to natural selection, T3 are the epigenetic relations that predict genotypes based on the selected phenotypes, and finally T4 the rules of Mendelian genetics.
In practice, there are two bodies of evolutionary theory that exist in parallel: traditional population genetics, operating in the genotype space, and the biometric theory used in plant and animal breeding, operating in phenotype space. The missing part is the mapping between the genotype and phenotype spaces. This leads to a "sleight of hand" (as Lewontin terms it) whereby variables in the equations of one domain are considered parameters or constants, where, in a full treatment, they would themselves be transformed by the evolutionary process and are functions of the state variables in the other domain. The "sleight of hand" is assuming that the mapping is known. Proceeding as if it is understood is enough to analyze many cases of interest. For example, if the phenotype is almost one-to-one with genotype (sickle-cell disease) or the time-scale is sufficiently short, the "constants" can be treated as such; however, there are also many situations where that assumption does not hold.
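The simple case noted above, where the phenotype is nearly one-to-one with the genotype, can be sketched as a toy one-locus model of Lewontin's cycle. This is an illustrative construction, not from Lewontin: T1 is taken to be a one-to-one genotype-to-phenotype map, selection (T2) acts via fitness values, and random mating closes the loop back to genotype space (T3/T4).

```python
# Illustrative sketch: one turn of the genotype -> phenotype -> genotype cycle
# for a single locus with alleles A/a under random mating and viability selection.
def next_allele_freq(p, w_AA, w_Aa, w_aa):
    """Standard one-locus selection update for the frequency p of allele A."""
    q = 1 - p
    # Hardy-Weinberg genotype frequencies (genotype space, G1)
    f = {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}
    # T2: selection acts on the (here one-to-one) phenotypes via fitnesses w
    w_bar = f["AA"] * w_AA + f["Aa"] * w_Aa + f["aa"] * w_aa
    # T3/T4: selected phenotypes mapped back to next-generation allele frequency
    return (f["AA"] * w_AA + 0.5 * f["Aa"] * w_Aa) / w_bar

p = 0.10
for _ in range(50):  # iterate the full cycle for 50 generations
    p = next_allele_freq(p, w_AA=1.0, w_Aa=1.0, w_aa=0.8)
print(f"A-allele frequency after 50 generations: {p:.3f}")
```

With the recessive aa phenotype at a fitness disadvantage, the A allele rises in frequency generation by generation; in a full treatment, the fitness "constants" would themselves depend on the evolving state, as Lewontin's critique emphasizes.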
References
External links
Stanford Encyclopedia of Philosophy entry
"Wilhelm Johannsen's Genotype-Phenotype Distinction" at the Embryo Project Encyclopedia
Genetics
Biomimetics

Biomimetics or biomimicry is the emulation of the models, systems, and elements of nature for the purpose of solving complex human problems. The terms "biomimetics" and "biomimicry" are derived from the Greek βίος (bios), life, and μίμησις (mīmēsis), imitation, from μιμεῖσθαι (mīmeisthai), to imitate, from μῖμος (mimos), actor. A closely related field is bionics.
Nature has evolved over the 3.8 billion years since life is estimated to have appeared on the Earth, producing high-performance species from commonly found materials. Surfaces of solids interact with other surfaces and with the environment, and these interactions determine many of the properties of materials. Biological materials are highly organized from the molecular to the nano-, micro-, and macroscales, often in a hierarchical manner with intricate nanoarchitecture that ultimately makes up a myriad of different functional elements. The properties of materials and surfaces result from a complex interplay between surface structure and morphology and physical and chemical properties. Many materials, surfaces, and objects in general provide multifunctionality.
Various materials, structures, and devices have been fabricated for commercial interest by engineers, material scientists, chemists, and biologists, and for beauty, structure, and design by artists and architects. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance, hydrophobicity, self-assembly, and harnessing solar energy. Economic impact of bioinspired materials and surfaces is significant, on the order of several hundred billion dollars per year worldwide.
History
One of the early examples of biomimicry was the study of birds to enable human flight. Although never successful in creating a "flying machine", Leonardo da Vinci (1452–1519) was a keen observer of the anatomy and flight of birds, and made numerous notes and sketches on his observations as well as sketches of "flying machines". The Wright Brothers, who succeeded in flying the first heavier-than-air aircraft in 1903, allegedly derived inspiration from observations of pigeons in flight.
During the 1950s the American biophysicist and polymath Otto Schmitt developed the concept of "biomimetics". During his doctoral research he developed the Schmitt trigger by studying the nerves in squid, attempting to engineer a device that replicated the biological system of nerve propagation. He continued to focus on devices that mimic natural systems and by 1957 he had perceived a converse to the standard view of biophysics at that time, a view he would come to call biomimetics.
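The Schmitt trigger mentioned above is a comparator with hysteresis: the output switches high only above an upper threshold and low only below a lower one, mimicking the all-or-nothing firing of a nerve. A minimal software analogue is sketched below; the threshold voltages are illustrative assumptions, not values from the text.

```python
# Sketch of Schmitt-trigger behaviour: a comparator with hysteresis.
# Thresholds (1.7 V rising, 0.9 V falling) are illustrative, not from the text.
class SchmittTrigger:
    def __init__(self, v_high=1.7, v_low=0.9):
        self.v_high, self.v_low = v_high, v_low
        self.state = False  # output starts latched low

    def step(self, v_in: float) -> bool:
        if not self.state and v_in >= self.v_high:
            self.state = True            # switch high only above upper threshold
        elif self.state and v_in <= self.v_low:
            self.state = False           # switch low only below lower threshold
        return self.state                # otherwise hold the previous state

trig = SchmittTrigger()
noisy = [0.0, 1.0, 1.8, 1.5, 1.0, 1.6, 0.8, 1.0, 0.5]
print([int(trig.step(v)) for v in noisy])  # → [0, 0, 1, 1, 1, 1, 0, 0, 0]
```

The band between the two thresholds gives the circuit its noise immunity: inputs that jitter within the band do not flip the output.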
In 1960 Jack E. Steele coined a similar term, bionics, at Wright-Patterson Air Force Base in Dayton, Ohio, where Otto Schmitt also worked. Steele defined bionics as "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues". During a later meeting in 1963 Schmitt stated,
In 1969, Schmitt used the term "biomimetic" in the title of one of his papers, and by 1974 it had found its way into Webster's Dictionary. Bionics entered the same dictionary earlier, in 1960, as "a science concerned with the application of data about the functioning of biological systems to the solution of engineering problems". Bionic took on a different connotation when Martin Caidin referenced Jack Steele and his work in the novel Cyborg, which later resulted in the 1974 television series The Six Million Dollar Man and its spin-offs. The term bionic then became associated with "the use of electronically operated artificial body parts" and "having ordinary human powers increased by or as if by the aid of such devices". Because the term bionic took on the implication of supernatural strength, the scientific community in English-speaking countries largely abandoned it.
The term biomimicry appeared as early as 1982. Biomimicry was popularized by scientist and author Janine Benyus in her 1997 book Biomimicry: Innovation Inspired by Nature. Biomimicry is defined in the book as a "new science that studies nature's models and then imitates or takes inspiration from these designs and processes to solve human problems". Benyus suggests looking to Nature as a "Model, Measure, and Mentor" and emphasizes sustainability as an objective of biomimicry.
A more recent example of biomimicry is "managemANT", created by Johannes-Paul Fladerer and Ernst Kurzmann. The term (a combination of the words "management" and "ant") describes the use of the behavioural strategies of ants in economic and management strategies. The potential long-term impacts of biomimicry were quantified in a 2013 Fermanian Business & Economic Institute report commissioned by the San Diego Zoo, whose findings demonstrated the potential economic and environmental benefits of biomimicry.
Bio-inspired technologies
Biomimetics could in principle be applied in many fields. Because of the diversity and complexity of biological systems, the number of features that might be imitated is large. Biomimetic applications are at various stages of development, from technologies that might become commercially usable to prototypes. Murray's law, which in conventional form gives the optimum diameter of blood vessels, has been re-derived to provide simple equations for the pipe or tube diameter which gives a minimum-mass engineering system.
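Murray's law in its conventional form states that at a branch point, the cube of the parent vessel's radius equals the sum of the cubes of the daughter radii. A minimal sketch of that relation (the function name and example values are ours):

```python
# Sketch: Murray's law for a branching vessel -- the cube of the parent
# radius equals the sum of the cubed daughter radii: r_p^3 = sum(r_d^3).
def murray_parent_radius(daughter_radii):
    """Parent radius satisfying Murray's law for the given daughter radii."""
    return sum(r ** 3 for r in daughter_radii) ** (1.0 / 3.0)

# A symmetric bifurcation: each daughter is 2^(-1/3) ≈ 0.794 of the parent.
r_parent = murray_parent_radius([1.0, 1.0])
print(f"Parent radius for two unit daughters: {r_parent:.3f}")  # ~1.260
```

For a symmetric bifurcation this gives the familiar result that each daughter vessel has about 79% of the parent's radius, the proportion observed across many biological transport networks.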
Locomotion
Aircraft wing design and flight techniques are being inspired by birds and bats. The streamlined nose of the improved Japanese Shinkansen 500 Series high-speed train was modelled on the beak of the kingfisher.
Biorobots based on the physiology and methods of locomotion of animals include BionicKangaroo which moves like a kangaroo, saving energy from one jump and transferring it to its next jump; Kamigami Robots, a children's toy, mimic cockroach locomotion to run quickly and efficiently over indoor and outdoor surfaces, and Pleobot, a shrimp-inspired robot to study metachronal swimming and the ecological impacts of this propulsive gait on the environment.
Biomimetic flying robots (BFRs)
BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate the lift and thrust, or they can be propeller actuated. BFRs with flapping wings have increased stroke efficiencies, increased maneuverability, and reduced energy consumption in comparison to propeller actuated BFRs. Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments.
Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat inspired BFRs include Bat Bot and the DALER. Mammal inspired BFRs can be designed to be multi-modal, making them capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences; by increasing the drag force, the BFR decelerates and minimizes the impact upon grounding. Different land gait patterns can also be implemented.
Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling. The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor inspired BFR is the prototype by Savastano et al. The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.
Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park, and a dragonfly inspired BFR is the prototype by Hu et al. The flapping frequency of insect inspired BFRs is much higher than that of other BFRs, because of the aerodynamics of insect flight. Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments. The prototype by Phan and Park took inspiration from the rhinoceros beetle, so it can successfully continue flight even after a collision by deforming its hindwings.
Biomimetic architecture
Living beings have adapted to a constantly changing environment during evolution through mutation, recombination, and selection. The core idea of the biomimetic philosophy is that nature's inhabitants including animals, plants, and microbes have the most experience in solving problems and have already found the most appropriate ways to last on planet Earth. Similarly, biomimetic architecture seeks solutions for building sustainability present in nature. While nature serves as a model, there are few examples of biomimetic architecture that aim to be nature positive.
The 21st century has seen a ubiquitous waste of energy due to inefficient building designs, in addition to the over-utilization of energy during the operational phase of its life cycle. In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales. As a result, there has been a rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches to sustainable design that follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form but instead seeking to use nature to solve problems of the building's functioning and saving energy.
Characteristics
The term biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture. Biomimetic architecture uses nature as a model, measure and mentor for providing architectural solutions across scales, which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard of measuring sustainability, and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.
Biomorphic architecture, also referred to as bio-decoration, on the other hand, refers to the use of formal and geometric elements found in nature as a source of inspiration for aesthetic properties in designed architecture, and may not necessarily have non-physical or economic functions. Historic examples of biomorphic architecture date back to Egyptian, Greek and Roman cultures, which used tree and plant forms in the ornamentation of structural columns.
Procedures
Within biomimetic architecture, two basic procedures can be identified, namely, the bottom-up approach (biology push) and top-down approach (technology pull). The boundary between the two approaches is blurry with the possibility of transition between the two, depending on each individual case. Biomimetic architecture is typically carried out in interdisciplinary teams in which biologists and other natural scientists work in collaboration with engineers, material scientists, architects, designers, mathematicians and computer scientists.
In the bottom-up approach, the starting point is a new result from basic biological research that is promising for biomimetic implementation. An example is the development of a biomimetic material system following the quantitative analysis of the mechanical, physical, and chemical properties of a biological system.
In the top-down approach, biomimetic innovations are sought for already existing developments that have been successfully established on the market. The cooperation focuses on the improvement or further development of an existing product.
Examples
Researchers have studied the termite's ability to maintain virtually constant temperature and humidity in their termite mounds in Africa despite large swings in outside temperature. Researchers initially scanned a termite mound and created 3-D images of the mound structure, which revealed construction that could influence human building design. The Eastgate Centre, a mid-rise office complex in Harare, Zimbabwe, stays cool via a passive cooling architecture that uses only 10% of the energy of a conventional building of the same size.
Researchers at the Sapienza University of Rome were inspired by the natural ventilation in termite mounds to design a double façade that significantly cuts down on over-lit areas in a building. Scientists imitated the porous nature of mound walls by designing a façade with double panels that was able to reduce heat gained by radiation and increase heat loss by convection in the cavity between the two panels. The overall cooling load on the building's energy consumption was reduced by 15%.
A similar inspiration was drawn from the porous walls of termite mounds to design a naturally ventilated façade with a small ventilation gap. This design of façade is able to induce air flow due to the Venturi effect and continuously circulates rising air in the ventilation slot. Significant transfer of heat between the building's external wall surface and the air flowing over it was observed. The design is coupled with greening of the façade: the green wall facilitates additional natural cooling via evaporation, respiration and transpiration in plants, and the damp plant substrate further supports the cooling effect.
Scientists at Shanghai University were able to replicate the complex microstructure of the clay-made conduit network in the mound to mimic its excellent humidity control. They proposed a porous humidity control material (HCM) using sepiolite and calcium chloride with a water vapor adsorption-desorption content of 550 grams per square meter. Calcium chloride is a desiccant and improves the water vapor adsorption-desorption property of the bio-HCM. The proposed bio-HCM has a regime of interfiber mesopores which acts as a mini reservoir. The flexural strength of the proposed material was estimated to be 10.3 MPa using computational simulations.
In structural engineering, the Swiss Federal Institute of Technology (EPFL) has incorporated biomimetic characteristics in an adaptive deployable "tensegrity" bridge. The bridge can carry out self-diagnosis and self-repair. The arrangement of leaves on a plant has been adapted for better solar power collection.
Analysis of the elastic deformation happening when a pollinator lands on the sheath-like perch part of the flower Strelitzia reginae (known as bird-of-paradise flower) has inspired architects and scientists from the University of Freiburg and University of Stuttgart to create hingeless shading systems that can react to their environment. These bio-inspired products are sold under the name Flectofin.
Other hingeless bioinspired systems include Flectofold, which was inspired by the trapping system of the carnivorous plant Aldrovanda vesiculosa.
Structural materials
There is a great need for new structural materials that are light weight but offer exceptional combinations of stiffness, strength, and toughness.
Such materials would need to be manufactured into bulk materials with complex shapes at high volume and low cost, and would serve a variety of fields such as construction, transportation, and energy storage and conversion. In a classic design problem, strength and toughness tend to be mutually exclusive: strong materials are brittle and tough materials are weak. However, natural materials with complex and hierarchical material gradients that span from the nano- to the macro-scale are both strong and tough. Generally, most natural materials utilize a limited set of chemical components but complex material architectures that give rise to exceptional mechanical properties. Understanding these highly diverse and multifunctional biological materials and discovering approaches to replicate such structures will lead to advanced and more efficient technologies.
Bone, nacre (abalone shell), teeth, the dactyl clubs of stomatopod shrimps and bamboo are great examples of damage-tolerant materials. The exceptional resistance to fracture of bone is due to complex deformation and toughening mechanisms that operate at different size scales, from the nanoscale structure of protein molecules to the macroscopic physiological scale. Nacre exhibits similar mechanical properties, though with a rather simpler structure: a brick-and-mortar-like arrangement of thick mineral layers (0.2–0.9 μm) of closely packed aragonite platelets and thin organic matrix (~20 nm). While thin films and micrometer-sized samples that mimic these structures have already been produced, successful production of bulk biomimetic structural materials is yet to be realized; however, numerous processing techniques have been proposed for producing nacre-like materials. Pavement cells, epidermal cells on the surface of plant leaves and petals, often form wavy interlocking patterns resembling jigsaw puzzle pieces and have been shown to enhance the fracture toughness of leaves, key to plant survival.
Their pattern, replicated in laser-engraved Poly(methyl methacrylate) samples, was also demonstrated to lead to increased fracture toughness. It is suggested that the arrangement and patterning of cells play a role in managing crack propagation in tissues.
Biomorphic mineralization is a technique that produces materials with morphologies and structures resembling those of natural living organisms by using bio-structures as templates for mineralization. Compared to other methods of material production, biomorphic mineralization is facile, environmentally benign and economic.
Freeze casting (ice templating), an inexpensive method to mimic natural layered structures, was employed by researchers at Lawrence Berkeley National Laboratory to create alumina-Al-Si and IT HAP-epoxy layered composites that match the mechanical properties of bone with an equivalent mineral/organic content. Various further studies also employed similar methods to produce high strength and high toughness composites involving a variety of constituent phases.
Recent studies demonstrated production of cohesive and self supporting macroscopic tissue constructs that mimic living tissues by printing tens of thousands of heterologous picoliter droplets in software-defined, 3D millimeter-scale geometries. Efforts are also taken up to mimic the design of nacre in artificial composite materials using fused deposition modelling and the helicoidal structures of stomatopod clubs in the fabrication of high performance carbon fiber-epoxy composites.
Various established and novel additive manufacturing technologies like PolyJet printing, direct ink writing, 3D magnetic printing, multi-material magnetically assisted 3D printing and magnetically assisted slip casting have also been utilized to mimic the complex micro-scale architectures of natural materials and provide huge scope for future research.
Spider silk is tougher than Kevlar used in bulletproof vests. Engineers could in principle use such a material, if it could be reengineered to have a long enough life, for parachute lines, suspension bridge cables, artificial ligaments for medicine, and other purposes. The self-sharpening teeth of many animals have been copied to make better cutting tools.
New ceramics that exhibit giant electret hysteresis have also been realized.
Neuronal computers
Neuromorphic computers and sensors are electrical devices that copy the structure and function of biological neurons in order to compute. One example of this is the event camera, in which only the pixels that receive a new signal update to a new state; all other pixels do not update until a signal is received.
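The event-camera idea can be sketched as a per-pixel change detector: a pixel reports an event only when its log-intensity has changed by more than a threshold since its last event. This is a minimal illustrative model, not any real camera's API:

```python
import math

def pixel_events(intensities, threshold=0.2):
    """Emit (sample_index, polarity) events for one pixel.

    An event fires only when the log-intensity has changed by at least
    `threshold` since the last event; otherwise the pixel's stored state
    is left untouched, mimicking the sparse updates of an event camera.
    """
    events = []
    last = math.log(intensities[0])
    for i, value in enumerate(intensities[1:], start=1):
        current = math.log(value)
        if abs(current - last) >= threshold:
            events.append((i, 1 if current > last else -1))
            last = current  # state updates only when an event fires
    return events

# Constant stretches produce no events; only the two jumps are reported:
print(pixel_events([1.0, 1.0, 1.5, 1.5, 1.0]))  # [(2, 1), (4, -1)]
```

Because unchanged pixels emit nothing, the output data rate scales with scene activity rather than with frame rate, which is the efficiency argument for neuromorphic sensing.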
Self-healing materials
In some biological systems, self-healing occurs via chemical releases at the site of fracture, which initiate a systemic response to transport repairing agents to the fracture site. This promotes autonomic healing. To demonstrate the use of micro-vascular networks for autonomic healing, researchers developed a microvascular coating–substrate architecture that mimics human skin. Bio-inspired self-healing structural color hydrogels that maintain the stability of an inverse opal structure and its resultant structural colors were developed. A self-repairing membrane inspired by rapid self-sealing processes in plants was developed for inflatable lightweight structures such as rubber boats or Tensairity constructions. The researchers applied a thin soft cellular polyurethane foam coating on the inside of a fabric substrate, which closes the crack if the membrane is punctured with a spike. Self-healing materials, polymers and composite materials capable of mending cracks have been produced based on biological materials.
The self-healing properties may also be achieved by the breaking and reforming of hydrogen bonds upon cyclical stress of the material.
Surfaces
Surfaces that recreate the properties of shark skin are intended to enable more efficient movement through water. Efforts have been made to produce fabric that emulates shark skin.
Surface tension biomimetics are being researched for technologies such as hydrophobic or hydrophilic coatings and microactuators.
Adhesion
Wet adhesion
Some amphibians, such as tree and torrent frogs and arboreal salamanders, are able to attach to and move over wet or even flooded surfaces without falling. These organisms have toe pads which are permanently wetted by mucus secreted from glands that open into the channels between epidermal cells. They attach to mating surfaces by wet adhesion and are capable of climbing on wet rocks even when water is flowing over the surface. Tire treads have also been inspired by the toe pads of tree frogs: 3D-printed hierarchical surface models, inspired by the toe pad design of tree and torrent frogs, have been observed to produce better wet traction than conventional tire designs.
Marine mussels can stick easily and efficiently to surfaces underwater under the harsh conditions of the ocean. Mussels use strong filaments to adhere to rocks in the inter-tidal zones of wave-swept beaches, preventing them from being swept away in strong sea currents. Mussel foot proteins attach the filaments to rocks, boats and practically any surface in nature including other mussels. These proteins contain a mix of amino acid residues which has been adapted specifically for adhesive purposes. Researchers from the University of California Santa Barbara borrowed and simplified chemistries that the mussel foot uses to overcome this engineering challenge of wet adhesion to create copolyampholytes, and one-component adhesive systems with potential for employment in nanofabrication protocols. Other research has proposed adhesive glue from mussels.
Dry adhesion
Leg attachment pads of several animals, including many insects (e.g., beetles and flies), spiders and lizards (e.g., geckos), are capable of attaching to a variety of surfaces and are used for locomotion, even on vertical walls or across ceilings. Attachment systems in these organisms have similar structures at their terminal elements of contact, known as setae. Such biological examples have offered inspiration in order to produce climbing robots, boots and tape. Synthetic setae have also been developed for the production of dry adhesives.
Liquid repellency
Superliquiphobicity refers to a remarkable surface property where a solid surface exhibits an extreme aversion to liquids, causing droplets to bead up and roll off almost instantaneously upon contact. This behavior arises from intricate surface textures and interactions at the nanoscale, effectively preventing liquids from wetting or adhering to the surface. The term "superliquiphobic" is derived from "superhydrophobic," which describes surfaces highly resistant to water. Superliquiphobic surfaces go beyond water repellency and display repellent characteristics towards a wide range of liquids, including those with very low surface tension or containing surfactants.
Superliquiphobicity emerges when a solid surface possesses minute roughness, forming interfaces with droplets through wetting while altering contact angles. This behavior hinges on the roughness factor (Rf), defining the ratio of solid-liquid area to its projection, influencing contact angles. On rough surfaces, non-wetting liquids give rise to composite solid-liquid-air interfaces, their contact angles determined by the distribution of wet and air-pocket areas. The achievement of superliquiphobicity involves increasing the fractional flat geometrical area (fLA) and Rf, leading to surfaces that actively repel liquids.
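The relationships described above correspond to the standard Wenzel and Cassie-Baxter wetting models; the sketch below assumes those models and uses purely illustrative values, not measured data:

```python
import math

def wenzel_angle(theta0_deg, rf):
    """Wenzel model for a fully wetted rough surface:
    cos(theta*) = Rf * cos(theta0), where Rf is the solid-liquid
    area divided by its flat projection (Rf >= 1)."""
    c = rf * math.cos(math.radians(theta0_deg))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def cassie_baxter_angle(theta0_deg, rf, f_la):
    """Cassie-Baxter model with air pockets: the droplet touches only a
    fraction f_LA of the (rough) solid and sits on air elsewhere:
    cos(theta*) = Rf * f_LA * cos(theta0) - (1 - f_LA)."""
    c = rf * f_la * math.cos(math.radians(theta0_deg)) - (1.0 - f_la)
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# A modestly hydrophobic flat material (contact angle 110 degrees)
# becomes strongly repellent once roughness traps air under the droplet:
print(cassie_baxter_angle(110.0, rf=2.0, f_la=0.1) > 150.0)  # True
```

In the composite (Cassie-Baxter) state, shrinking the wetted fraction f_LA pushes the apparent contact angle past the 150° superhydrophobicity threshold even for materials that are only mildly hydrophobic when flat.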
The inspiration for crafting such surfaces draws from nature's ingenuity, illustrated by the "lotus effect". Leaves of water-repellent plants, like the lotus, exhibit inherent hierarchical structures featuring nanoscale wax-coated formations. Other natural surfaces with these capabilities include beetle carapaces and cactus spines, which may exhibit rough features at multiple size scales. These structures lead to superhydrophobicity, where water droplets perch on trapped air bubbles, resulting in high contact angles and minimal contact angle hysteresis. This natural example guides the development of superliquiphobic surfaces, capitalizing on re-entrant geometries that can repel low surface tension liquids and achieve near-zero contact angle hysteresis.
Creating superliquiphobic surfaces involves pairing re-entrant geometries with low surface energy materials, such as fluorinated substances or liquid-like silicones. These geometries include overhangs that widen beneath the surface, enabling repellency even for minimal contact angles. These surfaces find utility in self-cleaning, anti-icing, anti-fogging, antifouling, enhanced condensation, and more, presenting innovative solutions to challenges in biomedicine, desalination, atmospheric water harvesting, and energy conversion.
In essence, superliquiphobicity, inspired by natural models like the lotus leaf, capitalizes on re-entrant geometries and surface properties to create interfaces that actively repel liquids. These surfaces hold immense promise across a range of applications, promising enhanced functionality and performance in various technological and industrial contexts.
Optics
Biomimetic materials are gaining increasing attention in the field of optics and photonics. There are still few known bioinspired or biomimetic products involving the photonic properties of plants or animals. However, understanding how nature designed such optical materials from biological resources is a current field of research.
Inspiration from fruits and plants
One source of biomimetic inspiration is plants. Plants have proven to be concept generators for the following functions: re(action)-coupling, self-adaptability, self-repair, and energy-autonomy. As plants do not have a centralized decision-making unit (i.e. a brain), most plants have a decentralized autonomous system in the various organs and tissues of the plant. Therefore, they react to multiple stimuli such as light, heat, and humidity.
One example is the carnivorous plant species Dionaea muscipula (Venus flytrap). For the last 25 years, research has focused on the motion principles of the plant to develop artificial Venus flytrap robots (AVFTs). Through its movement during prey capture, the plant has inspired soft robotic motion systems. The fast snap-buckling (within 100–300 ms) of the trap closure movement is initiated when prey triggers the hairs of the plant twice within 20 s. AVFT systems exist in which the trap closure movements are actuated via magnetism, electricity, pressurized air, and temperature changes.
Another example of mimicking plants, is the Pollia condensata, also known as the marble berry. The chiral self-assembly of cellulose inspired by the Pollia condensata berry has been exploited to make optically active films. Such films are made from cellulose which is a biodegradable and biobased resource obtained from wood or cotton. The structural colours can potentially be everlasting and have more vibrant colour than the ones obtained from chemical absorption of light. Pollia condensata is not the only fruit showing a structural coloured skin; iridescence is also found in berries of other species such as Margaritaria nobilis. These fruits show iridescent colors in the blue-green region of the visible spectrum which gives the fruit a strong metallic and shiny visual appearance. The structural colours come from the organisation of cellulose chains in the fruit's epicarp, a part of the fruit skin. Each cell of the epicarp is made of a multilayered envelope that behaves like a Bragg reflector. However, the light which is reflected from the skin of these fruits is not polarised unlike the one arising from man-made replicates obtained from the self-assembly of cellulose nanocrystals into helicoids, which only reflect left-handed circularly polarised light.
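A periodic two-layer Bragg stack of the kind each epicarp cell forms reflects most strongly at a wavelength set by the optical thickness of one period: at normal incidence and first order, λ = 2(n₁d₁ + n₂d₂). A small sketch with purely illustrative layer values, not measured Pollia data:

```python
def bragg_peak_wavelength(n1, d1, n2, d2):
    """First-order normal-incidence reflection peak of a periodic
    two-layer Bragg stack: lambda = 2 * (n1*d1 + n2*d2).

    Thicknesses d1, d2 are in nanometres, so the result is too.
    """
    return 2.0 * (n1 * d1 + n2 * d2)

# Illustrative cellulose-like layers tuned to reflect blue light:
print(round(bragg_peak_wavelength(n1=1.53, d1=75.0, n2=1.35, d2=85.0), 1))  # 459.0
```

Varying the layer thicknesses shifts the reflected peak across the visible spectrum, which is how structurally coloured fruits produce different hues without any pigment.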
The fruit of Elaeocarpus angustifolius also shows structural colour that arises from the presence of specialised cells called iridosomes, which have layered structures. Similar iridosomes have also been found in Delarbrea michieana fruits.
In plants, multilayer structures can be found either at the surface of the leaves (on top of the epidermis), such as in Selaginella willdenowii, or within specialized intra-cellular organelles, the so-called iridoplasts, which are located inside the cells of the upper epidermis. For instance, the rainforest plant Begonia pavonina has iridoplasts located inside its epidermal cells.
Structural colours have also been found in several algae, such as in the red alga Chondrus crispus (Irish Moss).
Inspiration from animals
Structural coloration produces the rainbow colours of soap bubbles, butterfly wings and many beetle scales. Phase-separation has been used to fabricate ultra-white scattering membranes from polymethylmethacrylate, mimicking the beetle Cyphochilus. LED lights can be designed to mimic the patterns of scales on fireflies' abdomens, improving their efficiency.
Morpho butterfly wings are structurally coloured to produce a vibrant blue that does not vary with angle. This effect can be mimicked by a variety of technologies. Lotus Cars claim to have developed a paint that mimics the Morpho butterfly's structural blue colour. In 2007, Qualcomm commercialised an interferometric modulator display technology, "Mirasol", using Morpho-like optical interference. In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales.
Canon Inc.'s SubWavelength structure Coating uses wedge-shaped structures the size of the wavelength of visible light. The wedge-shaped structures cause a continuously changing refractive index as light travels through the coating, significantly reducing lens flare. This imitates the structure of a moth's eye. Notable figures such as the Wright Brothers and Leonardo da Vinci attempted to replicate the flight observed in birds. In an effort to reduce aircraft noise researchers have looked to the leading edge of owl feathers, which have an array of small finlets or rachis adapted to disperse aerodynamic pressure and provide nearly silent flight to the bird.
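The moth-eye principle can be illustrated with the normal-incidence Fresnel reflectance R = ((n₁ − n₂)/(n₁ + n₂))²: an abrupt air-to-glass step reflects about 4%, while grading the same index change through many tiny steps, as the wedge-shaped structures do, leaves far less reflection. This is a rough sketch (incoherent sum over steps), not an optical-design tool:

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel reflectance at a single refractive-index step."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Abrupt air (n=1.0) to glass (n=1.5) interface: about 4% reflected.
print(round(fresnel_reflectance(1.0, 1.5), 4))  # 0.04

# Grading the same transition through ten small steps (the moth-eye idea):
steps = [1.0 + 0.5 * k / 10 for k in range(11)]
total = sum(fresnel_reflectance(a, b) for a, b in zip(steps, steps[1:]))
print(total < 0.01)  # True: roughly an order of magnitude less reflection
```

A continuously graded index, the limit of infinitely many infinitesimal steps, drives the residual reflection toward zero, which is why subwavelength wedge structures suppress lens flare.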
Agricultural systems
Holistic planned grazing, using fencing and/or herders, seeks to restore grasslands by carefully planning movements of large herds of livestock to mimic the vast herds found in nature. The natural system being mimicked and used as a template is grazing animals concentrated by pack predators that must move on after eating, trampling, and manuring an area, and returning only after it has fully recovered. Its founder Allan Savory and some others have claimed potential in building soil, increasing biodiversity, and reversing desertification. However, many researchers have disputed Savory's claim. Studies have often found that the method increases desertification instead of reducing it.
Other uses
Some air conditioning systems use biomimicry in their fans to increase airflow while reducing power consumption.
Technologists like Jas Johl have speculated that the functionality of vacuole cells could be used to design highly adaptable security systems. "The functionality of a vacuole, a biological structure that guards and promotes growth, illuminates the value of adaptability as a guiding principle for security." The functions and significance of vacuoles are fractal in nature; the organelle has no basic shape or size, and its structure varies according to the requirements of the cell. Vacuoles not only isolate threats, contain what's necessary, export waste, and maintain pressure—they also help the cell scale and grow. Johl argues these functions are necessary for any security system design. The 500 Series Shinkansen used biomimicry to reduce energy consumption and noise levels while increasing passenger comfort. With reference to space travel, NASA and other firms have sought to develop swarm-type space drones inspired by bee behavioural patterns, and octopod terrestrial drones designed with reference to desert spiders.
Other technologies
Protein folding has been used to control material formation for self-assembled functional nanostructures. Polar bear fur has inspired the design of thermal collectors and clothing. The light-refractive properties of the moth's eye have been studied to reduce the reflectivity of solar panels.
The Bombardier beetle's powerful repellent spray inspired a Swedish company to develop a "micro mist" spray technology, which is claimed to have a low carbon impact (compared to aerosol sprays). The beetle mixes chemicals and releases its spray via a steerable nozzle at the end of its abdomen, stinging and confusing the victim.
Most viruses have an outer capsule 20 to 300 nm in diameter. Virus capsules are remarkably robust and capable of withstanding temperatures as high as 60 °C; they are stable across the pH range 2–10. Viral capsules can be used to create nanodevice components such as nanowires, nanotubes, and quantum dots. Tubular virus particles such as the tobacco mosaic virus (TMV) can be used as templates to create nanofibers and nanotubes, since both the inner and outer layers of the virus are charged surfaces which can induce nucleation of crystal growth. This was demonstrated through the production of platinum and gold nanotubes using TMV as a template. Mineralized virus particles have been shown to withstand various pH values by mineralizing the viruses with different materials such as silicon, PbS, and CdS, and could therefore serve as useful carriers of material. A spherical plant virus called cowpea chlorotic mottle virus (CCMV) has interesting expanding properties when exposed to environments of pH higher than 6.5. Above this pH, 60 independent pores with diameters of about 2 nm begin to exchange substances with the environment. The structural transition of the viral capsid can be utilized in biomorphic mineralization for selective uptake and deposition of minerals by controlling the solution pH. Possible applications include using the viral cage to produce uniformly shaped and sized quantum dot semiconductor nanoparticles through a series of pH washes. This is an alternative to the apoferritin cage technique currently used to synthesize uniform CdSe nanoparticles. Such materials could also be used for targeted drug delivery since particles release their contents upon exposure to specific pH levels.
See also
Artificial photosynthesis
Artificial enzyme
Bio-inspired computing
Bioinspiration & Biomimetics
Biomimetic synthesis
Carbon sequestration
Reverse engineering
Synthetic biology
References
Further reading
Benyus, J. M. (2001). Along Came a Spider. Sierra, 86(4), 46–47.
Hargroves, K. D. & Smith, M. H. (2006). Innovation inspired by nature Biomimicry. Ecos, (129), 27–28.
Marshall, A. (2009). Wild Design: The Ecomimicry Project, North Atlantic Books: Berkeley.
Passino, Kevin M. (2004). Biomimicry for Optimization, Control, and Automation. Springer.
Pyper, W. (2006). Emulating nature: The rise of industrial ecology. Ecos, (129), 22–26.
Smith, J. (2007). It's only natural. The Ecologist, 37(8), 52–55.
Thompson, D'Arcy W., On Growth and Form. Dover 1992 reprint of 1942 2nd ed. (1st ed., 1917).
Vogel, S. (2000). Cats' Paws and Catapults: Mechanical Worlds of Nature and People. Norton.
External links
Biomimetics MIT
Sex, Velcro and Biomimicry with Janine Benyus
Janine Benyus: Biomimicry in Action from TED 2009
Design by Nature - National Geographic
Michael Pawlyn: Using nature's genius in architecture from TED 2010
Robert Full shows how human engineers can learn from animals' tricks from TED 2002
The Fast Draw: Biomimicry from CBS News
Evolutionary biology
Biotechnology
Bioinformatics
Biological engineering
Biophysics
Industrial ecology
Bionics
Water conservation
Renewable energy
Sustainable transport | Biomimetics | [
"Physics",
"Chemistry",
"Engineering",
"Biology"
] | 7,658 | [
"Evolutionary biology",
"Biological engineering",
"Applied and interdisciplinary physics",
"Bionics",
"Industrial engineering",
"Biotechnology",
"Physical systems",
"Transport",
"Sustainable transport",
"Bioinformatics",
"Biophysics",
"nan",
"Environmental engineering",
"Industrial ecology... |
Natural capital

Natural capital is the world's stock of natural resources, which includes geology, soils, air, water and all living organisms. Some natural capital assets provide people with free goods and services, often called ecosystem services. All of these underpin our economy and society, and thus make human life possible.
It is an extension of the economic notion of capital (resources which enable the production of more resources) to goods and services provided by the natural environment. For example, a well-maintained forest or river may provide an indefinitely sustainable flow of new trees or fish, whereas over-use of those resources may lead to a permanent decline in timber availability or fish stocks. Natural capital also provides people with essential services, like water catchment, erosion control and crop pollination by insects, which in turn ensure the long-term viability of other natural resources. Since the continuous supply of services from the available natural capital assets is dependent upon a healthy, functioning environment, the structure and diversity of habitats and ecosystems are important components of natural capital. Methods, called 'natural capital asset checks', help decision-makers understand how changes in the current and future performance of natural capital assets will impact human well-being and the economy. Unpriced natural capital is what we refer to when businesses or individuals exploit or abuse nature without being held accountable, which can harm ecosystems and the environment.
History of the concept
The term 'natural capital' was first used in 1973 by E. F. Schumacher in his book Small Is Beautiful and was developed further by Herman Daly, Robert Costanza, and other founders of the science of ecological economics, as part of a comprehensive critique of the shortcomings of conventional economics. Natural capital is a concept central to the economic valuation of ecosystem services, which revolves around the idea that non-human life produces goods and services that are essential to life. Thus, natural capital is essential to the sustainability of the economy.
In a traditional economic analysis of the factors of production, natural capital would usually be classified as "land" distinct from traditional "capital". The historical distinction between "land" and "capital" defined "land" as naturally occurring with a fixed supply, whereas "capital", as originally defined, referred only to man-made goods (e.g., in Georgism). It is, however, misleading to view "land" as if its productive capacity is fixed, because natural capital can be improved or degraded by human action over time (see environmental degradation). Moreover, natural capital yields benefits and goods, such as timber or food, which can be harvested by humans. These benefits are similar to those realized by owners of infrastructural capital which yields more goods: a factory produces automobiles just as an apple tree produces apples.
Ecologists are teaming up with economists to measure and express the value of the wealth of ecosystems as a way of finding solutions to the biodiversity crisis. Some researchers have attempted to place a dollar figure on ecosystem services, such as the value of the Canadian boreal forest's contribution to global ecosystem services. If ecologically intact, the boreal forest has an estimated value of US$3.7 trillion. The boreal forest ecosystem is one of the planet's great atmospheric regulators, and it stores more carbon than any other biome on the planet. The annual value of ecological services of the boreal forest is estimated at US$93.2 billion, or 2.5 times greater than the annual value of resource extraction.
The economic value of 17 ecosystem services for the entire biosphere (calculated in 1997) has an estimated average value of US$33 trillion per year. These ecological economic values are not currently included in calculations of national income accounts such as GDP, and they have no price attributes because they exist mostly outside of global markets. The loss of natural capital continues to accelerate and goes undetected or ignored by mainstream monetary analysis.
Within the international community the basic principle is not controversial, although much uncertainty exists over how best to value different aspects of ecological health, natural capital and ecosystem services. Full-cost accounting, triple bottom line, measuring well-being and other proposals for accounting reform often include suggestions to measure an "ecological deficit" or "natural deficit" alongside a social and financial deficit. It is difficult to measure such a deficit without some agreement on methods of valuation and auditing of at least the global forms of natural capital (e.g. value of air, water, soil).
All uses of the term currently differentiate natural from man-made or infrastructural capital in some way. Indicators adopted by United Nations Environment Programme's World Conservation Monitoring Centre and the Organisation for Economic Co-operation and Development (OECD) to measure natural biodiversity use the term in a slightly more specific way. According to the OECD, natural capital is "natural assets in their role of providing natural resource inputs and environmental services for economic production" and is "generally considered to comprise three principal categories: natural resources stocks, land, and ecosystems."
The concept of "natural capital" has also been used by the Biosphere 2 project, and the Natural Capitalism economic model of Paul Hawken, Amory Lovins, and Hunter Lovins. Recently, it has begun to be used by politicians, notably Ralph Nader, Paul Martin Jr., and agencies of the UK government, including its Natural Capital Committee and the London Health Observatory.
In Natural Capitalism: Creating the Next Industrial Revolution the author claims that the "next industrial revolution" depends on the espousal of four central strategies: "the conservation of resources through more effective manufacturing processes, the reuse of materials as found in natural systems, a change in values from quantity to quality, and investing in natural capital, or restoring and sustaining natural resources."
Natural capital declaration
In June 2012 a 'natural capital declaration' (NCD) was launched at the Rio+20 summit held in Brazil. An initiative of the global finance sector, it was signed by 40 CEOs to 'integrate natural capital considerations into loans, equity, fixed income and insurance products, as well as in accounting, disclosure and reporting frameworks.' They worked with supporting organisations to develop tools and metrics to integrate natural capital factors into existing business structures.
In summary, its four key aims are to:
Increase understanding of business dependency on natural capital assets;
Support development of tools to integrate natural capital considerations into the decision-making process of all financial products and services;
Help build a global consensus on integrating natural capital into private sector accounting and decision-making;
Encourage a consensus on integrated reporting to include natural capital as one of the key components of an organisation's success.
Natural Capital Protocol
In July 2016, the Natural Capital Coalition (now known as Capitals Coalition) released the Natural Capital Protocol. The Protocol provides a standardised framework for organisations to identify, measure and value their direct and indirect impacts and dependencies on natural capital. The Protocol harmonises existing tools and methodologies, and guides organisations towards the information they need to make strategic and operational decisions that include impacts and dependencies on natural capital.
The Protocol was developed in a unique collaboration between 38 organisations who signed voluntary, pre-competitive contracts. This collaboration was led by Mark Gough, who is now the CEO of the Capitals Coalition.
The Protocol is available on a creative commons license and is free for organisations to apply.
Internationally agreed standard
Environmental-economic accounts provide the conceptual framework for integrated statistics on the environment and its relationship with the economy, including the impacts of the economy on the environment and the contribution of the environment to the economy. A coherent set of indicators and descriptive statistics can be derived from the accounts that inform a wide range of policies.
These include, but are not limited to:
Green economy/green growth
Natural resource management
Sustainable development
The System of Integrated Environmental and Economic Accounting (SEEA) contains the internationally agreed standard concepts, definitions, classifications, accounting rules and tables for producing internationally comparable statistics on the environment and its relationship with the economy. The SEEA is a flexible system in the sense that its implementation can be adapted to countries' specific situations and priorities. Coordination of the implementation of the SEEA and ongoing work on new methodological developments is managed and supervised by the UN Committee of Experts on Environmental-Economic Accounting (UNCEEA). The final, official version of the SEEA Central Framework was published in February 2014.
In March 2021, the United Nations Statistical Commission adopted the SEEA Ecosystem Accounting (SEEA EA) standard at its 52nd session. The SEEA EA is a statistical framework that provides a coherent accounting approach to the measurement of ecosystems. Ecosystem accounts enable the presentation of data and indicators of ecosystem extent, ecosystem condition, and ecosystem services in both physical and monetary terms in a spatially explicit way. Following its adoption, the Statistics Division of the United Nations Department of Economic and Social Affairs (UN DESA) in collaboration with the United Nations Environment Programme (UNEP) and the Basque Centre for Climate Change (BC3) released the ARIES for SEEA Explorer in April 2021, an artificial intelligence-powered tool based on the Artificial Intelligence for Environment and Sustainability (ARIES) platform for rapid, standardized and customizable natural capital accounting. The ARIES for SEEA Explorer was made available on the UN Global Platform in order to accelerate SEEA's implementation worldwide.
Private sector approaches
Some studies envisage a private sector natural capital 'ecosystem', including investors, assets and regulators.
Criticism
Whilst measuring the components of natural capital in any region is a relatively straightforward process, both the task and the rationale of putting a monetary valuation on them, or on the value of the goods and services they freely give us, has proved more contentious.
Within the UK, Guardian columnist, George Monbiot, has been critical of the work of the government's Natural Capital Committee and of other attempts to place any sort of monetary value on natural capital assets, or on the free ecosystem services they provide us with. In a speech referring to a report to government which suggested that better protection of the UK's freshwater ecosystems would yield an enhancement in aesthetic value of £700m, he derided attempts 'to compare things which cannot be directly compared'. He went on to say:
Others have defended efforts to integrate the valuation of natural capital into local and national economic decision-making, arguing that it puts the environment on a more balanced footing when weighed against other commercial pressures, and that 'valuation' of those assets is not the same as monetisation.
See also
Bioeconomics (biophysical)
Biocapacity
Conservation biology
Earth Economics (organization)
Ecodynamics
Ecological deficit
Ecological economics
Ecological footprint
Ecology
Econophysics
Ecosystem services
Energy accounting
Environmental degradation
Environmental economics
Environmental protection
Habitat conservation
Natural capital accounting
Natural Capital Committee
Natural Capital Initiative
Oil depletion
Payment for ecosystem services
Population dynamics
Renewable resource
Sustainability
Sustainable development
The Economics of Ecosystems and Biodiversity
Thermoeconomics
True cost accounting
Value of Earth
References
Notes
Further reading
Pearce, D. 1993. Blueprint 3: Measuring Sustainable Development. Earthscan.
Jansson, AnnMari; et al. (1994). Investing in Natural Capital: The Ecological Economics Approach to Sustainability. Washington, D.C.: Island Press, 504 pp.
Daily, Gretchen C. (editor) (1997). Nature's Services: Societal Dependence on Natural Ecosystems. Washington, D.C.: Island Press, 392 pp.
Prugh, Thomas; Robert Costanza et al. (1999). Natural Capital and Human Economic Survival. Solomons, Md.: International Society for Ecological Economics, 180 pp.
Helm, Dieter (2015). Natural Capital – Valuing Our Planet. Yale University Press; 277 pp.
Costanza, Robert (Lead Author); Cutler J. Cleveland (Topic Editor). 2008. "Natural capital." In: Encyclopedia of Earth. Eds. Cutler J. Cleveland (Washington, D.C.: Environmental Information Coalition, National Council for Science and the Environment). First published in the Encyclopedia of Earth February 26, 2007; Last revised July 31, 2008; Retrieved September 5, 2008.
Earth Economics Natural Capital Accounting Solutions Article
Lacombe, Morgane and Aronson, James (2009). Restoring Natural Capital in Arid and Semiarid Regions: Combining Ecosystem Health with Human Wellbeing. Les dossiers thématiques du CSFD, N° 7.
United Nations - SEEA Central Framework
Bastien-Olvera, B.A., Conte, M.N., Dong, X. et al. Unequal climate impacts on global values of natural capital. Nature (2023). https://doi.org/10.1038/s41586-023-06769-z
External links
United Nations - System of Environmental-Economic Accounting (SEEA)
Natural Capital Project. A joint venture among The Woods Institute for the Environment at Stanford University, The Nature Conservancy, World Wildlife Fund.
Earth Economics
Natural Capital Forum
Case studies and examples to attribute economic values to ecosystems and their services (German website)
The Economics of Ecosystems and Biodiversity (TEEB)
Nick Breeze Interview with Economist Sir Partha Dasgupta, "What is Natural Capital?"
NOAA Economics - US Economic Benefits of Natural Systems to Business and Society.
Ecosystem Valuation Toolkit
Natural Capital Coalition
Capital (economics)
Natural resources
Ecological economics
Environmental social science concepts
"Environmental_science"
] | 2,715 | [
"Environmental social science concepts",
"Environmental social science"
] |
Dijkstra's algorithm

Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a weighted graph, which may represent, for example, a road network. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later.
Dijkstra's algorithm finds the shortest path from a given source node to every other node. It can be used to find the shortest path to a specific destination node, by terminating the algorithm after determining the shortest path to the destination node. For example, if the nodes of the graph represent cities, and the costs of edges represent the average distances between pairs of cities connected by a direct road, then Dijkstra's algorithm can be used to find the shortest route between one city and all other cities. A common application of shortest path algorithms is network routing protocols, most notably IS-IS (Intermediate System to Intermediate System) and OSPF (Open Shortest Path First). It is also employed as a subroutine in algorithms such as Johnson's algorithm.
The algorithm uses a min-priority queue data structure for selecting the shortest paths known so far. Before more advanced priority queue structures were discovered, Dijkstra's original algorithm ran in Θ(|V|²) time, where |V| is the number of nodes. Fredman & Tarjan (1984) proposed a Fibonacci heap priority queue to optimize the running time complexity to Θ(|E| + |V| log |V|). This is asymptotically the fastest known single-source shortest-path algorithm for arbitrary directed graphs with unbounded non-negative weights. However, specialized cases (such as bounded/integer weights, directed acyclic graphs, etc.) can be improved further. If preprocessing is allowed, algorithms such as contraction hierarchies can be up to seven orders of magnitude faster.
Dijkstra's algorithm is commonly used on graphs where the edge weights are positive integers or real numbers. It can be generalized to any graph where the edge weights are partially ordered, provided the subsequent labels (a subsequent label is produced when traversing an edge) are monotonically non-decreasing.
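As an illustration of such a generalization (a sketch not taken from the article; the function name and graph representation are invented), weight pairs (distance, hop count) combined by component-wise addition and compared lexicographically are monotonically non-decreasing along any path, so the algorithm also breaks distance ties by fewest hops:

```python
import heapq

def dijkstra_lex(graph, source):
    """Dijkstra over (distance, hop-count) weight pairs, compared
    lexicographically: traversing an edge adds (w, 1), so labels are
    monotonically non-decreasing and the algorithm remains correct."""
    best = {source: (0, 0)}
    queue = [((0, 0), source)]
    while queue:
        label, u = heapq.heappop(queue)
        if label > best.get(u, (float("inf"), 0)):
            continue                        # stale queue entry, skip
        d, h = label
        for v, w in graph[u].items():
            alt = (d + w, h + 1)            # component-wise addition
            if v not in best or alt < best[v]:
                best[v] = alt
                heapq.heappush(queue, (alt, v))
    return best
```

Among all minimum-distance paths to each node, this variant reports the distance together with the smallest number of edges realizing it.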
In many fields, particularly artificial intelligence, Dijkstra's algorithm or a variant offers a uniform cost search and is formulated as an instance of the more general idea of best-first search.
History
Dijkstra thought about the shortest path problem while working as a programmer at the Mathematical Center in Amsterdam in 1956. He wanted to demonstrate the capabilities of the new ARMAC computer. His objective was to choose a problem and a computer solution that non-computing people could understand. He designed the shortest path algorithm and later implemented it for ARMAC for a slightly simplified transportation map of 64 cities in the Netherlands (he limited it to 64, so that 6 bits would be sufficient to encode the city number). A year later, he came across another problem advanced by hardware engineers working on the institute's next computer: minimize the amount of wire needed to connect the pins on the machine's back panel. As a solution, he re-discovered Prim's minimal spanning tree algorithm (known earlier to Jarník, and also rediscovered by Prim). Dijkstra published the algorithm in 1959, two years after Prim and 29 years after Jarník.
Algorithm
The algorithm requires a starting node, and node N, with a distance between the starting node and N. Dijkstra's algorithm starts with infinite distances and tries to improve them step by step:
Create a set of all unvisited nodes: the unvisited set.
Assign to every node a distance from start value: for the starting node, it is zero, and for all other nodes, it is infinity, since initially no path is known to these nodes. During execution, the distance of a node N is the length of the shortest path discovered so far between the starting node and N.
From the unvisited set, select the current node to be the one with the smallest (finite) distance; initially, this is the starting node (distance zero). If the unvisited set is empty, or contains only nodes with infinite distance (which are unreachable), then the algorithm terminates by skipping to step 6. If the only concern is the path to a target node, the algorithm terminates once the current node is the target node. Otherwise, the algorithm continues.
For the current node, consider all of its unvisited neighbors and update their distances through the current node; compare the newly calculated distance to the one currently assigned to the neighbor and assign the smaller one to it. For example, if the current node A is marked with a distance of 6, and the edge connecting it with its neighbor B has length 2, then the distance to B through A is 6 + 2 = 8. If B was previously marked with a distance greater than 8, then update it to 8 (the path to B through A is shorter). Otherwise, keep its current distance (the path to B through A is not the shortest).
After considering all of the current node's unvisited neighbors, the current node is removed from the unvisited set. Thus a visited node is never rechecked, which is correct because the distance recorded on the current node is minimal (as ensured in step 3), and thus final. Repeat from step 3.
Once the loop exits (steps 3–5), every visited node contains its shortest distance from the starting node.
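The steps above can be sketched directly in Python. This is a minimal illustration, assuming (as an invented convention, not from the article) a graph represented as a dict mapping each node to a dict of neighbor-to-weight entries:

```python
import math

def dijkstra(graph, start):
    """Direct transcription of steps 1-6: linear scan for the minimum,
    no priority queue. Returns shortest distances from start."""
    # Steps 1-2: all nodes unvisited; every distance infinite except start.
    unvisited = set(graph)
    dist = {node: math.inf for node in graph}
    dist[start] = 0

    while unvisited:
        # Step 3: select the unvisited node with the smallest distance.
        current = min(unvisited, key=lambda node: dist[node])
        if dist[current] == math.inf:
            break  # only unreachable nodes remain

        # Step 4: update distances of unvisited neighbors through current.
        for neighbor, weight in graph[current].items():
            if neighbor in unvisited:
                dist[neighbor] = min(dist[neighbor], dist[current] + weight)

        # Step 5: mark current visited; it is never rechecked.
        unvisited.remove(current)

    # Step 6: dist holds the shortest distance to every reachable node.
    return dist
```

For example, with `graph = {'a': {'b': 2, 'c': 5}, 'b': {'c': 1}, 'c': {}}`, `dijkstra(graph, 'a')` yields distances 0, 2 and 3 for a, b and c.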
Description
The shortest path between two intersections on a city map can be found by this algorithm using pencil and paper. Every intersection is listed on a separate line: one is the starting point and is labeled (given a distance of) 0. Every other intersection is initially labeled with a distance of infinity. This is done to note that no path to these intersections has yet been established. At each iteration one intersection becomes the current intersection. For the first iteration, this is the starting point.
From the current intersection, the distance to every neighbor (directly-connected) intersection is assessed by summing the label (value) of the current intersection and the distance to the neighbor and then relabeling the neighbor with the lesser of that sum and the neighbor's existing label. I.e., the neighbor is relabeled if the path to it through the current intersection is shorter than previously assessed paths. If so, mark the road to the neighbor with an arrow pointing to it, and erase any other arrow that points to it. After the distances to each of the current intersection's neighbors have been assessed, the current intersection is marked as visited. The unvisited intersection with the smallest label becomes the current intersection and the process repeats until all nodes with labels less than the destination's label have been visited.
Once no unvisited nodes remain with a label smaller than the destination's label, the remaining arrows show the shortest path.
Pseudocode
In the following pseudocode, dist is an array that contains the current distances from the source to other vertices, i.e. dist[u] is the current distance from the source to the vertex u. The prev array contains pointers to previous-hop nodes on the shortest path from source to the given vertex (equivalently, it is the next-hop on the path from the given vertex to the source). The code u ← vertex in Q with minimum dist[u] searches for the vertex u in the vertex set Q that has the least dist[u] value. Graph.Edges(u, v) returns the length of the edge joining (i.e. the distance between) the two neighbor-nodes u and v. The variable alt on line 14 is the length of the path from the source node to the neighbor node v if it were to go through u. If this path is shorter than the current shortest path recorded for v, then the distance of v is updated to alt.
1 function Dijkstra(Graph, source):
2
3 for each vertex v in Graph.Vertices:
4 dist[v] ← INFINITY
5 prev[v] ← UNDEFINED
6 add v to Q
7 dist[source] ← 0
8
9 while Q is not empty:
10 u ← vertex in Q with minimum dist[u]
11 remove u from Q
12
13 for each neighbor v of u still in Q:
14 alt ← dist[u] + Graph.Edges(u, v)
15 if alt < dist[v]:
16 dist[v] ← alt
17 prev[v] ← u
18
19 return dist[], prev[]
To find the shortest path between vertices source and target, the search terminates after line 10 if u = target. The shortest path from source to target can be obtained by reverse iteration:
1 S ← empty sequence
2 u ← target
3 if prev[u] is defined or u = source: // Proceed if the vertex is reachable
4 while u is defined: // Construct the shortest path with a stack S
5 insert u at the beginning of S // Push the vertex onto the stack
6 u ← prev[u] // Traverse from target to source
Now sequence S is the list of vertices constituting one of the shortest paths from source to target, or the empty sequence if no path exists.
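Assuming dist and prev as produced by the main loop (modeled here as plain dicts, with a missing prev entry standing for UNDEFINED — a representation chosen for this sketch), the reverse iteration can be written as:

```python
def shortest_path(prev, source, target):
    """Rebuild the shortest path from source to target by walking the
    prev pointers backwards, mirroring the pseudocode above."""
    path = []                      # the sequence S
    u = target
    if u in prev or u == source:   # proceed only if the vertex is reachable
        while u is not None:
            path.insert(0, u)      # push the vertex onto the front of S
            u = prev.get(u)        # traverse from target back toward source
    return path                    # empty list if no path exists
```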
A more general problem is to find all the shortest paths between source and target (there might be several of the same length). Then instead of storing only a single node in each entry of prev[], all nodes satisfying the relaxation condition can be stored. For example, if both r and source connect to target and they lie on different shortest paths through target (because the edge cost is the same in both cases), then both r and source are added to prev[target]. When the algorithm completes, the prev[] data structure describes a graph that is a subset of the original graph with some edges removed. Its key property is that if the algorithm was run with some starting node, then every path from that node to any other node in the new graph is the shortest path between those nodes in the original graph, and all paths of that length from the original graph are present in the new graph. Then to actually find all these shortest paths between two given nodes, a path-finding algorithm on the new graph, such as depth-first search, would work.
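A sketch of this multi-predecessor variant (helper names are invented for illustration): each prev entry holds a list that is reset on a strict improvement and extended on a tie, after which a depth-first walk over the predecessor graph enumerates every shortest path.

```python
import math

def all_shortest_paths(graph, source, target):
    """Dijkstra keeping *all* optimal predecessors, then enumerating
    every shortest path from source to target by depth-first search."""
    dist = {v: math.inf for v in graph}
    prev = {v: [] for v in graph}          # lists instead of single nodes
    dist[source] = 0
    unvisited = set(graph)
    while unvisited:
        u = min(unvisited, key=lambda v: dist[v])
        unvisited.remove(u)
        for v, w in graph[u].items():
            alt = dist[u] + w
            if alt < dist[v]:              # strictly better: restart list
                dist[v], prev[v] = alt, [u]
            elif alt == dist[v] and alt < math.inf:
                prev[v].append(u)          # equally good: keep this hop too

    def walk(v):                           # DFS over the predecessor graph
        if v == source:
            yield [source]
            return
        for u in prev[v]:
            for path in walk(u):
                yield path + [v]

    return sorted(walk(target))
```

On a diamond graph with two equal-cost routes, both routes are returned.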
Using a priority queue
A min-priority queue is an abstract data type that provides 3 basic operations: add_with_priority(), decrease_priority() and extract_min(). As mentioned earlier, using such a data structure can lead to faster computing times than using a basic queue. Notably, Fibonacci heap or Brodal queue offer optimal implementations for those 3 operations. As the algorithm is slightly different in appearance, it is mentioned here, in pseudocode as well:
1 function Dijkstra(Graph, source):
2 create vertex priority queue Q
3
4 dist[source] ← 0 // Initialization
5 Q.add_with_priority(source, 0) // associated priority equals dist[·]
6
7 for each vertex v in Graph.Vertices:
8 if v ≠ source
9 prev[v] ← UNDEFINED // Predecessor of v
10 dist[v] ← INFINITY // Unknown distance from source to v
11 Q.add_with_priority(v, INFINITY)
12
13
14 while Q is not empty: // The main loop
15 u ← Q.extract_min() // Remove and return best vertex
16 for each neighbor v of u: // Go through all v neighbors of u
17 alt ← dist[u] + Graph.Edges(u, v)
18 if alt < dist[v]:
19 prev[v] ← u
20 dist[v] ← alt
21 Q.decrease_priority(v, alt)
22
23 return dist, prev
Instead of filling the priority queue with all nodes in the initialization phase, it is possible to initialize it to contain only source; then, inside the if alt < dist[v] block, the decrease_priority() becomes an add_with_priority() operation.
Yet another alternative is to add nodes unconditionally to the priority queue and to instead check after extraction (u ← Q.extract_min()) that it isn't revisiting, or that no shorter connection was found yet in the if alt < dist[v] block. This can be done by additionally extracting the associated priority p from the queue and only processing further if p == dist[u] inside the while Q is not empty loop.
These alternatives can use entirely array-based priority queues without decrease-key functionality, which have been found to achieve even faster computing times in practice. However, the difference in performance was found to be narrower for denser graphs.
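Python's heapq offers no decrease-key, so a sketch of the second alternative — push unconditionally and skip stale entries after extraction — looks like this (graph representation as in the earlier sketches, an assumption of this example):

```python
import heapq
import math

def dijkstra_pq(graph, source):
    """Priority-queue Dijkstra with lazy deletion: duplicate entries are
    pushed instead of decrease-key, and stale entries are skipped when
    extracted (the popped priority exceeds the recorded distance)."""
    dist = {v: math.inf for v in graph}
    prev = {}
    dist[source] = 0
    queue = [(0, source)]                  # queue starts with source only
    while queue:
        p, u = heapq.heappop(queue)        # extract_min
        if p > dist[u]:
            continue                       # stale: a shorter path to u was
                                           # already processed
        for v, w in graph[u].items():
            alt = p + w
            if alt < dist[v]:
                dist[v], prev[v] = alt, u
                heapq.heappush(queue, (alt, v))   # add_with_priority
    return dist, prev
```

Each vertex may appear several times in the heap, but only its cheapest entry triggers relaxation, which matches the p == dist[u] check described above.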
Proof
To prove the correctness of Dijkstra's algorithm, mathematical induction can be used on the number of visited nodes.
Invariant hypothesis: For each visited node v, dist[v] is the shortest distance from source to v, and for each unvisited node u, dist[u] is the shortest distance from source to u when traveling via visited nodes only, or infinity if no such path exists. (Note: we do not assume dist[u] is the actual shortest distance for unvisited nodes u, while dist[v] is the actual shortest distance for visited nodes v.)
Base case
The base case is when there is just one visited node, namely the source node. Its distance is defined to be zero, which is the shortest distance, since negative weights are not allowed. Hence, the hypothesis holds.
Induction
Assuming that the hypothesis holds for k visited nodes, to show it holds for k + 1 nodes, let u be the next visited node, i.e. the node with minimum dist[u]. The claim is that dist[u] is the shortest distance from source to u.
The proof is by contradiction. If a shorter path were available, then this shorter path either contains another unvisited node or not.
In the former case, let w be the first unvisited node on this shorter path. By induction, the shortest paths from source to u and from source to w through visited nodes only have costs dist[u] and dist[w] respectively. This means the cost of going from source to u via w is at least dist[w] plus the minimal cost of going from w to u. As the edge costs are positive, the minimal cost of going from w to u is a positive number. However, dist[u] is at most dist[w], because otherwise w would have been picked by the priority queue instead of u. This is a contradiction: the supposedly shorter path would cost at least dist[w] + a positive number ≥ dist[u] + a positive number, which exceeds dist[u].
In the latter case, let v be the last-but-one node on the shorter path. That means dist[v] + Graph.Edges(v, u) < dist[u]. That is a contradiction, because by the time v was visited, it would have set dist[u] to at most dist[v] + Graph.Edges(v, u).
For all other visited nodes v, dist[v] is already known to be the shortest distance from source, because of the inductive hypothesis, and these values are unchanged.
After processing u, it is still true that for each unvisited node w, dist[w] is the shortest distance from source to w using visited nodes only. Any shorter path that did not use u would already have been found, and if a shorter path used u, it would have been updated when processing u.
After all nodes are visited, the shortest path from source to any node v consists only of visited nodes. Therefore, dist[v] is the shortest distance.
Running time
Bounds of the running time of Dijkstra's algorithm on a graph with edges and vertices can be expressed as a function of the number of edges, denoted |E|, and the number of vertices, denoted |V|, using big-O notation. The complexity bound depends mainly on the data structure used to represent the set Q. In the following, upper bounds can be simplified because |E| is O(|V|²) for any simple graph, but that simplification disregards the fact that in some problems, other upper bounds on |E| may hold.
For any data structure for the vertex set Q, the running time is Θ(|E| · T_dk + |V| · T_em), where T_dk and T_em are the complexities of the decrease-key and extract-minimum operations in Q, respectively.
The simplest version of Dijkstra's algorithm stores the vertex set Q as a linked list or array, and edges as an adjacency list or matrix. In this case, extract-minimum is simply a linear search through all vertices in Q, so the running time is Θ(|V|²).
For sparse graphs, that is, graphs with far fewer than |V|² edges, Dijkstra's algorithm can be implemented more efficiently by storing the graph in the form of adjacency lists and using a self-balancing binary search tree, binary heap, pairing heap, Fibonacci heap or a priority heap as a priority queue to implement extracting minimum efficiently. To perform decrease-key steps in a binary heap efficiently, it is necessary to use an auxiliary data structure that maps each vertex to its position in the heap, and to update this structure as the priority queue changes. With a self-balancing binary search tree or binary heap, the algorithm requires Θ((|E| + |V|) log |V|) time in the worst case; for connected graphs this time bound can be simplified to Θ(|E| log |V|). The Fibonacci heap improves this to Θ(|E| + |V| log |V|).
When using binary heaps, the average case time complexity is lower than the worst case: assuming edge costs are drawn independently from a common probability distribution, the expected number of decrease-key operations is bounded by O(|V| log(|E|/|V|)), giving a total running time of O(|E| + |V| log(|E|/|V|) log |V|).
Practical optimizations and infinite graphs
In common presentations of Dijkstra's algorithm, initially all nodes are entered into the priority queue. This is, however, not necessary: the algorithm can start with a priority queue that contains only one item, and insert new items as they are discovered (instead of doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise insert it). This variant has the same worst-case bounds as the common variant, but maintains a smaller priority queue in practice, speeding up queue operations.
Moreover, not inserting all nodes in a graph makes it possible to extend the algorithm to find the shortest path from a single source to the closest of a set of target nodes on infinite graphs or those too large to represent in memory. The resulting algorithm is called uniform-cost search (UCS) in the artificial intelligence literature and can be expressed in pseudocode as
procedure uniform_cost_search(start) is
node ← start
frontier ← priority queue containing node only
expanded ← empty set
do
if frontier is empty then
return failure
node ← frontier.pop()
if node is a goal state then
return solution(node)
expanded.add(node)
for each of node's neighbors n do
if n is not in expanded and not in frontier then
frontier.add(n)
else if n is in frontier with higher cost
replace existing node with n
Its complexity can be expressed in an alternative way for very large graphs: when C* is the length of the shortest path from the start node to any node satisfying the "goal" predicate, each edge has cost at least ε, and the number of neighbors per node is bounded by b, then the algorithm's worst-case time and space complexity are both in O(b^(1+⌊C*/ε⌋)).
Further optimizations for the single-target case include bidirectional variants, goal-directed variants such as the A* algorithm, graph pruning to determine which nodes are likely to form the middle segment of shortest paths (reach-based routing), and hierarchical decompositions of the input graph that reduce routing to connecting the source and the target to their respective "transit nodes", followed by shortest-path computation between these transit nodes using a "highway". Combinations of such techniques may be needed for optimal practical performance on specific problems.
Optimality for comparison-sorting by distance
As well as simply computing distances and paths, Dijkstra's algorithm can be used to sort vertices by their distances from a given starting vertex.
In 2023, Haeupler, Rozhoň, Tětek, Hladík, and Tarjan (one of the inventors of the 1984 Fibonacci heap) proved that, for this sorting problem on a positively weighted directed graph, a version of Dijkstra's algorithm with a special heap data structure has a runtime and number of comparisons that is within a constant factor of optimal among comparison-based algorithms for the same sorting problem on the same graph and starting vertex but with variable edge weights. To achieve this, they use a comparison-based heap whose cost of returning/removing the minimum element from the heap is logarithmic in the number of elements inserted after it rather than in the number of elements in the heap.
Specialized variants
When arc weights are small integers (bounded by a parameter C), specialized queues can be used for increased speed. The first algorithm of this type was Dial's algorithm for graphs with positive integer edge weights, which uses a bucket queue to obtain a running time of O(|E| + |V|C). The use of a Van Emde Boas tree as the priority queue brings the complexity to O(|E| log log C). Another interesting variant based on a combination of a new radix heap and the well-known Fibonacci heap runs in time O(|E| + |V|√(log C)). Finally, the best algorithms in this special case run in O(|E| log log |V|) time and O(|E| + |V| min{(log |V|)^(1/3+ε), (log C)^(1/4+ε)}) time.
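Dial's bucket-queue idea can be sketched as follows. For brevity this version indexes buckets directly by distance rather than cyclically modulo C+1 (so it uses more memory than the textbook formulation, but behaves the same); the function name and adjacency-dict representation are our own:

```python
def dial_shortest_paths(graph, source, C):
    """Dial's algorithm sketch for integer edge weights in 1..C.
    `graph` maps each vertex to a list of (neighbor, weight) pairs.
    Buckets are scanned in increasing distance order, so vertices are
    settled exactly as in Dijkstra's algorithm."""
    n_buckets = C * len(graph) + 1   # enough for any finite distance
    buckets = [[] for _ in range(n_buckets)]
    dist = {source: 0}
    buckets[0].append(source)
    for d in range(n_buckets):
        for u in buckets[d]:
            if dist[u] != d:         # stale entry from a longer path
                continue
            for v, w in graph.get(u, []):
                nd = d + w
                if v not in dist or nd < dist[v]:
                    dist[v] = nd     # relabel: move v to a nearer bucket
                    buckets[nd].append(v)
    return dist
```

Since every weight is at least 1, relaxations only ever append to strictly later buckets, so iterating over `buckets[d]` while filling later buckets is safe.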
Related problems and algorithms
Dijkstra's original algorithm can be extended with modifications. For example, sometimes it is desirable to present solutions which are less than mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal solution is first calculated. A single edge appearing in the optimal solution is removed from the graph, and the optimum solution to this new graph is calculated. Each edge of the original solution is suppressed in turn and a new shortest-path calculated. The secondary solutions are then ranked and presented after the first optimal solution.
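The edge-suppression procedure just described might be sketched like this (illustrative only; real k-shortest-path implementations, such as Yen's algorithm, are considerably more refined, and the helper names here are our own):

```python
import heapq

def shortest_path(graph, source, target, banned=frozenset()):
    """Dijkstra returning (cost, path) while ignoring edges in `banned`
    (a set of (u, v) pairs), or None if the target is unreachable."""
    heap = [(0, [source])]
    seen = set()
    while heap:
        cost, path = heapq.heappop(heap)
        u = path[-1]
        if u == target:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, []):
            if (u, v) not in banned and v not in seen:
                heapq.heappush(heap, (cost + w, path + [v]))
    return None

def ranked_paths(graph, source, target):
    """Suppress each edge of the optimal path in turn, re-solve, and
    rank the secondary solutions after the optimum."""
    best = shortest_path(graph, source, target)
    if best is None:
        return []
    _, path = best
    alternatives = []
    for u, v in zip(path, path[1:]):
        alt = shortest_path(graph, source, target, banned={(u, v)})
        if alt is not None and alt not in alternatives:
            alternatives.append(alt)
    return [best] + sorted(alternatives)
```

Each re-solve is an independent shortest-path computation, so the whole procedure costs one Dijkstra run per edge of the optimal path.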
Dijkstra's algorithm is usually the working principle behind link-state routing protocols. OSPF and IS-IS are the most common.
Unlike Dijkstra's algorithm, the Bellman–Ford algorithm can be used on graphs with negative edge weights, as long as the graph contains no negative cycle reachable from the source vertex s. The presence of such cycles means that no shortest path can be found, since the label becomes lower each time the cycle is traversed. (This statement assumes that a "path" is allowed to repeat vertices. In graph theory that is normally not allowed. In theoretical computer science it often is allowed.) It is possible to adapt Dijkstra's algorithm to handle negative weights by combining it with the Bellman–Ford algorithm (to remove negative edges and detect negative cycles): Johnson's algorithm.
The A* algorithm is a generalization of Dijkstra's algorithm that reduces the size of the subgraph that must be explored, if additional information is available that provides a lower bound on the distance to the target.
The process that underlies Dijkstra's algorithm is similar to the greedy process used in Prim's algorithm. Prim's purpose is to find a minimum spanning tree that connects all nodes in the graph; Dijkstra is concerned with only two nodes. Prim's does not evaluate the total weight of the path from the starting node, only the individual edges.
Breadth-first search can be viewed as a special-case of Dijkstra's algorithm on unweighted graphs, where the priority queue degenerates into a FIFO queue.
The fast marching method can be viewed as a continuous version of Dijkstra's algorithm which computes the geodesic distance on a triangle mesh.
Dynamic programming perspective
From a dynamic programming point of view, Dijkstra's algorithm is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method.
In fact, Dijkstra's explanation of the logic behind the algorithm:
is a paraphrasing of Bellman's Principle of Optimality in the context of the shortest path problem.
See also
A* search algorithm
Bellman–Ford algorithm
Euclidean shortest path
Floyd–Warshall algorithm
Johnson's algorithm
Longest path problem
Parallel all-pairs shortest path algorithm
Notes
References
External links
Oral history interview with Edsger W. Dijkstra, Charles Babbage Institute, University of Minnesota, Minneapolis
Implementation of Dijkstra's algorithm using TDD, Robert Cecil Martin, The Clean Code Blog
Algorithm
1959 in computing
Graph algorithms
Search algorithms
Routing algorithms
Combinatorial optimization
Articles with example pseudocode
Dutch inventions
Graph distance
45,810 | https://en.wikipedia.org/wiki/Subwoofer | A subwoofer (or sub) is a loudspeaker designed to reproduce low-pitched audio frequencies, known as bass and sub-bass, that are lower in frequency than those which can be (optimally) generated by a woofer. The typical frequency range that is covered by a subwoofer is about 20–200 Hz for consumer products, below 100 Hz for professional live sound, and below 80 Hz in THX-certified systems. Thus, one or more subwoofers are important for high-quality sound reproduction as they are responsible for the lowest two to three octaves of the ten octaves that are audible. This very low-frequency (VLF) range reproduces the natural fundamental tones of the bass drum, electric bass, double bass, grand piano, contrabassoon, tuba, in addition to thunder, gunshots, explosions, etc.
Subwoofers are never used alone, as they are intended to substitute for the VLF sounds of "main" loudspeakers that cover the higher frequency bands. VLF and higher-frequency signals are sent separately to the subwoofer(s) and the mains by a "crossover" network, typically using active electronics, including digital signal processing (DSP). Additionally, subwoofers are fed their own low-frequency effects (LFE) signals, which are reproduced at 10 dB higher than standard peak level.
Subwoofers can be positioned more favorably than the main speakers' woofers in the typical listening room acoustic, as the very low frequencies they reproduce are nearly omnidirectional and their direction largely indiscernible. However, much digitally recorded content contains lifelike binaural cues that human hearing may be able to detect in the VLF range, reproduced by a stereo crossover and two or more subwoofers. Subwoofers are not acceptable to all audiophiles, likely due to distortion artifacts produced by the subwoofer driver after the crossover and at frequencies above the crossover.
While the term "subwoofer" technically only refers to the speaker driver, in common parlance, the term often refers to a subwoofer driver mounted in a speaker enclosure (cabinet), often with a built-in amplifier.
Subwoofers are made up of one or more woofers mounted in a loudspeaker enclosure—often made of wood—capable of withstanding air pressure while resisting deformation. Subwoofer enclosures come in a variety of designs, including bass reflex (with a port or vent), using a subwoofer and one or more passive radiator speakers in the enclosure, acoustic suspension (sealed enclosure), infinite baffle, horn-loaded, tapped horn, transmission line, bandpass or isobaric designs. Each design has unique trade-offs with respect to efficiency, low-frequency range, loudness, cabinet size, and cost. Passive subwoofers have a subwoofer driver and enclosure, but they are powered by an external amplifier. Active subwoofers include a built-in amplifier.
The first home audio subwoofers were developed in the 1960s to add bass response to home stereo systems. Subwoofers came into greater popular consciousness in the 1970s with the introduction of Sensurround in movies such as Earthquake, which produced loud low-frequency sounds through large subwoofers. With the advent of the compact cassette and the compact disc in the 1980s, the reproduction of deep and loud bass was no longer limited by the ability of a phonograph record stylus to track a groove, and producers could add more low-frequency content to recordings. As well, during the 1990s, DVDs were increasingly recorded with "surround sound" processes that included a low-frequency effects (LFE) channel, which could be heard using the subwoofer in home-cinema (also called home theater) systems. During the 1990s, subwoofers also became increasingly popular in home stereo systems, custom car audio installations, and in PA systems. By the 2000s, subwoofers became almost universal in sound reinforcement systems in nightclubs and concert venues.
Unlike a system's main loudspeakers, subwoofers can be positioned more optimally in a listening room's acoustic. However, subwoofers are not universally accepted by audiophiles amid complaints of the difficulty of "splicing" the sound with that of the main speakers around the crossover frequency. This is largely due to the subwoofer driver's non-linearity producing harmonic and intermodulation distortion products well above the crossover frequency, and into the range where human hearing can "localize" them, wrecking the stereo "image".
History
1920s to 1950s precursors
From about 1900 to the 1950s, the "lowest frequency in practical use" in recordings, broadcasting and music playback was 100 Hz. When sound was developed for motion pictures, the basic RCA sound system was a single 8-inch (20 cm) speaker mounted in a straight horn, an approach which was deemed unsatisfactory by Hollywood decisionmakers, who hired Western Electric engineers to develop a better speaker system. The early Western Electric experiments added a set of 18-inch drivers for the low end in a large, open-backed baffle (extending the range down to 50 Hz) and a high-frequency unit, but MGM was not pleased with the sound of the three-way system, as they had concerns about the delay between the different drivers.
In 1933, the head of MGM's sound department, Douglas Shearer, worked with John Hilliard and James B. Lansing (who would later found Altec Lansing in 1941 and JBL in 1946) to develop a new speaker system that used a two-way enclosure with a W-shaped bass horn that could go as low as 40 Hz. The Shearer-Lansing 500-A ended up being used in "screening rooms, dubbing theaters, and early sound reinforcement". In the late 1930s, Lansing created a smaller two-way speaker with a woofer in a vented enclosure, which he called the Iconic system; it was used as a studio monitor and in high-end home hi-fi set-ups.
During the 1940s swing era, to get deeper bass, "pipelike opening[s]" were cut into speaker enclosures, creating bass reflex enclosures, as it was found that even a fairly inexpensive speaker enclosure, once modified in this way, could "transmit the driving power of a heavy...drumbeat—and sometimes not much else—to a crowded dancefloor." Prior to the development of the first subwoofers, woofers were used to reproduce bass frequencies, usually with a crossover point set at 500 Hz and a loudspeaker in an infinite baffle or in professional sound applications, a "hybrid horn-loaded" bass reflex enclosure (such as the 15-inch Altec Lansing A-7 enclosure nicknamed the "Voice of the Theater", which was introduced in 1946). In the mid-1950s, the Academy of Motion Picture Arts and Sciences selected the "big, boxy" Altec A-7 as the industry standard for movie sound reproduction in theaters.
1960s: first subwoofers
In September 1964, Raymon Dones, of El Cerrito, California, received the first patent for a subwoofer specifically designed to augment omnidirectionally the low frequency range of modern stereo systems (US patent 3150739). It was able to reproduce distortion-free low frequencies down to 15 cycles per second (15 Hz). A specific objective of Dones's invention was to provide portable sound enclosures capable of high fidelity reproduction of low frequency sound waves without giving an audible indication of the direction from which they emanated. Dones's loudspeaker was marketed in the US under the trade name "The Octavium" from the early 1960s to the mid-1970s. The Octavium was utilized by several recording artists of that era, most notably the Grateful Dead, bassist Monk Montgomery, bassist Nathan East, and the Pointer Sisters. The Octavium speaker and Dones's subwoofer technology were also utilized, in a few select theaters, to reproduce low pitch frequencies for the 1974 blockbuster movie Earthquake. During the late 1960s, Dones's Octavium was favorably reviewed by audiophile publications including Hi-Fi News and Audio Magazine.
Another early subwoofer enclosure made for home and studio use was the separate bass speaker for the Servo Statik 1 by New Technology Enterprises. Designed as a prototype in 1966 by physicist Arnold Nudell and airline pilot Cary Christie in Nudell's garage, it used a second winding around a custom Cerwin-Vega 18-inch (45 cm) driver to provide servo control information to the amplifier, and it was offered for sale at $1795, some 40% more expensive than any other complete loudspeaker listed at Stereo Review. In 1968, the two found outside investors and reorganized as Infinity. The subwoofer was reviewed positively in Stereophile magazine's winter 1968 issue as the SS-1 by Infinity. The SS-1 received very good reviews in 1970 from High Fidelity magazine.
Another of the early subwoofers was developed during the late 1960s by Ken Kreisel, the former president of the Miller & Kreisel Sound Corporation in Los Angeles. When Kreisel's business partner, Jonas Miller, who owned a high-end audio store in Los Angeles, told Kreisel that some purchasers of the store's high-end electrostatic speakers had complained about a lack of bass response in the electrostatics, Kreisel designed a powered woofer that would reproduce only those frequencies that were too low for the electrostatic speakers to convey. Infinity's full range electrostatic speaker system that was developed during the 1960s also used a woofer to cover the lower frequency range that its electrostatic arrays did not handle adequately.
1970s to 1980s
The first use of a subwoofer in a recording session was in 1973 for mixing the Steely Dan album Pretzel Logic, when recording engineer Roger Nichols arranged for Kreisel to bring a prototype of his subwoofer to Village Recorders. Further design modifications were made by Kreisel over the next ten years, and in the 1970s and 1980s by engineer John P. D'Arcy; record producer Daniel Levitin served as a consultant and "golden ears" for the design of the crossover network (used to partition the frequency spectrum so that the subwoofer would not attempt to reproduce frequencies too high for its effective range, and so that the main speakers would not need to handle frequencies too low for their effective range). In 1976, Kreisel created the first satellite speakers and subwoofer system, named "David and Goliath".
Subwoofers received a great deal of publicity in 1974 with the movie Earthquake, which was released in Sensurround. Initially installed in 17 U.S. theaters, the Cerwin-Vega "Sensurround" system used large subwoofers that were driven by racks of 500 watt amplifiers, triggered by control tones printed on one of the audio tracks on the film. Four of the subwoofers were positioned in front of the audience under (or behind) the film screen and two more were placed together at the rear of the audience on a platform. Powerful noise energy and loud rumbling in the range of 17 to 120 Hz were generated at the level of 110–120 decibels of sound pressure level, abbreviated dB(SPL). The new low frequency entertainment method helped the film become a box office success. More Sensurround systems were assembled and installed. By 1976, there were almost 300 Sensurround systems leapfrogging through select theaters. Other films to use the effect include the WW II naval battle epic Midway in 1976 and Rollercoaster in 1977.
For owners of 33 rpm LPs and 45 rpm singles, loud and deep bass was limited by the ability of the phonograph record stylus to track the groove. While some hi-fi aficionados had solved the problem by using other playback sources, such as reel-to-reel tape players which were capable of delivering accurate, naturally deep bass from acoustic sources, or synthetic bass not found in nature, with the popular introduction of the compact cassette in the late 1960s it became possible to add more low frequency content to recordings. By the mid-1970s, 12-inch vinyl singles, which allowed for "more bass volume", were used to record disco, reggae, dub and hip-hop tracks; dance club DJs played these records in clubs with subwoofers to achieve "physical and emotional" reactions from dancers.
In the early 1970s, David Mancuso hired sound engineer Alex Rosner to design additional subwoofers for his disco dance events, along with "tweeter arrays" to "boost the treble and bass at opportune moments" at his private, underground parties at The Loft. The demand for sub-bass sound reinforcement in the 1970s was driven by the important role of "powerful bass drum" in disco, as compared with rock and pop; to provide this deeper range, a third crossover point from 40 to 120 Hz (centering on 80 Hz) was added. The Paradise Garage discotheque in New York City, which operated from 1977 to 1987, had "custom designed 'sub-bass' speakers" developed by Alex Rosner's disciple, sound engineer Richard ("Dick") Long that were called "Levan Horns" (in honor of resident DJ Larry Levan).
By the end of the 1970s, subwoofers were used in dance venue sound systems to enable the playing of "[b]ass-heavy dance music" that we "do not 'hear' with our ears but with our entire body". At the club, Long used four Levan bass horns, one in each corner of the dancefloor, to create a "haptic and tactile quality" in the sub-bass that you could feel in your body. To overcome the lack of sub-bass frequencies on 1970s disco records (sub-bass frequencies below 60 Hz were removed during mastering), Long added a DBX 100 "Boom Box" subharmonic pitch generator into his system to synthesize 25 to 50 Hz sub-bass from the 50 to 100 Hz bass on the records.
By the later 1970s, disco club sound engineers were using the same large Cerwin-Vega Sensurround-style folded horn subwoofers that were used in Earthquake and similar movies in dance club system installations. In the early 1980s, Long designed a sound system for the Warehouse dance club, with "huge stacks of subwoofers" which created "deep and intense" bass frequencies that "pound[ed] through your system" and "entire body", enabling clubgoers to "viscerally experience" the DJs' house music mixes.
In Jamaica in the 1970s and 1980s, sound engineers for reggae sound systems began creating "heavily customized" subwoofer enclosures by adding foam and tuning the cabinets to achieve "rich and articulate speaker output below 100 Hz". The sound engineers who developed the "bass-heavy signature sound" of sound reinforcement systems have been called "deserving as much credit for the sound of Jamaican music as their better-known music producer cousins". The sound engineers for Stone Love Movement (a sound system crew), for example, modified folded horn subwoofers they imported from the US to get more of a bass reflex sound that suited local tone preferences for dancehall audiences, as the unmodified folded horn was found to be "too aggressive" sounding and "not deep enough for Jamaican listeners".
In sound system culture, there are both "low and high bass bins" in "towering piles" that are "delivered in large trucks" and set up by a crew of "box boys", and then positioned and adjusted by the sound engineer in a process known as "stringing up", all to create the "sound of reggae music you can literally feel as it comes off these big speakers". Sound system crews hold 'sound clash' competitions, where each sound system is set up and then the two crews try to outdo each other, both in terms of loudness and the "bass it produced".
In the 1980s, the Bose Acoustimass AM-5 became a popular subwoofer and small high-range satellite speaker system for home listening. Steve Feinstein stated that with the AM-5, the system's "appearance mattered as much as, if not more than, great sound" to consumers of this era, as it was considered to be a "cool" look. The success of the AM-5 led to other makers launching subwoofer-satellite speaker systems, including Boston Acoustics Sub Sat 6 and 7, and the Cambridge SoundWorks Ensemble systems (by Kloss). Feinstein claims that these sub-satellite systems showed manufacturers and designers that home-cinema systems with a hidden subwoofer could be "feasible and workable in a normal living room" for mainstream consumers. Despite criticism of the AM-5 from audio experts, regarding a lack of bass range below 60 Hz, an "acoustic hole" in the 120 to 200 Hz range and a lack of upper range above 13 kHz for the satellites, the AM-5 system represented 30% of the US speaker market in the early 1990s.
In the 1980s, Origin Acoustics developed the first residential in-wall subwoofer named Composer. It used an aluminum 10-inch (25.4 cm) driver and a foam-lined enclosure designed to be mounted directly into wall studs during the construction of a new home. The frequency response for the Composer is 30 to 250 Hz.
1990s to 2010s
While in the 1960s and 1970s deep bass speakers were once an exotic commodity owned by audiophiles, by the mid-1990s they were much more popular and widely used, with different sizes and capabilities of sound output. An example of 1990s subwoofer use in sound reinforcement is the Ministry of Sound dance club which opened in 1991 in London. The dancefloor's sound system was based on Richard Long's design at Paradise Garage. The club spent about £500,000 on a sound system that used Martin Audio components in custom-built cabinets, including twelve 21" 9,500 watt active subwoofers, twelve 18-inch subwoofers and twelve Martin Audio W8C mid-high speakers.
The popularity of the CD made it possible to add more low frequency content to recordings and satisfy a larger number of consumers. Home subwoofers grew in popularity, as they were easy to add to existing multimedia speaker setups and they were easy to position or hide.
In 2015, Damon Krukowski wrote an article entitled "Drop the Bass: A Case Against Subwoofers" for Pitchfork magazine, based on his performing experience with Galaxie 500; he argues that "for certain styles of music", especially acoustic music genres, "these low-end behemoths are actually ruining our listening experience" by reducing the clarity of the low end. In 2015, John Hunter from REL Acoustics stated that audiophiles tend to "have a love/hate relationship with subwoofers" because most subs have "awful", "entry-level" sound quality and they are used in an "inappropriate way", without integrating the bass seamlessly.
In 2018, some electronic dance music (EDM) sound systems for venues that play hardcore bass have multiple subwoofer arrays to deal with mid-bass (80–140 Hz), bass (40–80 Hz), and "infra-bass" (20–40 Hz).
Construction and features
Loudspeaker and enclosure design
Subwoofers use speaker drivers (woofers) typically between 8-inch (20 cm) and 21-inch (53 cm) in diameter. Some uncommon subwoofers use larger drivers, and single prototype subwoofers as large as 60-inch (152 cm) have been fabricated. On the smaller end of the spectrum, subwoofer drivers as small as 4-inch (10 cm) may be used. Small subwoofer drivers in the 4-inch range are typically used in small computer speaker systems and compact home-cinema subwoofer cabinets. The size of the driver and number of drivers in a cabinet depends on the design of the loudspeaker enclosure, the size of the cabinet, the desired sound pressure level, the lowest frequency targeted and the level of permitted distortion. The most common subwoofer driver sizes used for sound reinforcement in nightclubs, raves and pop/rock concerts are 10-, 12-, 15- and 18-inch models (25 cm, 30 cm, 38 cm, and 45 cm respectively). The largest available sound reinforcement subwoofers, 21-inch (53 cm) drivers, are less commonly seen.
The reference efficiency of a loudspeaker system in its passband is given by:
η0 = (4π²/c³) · (f_S³ · V_AS / Q_ES)
where c is the speed of sound in air and the variables are Thiele/Small parameters: f_S is the resonance frequency of the driver, V_AS is the volume of air having the same acoustic compliance as the driver suspension, and Q_ES is the Q of the driver at f_S considering only the electrical DC resistance of the driver voice coil. Deep low-frequency extension is a common goal for a subwoofer and small box volumes are also considered desirable, to save space and reduce the size for ease of transportation (in the case of sound reinforcement and DJ subwoofers). Hofmann's "Iron Law" therefore mandates low efficiency under those constraints, and indeed most subwoofers require considerable power, much more than other individual drivers.
So, for the example of a closed-box loudspeaker system, the box volume V_AB needed to achieve a given total Q (Q_TC) of the system is proportional to V_AS:
V_AB = V_AS / α
where α is the system compliance ratio, given by the ratio of the driver compliance and the enclosure compliance, which can be written as:
α = (f_C / f_S)² − 1
where f_C is the system resonance frequency.
Therefore, a decrease in box volume (i.e., a smaller speaker cabinet) at the same Q_TC will decrease the efficiency of the subwoofer. The normalized half-power (−3 dB) frequency f₃ of a closed-box loudspeaker system is given by:
f₃ / f_C = √( [ (1/Q_TC² − 2) + √( (1/Q_TC² − 2)² + 4 ) ] / 2 )
Here we note that if Q_TC = 1/√2 ≈ 0.707, then f₃ = f_C.
As the efficiency for a given box volume is proportional to f₃³, small improvements in low-frequency extension with the same driver and box volume will result in very significant reductions in efficiency. For these reasons, subwoofers are typically very inefficient at converting electrical energy into sound energy. This combination of factors accounts for the higher amplifier power required to drive subwoofers, and the requirement for greater power handling for subwoofer drivers. Enclosure variations (e.g., bass reflex designs with a port in the cabinet) are often used for subwoofers to increase the efficiency of the driver/enclosure system, helping to reduce the amplifier power requirements. Vented-box loudspeaker systems have a maximum theoretical efficiency that is 2.9 dB greater than that of the closed-box system.
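As a numeric illustration of the standard Thiele/Small reference-efficiency formula, η0 = 4π² f_S³ V_AS / (c³ Q_ES), the short computation below uses hypothetical driver parameters chosen only as plausible values for a 12-inch subwoofer driver, not figures for any real product:

```python
import math

def reference_efficiency(f_s, V_as, Q_es, c=343.0):
    """Thiele/Small reference efficiency of a driver in its passband:
    eta_0 = (4 * pi^2 / c^3) * f_s^3 * V_as / Q_es,
    with f_s in Hz, V_as in cubic metres, Q_es dimensionless, and
    c the speed of sound in air (m/s)."""
    return (4 * math.pi ** 2 / c ** 3) * (f_s ** 3) * V_as / Q_es

# Hypothetical 12-inch subwoofer driver (illustrative values only):
# f_s = 25 Hz, V_as = 150 litres (0.150 m^3), Q_es = 0.45
eta = reference_efficiency(f_s=25.0, V_as=0.150, Q_es=0.45)
print(f"eta_0 = {eta * 100:.2f}%")
```

The result lands near half a percent, illustrating the text's point that subwoofers convert only a small fraction of amplifier power into acoustic output.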
Subwoofers are typically constructed by mounting one or more woofers in a cabinet of medium-density fibreboard (MDF), oriented strand board (OSB), plywood, fiberglass, aluminum or other stiff materials. Because of the high air pressure that they produce in the cabinet, subwoofer enclosures often require internal bracing to distribute the resulting forces.
Subwoofers have been designed using a number of enclosure approaches: bass reflex (with a port or vent), using a subwoofer and one or more passive radiator speakers in the enclosure, acoustic suspension (sealed enclosure), infinite baffle, horn-loaded, tapped horn, transmission line and bandpass. Each enclosure type has advantages and disadvantages in terms of efficiency increase, bass extension, cabinet size, distortion, and cost.
Multiple enclosure types may even be combined in a single design, such as in computer audio with the subwoofer design of the Labtec LCS-2424 (later acquired by Logitech and used for their Z340/Z540/Z640/Z3/Z4), which is a (primitive) passive radiator bandpass enclosure with a bass reflex dividing chamber.
While not necessarily an enclosure type, isobaric (such as push-pull) coupled loading of two drivers has sometimes been used in subwoofer products of computer, home cinema and sound reinforcement class, and also DIY versions in automotive applications, to provide relatively deep bass for their size. Self-contained "isobaric-like" driver assemblies have been manufactured since the 2010s.
The smallest subwoofers are typically those designed for desktop multimedia systems. The largest common subwoofer enclosures are those used for concert sound reinforcement systems or dance club sound systems. An example of a large concert subwoofer enclosure is the 1980s-era Electro-Voice MT-4 "Bass Cube" system, which used four 18-inch (45 cm) drivers. An example of a subwoofer that uses a bass horn is the Bassmaxx B-Two, which loads an 18-inch (45 cm) driver onto a long folded horn. Folded horn-type subwoofers can typically produce a deeper range with greater efficiency than the same driver in an enclosure that lacks a horn. However, folded horn cabinets are typically larger and heavier than front-firing enclosures, so folded horns are less commonly used. Some experimental fixed-installation subwoofer horns have been constructed using brick and concrete to produce a very long horn that allows a very deep sub-bass extension.
Subwoofer output level can be increased by increasing cone surface area or by increasing cone excursion. Since large drivers require undesirably large cabinets, most subwoofer drivers have large excursions. Unfortunately, high excursion, at high power levels, tends to produce more distortion from inherent mechanical and magnetic effects in electro-dynamic drivers (the most common sort). The conflict between assorted goals can never be fully resolved; subwoofer designs necessarily involve tradeoffs and compromises. Hofmann's Iron Law (the efficiency of a woofer system is directly proportional to its cabinet volume (as in size) and to the cube of its cutoff frequency, that is how low in pitch it will go) applies to subwoofers just as it does to all loudspeakers. Thus, a subwoofer enclosure designer aiming at the deepest-pitched bass will probably have to consider using a large enclosure size; a subwoofer enclosure designer instructed to create the smallest possible cabinet (to make transportation easier) will need to compromise how low in pitch their cabinet will go.
Frequency range and frequency response
The frequency response specification of a speaker describes the range of frequencies or musical tones a speaker can reproduce, measured in hertz (Hz). The typical frequency range for a subwoofer is between 20–200 Hz. Professional concert sound system subwoofers typically operate below 100 Hz, and THX-certified systems operate below 80 Hz. Subwoofers vary in terms of the range of pitches that they can reproduce, depending on a number of factors such as the size of the cabinet and the construction and design of the enclosure and driver(s). Specifications of frequency response depend wholly for relevance on an accompanying amplitude value—measurements taken with a wider amplitude tolerance will give any loudspeaker a wider frequency response. For example, the JBL 4688 TCB Subwoofer System, a now-discontinued system which was designed for movie theaters, had a frequency response of 23–350 Hz when measured within a 10-decibel boundary (0 dB to −10 dB) and a narrower frequency response of 28–120 Hz when measured within a 6-decibel boundary (±3 dB).
Subwoofers also vary in regard to the sound pressure levels achievable and the distortion levels that they produce over their range. Some subwoofers, such as The Abyss by MartinLogan for example, can reproduce pitches down to around 18 Hz (which is about the pitch of the lowest rumbling notes on a huge pipe organ with 16 Hz bass pipes) to 120 Hz (±3 dB). Nevertheless, even though the Abyss subwoofer can go down to 18 Hz, its lowest frequency and maximum SPL with a limit of 10% distortion is 35.5 Hz and 79.8 dB at 2 meters. This means that a person choosing a subwoofer needs to consider more than just the lowest pitch that the subwoofer can reproduce.
Amplification
'Active subwoofers' include their own dedicated amplifiers within the cabinet. Some also include user-adjustable equalization that allows boosted or reduced output at particular frequencies; these vary from a simple "boost" switch, to fully parametric equalizers meant for detailed speaker and room correction. Some such systems are even supplied with a calibrated microphone to measure the subwoofer's in-room response, so the automatic equalizer can correct the combination of subwoofer, subwoofer location, and room response to minimize the effects of room modes and improve low-frequency performance.
'Passive subwoofers' have a subwoofer driver and enclosure, but they do not include an amplifier. They sometimes incorporate internal passive crossovers, with the filter frequency determined at the factory. These are generally used with third-party power amplifiers, taking their inputs from active crossovers earlier in the signal chain. Inexpensive home-theater-in-a-box (HTIB) packages often come with a passive subwoofer cabinet that is amplified by the multi-channel amplifier. While few high-end home-cinema systems use passive subwoofers, this format is still popular in the professional sound industry.
Equalization
Equalization can be used to adjust the in-room response of a subwoofer system. Designers of active subwoofers sometimes include a degree of corrective equalization to compensate for known performance issues (e.g. a steeper than desired low end roll-off rate). In addition, many amplifiers include an adjustable low-pass filter, which prevents undesired higher frequencies from reaching the subwoofer driver. For example, if a listener's main speakers are usable down to 80 Hz, then the subwoofer filter can be set so the subwoofer only works below 80 Hz. Typical filters involve some overlap in frequency ranges; a steep 4th-order 24 dB/octave low-pass filter is generally desired for subwoofers in order to minimize the overlap region. The filter section may also include a high-pass "infrasonic" or "subsonic" filter, which prevents the subwoofer driver from attempting to reproduce frequencies below its safe capabilities. Setting an infrasonic filter is important on bass reflex subwoofer cabinets, as the bass reflex design tends to create the risk of cone overexcursion at pitches below those of the port tuning, which can cause distortion and damage the subwoofer driver. For example, in a ported subwoofer enclosure tuned to 30 Hz, one may wish to filter out pitches below the tuning frequency; that is, frequencies below 30 Hz.
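The quoted filter slopes are straightforward to work with numerically: an Nth-order filter falls off at roughly 6N dB per octave beyond its cutoff, so a 4th-order low-pass gives the 24 dB/octave figure above. A minimal sketch of that asymptotic arithmetic (not an exact filter response):

```python
import math

def rolloff_attenuation_db(f, cutoff, order):
    """Approximate stop-band attenuation of an order-N low-pass filter as
    6*N dB per octave beyond the cutoff (0 dB inside the passband).
    This is the asymptotic slope only, not an exact filter curve."""
    if f <= cutoff:
        return 0.0
    octaves = math.log2(f / cutoff)
    return 6 * order * octaves

# 4th-order (24 dB/octave) low-pass at 80 Hz:
print(round(rolloff_attenuation_db(160, 80, 4), 1))  # one octave up -> 24.0
print(round(rolloff_attenuation_db(320, 80, 4), 1))  # two octaves  -> 48.0
```

This is why a steep slope minimizes the overlap region: one octave above an 80 Hz crossover the subwoofer's contribution is already down 24 dB.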
Some systems use parametric equalization in an attempt to correct for room frequency response irregularities. Equalization is often unable to achieve flat frequency response at all listening locations, in part because of the resonance (i.e. standing wave) patterns at low frequencies in nearly all rooms. Careful positioning of the subwoofer within the room can also help flatten the frequency response. Multiple subwoofers can manage a flatter general response since they can often be arranged to excite room modes more evenly than a single subwoofer, allowing equalization to be more effective.
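The room resonances mentioned above occur at predictable frequencies. A minimal sketch of the axial (single-dimension) mode series, f_n = n·c / 2L, ignoring tangential and oblique modes and assuming a 343 m/s speed of sound:

```python
def axial_modes(length_m, c=343.0, n_max=4):
    """Axial standing-wave (room mode) frequencies along one room
    dimension: f_n = n * c / (2 * L). A simplified sketch that ignores
    tangential and oblique modes."""
    return [n * c / (2 * length_m) for n in range(1, n_max + 1)]

# A 5 m long room piles up axial modes at roughly 34, 69, 103 and 137 Hz,
# squarely in a subwoofer's working range:
print([round(f, 1) for f in axial_modes(5.0)])
```

Because these frequencies depend only on room geometry, repositioning the subwoofer (or adding a second one) changes how strongly each mode is excited, which is why placement can help where equalization alone cannot.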
Phase control
Changing the relative phase of the subwoofer with respect to the woofers in other speakers may or may not help to minimize unwanted destructive acoustic interference in the frequency region covered by both the subwoofer and the main speakers. It may not help at all frequencies, and may create further problems with frequency response, but even so is generally provided as an adjustment for subwoofer amplifiers. Phase control circuits may be a simple polarity reversal switch or a more complex continuously variable circuit.
Continuously variable phase control circuits are common in subwoofer amplifiers, and may be found in crossovers and as do-it-yourself electronics projects. Phase controls allow the listener to change the arrival time of the subwoofer sound waves relative to the same frequencies from the main speakers (i.e. at and around the crossover point to the subwoofer). A similar effect can be achieved with the delay control on many home-cinema receivers. The subwoofer phase control found on many subwoofer amplifiers is actually a polarity inversion switch. It allows users to reverse the polarity of the subwoofer relative to the audio signal it is being given. This type of control allows the subwoofer to either be in phase with the source signal, or 180 degrees out of phase.
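The delay-based alternative mentioned above is simple time-of-flight arithmetic. This sketch (with an assumed speed of sound of 343 m/s) computes the delay that compensates a path-length difference between subwoofer and main speakers:

```python
def delay_ms_for_distance(distance_m, c=343.0):
    """Time-of-flight, in milliseconds, for sound to travel distance_m:
    the quantity a receiver's delay control compensates."""
    return 1000.0 * distance_m / c

# A subwoofer 1.7 m farther from the listener than the mains arrives
# about 5 ms late; dialing in ~5 ms of delay on the mains (or moving the
# subwoofer) realigns the two around the crossover frequency.
print(round(delay_ms_for_distance(1.7), 1))  # ≈ 5.0
```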
The subwoofer phase can also be changed by moving the subwoofer closer to or further from the listening position, though this may not always be practical.
Servo subwoofers
Some active subwoofers use a servo feedback mechanism based on cone movement that modifies the signal sent to the voice coil. The servo feedback signal is derived from a comparison of the input signal to the amplifier versus the actual motion of the cone. The usual source of the feedback signal is a few turns of voice coil attached to the cone or a microchip-based accelerometer placed on the cone itself. An advantage of a well-implemented servo subwoofer design is reduced distortion making smaller enclosure sizes possible. The primary disadvantages are cost and complexity.
Servo-controlled subwoofers are not the same as Tom Danley's ServoDrive subwoofers, whose primary mechanism of sound reproduction avoids the normal voice coil and magnet combination in favor of a high-speed belt-driven servomotor. The ServoDrive design increases output power, reduces harmonic distortion and virtually eliminates power compression, the loss of loudspeaker output that results from an increase in voice coil impedance due to overheating of the voice coil. This feature allows high-power operation for extended periods of time. Intersonics was nominated for a TEC Award for its ServoDrive Loudspeaker (SDL) design in 1986 and for the Bass Tech 7 model in 1990.
Applications
Home audio
The use of a subwoofer augments the bass capability of the main speakers, and allows them to be smaller without sacrificing low-frequency capability. A subwoofer does not necessarily provide superior bass performance in comparison to large conventional loudspeakers on ordinary music recordings due to the typical lack of very low frequency content on such sources. However, there are recordings with substantial low-frequency content that most conventional loudspeakers are ill-equipped to handle without the help of a subwoofer, especially at high playback levels, such as music for pipe organs with 32' (9.75 meter) bass pipes (16 Hz), very large bass drums on symphony orchestra recordings and electronic music with extremely low synth bass parts, such as bass tests or bass songs.
Frequencies which are sufficiently low are not easily localized by humans. Hence many stereo and multichannel audio systems feature only one subwoofer channel, and a single subwoofer can be placed off-center without affecting the perceived sound stage, since the sound it produces is difficult to localize. The intention in a system with a subwoofer is often to use small main speakers (of which there are two for stereo and five or more for surround sound or movie tracks) and to hide the subwoofer elsewhere (e.g. behind furniture or under a table), or to augment an existing speaker to save it from having to handle woofer-destroying low frequencies at high levels. This effect is possible only if the subwoofer is restricted to quite low frequencies, usually taken to be about 100 Hz and below; restricting it to even lower maximum frequencies makes localization still harder. With a higher upper limit (e.g. 125 Hz), the subwoofer's output is much more easily localized, making a single off-center subwoofer impractical. Home-cinema systems typically use one subwoofer cabinet (the "1" in 5.1 surround sound). However, to improve bass distribution in a room that has multiple seating locations and prevent nulls with weakened bass response, some home-cinema enthusiasts use 5.2-, 7.2- or 9.2-channel surround sound systems with two subwoofer cabinets in the same room.
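The localization argument comes down to wavelength: below roughly 100 Hz the wavelength is far larger than the spacing between a listener's ears, so interaural cues are weak. A quick sketch, assuming a 343 m/s speed of sound:

```python
def wavelength_m(freq_hz, c=343.0):
    """Acoustic wavelength in air at roughly room temperature."""
    return c / freq_hz

# At 80 Hz the wavelength (~4.3 m) dwarfs the ~0.2 m spacing between a
# listener's ears, leaving almost no interaural difference to localize;
# at 1 kHz the wavelength is comparable to head size and easy to localize.
print(round(wavelength_m(80), 1))    # ≈ 4.3
print(round(wavelength_m(1000), 2))  # ≈ 0.34
```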
Some users add a subwoofer because high levels of low-frequency bass are desired, even beyond what is in the original recording, as in the case of house music enthusiasts. Thus, subwoofers may be part of a package that includes satellite speakers, may be purchased separately, or may be built into the same cabinet as a conventional speaker system. For instance, some floor-standing tower speakers include a subwoofer driver in the lower portion of the same cabinet. Physical separation of subwoofer and satellite speakers not only allows placement in an inconspicuous location, but since sub-bass frequencies are particularly sensitive to room location (due to room resonances and reverberation 'modes'), the best position for the subwoofer is not likely to be where the satellite speakers are located.
Higher-end home-cinema systems and enthusiasts may take low-frequency bass reproduction even further by incorporating two or more external subwoofers. Having two subwoofers placed around the room ensures more even distribution of bass, reducing subwoofer localization and pressurizing the room with low-frequency notes that can be felt, much as in a cinema.
For greatest efficiency and best coupling to the room's air volume, subwoofers can be placed in a corner of the room, far from large room openings, and closer to the listener. This is possible since low bass frequencies have a long wavelength; hence there is little difference between the information reaching a listener's left and right ears, and so they cannot be readily localized. All low-frequency information is sent to the subwoofer. However, unless the sound tracks have been carefully mixed for a single subwoofer channel, it is possible to have some cancellation of low frequencies if bass information in one channel's speaker is out of phase with another.
The physically separate subwoofer/satellite arrangement, with small satellite speakers and a large subwoofer cabinet that can be hidden behind furniture, has been popularized by multimedia speaker systems such as Bose Acoustimass Home Entertainment Systems, Polk Audio RM2008 Series and Klipsch Audio Technologies ProMedia, among many others.
Low-cost HTIB systems advertise their integration and simplicity. Particularly among lower cost HTIB systems and with boomboxes, however, the inclusion of a subwoofer may be little more than a marketing technique. It is unlikely that a small woofer in an inexpensively-built compact plastic cabinet will have better bass performance than well-designed conventional (and typically larger) speakers in a plywood or MDF cabinet. Mere use of the term "subwoofer" is no guarantee of good or extended bass performance. Many multimedia subwoofers might better be termed "mid bass cabinets" (60 to 160 Hz), as they are too small to produce deep bass in the 30 to 59 Hz range.
Further, poorly-designed systems often leave everything below about 120 Hz (or even higher) to the subwoofer, meaning that the subwoofer handles frequencies which the ear can use for sound source localization, thus introducing an undesirable subwoofer "localization effect". This is usually due to poor crossover designs or choices (too high a crossover point or insufficient crossover slope) used in many computer and home-cinema systems; localization also comes from port noise and from typically large amounts of harmonic distortion in the subwoofer design. Home subwoofers sold individually usually include crossover circuitry to assist with the integration of the subwoofer into an existing system.
Car audio
Automobiles are not well suited for the "hidden" subwoofer approach due to space limitations in the passenger compartments. It is not possible, in most circumstances, to fit such large drivers and enclosures into doors or dashboards, so subwoofers are installed in the trunk or back seat space. Some car audio enthusiasts compete to produce very high sound pressure levels in the confines of their vehicle's cabin; sometimes dangerously high sound pressure levels. The "SPL wars" have drawn much attention to subwoofers in general, but subjective competitions in sound quality ("SQ") have not gained equivalent popularity. Top SPL cars are not able to play normal music, or perhaps even to drive normally as they are designed solely for competition. Many non-competition subwoofers are also capable of generating high levels in cars due to the small volume of a typical car interior. High sound levels can cause hearing loss and tinnitus if one is exposed to them for an extended period of time.
In the 2000s, several car audio manufacturers produced subwoofers using non-circular shapes, including Boston Acoustics, Kicker, Sony, Bazooka, and X-Tant. Other major car audio manufacturers, such as Rockford Fosgate, did not follow suit, since non-circular subwoofer shapes typically carry some sort of distortion penalty. In situations of limited mounting space, however, non-circular drivers provide a greater cone area and, assuming all other variables are constant, greater maximum output. An important factor in the "square sub vs round sub" argument is the effect of the enclosure used. In a sealed enclosure, the maximum displacement is determined by

Vd = Sd × Xmax

where
Vd is the volume of displacement (in m³),
Xmax is the amount of linear excursion the speaker is mechanically capable of (in m), and
Sd is the cone area of the subwoofer (in m²).

These are some of the Thiele/Small parameters, which can either be measured or found in the driver specifications.
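A minimal sketch of the displacement calculation, using invented (not manufacturer) driver numbers:

```python
import math

def displacement_volume(cone_area_m2, xmax_m):
    """Vd = Sd * Xmax: the volume of air the driver can sweep."""
    return cone_area_m2 * xmax_m

def circular_cone_area(effective_diameter_m):
    """Sd for a round cone, from its effective radiating diameter."""
    return math.pi * (effective_diameter_m / 2) ** 2

# Illustrative numbers: a 12-inch driver with an effective radiating
# diameter of about 0.24 m and 15 mm of one-way linear excursion.
sd = circular_cone_area(0.24)
vd = displacement_volume(sd, 0.015)
print(round(sd, 4))        # cone area in m^2
print(round(vd * 1e6))     # Vd in cm^3
```

A square driver of the same outer dimension has a larger Sd than the inscribed circle, which is the whole "square sub" output argument in one line of arithmetic.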
Cinema sound
After the introduction of Sensurround, movie theater owners began installing permanent subwoofer systems. Dolby Stereo 70 mm Six Track was a six-channel film sound format introduced in 1976 that used two subwoofer channels for stereo reproduction of low frequencies. In 1981, Altec introduced a dedicated cinema subwoofer model tuned to around 20 Hz: the 8182. Starting in 1983, THX certification of the cinema sound experience quantified the parameters of good audio for watching films, including requirements for subwoofer performance levels and enough isolation from outside sounds so that noise did not interfere with the listening experience. This helped provide guidelines for multiplex cinema owners who wanted to isolate each individual cinema from its neighbors, even as louder subwoofers were making isolation more difficult. Specific cinema subwoofer models appeared from JBL, Electro-Voice, Eastern Acoustic Works, Kintek, Meyer Sound Laboratories and BGW Systems in the early 1990s. In 1992, Dolby Digital's six-channel film sound format incorporated a single LFE channel, the "point one" in 5.1 surround sound systems.
Tom Horral, a Boston-based acoustician, blames complaints about modern movies being too loud on subwoofers. He says that before subwoofers made it possible to have loud, relatively undistorted bass, movie sound levels were limited by the distortion in less capable systems at low frequency and high levels.
Sound reinforcement
Professional audio subwoofers used in rock concerts in stadia, DJ performances at dance music venues (e.g. electronic dance music) and similar events must be capable of very high bass output levels, at very low frequencies, with low distortion. This is reflected in the design attention given in the 2010s to the subwoofer applications for sound reinforcement, public address systems, dance club systems and concert systems. Cerwin-Vega states that when a subwoofer cabinet is added to an existing full-range speaker system, this is advantageous, as it moves the "...lowest frequencies from your main [full-range] PA speakers" thus "...eliminat[ing] a large amount of the excess work that your main top [full-range] box was trying to reproduce. As a result, your main [full-range] cabinets will run more efficiently and at higher volumes." A different argument for adding subwoofer cabinets is that they may increase the "level of clarity" and "perceived loudness" of an overall PA system, even if the SPL is not actually increased. Sound on Sound states that adding a subwoofer enclosure to a full-range system will reduce "cone excursion", thus lowering distortion, leading to an overall cleaner sound.
Consumer applications (as in home use) are considerably less demanding due to much smaller listening space and lower playback levels. Subwoofers are now almost universal in professional sound applications such as live concert sound, churches, nightclubs, and theme parks. Movie theaters certified to the THX standard for playback always include high-capability subwoofers. Some professional applications require subwoofers designed for very high sound levels, using multiple 12-, 15-, 18- or 21-inch drivers (30 cm, 40 cm, 45 cm, 53 cm respectively). Drivers as small as 10-inch (25 cm) are occasionally used, generally in horn-loaded enclosures.
The number of subwoofer enclosures used in a concert depends on a number of factors, including the size of the venue, whether it is indoors or outdoors, the amount of low-frequency content in the band's sound, the desired volume of the concert, and the design and construction of the enclosures (e.g. direct-radiating versus horn-loaded). A tiny coffeehouse may only need a single 10-inch subwoofer cabinet to augment the bass provided by the full-range speakers. A small bar may use one or two direct-radiating 15-inch (40 cm) subwoofer cabinets. A large dance club may have a row of four or five twin 18-inch (45 cm) subwoofer cabinets, or more. In the largest stadium venues, there may be a very large number of subwoofer enclosures. For example, the 2009–2010 U2 360° Tour used 24 Clair Brothers BT-218 subwoofers (a double 18-inch (45 cm) box) around the perimeter of the central circular stage, and 72 proprietary Clair Brothers cardioid S4 subwoofers placed underneath the ring-shaped "B" stage which encircles the central main stage.
The main speakers may be 'flown' from the ceiling of a venue on chain hoists, and 'flying points' (i.e. attachment points) are built into many professional loudspeaker enclosures. Subwoofers can be flown or stacked on the ground near the stage. One of the reasons subwoofers may be installed on the ground is that on-the-ground installation can increase the bass performance, particularly if the subwoofer is placed in the corner of a room (conversely, if a subwoofer cabinet is perceived as too loud, alternatives to on-the-ground or in-corner installation may be considered). There can be more than 50 double-18-inch (45 cm) cabinets in a typical rock concert system. Just as consumer subwoofer enclosures can be made of medium-density fibreboard (MDF), oriented strand board (OSB), plywood, plastic or other dense material, professional subwoofer enclosures can be built from the same materials. MDF is commonly used to construct subwoofers for permanent installations as its density is relatively high and weatherproofing is not a concern. Other permanent installation subwoofers have used very thick plywood: the Altec 8182 (1981) used 7-ply 28 mm birch-faced oak plywood. Touring subwoofers are typically built from 18–20 mm thick void-free Baltic birch (Betula pendula or Betula pubescens) plywood from Finland, Estonia or Russia; such plywood affords greater strength for frequently transported enclosures. Not naturally weatherproof, Baltic birch is coated with carpet, thick paint or spray-on truck bedliner to give the subwoofer enclosures greater durability.
Touring subwoofer cabinets are typically designed with features that facilitate moving the enclosure (e.g. wheels, a "towel bar" handle and recessed handles), a protective grille for the speaker (in direct radiating-style cabinets), metal or plastic protection for the cabinets to protect the finish as the cabinets are being slid one on top of another, and hardware to facilitate stacking the cabinets (e.g. interlocking corners) and for "flying" the cabinets from stage rigging. In the 2000s, many small- to mid-size subwoofers designed for bands' live sound use and DJ applications are "powered subs"; that is, they have an integrated power amplifier. These models typically have a built-in crossover. Some models have a metal-reinforced hole in which a speaker pole can be mounted for elevating full-frequency range cabinets.
Use in a full-range system
In professional concert sound system design, subwoofers can be incorporated seamlessly with the main speakers into a stereo or mono full-range system by using an active crossover. The audio engineer typically adjusts the frequency point at which lower frequency sounds are routed to the subwoofer speaker(s), and mid-frequency and higher frequency sounds are sent to the full-range speakers. Such a system receives its signal from the main mono or stereo mixing console mix bus and amplifies all frequencies together in the desired balance. If the main sound system is stereo, the subwoofers can also be in stereo. Otherwise, a mono subwoofer channel can be derived within the crossover from a stereo mix, depending on the crossover make and model. While 2010-era subwoofer cabinet manufacturers suggest placing subwoofers on either side of a stage (as implied by the inclusion of pole cups for the full-range PA cabinets), Dave Purton argues that for club gigs, having two subwoofer cabinets on either side of a stage will lead to gaps in bass coverage in the venue; he states that putting the two subwoofer cabinets together will create a more even, omnidirectional sub-bass tone.
Aux-fed subwoofers
Instead of being incorporated into a full-range system, concert subwoofers can be supplied with their own signal from a separate mix bus on the mixing console; often one of the auxiliary sends ("aux" or "auxes") is used. This configuration is called "aux-fed subwoofers", and has been observed to significantly reduce low-frequency "muddiness" that can build up in a concert sound system which has on stage a number of microphones each picking up low frequencies and each having different phase relationships of those low frequencies. The aux-fed subwoofers method greatly reduces the number of sources feeding the subwoofers to include only those instruments that have desired low-frequency information; sources such as kick drum, bass guitar, samplers and keyboard instruments. This simplifies the signal sent to the subwoofers and makes for greater clarity and low punch. Aux-fed subwoofers can even be stereo, if desired, using two auxiliary mix buses.
Directional bass
To keep low-frequency sound focused on the audience area and not on the stage, and to keep low frequencies from bothering people outside of the event space, a variety of techniques have been developed in concert sound to turn the naturally omnidirectional radiation of subwoofers into a more directional pattern. Several examples of sound reinforcement system applications where sound engineers seek to provide more directional bass sound are: music festivals, which often have several bands performing at the same time on different stages; large raves or EDM events, where there are multiple DJs performing at the same time in different rooms or stages; and multiplex movie theaters, in which there are many films being shown simultaneously in auditoriums that share common walls. These techniques include: setting up subwoofers in a vertical array; using combinations of delay and polarity inversion; and setting up a delay-shaded system. With a cardioid dispersion pattern, two end-fire subwoofers can be placed one in front of the other. The enclosure nearest the listener is delayed by a few milliseconds. The second subwoofer is delayed a precise amount corresponding to the time it takes sound to traverse the distance between speaker grilles.
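The end-fire delay described above is simply the traversal time between grilles. A sketch, assuming a 343 m/s speed of sound:

```python
def endfire_delay_s(spacing_m, c=343.0):
    """Delay applied in a two-box end-fire pair: the time sound takes to
    traverse the spacing between the grilles, so the two boxes' output
    sums forward and cancels rearward."""
    return spacing_m / c

# With 1 m between grilles, the required delay is about 2.9 ms:
print(round(endfire_delay_s(1.0) * 1000, 1))
```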
Vertical array
Stacking or rigging the subwoofers in a vertical array focuses the low frequencies forward to a greater or lesser extent depending on the physical length of the array. Longer arrays have a more directional effect at lower frequencies. The directionality is more pronounced in the vertical dimension, yielding a radiation pattern that is wide but not tall. This helps reduce the amount of low-frequency sound bouncing off the ceiling indoors and assists in mitigating external noise complaints outdoors.
Rear delay array
Another cardioid subwoofer array pattern can be used horizontally, one which requires only a few channels of processing and no change in required physical space. This method is often called a "cardioid subwoofer array" or "CSA", even though the pattern of all directional subwoofer methods is cardioid. The CSA method reverses the enclosure orientation and inverts the polarity of one out of every three subwoofers across the front of the stage, and delays those enclosures for maximum cancellation of the target frequency on stage. Polarity inversion can be implemented electronically, by reversing the wiring polarity, or by physically positioning the enclosure to face rearward. This method reduces forward output relative to a tight-packed, flat-fronted array of subwoofers, but can solve problems of unwanted low-frequency energy coming into microphones on stage. Compared to the end-fire array, this method has less on-axis energy but more even pattern control throughout the audience, and more predictable cancellation rearward. The effect spans a range of slightly more than one octave.
A second method of rear delay array combines end-fire topology with polarity reversal, using two subwoofers positioned front to back, the drivers spaced one-quarter wavelength apart, the rear enclosure inverted in polarity and delayed by a few milliseconds for maximum cancellation on stage of the target frequency. This method has the least output power directed toward the audience, compared to other directional methods.
End-fire array
The end-fire subwoofer method, also called "forward steered arrays", places subwoofer drivers co-axially in one or more rows, using destructive interference to reduce emissions to the sides and rear. This can be done with separate subwoofer enclosures positioned front to back with a spacing between them of one-quarter wavelength of the target frequency, the frequency that is least wanted on stage or most desired in the audience. Each row is delayed beyond the first row by an amount related to the speed of sound in air; the delay is typically a few milliseconds. The arrival time of sound energy from all the subwoofers is near-simultaneous from the audience's perspective, but is canceled out to a large degree behind the subwoofers because of offset sound wave arrival times. Directionality of the target frequency can achieve as much as 25 dB rear attenuation, and the forward sound is coherently summed in line with the subwoofers. The positional technique of end-fire subwoofers came into widespread use in European live concert sound in 2006.
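The quarter-wavelength spacing for a chosen target frequency follows directly from the speed of sound. A sketch:

```python
def quarter_wave_spacing_m(target_freq_hz, c=343.0):
    """Front-to-back spacing for an end-fire row: one quarter wavelength
    of the target frequency (the frequency least wanted on stage or most
    desired in the audience)."""
    return c / (4 * target_freq_hz)

# Targeting 60 Hz calls for roughly 1.4 m between rows, which is why
# end-fire arrays need so much physical depth at low frequencies:
print(round(quarter_wave_spacing_m(60), 2))
```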
The end-fire array trades a few decibels of output power for directionality, so it requires more enclosures for the same output power as a tight-packed, flat-fronted array of enclosures. Sixteen enclosures in four rows were used in 2007 at one of the stages of the Ultra Music Festival, to reduce low-frequency interference to neighboring stages. Because of the physical size of the end-fire array, few concert venues are able to implement it. The output pattern suffers from comb-filtering off-axis, but can be further shaped by adjusting the frequency response of each row of subwoofers.
Delay-shaded array
A long line of subwoofers placed horizontally along the front edge of the stage can be delayed such that the center subwoofers fire several milliseconds prior to the ones flanking them, which fire several milliseconds prior to their neighbors, continuing in this fashion until the last subwoofers are reached at the outside ends of the subwoofer row (beamforming). This method helps to counteract the extreme narrowing of the horizontal dispersion pattern seen with a horizontal subwoofer array. Such delay shading can be used to virtually reshape a loudspeaker array.
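One common way to derive delay-shading values is to emulate a virtual arc behind the straight row: the center cabinet fires first, and each cabinet is delayed by the extra path length it would have if the row were physically curved. This is a simplified geometric sketch, not any manufacturer's algorithm:

```python
import math

def arc_shading_delays_ms(positions_m, radius_m, c=343.0):
    """Delay-shade a straight row of subwoofers so it radiates like a
    virtual arc of the given radius: the center cabinet (x = 0) fires
    first, and each cabinet at offset x is delayed by
    (sqrt(R^2 + x^2) - R) / c, widening the horizontal dispersion."""
    return [1000.0 * (math.sqrt(radius_m**2 + x**2) - radius_m) / c
            for x in positions_m]

# Five cabinets a meter apart, shaded to a 10 m virtual arc:
delays = arc_shading_delays_ms([-2, -1, 0, 1, 2], 10.0)
print([round(d, 2) for d in delays])  # symmetric, zero at center
```

The delays are symmetric about the center and grow toward the ends of the row, matching the firing order described above.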
Directional enclosure
Some subwoofer enclosure designs rely on drivers facing to the sides or to the rear in order to achieve a degree of directionality. End-fire drivers can be positioned within a single enclosure that houses more than one driver.
Variants
Some less commonly-used bass enclosures are variants of the subwoofer enclosure's normal range, such as the mid-bass cabinet (60–160 Hz) and the infrasonic (extra low) subwoofer (below 30 Hz).
Enclosure designs
Front-loaded subwoofers have one or more subwoofer speakers in a cabinet, typically with a grille to protect the speakers. In practice, many front-loaded subwoofer cabinets have a vent or port in the speaker cabinet, thus creating a bass reflex enclosure. Even though a bass reflex port or vent creates some additional phase delay, it adds SPL, which is often a key factor in PA and sound reinforcement system applications. As such, non-vented front-firing subwoofer cabinets are rare in pro audio applications.
Horn-loaded subwoofers have a subwoofer speaker coupled to a horn-shaped pathway. To save space, the pathway is often folded so that it fits into a box-style cabinet. Cerwin-Vega states that its folded horn subwoofer cabinets, "...on average, produce 6 dB more output at 1 watt than a dual 18[-inch] vented box", giving "four times the output with half the number of drivers". The Cerwin-Vega JE-36C has a folded horn path five feet long inside its wooden cabinet.
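The two quoted figures are consistent decibel arithmetic: a 6 dB level difference is very nearly a factor of four in power. A quick check:

```python
def db_to_power_ratio(db):
    """Convert a level difference in decibels to a power ratio:
    ratio = 10 ** (dB / 10)."""
    return 10 ** (db / 10)

# The claimed 6 dB sensitivity advantage corresponds to roughly a 4x
# power ratio, hence "four times the output with half the drivers":
print(round(db_to_power_ratio(6)))  # ≈ 4
```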
Manifold subwoofers have two or more subwoofer speakers that feed the throat of a single horn. This increases SPL for the subwoofer, at the cost of increased distortion. EV has a manifold speaker cabinet in which four drivers are mounted as close together as practical. This is a different design than the "multiple drivers in one throat" approach. An unusual example of manifold subwoofer design is the Thomas Mundorf (TM) approach of having four subwoofers facing each other and sitting close together, which is used for theater in the round shows, where the audience surrounds the performers in a big circle (e.g. Metallica has used this in some concerts). The TM approach produces an omnidirectional bass sound. Cerwin-Vega defines a manifold enclosure as one in which "...the driver faces into a tuned ported cavity. You hear sound directly from the back of the driver in addition to the sound that emanates out of the port. This type of enclosure design extends the frequency capability of the driver lower than it would reproduce by itself."
Bandpass subwoofers have a sealed cabinet within another cabinet, with the "outer" cabinet typically having a vent or port.
Bass instrument amplification
In rare cases, sound reinforcement subwoofer enclosures are also used for bass instrument amplification by electric bass players and synth bass players. For most bands and most small- to mid-size venues (e.g. nightclubs and bars), standard bass guitar speaker enclosures or keyboard amplifiers will provide sufficient sound pressure levels for onstage monitoring. Since a regular electric bass has a low "E" (41 Hz) as its lowest note, most standard bass guitar cabinets are only designed with a range that goes down to about 40 Hz. However, in some cases, performers wish to have extended sub-bass response that is not available from standard instrument speaker enclosures, so they use subwoofer cabinets. Just as some electric guitarists add huge stacks of guitar cabinets mainly for show, some bassists will add immense subwoofer cabinets with 18-inch woofers mainly for show, and the extension subwoofer cabinets will be operated at a lower volume than the main bass cabinets.
Bass guitar players who may use subwoofer cabinets include performers who play with extended range basses that include a low "B" string (about 31 Hz), bassists who play in styles where a very powerful sub-bass response is an important part of the sound (e.g. funk, Latin, gospel, R & B, etc.), and/or bass players who perform in stadium-size venues or large outdoor venues. Keyboard players who use subwoofers for on-stage monitoring include electric organ players who use bass pedal keyboards (which go down to a low "C" which is about 33 Hz) and synth bass players who play rumbling sub-bass parts that go as low as 18 Hz. Of all of the keyboard instruments that are amplified onstage, synthesizers can produce some of the lowest pitches, because unlike a traditional electric piano or electric organ, which have as their lowest notes a low "A" and a low "C", respectively, a synth does not have a fixed lowest octave. A synth player can add lower octaves to a patch by pressing an "octave down" button, which can produce pitches that are at the limits of human hearing.
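The note frequencies quoted above follow from equal temperament, f = 440 × 2^((n − 69)/12), where n is the MIDI note number; a sketch:

```python
def midi_note_freq(midi_note, a4=440.0):
    """Equal-tempered pitch from a MIDI note number:
    f = A4 * 2 ** ((n - 69) / 12)."""
    return a4 * 2 ** ((midi_note - 69) / 12)

# Pitches mentioned in the text, via their MIDI numbers:
print(round(midi_note_freq(28), 1))  # low E of a 4-string bass  ≈ 41.2 Hz
print(round(midi_note_freq(23), 1))  # low B of a 5-string bass  ≈ 30.9 Hz
print(round(midi_note_freq(24), 1))  # low C of bass pedals      ≈ 32.7 Hz
```

Each "octave down" button halves every frequency in the patch, which is how synth bass parts reach pitches near the lower limit of human hearing.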
Several concert sound subwoofer manufacturers suggest that their subs can be used for bass instrument amplification. Meyer Sound suggests that its 650-R2 Concert Series Subwoofer, an enclosure with two 18-inch (45 cm) drivers, can be used for bass instrument amplification. While performers who use concert sound subwoofers for onstage monitoring may like the powerful sub-bass sound that they get onstage, sound engineers may find the use of large subwoofers (e.g. enclosures with two 18-inch (45 cm) drivers) for onstage instrument monitoring to be problematic, because it may interfere with the "Front of House" sub-bass sound.
Bass shakers
Since infrasonic bass is felt, sub-bass can be augmented using tactile transducers. Unlike a typical subwoofer driver, which produces audible vibrations, tactile transducers produce low-frequency vibrations that are designed to be felt by individuals who are touching the transducer or indirectly through a piece of furniture or a wooden floor. Tactile transducers have recently emerged as a device class, called variously "bass shakers", "butt shakers" and "throne shakers". They are attached to a seat, for instance a drummer's stool ("throne") or gamer's chair, car seat or home-cinema seating, and the vibrations of the driver are transmitted to the body then to the ear in a manner similar to bone conduction. They connect to an amplifier like a normal subwoofer. They can be attached to a large flat surface (for instance a floor or platform) to create a large low-frequency conduction area, although the transmission of low frequencies through the feet is not as efficient as through the seat.
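Because a bass shaker connects to an amplifier like a normal subwoofer, it is typically fed a low-passed copy of the program signal so that only the tactile low end reaches it. A minimal one-pole low-pass sketch (the 80 Hz cutoff and 48 kHz sample rate are illustrative assumptions, not values from any particular product):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    """First-order (6 dB/octave) low-pass: y[n] = y[n-1] + a*(x[n] - y[n-1])."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = dt / (rc + dt)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

sr = 48_000

def sine(freq, seconds=1.0):
    return [math.sin(2 * math.pi * freq * n / sr) for n in range(int(sr * seconds))]

# A 30 Hz rumble passes almost unchanged; 1 kHz content is strongly attenuated.
for freq in (30, 1000):
    out = one_pole_lowpass(sine(freq), cutoff_hz=80, sample_rate=sr)
    peak = max(abs(v) for v in out[sr // 2:])  # measure after settling
    print(f"{freq:>5} Hz in -> peak {peak:.2f} out")
```

Real crossovers use steeper slopes (12–24 dB/octave), but the single pole is enough to show the principle.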
The advantage of tactile transducers used for low frequencies is that they allow a listening environment that is not filled with loud low-frequency sound waves in the air. This helps the drummer in a rock music band to monitor their kick drum performance without filling the stage with powerful, loud low-frequency sound from a 15-inch (40 cm) subwoofer monitor and an amplifier, which can "leak" into other drum mics and lower the quality of the sound mix. By not having a large, powerful subwoofer monitor, a bass shaker also enables a drummer to lower the sound pressure levels that they are exposed to during a performance, reducing the risk of hearing damage. For home cinema or video game use, bass shakers help the user avoid disturbing others in nearby apartments or rooms, because even powerful sound effects such as explosion sounds in a war video game or the simulated rumbling of an earthquake in an adventure film will not be heard by others. However, some critics argue that the felt vibrations are disconnected from the auditory experience, and they claim that music is less satisfying with the "butt shaker" than sound effects. Critics have also claimed that the bass shaker itself can rattle during loud sound effects, which can distract the listener.
World record claims
With varying measures upon which to base claims, several subwoofers have been said to be the world's largest, loudest or lowest.
Matterhorn
The Matterhorn is a subwoofer model completed in March 2007 by Danley Sound Labs in Gainesville, Georgia, after a U.S. military request for a loudspeaker that could project infrasonic waves over a distance. The Matterhorn was designed to reproduce a continuous sine wave from 15 to 20 Hz, and generate 94 dB at a distance of , and more than 140 dB for music playback measured at the horn mouth. It can generate a constant 15 Hz sine wave tone at 140 dB for 24 hours a day, seven days a week with extremely low harmonic distortion. The subwoofer has a flat frequency response from 15 to 80 Hz, and is down 3 dB at 12 Hz. It was built within an intermodal container long and square. The container doors swing open to reveal a tapped horn driven by 40 long-throw 15-inch (40 cm) MTX speaker drivers, each powered by its own 1000-watt amplifier. The manufacturer claims that 53 13-ply 18 mm sheets of plywood were used in its construction, though one of the fabricators wrote that double-thickness 26-ply sheets were used for convenience.
A diesel generator is housed within the enclosure to supply electricity when external power is unavailable. At the annual National Systems Contractors Association (NSCA) convention in March 2007, the Matterhorn was barred from making any loud demonstrations of its power because of concerns about damaging the building of the Orange County Convention Center. Instead, using only a single 20 amp electrical circuit for safety, visitors were allowed to step inside the horn of the subwoofer for an "acoustic massage" as the fractionally powered Matterhorn reproduced low level 10–15 Hz waves.
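For a rough sense of how output like this carries over distance, free-field point-source spreading loses about 6 dB per doubling of distance. This is a simplification that ignores the horn's directivity and any ground or atmospheric effects, and the reference numbers below are purely illustrative:

```python
import math

def spl_at_distance(spl_ref_db, ref_m, target_m):
    """Inverse-square (free-field) falloff: SPL drops by 20*log10(d/d_ref) dB."""
    return spl_ref_db - 20 * math.log10(target_m / ref_m)

# Hypothetical example: 140 dB SPL measured at 1 m from the source.
for d in (1, 2, 10, 100):
    print(f"{d:>4} m: {spl_at_distance(140, 1, d):.1f} dB SPL")
```

The -6 dB-per-doubling rule is why projecting infrasound "over a distance" demands such extreme output at the source.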
Royal Device custom installation
Another subwoofer claimed to be the world's biggest is a custom installation in Italy made by Royal Device primarily of bricks, concrete and sound-deadening material consisting of two subwoofers embedded in the foundation of a listening room. The horn-loaded subwoofers each have a floor mouth that is , and a horn length that is , in a cavity under the floor of the listening room. Each subwoofer is driven by eight 18-inch subwoofer drivers with voice coils. The designers assert that the floor mouths of the horns are additionally loaded acoustically by a vertical wooden horn expansion and the room's ceiling to create a 10 Hz "full power" wave at the listening position.
Concept Design 60-inch
A single 60-inch-diameter subwoofer driver was designed by Richard Clark and David Navone with the help of Eugene Patronis of the Georgia Institute of Technology. The driver was intended to break sound pressure level records when mounted in a road vehicle, calculated to be able to achieve more than 180 dB SPL. It was built in 1997, driven by DC motors connected to a rotary crankshaft somewhat like in a piston engine. The 60-inch cone was held in place with a surround, and its long peak-to-peak stroke created a substantial one-way air displacement. It was capable of generating 5–20 Hz sine waves at various DC motor speeds, not in response to an audio signal; it could not play music. The driver was mounted in a stepvan owned by Tim Maynor but was too powerful for the amount of applied reinforcement and damaged the vehicle. MTX's Loyd Ivey helped underwrite the project and the driver was then called the MTX "Thunder 1000000" (one million).
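A driver's one-way air displacement is simply cone area times one-way excursion. The stroke value below is an assumed placeholder for illustration, not a figure from the source:

```python
import math

def one_way_displacement_litres(cone_diameter_m, peak_to_peak_stroke_m):
    """Swept volume in one direction = cone area x half the peak-to-peak stroke."""
    area = math.pi * (cone_diameter_m / 2) ** 2
    return area * (peak_to_peak_stroke_m / 2) * 1000  # m^3 -> litres

# 60-inch cone; the 6-inch peak-to-peak stroke is an illustrative assumption.
d = 60 * 0.0254
stroke = 6 * 0.0254
print(f"{one_way_displacement_litres(d, stroke):.0f} litres swept per half-cycle")
```

Even with a modest assumed stroke, a 60-inch cone moves on the order of a hundred litres of air per half-cycle, which is why such a device could overpower its vehicle.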
Still unfinished, the vehicle was entered in an SPL competition in 1997 at which a complaint was lodged against the computer control of the DC motor. Instead of using the controller, two leads were touched together in the hope that the motor speed was set correctly. The drive shaft broke after one positive stroke which created an interior pressure wave of 162 dB. The Concept Design 60-inch was not shown in public after 1998.
MTX Jackhammer
The heaviest production subwoofer intended for use in automobiles is the MTX Jackhammer by MTX Audio. The Jackhammer has been known to handle upwards of 6,000 watts sent to a dual voice coil moving within a strontium ferrite magnet, and it has an aluminum heat sink. The Jackhammer has been featured on the reality TV show Pimp My Ride.
See also
7.1 surround sound
Bass management
Mid-range speaker
Power alley
Rotary woofer
Super tweeter
Thiele/Small
Tweeter
Woofer
Notes
References
External links
Audio engineering
Audio hobbies
Bass (sound)
Film and video technology
In-car entertainment
Loudspeaker technology
Loudspeakers
Structural engineering

Structural engineering is a sub-discipline of civil engineering in which structural engineers are trained to design the 'bones and joints' that create the form and shape of human-made structures. Structural engineers also must understand and calculate the stability, strength, rigidity and earthquake-susceptibility of built structures, for buildings and nonbuilding structures alike. Structural designs are integrated with those of other designers such as architects and building services engineers, and structural engineers often supervise the construction of projects by contractors on site. They can also be involved in the design of machinery, medical equipment, and vehicles where structural integrity affects functioning and safety. See glossary of structural engineering.
Structural engineering theory is based upon applied physical laws and empirical knowledge of the structural performance of different materials and geometries. Structural engineering design uses a number of relatively simple structural concepts to build complex structural systems. Structural engineers are responsible for making creative and efficient use of funds, structural elements and materials to achieve these goals.
History
Structural engineering dates back to 2700 B.C. when the step pyramid for Pharaoh Djoser was built by Imhotep, the first engineer in history known by name. Pyramids were the most common major structures built by ancient civilizations because the structural form of a pyramid is inherently stable and can be almost infinitely scaled (as opposed to most other structural forms, which cannot be linearly increased in size in proportion to increased loads).
The structural stability of the pyramid, whilst primarily gained from its shape, relies also on the strength of the stone from which it is constructed, and its ability to support the weight of the stone above it. The limestone blocks were often taken from a quarry near the building site and have a compressive strength from 30 to 250 MPa (1 MPa = 10^6 Pa). Therefore, the structural strength of the pyramid stems from the material properties of the stones from which it was built rather than the pyramid's geometry.
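This claim is easy to sanity-check: even in the worst case of a full-height solid column of stone, the compressive stress at the base stays far below the quoted limestone strength. A back-of-the-envelope sketch, using typical textbook values for density and height rather than figures from the source:

```python
# Upper-bound compressive stress at the base of a stone pyramid,
# treating the worst case as a solid full-height column: sigma = rho * g * h.
rho = 2400.0   # kg/m^3, typical limestone density (assumed)
g = 9.81       # m/s^2
h = 146.0      # m, approximate original height of the Great Pyramid (assumed)

stress_mpa = rho * g * h / 1e6
print(f"base stress <= {stress_mpa:.1f} MPa, vs. 30-250 MPa limestone strength")
```

The result, a few megapascals, is an order of magnitude below even the weakest limestone quoted above, so the material has ample margin.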
Throughout ancient and medieval history most architectural design and construction were carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. No theory of structures existed, and understanding of how structures stood up was extremely limited, and based almost entirely on empirical evidence of 'what had worked before' and intuition. Knowledge was retained by guilds and seldom supplanted by advances. Structures were repetitive, and increases in scale were incremental.
No record exists of the first calculations of the strength of structural members or the behavior of structural material, but the profession of a structural engineer only really took shape with the Industrial Revolution and the re-invention of concrete (see History of Concrete). The physical sciences underlying structural engineering began to be understood in the Renaissance and have since developed into computer-based applications pioneered in the 1970s.
Timeline
1452–1519 Leonardo da Vinci made many contributions.
1638: Galileo Galilei published the book Two New Sciences in which he examined the failure of simple structures.
1660: Hooke's law by Robert Hooke.
1687: Isaac Newton published Philosophiæ Naturalis Principia Mathematica, which contains his laws of motion.
1750: Euler–Bernoulli beam equation.
1700–1782: Daniel Bernoulli introduced the principle of virtual work.
1707–1783: Leonhard Euler developed the theory of buckling of columns.
1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures.
1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as the partial derivative of the strain energy. This theorem includes the method of "least work" as a special case.
1874: Otto Mohr formalized the idea of a statically indeterminate structure.
1922: Timoshenko corrects the Euler–Bernoulli beam equation.
1936: Hardy Cross' publication of the moment distribution method, an important innovation in the design of continuous frames.
1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework.
1942: Richard Courant divided a domain into finite subregions.
1956: J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper on the "Stiffness and Deflection of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today.
Structural failure
The history of structural engineering contains many collapses and failures. Sometimes this is due to obvious negligence, as in the case of the Pétion-Ville school collapse, in which Rev. Fortin Augustin "constructed the building all by himself, saying he didn't need an engineer as he had good knowledge of construction", following a partial collapse of the three-story schoolhouse that sent neighbors fleeing. The final collapse killed 94 people, mostly children.
In other cases structural failures require careful study, and the results of these inquiries have resulted in improved practices and a greater understanding of the science of structural engineering. Some such studies are the result of forensic engineering investigations where the original engineer seems to have done everything in accordance with the state of the profession and acceptable practice yet a failure still eventuated. A famous case of structural knowledge and practice being advanced in this manner can be found in a series of failures involving box girders which collapsed in Australia during the 1970s.
Theory
Structural engineering depends upon a detailed knowledge of applied mechanics, materials science, and applied mathematics to understand and predict how structures support and resist self-weight and imposed loads. To apply the knowledge successfully a structural engineer generally requires detailed knowledge of relevant empirical and theoretical design codes, the techniques of structural analysis, as well as some knowledge of the corrosion resistance of the materials and structures, especially when those structures are exposed to the external environment. Since the 1990s, specialist software has become available to aid in the design of structures, with the functionality to assist in the drawing, analyzing and designing of structures with maximum precision; examples include AutoCAD, StaadPro, ETABS, Prokon, Revit Structure, Inducta RCB, etc. Such software may also take into consideration environmental loads, such as earthquakes and winds.
Profession
Structural engineers are responsible for engineering design and structural analysis. Entry-level structural engineers may design the individual structural elements of a structure, such as the beams and columns of a building. More experienced engineers may be responsible for the structural design and integrity of an entire system, such as a building.
Structural engineers often specialize in particular types of structures, such as buildings, bridges, pipelines, industrial, tunnels, vehicles, ships, aircraft, and spacecraft. Structural engineers who specialize in buildings may specialize in particular construction materials such as concrete, steel, wood, masonry, alloys and composites.
Structural engineering has existed since humans first started to construct their own structures. It became a more defined and formalized profession with the emergence of architecture as a distinct profession from engineering during the industrial revolution in the late 19th century. Until then, the architect and the structural engineer were usually one and the same – the master builder. Only with the development of specialized knowledge of structural theories that emerged during the 19th and early 20th centuries did the professional structural engineer come into existence.
The role of a structural engineer today involves a significant understanding of both static and dynamic loading and the structures that are available to resist them. The complexity of modern structures often requires a great deal of creativity from the engineer in order to ensure the structures support and resist the loads they are subjected to. A structural engineer will typically have a four or five-year undergraduate degree, followed by a minimum of three years of professional practice before being considered fully qualified.
Structural engineers are licensed or accredited by different learned societies and regulatory bodies around the world (for example, the Institution of Structural Engineers in the UK). Depending on the degree course they have studied and/or the jurisdiction they are seeking licensure in, they may be accredited (or licensed) as just structural engineers, or as civil engineers, or as both civil and structural engineers.
Another international organisation is IABSE (the International Association for Bridge and Structural Engineering). The aim of that association is to exchange knowledge and to advance the practice of structural engineering worldwide in the service of the profession and society.
Specializations
Building structures
Structural building engineering is primarily driven by the creative manipulation of materials and forms and the underlying mathematical and scientific ideas to achieve an end that fulfills its functional requirements and is structurally safe when subjected to all the loads it could reasonably be expected to experience. This is subtly different from architectural design, which is driven by the creative manipulation of materials and forms, mass, space, volume, texture, and light to achieve an end which is aesthetic, functional, and often artistic.
The structural design for a building must ensure that the building can stand up safely, able to function without excessive deflections or movements which may cause fatigue of structural elements, cracking or failure of fixtures, fittings or partitions, or discomfort for occupants. It must account for movements and forces due to temperature, creep, cracking, and imposed loads. It must also ensure that the design is practically buildable within acceptable manufacturing tolerances of the materials. It must allow the architecture to work, and the building services to fit within the building and function (air conditioning, ventilation, smoke extract, electrics, lighting, etc.). The structural design of a modern building can be extremely complex and often requires a large team to complete.
Structural engineering specialties for buildings include:
Earthquake engineering
Façade engineering
Fire engineering
Roof engineering
Tower engineering
Wind engineering
Earthquake engineering structures
Earthquake engineering structures are those engineered to withstand earthquakes.
The main objectives of earthquake engineering are to understand the interaction of structures with the shaking ground, foresee the consequences of possible earthquakes, and design and construct the structures to perform during an earthquake.
Earthquake-proof structures are not necessarily extremely strong, like the El Castillo pyramid at Chichen Itza.
One important tool of earthquake engineering is base isolation, which allows the base of a structure to move freely with the ground.
Civil engineering structures
Civil structural engineering includes all structural engineering related to the built environment, including structures such as bridges, dams, tunnels, foundations, roads, railways, and retaining walls.
The structural engineer is the lead designer on these structures, and often the sole designer. In the design of structures such as these, structural safety is of paramount importance (in the UK, designs for dams, nuclear power stations and bridges must be signed off by a chartered engineer).
Civil engineering structures are often subjected to very extreme forces, such as large variations in temperature, dynamic loads such as waves or traffic, or high pressures from water or compressed gases. They are also often constructed in corrosive environments, such as at sea, in industrial facilities, or below ground.
Mechanical structures

Unlike static structures, moveable or moving structures must be designed to account for fatigue, variation in the method by which load is resisted, and significant deflections of structures.
The forces which parts of a machine are subjected to can vary significantly and can do so at a great rate. The forces which a boat or aircraft are subjected to vary enormously and will do so thousands of times over the structure's lifetime. The structural design must ensure that such structures can endure such loading for their entire design life without failing.
These works can require mechanical structural engineering:
Boilers and pressure vessels
Coachworks and carriages
Cranes
Elevators
Escalators
Marine vessels and hulls
Aerospace structures
Aerospace structure types include launch vehicles (Atlas, Delta, Titan), missiles (ALCM, Harpoon), hypersonic vehicles (Space Shuttle), military aircraft (F-16, F-18) and commercial aircraft (Boeing 777, MD-11). Aerospace structures typically consist of thin plates with stiffeners for the external surfaces, bulkheads and frames to support the shape, and fasteners such as welds, rivets, screws, and bolts to hold the components together.
Nanoscale structures
A nanostructure is an object of intermediate size between molecular and microscopic (micrometer-sized) structures. In describing nanostructures it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometer range. The term 'nanostructure' is often used when referring to magnetic technology.
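The classification above amounts to counting how many of an object's three spatial dimensions fall within the 0.1–100 nm range; a toy sketch:

```python
def classify_nanostructure(dims_nm):
    """Classify an object by how many of its three dimensions are nanoscale
    (between 0.1 and 100 nm), following the scheme described above."""
    n = sum(1 for d in dims_nm if 0.1 <= d <= 100)
    return {
        1: "nanotextured surface",
        2: "nanotube",
        3: "nanoparticle",
    }.get(n, "not a nanostructure")

print(classify_nanostructure([5, 1e6, 1e6]))    # thin film: one nanoscale dimension
print(classify_nanostructure([2, 2, 1e4]))      # tube: two nanoscale dimensions
print(classify_nanostructure([50, 50, 50]))     # particle: all three nanoscale
```

The thresholds and labels here simply encode the definitions given in the paragraph above.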
Structural engineering for medical science
Medical equipment (also known as armamentarium) is designed to aid in the diagnosis, monitoring or treatment of medical conditions. There are several basic types: diagnostic equipment includes medical imaging machines, used to aid in diagnosis; therapeutic equipment includes infusion pumps, medical lasers, and LASIK surgical machines; medical monitors allow medical staff to measure a patient's medical state. Monitors may measure patient vital signs and other parameters including ECG, EEG, blood pressure, and dissolved gases in the blood; diagnostic medical equipment may also be used in the home for certain purposes, e.g. for the control of diabetes mellitus. A biomedical equipment technician (BMET) is a vital component of the healthcare delivery system. Employed primarily by hospitals, BMETs are the people responsible for maintaining a facility's medical equipment.
Structural elements
Any structure is essentially made up of only a small number of different types of elements:
Columns
Beams
Plates
Arches
Shells
Catenaries
Many of these elements can be classified according to form (straight, plane / curve) and dimensionality (one-dimensional / two-dimensional).
Columns
Columns are elements that carry only axial force (compression) or both axial force and bending (which is technically called a beam-column but practically, just a column). The design of a column must check the axial capacity of the element and the buckling capacity.
The buckling capacity is the capacity of the element to withstand the propensity to buckle. It depends upon the element's geometry and material, and on the effective length of the column, which in turn depends upon the restraint conditions at the top and bottom of the column. The effective length is l_eff = K·l, where l is the real length of the column and K is a factor dependent on the restraint conditions.
The capacity of a column to carry axial load depends on the degree of bending it is subjected to, and vice versa. This is represented on an interaction chart and is a complex non-linear relationship.
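The effective-length idea combines with the classical Euler formula, P_cr = π²·E·I / (K·l)², to give the elastic buckling capacity. A sketch, where the steel modulus, section property, and column length are illustrative assumptions rather than values from the source:

```python
import math

def euler_buckling_load(E, I, length, K):
    """Elastic critical load: P_cr = pi^2 * E * I / (K * l)^2."""
    return math.pi ** 2 * E * I / (K * length) ** 2

E = 200e9    # Pa, Young's modulus of steel (assumed)
I = 8.0e-6   # m^4, second moment of area of an assumed section
l = 4.0      # m, column length (assumed)

# K encodes the restraint conditions at the top and bottom of the column.
for K, ends in [(1.0, "pinned-pinned"), (0.5, "fixed-fixed"), (2.0, "fixed-free")]:
    P = euler_buckling_load(E, I, l, K)
    print(f"{ends:<13} K={K}: P_cr = {P / 1e3:,.0f} kN")
```

Note how fixing both ends (K = 0.5) quadruples the capacity of the pinned case, while a free-standing cantilever (K = 2.0) has only a quarter of it.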
Beams
A beam may be defined as an element in which one dimension is much greater than the other two and the applied loads are usually normal to the main axis of the element. Beams and columns are called line elements and are often represented by simple lines in structural modeling. Beams can be:
cantilevered (supported at one end only with a fixed connection)
simply supported (fixed against vertical translation at each end and horizontal translation at one end only, and able to rotate at the supports)
fixed (supported in all directions for translation and rotation at each end)
continuous (supported by three or more supports)
a combination of the above (ex. supported at one end and in the middle)
Beams are elements that carry pure bending only. Bending causes one part of the section of a beam (divided along its length) to go into compression and the other part into tension. The compression part must be designed to resist buckling and crushing, while the tension part must be able to adequately resist the tension.
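For a solid rectangular section, the compression/tension split described above is quantified by the flexure formula σ = M·c/I, with I = b·h³/12. A sketch with assumed example numbers (the moment and section sizes are illustrative, not from the source):

```python
def max_bending_stress(moment_nm, width_m, depth_m):
    """Extreme-fibre stress sigma = M*c/I for a solid rectangular section."""
    I = width_m * depth_m ** 3 / 12   # second moment of area
    c = depth_m / 2                   # distance from neutral axis to extreme fibre
    return moment_nm * c / I          # Pa; compression on one face, tension on the other

# Assumed example: 20 kN*m acting on a 150 mm x 300 mm section.
sigma = max_bending_stress(20_000, 0.15, 0.30)
print(f"extreme-fibre stress: {sigma / 1e6:.1f} MPa")
```

Because I grows with the cube of depth, doubling the beam depth cuts the extreme-fibre stress by a factor of four — the reason beams are made deep rather than wide.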
Trusses
A truss is a structure comprising members and connection points or nodes. When members are connected at nodes and forces are applied only at nodes, members act in tension or compression. Members acting in compression are referred to as compression members or struts, while members acting in tension are referred to as tension members or ties. Most trusses use gusset plates to connect intersecting elements. Gusset plates are relatively flexible and unable to transfer bending moments. The connection is usually arranged so that the lines of force in the members are coincident at the joint, thus allowing the truss members to act in pure tension or compression.
Trusses are usually used in large-span structures, where it would be uneconomical to use solid beams.
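Because the joints are idealized as pins, member forces follow from force equilibrium at each node (the "method of joints"). A minimal sketch for a symmetric two-member truss carrying a vertical load W at its apex, where the geometry is an assumed example:

```python
import math

def two_bar_truss_member_force(load_n, angle_deg):
    """Axial force in each member of a symmetric two-bar truss.

    Vertical equilibrium at the loaded apex node gives
        2 * F * sin(angle) = W   ->   F = W / (2 * sin(angle))
    where angle is measured from the horizontal.
    """
    return load_n / (2 * math.sin(math.radians(angle_deg)))

# Assumed example: a 10 kN load on members inclined 30 degrees to the horizontal.
F = two_bar_truss_member_force(10_000, 30)
print(f"each member carries {F / 1e3:.0f} kN axially")
```

The formula also shows why shallow trusses are inefficient: as the angle flattens toward zero, the member forces grow without bound.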
Plates
Plates carry bending in two directions. A concrete flat slab is an example of a plate. Plates are understood by using continuum mechanics, but due to the complexity involved they are most often designed using a codified empirical approach, or computer analysis.
They can also be designed with yield line theory, where an assumed collapse mechanism is analyzed to give an upper bound on the collapse load. This technique is used in practice but because the method provides an upper-bound (i.e. an unsafe prediction of the collapse load) for poorly conceived collapse mechanisms, great care is needed to ensure that the assumed collapse mechanism is realistic.
Shells
Shells derive their strength from their form and carry forces in compression in two directions. A dome is an example of a shell. They can be designed by making a hanging-chain model, which will act as a catenary in pure tension and inverting the form to achieve pure compression.
Arches
Arches carry forces in compression in one direction only, which is why it is appropriate to build arches out of masonry. They are designed by ensuring that the line of thrust of the force remains within the depth of the arch.
Catenaries
Catenaries derive their strength from their form and carry transverse forces in pure tension by deflecting (just as a tightrope will sag when someone walks on it). They are almost always cable or fabric structures. A fabric structure acts as a catenary in two directions.
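The shape a hanging cable takes is y = a·cosh(x/a), where the parameter a is the horizontal tension divided by the weight per unit length. The mid-span sag can be sketched as follows; the span and parameter below are assumed example values:

```python
import math

def catenary_sag(span_m, a):
    """Mid-span sag of a catenary y = a*cosh(x/a) hung between equal-height supports."""
    half = span_m / 2
    return a * (math.cosh(half / a) - 1)  # support height minus vertex height

# Assumed example: 100 m span with catenary parameter a = 80 m.
print(f"sag = {catenary_sag(100, 80):.2f} m")
```

Increasing a (i.e. pulling the cable tauter relative to its weight) flattens the curve and reduces the sag.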
Materials
Structural engineering depends on the knowledge of materials and their properties, in order to understand how different materials support and resist loads. It also involves knowledge of corrosion engineering, to avoid, for example, galvanic coupling of dissimilar materials.
Common structural materials are:
Iron: wrought iron, cast iron
Concrete: reinforced concrete, prestressed concrete
Alloy: steel, stainless steel
Masonry
Timber: hardwood, softwood
Aluminium
Composite materials: plywood
Other structural materials: adobe, bamboo, carbon fibre, fiber reinforced plastic, mudbrick, roofing materials
See also
Glossary of structural engineering
Aircraft structures
Architects
Architectural engineering
Building officials
Building services engineering
Civil engineering
Construction engineering
Corrosion engineering
Earthquake engineering
Forensic engineering
Index of structural engineering articles
List of bridge disasters
List of structural engineers
List of structural engineering software
Mechanical engineering
Nanostructure
Prestressed structure
Structurae
Structural engineer
Structural engineering software
Structural fracture mechanics
Structural failure
Structural robustness
Structural steel
Structural testing
Notes
References
Hibbeler, R. C. (2010). Structural Analysis. Prentice-Hall.
Blank, Alan; McEvoy, Michael; Plank, Roger (1993). Architecture and Construction in Steel. Taylor & Francis.
Hewson, Nigel R. (2003). Prestressed Concrete Bridges: Design and Construction. Thomas Telford.
Heyman, Jacques (1999). The Science of Structural Engineering. Imperial College Press.
Hosford, William F. (2005). Mechanical Behavior of Materials. Cambridge University Press.
Further reading
Blockley, David (2014). A Very Short Introduction to Structural Engineering. Oxford University Press.
Bradley, Robert E.; Sandifer, Charles Edward (2007). Leonhard Euler: Life, Work, and Legacy. Elsevier.
Chapman, Allan (2005). England's Leonardo: Robert Hooke and the Seventeenth Century's Scientific Revolution. CRC Press.
Dugas, René (1988). A History of Mechanics. Courier Dover Publications.
Feld, Jacob; Carper, Kenneth L. (1997). Construction Failure. John Wiley & Sons.
Galilei, Galileo (translators: Crew, Henry; de Salvio, Alfonso) (1954). Dialogues Concerning Two New Sciences. Courier Dover Publications.
Kirby, Richard Shelton (1990). Engineering in History. Courier Dover Publications.
Heyman, Jacques (1998). Structural Analysis: A Historical Approach. Cambridge University Press.
Labrum, E.A. (1994). Civil Engineering Heritage. Thomas Telford.
Lewis, Peter R. (2004). Beautiful Bridge of the Silvery Tay. Tempus.
Mir, Ali (2001). Art of the Skyscraper: the Genius of Fazlur Khan. Rizzoli International Publications.
Rozhanskaya, Mariam; Levinova, I. S. (1996). "Statics" in Morelon, Régis & Rashed, Roshdi (1996). Encyclopedia of the History of Arabic Science, vol. 2–3, Routledge.
Whitbeck, Caroline (1998). Ethics in Engineering Practice and Research. Cambridge University Press.
Hoogenboom, P.C.J. (1998). "Discrete Elements and Nonlinearity in Design of Structural Concrete Walls", Section 1.3 Historical Overview of Structural Concrete Modelling.
Nedwell, P.J.; Swamy, R.N. (ed) (1994). Ferrocement: Proceedings of the Fifth International Symposium. Taylor & Francis.
External links
Structural Engineering Association – International
National Council of Structural Engineers Associations
Structural Engineering Institute, an institute of the American Society of Civil Engineers
Structurae database of structures
The EN Eurocodes are a series of 10 European Standards, EN 1990 – EN 1999, providing a common approach for the design of buildings and other civil engineering works and construction products
Civil engineering
Engineering disciplines
Tetanus

Tetanus, also known as lockjaw, is a bacterial infection caused by Clostridium tetani and characterized by muscle spasms. In the most common type, the spasms begin in the jaw and then progress to the rest of the body. Each spasm usually lasts for a few minutes. Spasms occur frequently for three to four weeks. Some spasms may be severe enough to fracture bones. Other symptoms of tetanus may include fever, sweating, headache, trouble swallowing, high blood pressure, and a fast heart rate. The onset of symptoms is typically 3 to 21 days following infection. Recovery may take months; about 10% of cases prove to be fatal.
C. tetani is commonly found in soil, saliva, dust, and manure. The bacteria generally enter through a break in the skin, such as a cut or puncture wound caused by a contaminated object. They produce toxins that interfere with normal muscle contractions. Diagnosis is based on the presenting signs and symptoms. The disease does not spread between people.
Tetanus can be prevented by immunization with the tetanus vaccine. In those who have a significant wound and have had fewer than three doses of the vaccine, both vaccination and tetanus immune globulin are recommended. The wound should be cleaned, and any dead tissue should be removed. In those who are infected, tetanus immune globulin, or, if unavailable, intravenous immunoglobulin (IVIG) is used. Muscle relaxants may be used to control spasms. Mechanical ventilation may be required if a person's breathing is affected.
Tetanus occurs in all parts of the world but is most frequent in hot and wet climates where the soil has a high organic content. In 2015, there were about 209,000 infections and about 59,000 deaths globally. This is down from 356,000 deaths in 1990. In the US, there are about 30 cases per year, almost all of which were in people who had not been vaccinated. An early description of the disease was made by Hippocrates in the 5th century BC. The cause of the disease was determined in 1884 by Antonio Carle and Giorgio Rattone at the University of Turin, and a vaccine was developed in 1924.
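As a quick arithmetic check on the figures above, the 2015 worldwide ratio of deaths to infections works out as follows (a crude ratio across all cases, treated and untreated):

```python
# 2015 worldwide tetanus figures cited above.
infections = 209_000
deaths = 59_000

ratio = deaths / infections
print(f"Deaths per infection, 2015: {ratio:.0%}")  # about 28% worldwide
```

The worldwide ratio exceeds the roughly 10% case fatality quoted for treated cases, presumably reflecting the many infections worldwide that go untreated.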
Signs and symptoms
Tetanus often begins with mild spasms in the jaw muscles—also known as lockjaw. Similar spasms can also be a feature of trismus. The spasms can also affect the facial muscles, resulting in an appearance called risus sardonicus. Chest, neck, back, abdominal muscles, and buttocks may be affected. Back muscle spasms often cause arching, called opisthotonus. Sometimes, the spasms affect muscles utilized during inhalation and exhalation, which can lead to breathing problems.
Prolonged muscular action causes sudden, powerful, and painful contractions of muscle groups, called tetany. These episodes can cause fractures and muscle tears. Other symptoms include fever, headache, restlessness, irritability, feeding difficulties, breathing problems, burning sensation during urination, urinary retention, and loss of stool control.
Even with treatment, about 10% of people who contract tetanus die. The mortality rate is higher in unvaccinated individuals, and in people over 60 years of age.
Incubation period
The incubation period of tetanus may be up to several months but is usually about ten days. In general, the farther the injury site is from the central nervous system, the longer the incubation period; shorter incubation periods are associated with more severe symptoms. In trismus nascentium (i.e. neonatal tetanus), symptoms usually appear from 4 to 14 days after birth, averaging about 7 days. On the basis of clinical findings, four different forms of tetanus have been described.
Generalized tetanus
Generalized tetanus is the most common type of tetanus, representing about 80% of cases. The generalized form usually presents with a descending pattern. The first sign is trismus or lockjaw, then facial spasms (called risus sardonicus), followed by stiffness of the neck, difficulty in swallowing, and rigidity of pectoral and calf muscles. Other symptoms include elevated temperature, sweating, elevated blood pressure, and episodic rapid heart rate. Spasms may occur frequently and last for several minutes, with the body shaped into a characteristic form called opisthotonos. Spasms continue for up to four weeks, and complete recovery may take months.
Neonatal tetanus
Neonatal tetanus (trismus nascentium) is a form of generalized tetanus that occurs in newborns, usually those born to mothers who themselves have not been vaccinated. If the mother has been vaccinated against tetanus, the infants acquire passive immunity, and are thus protected. It usually occurs through infection of the unhealed umbilical stump, particularly when the stump is cut with a non-sterile instrument. As of 1998, neonatal tetanus was common in many developing countries, and was responsible for about 14% (215,000) of all neonatal deaths. In 2010, the worldwide death toll was approximately 58,000 newborns. As a result of a public health campaign, the death toll from neonatal tetanus was reduced by 90% between 1990 and 2010, and by 2013, the disease had been largely eliminated from all but 25 countries. Neonatal tetanus is rare in developed countries.
Local tetanus
Local tetanus is an uncommon form of the disease, in which people have persistent contraction of muscles in the same anatomic area as the injury. The contractions may persist for many weeks before gradually subsiding. Local tetanus is generally milder; only about 1% of cases are fatal, but it may precede the onset of generalized tetanus.
Cephalic tetanus
Cephalic tetanus is the rarest form of the disease (0.9–3% of cases), and is limited to muscles and nerves in the head. It usually occurs after trauma to the head area, including: skull fracture, laceration, eye injury, dental extraction, and otitis media, but it has been observed from injuries to other parts of the body. Paralysis of the facial nerve is most frequently implicated, which may cause lockjaw, facial palsy, or ptosis, but other cranial nerves can also be affected. Cephalic tetanus may progress to a more generalized form of the disease. Due to its rarity, clinicians may be unfamiliar with the clinical presentation, and may not suspect tetanus as the illness. Treatment can be complicated, as symptoms may be concurrent with the initial injury that caused the infection. Cephalic tetanus is more likely than other forms of tetanus to be fatal, with the progression to generalized tetanus carrying a 15–30% case fatality rate.
Cause
Tetanus is caused by the tetanus bacterium, Clostridium tetani. The disease is an international health problem, as C. tetani endospores are ubiquitous. Endospores can be introduced into the body through a puncture wound (penetrating trauma). Because C. tetani is an anaerobic bacterium, it and its endospores thrive in environments that lack oxygen, such as a puncture wound. With the changes in oxygen levels, the turkey drumstick-shaped endospore can quickly spread.
The disease occurs almost exclusively in people who are inadequately immunized. It is more common in hot, damp climates with soil rich in organic matter. Manure-treated soils may contain spores, as they are widely distributed in the intestines and feces of many animals, such as horses, sheep, cattle, dogs, cats, rats, guinea pigs, and chickens. In agricultural areas, a significant number of human adults may harbor the organism.
The spores can also be found on skin surfaces and in contaminated heroin. Rarely, tetanus can be contracted through surgical procedures, intramuscular injections, compound fractures, and dental infections. Animal bites can transmit tetanus.
Tetanus is often associated with rust, especially rusty nails. Although rust itself does not cause tetanus, objects that accumulate rust are often found outdoors or in places that harbor soil bacteria. Additionally, the rough surface of rusty metal provides crevices for dirt containing C. tetani, while a nail affords a means to puncture the skin and deliver endospores deep within the body at the site of the wound. An endospore is a non-metabolizing survival structure that begins to metabolize and cause infection once in an adequate environment. Hence, stepping on a nail (rusty or not) may result in a tetanus infection, as the low-oxygen (anaerobic) environment may exist under the skin, and the puncturing object can deliver endospores to a suitable environment for growth. It is a common misconception that rust itself is the cause; a related misconception is that a puncture from a rust-free nail is not a risk.
Pathophysiology
Tetanus neurotoxin (TeNT) binds to the presynaptic membrane of the neuromuscular junction, is internalized, and is transported back through the axon until it reaches the central nervous system. Here, it selectively binds to and is transported into inhibitory neurons via endocytosis. It then leaves the vesicle for the neuron cytosol, where it cleaves the vesicle-associated membrane protein (VAMP) synaptobrevin, which is necessary for membrane fusion of small synaptic vesicles (SSVs). SSVs carry neurotransmitter to the membrane for release, so inhibition of this process blocks neurotransmitter release.
Tetanus toxin specifically blocks the release of the neurotransmitters GABA and glycine from inhibitory neurons. These neurotransmitters keep overactive motor neurons from firing and also play a role in the relaxation of muscles after contraction. When inhibitory neurons are unable to release their neurotransmitters, motor neurons fire out of control, and muscles have difficulty relaxing. This causes the muscle spasms and spastic paralysis seen in tetanus infection.
The tetanus toxin, tetanospasmin, is made up of a heavy chain and a light chain. There are three domains, each of which contributes to the pathophysiology of the toxin. The heavy chain has two of the domains. The N-terminal side of the heavy chain helps with membrane translocation, and the C-terminal side helps the toxin locate the specific receptor site on the correct neuron. The light chain domain cleaves the VAMP protein once it arrives in the inhibitory neuron cytosol.
There are four main steps in tetanus's mechanism of action: binding to the neuron, internalization of the toxin, membrane translocation, and cleavage of the target VAMP.
Neurospecific binding
The toxin travels from the wound site to the neuromuscular junction through the bloodstream, where it binds to the presynaptic membrane of a motor neuron. The heavy chain C-terminal domain aids in binding to the correct site, recognizing and binding to the correct glycoproteins and glycolipids in the presynaptic membrane. The toxin binds to a site that will be taken into the neuron as an endocytic vesicle that will travel down the axon, past the cell body, and down the dendrites to the dendritic terminal at the spine and central nervous system. Here, it will be released into the synaptic cleft, and allowed to bind with the presynaptic membrane of inhibitory neurons in a similar manner seen with the binding to the motor neuron.
Internalization
Tetanus toxin is then internalized again via endocytosis, this time, in an acidic vesicle. In a mechanism not well understood, depolarization caused by the firing of the inhibitory neuron causes the toxin to be pulled into the neuron inside vesicles.
Membrane translocation
The toxin then needs a way to get out of the vesicle and into the neuron cytosol for it to act on its target. The low pH of the vesicle lumen causes a conformational change in the toxin, shifting it from a water-soluble form to a hydrophobic form. With the hydrophobic patches exposed, the toxin can slide into the vesicle membrane. The toxin forms an ion channel in the membrane that is nonspecific for Na+, K+, Ca2+, and Cl− ions. There is a consensus among experts that this new channel is involved in the translocation of the toxin's light chain from the inside of the vesicle to the neuron cytosol, but the mechanism is not well understood or agreed upon. It has been proposed that the channel could allow the light chain (unfolded from the low pH environment) to leave through the toxin pore, or that the pore could alter the electrochemical gradient enough, by letting in or out ions, to cause osmotic lysis of the vesicle, spilling the vesicle's contents.
Enzymatic target cleavage
The light chain of the tetanus toxin is a zinc-dependent protease. It shares a common zinc protease motif (His-Glu-Xaa-Xaa-His) that researchers hypothesized was essential for target cleavage until this was confirmed experimentally: when all zinc was removed from the neuron with heavy-metal chelators, the toxin was inhibited, only to be reactivated when zinc was added back. The light chain binds to VAMP and cleaves it between Gln76 and Phe77. Without VAMP, vesicles holding the neurotransmitters needed for motor neuron regulation (GABA and glycine) cannot be released, causing the above-mentioned deregulation of motor neurons and muscle tension.
Diagnosis
There are currently no blood tests for diagnosing tetanus. The diagnosis is based on the presentation of tetanus symptoms and does not depend upon isolation of the bacterium, which is recovered from the wound in only 30% of cases and can be isolated from people without tetanus. Laboratory identification of C. tetani can be demonstrated only by the production of tetanospasmin in mice. Having recently experienced head trauma may indicate cephalic tetanus if no other diagnosis has been made.
The "spatula test" is a clinical test for tetanus that involves touching the posterior pharyngeal wall with a soft-tipped instrument and observing the effect. A positive test result is the involuntary contraction of the jaw (biting down on the "spatula"), and a negative test result would normally be a gag reflex attempting to expel the foreign object. A short report in The American Journal of Tropical Medicine and Hygiene states that, in a study of affected subjects, the spatula test had a high specificity (zero false-positive test results) and a high sensitivity (94% of infected people produced a positive test).
Prevention
Unlike many infectious diseases, recovery from naturally acquired tetanus does not usually result in immunity. This is due to the extreme potency of the tetanospasmin toxin: it is likely to be lethal before it can provoke an immune response.
Tetanus can be prevented by vaccination with tetanus toxoid. The CDC recommends that adults receive a booster vaccine every ten years, and standard care practice in many places is to give the booster to any person with a puncture wound who is uncertain of when they were last vaccinated, or if they have had fewer than three lifetime doses of the vaccine. The booster may not prevent a potentially fatal case of tetanus from the current wound, however, as it can take up to two weeks for tetanus antibodies to form.
In children under the age of seven, the tetanus vaccine is often administered as a combined vaccine, DPT/DTaP vaccine, which also includes vaccines against diphtheria and pertussis. For adults and children over seven, the Td vaccine (tetanus and diphtheria) or Tdap (tetanus, diphtheria, and acellular pertussis) is commonly used.
The World Health Organization certifies countries as having eliminated maternal or neonatal tetanus. Certification requires at least two years of rates of less than 1 case per 1,000 live births. In 1998 in Uganda, 3,433 tetanus cases were recorded in newborn babies; of these, 2,403 died. After a major public health effort, Uganda was certified as having eliminated maternal and neonatal tetanus in 2011.
Post-exposure prophylaxis
Tetanus toxoid can be given in case of suspected exposure to tetanus. In such cases, it can be given with or without tetanus immunoglobulin (also called tetanus antibodies or tetanus antitoxin). It can be given as intravenous therapy or by intramuscular injection.
The guidelines for such events in the United States for people at least 11 years old (and not pregnant) are as follows:
Treatment
Mild tetanus
Mild cases of tetanus can be treated with:
Tetanus immunoglobulin (TIG), also called tetanus antibodies or tetanus antitoxin. It can be given as intravenous therapy or by intramuscular injection.
Antibiotic therapy to reduce toxin production. Metronidazole intravenous (IV) is a preferred treatment.
Benzodiazepines can be used to control muscle spasms. Options include diazepam and lorazepam, oral or IV.
Severe tetanus
Severe cases will require admission to intensive care. In addition to the measures listed above for mild tetanus:
Human tetanus immunoglobulin injected intrathecally (which increases clinical improvement from 4% to 35%).
Tracheotomy and mechanical ventilation for 3 to 4 weeks. Tracheotomy is recommended for securing the airway, because the presence of an endotracheal tube is a stimulus for spasm.
Magnesium sulfate, as an intravenous infusion, to control spasm and autonomic dysfunction.
Diazepam as a continuous IV infusion.
The autonomic effects of tetanus can be difficult to manage (alternating hyper- and hypotension, hyperpyrexia/hypothermia), and may require IV labetalol, magnesium, clonidine, or nifedipine.
Drugs, such as diazepam or other muscle relaxants, can be given to control the muscle spasms. In extreme cases, it may be necessary to paralyze the person with curare-like drugs, and use a mechanical ventilator.
To survive a tetanus infection, the maintenance of an airway and proper nutrition are required. A high-calorie intake with at least 150 g of protein per day is often given in liquid form through a tube directly into the stomach (percutaneous endoscopic gastrostomy), or through a drip into a vein (parenteral nutrition). This high-calorie diet is required because of the increased metabolic strain brought on by the increased muscle activity. Full recovery takes 4 to 6 weeks because the body must regenerate destroyed nerve axon terminals.
The antibiotic of choice is metronidazole. It can be given intravenously, by mouth, or by rectum. Penicillin is similarly effective, but some raise concerns that it may provoke spasms because it inhibits the GABA receptor, which is already affected by tetanospasmin.
Epidemiology
In 2013, tetanus caused about 59,000 deaths—down from 356,000 in 1990. Tetanus, notably the neonatal form, remains a significant public health problem in non-industrialized countries, with 59,000 newborns dying worldwide in 2008 as a result of neonatal tetanus. In the United States, from 2000 through 2007, an average of 31 cases were reported per year. Nearly all of the cases in the United States occur in unimmunized individuals, or individuals who have allowed their inoculations to lapse.
In animals
Tetanus is found primarily in goats and sheep. Clinical signs in affected animals include an extended head and neck, tail rigors (the tail becomes rigid and straight), an abnormal, stiff gait, an arched back, stiffness of the jaw muscles, lockjaw, twitching of the eyes, drooping eyelids, difficulty swallowing, difficulty or inability to eat and drink, abdominal bloat, and spasms (uncontrolled muscular contractions) before death. Death sometimes is due to asphyxiation, secondary to respiratory paralysis.
History
Tetanus was well known to ancient civilizations, who recognized the relationship between wounds and fatal muscle spasms. In 1884, Arthur Nicolaier isolated the strychnine-like toxin of tetanus from free-living, anaerobic soil bacteria. The etiology of the disease was further elucidated in 1884 by Antonio Carle and Giorgio Rattone, two pathologists of the University of Turin, who demonstrated the transmissibility of tetanus for the first time. They produced tetanus in rabbits by injecting pus from a person with fatal tetanus into their sciatic nerves, and testing their reactions while tetanus was spreading.
In 1891, C. tetani was isolated from a human victim by Kitasato Shibasaburō, who later showed that the organism could produce disease when injected into animals and that the toxin could be neutralized by specific antibodies. In 1897, Edmond Nocard showed that tetanus antitoxin induced passive immunity in humans, and could be used for prophylaxis and treatment. Tetanus toxoid vaccine was developed by P. Descombey in 1924, and was widely used to prevent tetanus induced by battle wounds during World War II.
Etymology
The word tetanus comes from the Ancient Greek tetanos, meaning "taut", which is in turn derived from teinein, meaning "to stretch".
Research
There is insufficient evidence that tetanus can be treated or prevented by vitamin C, in part because the historical trials investigating a possible connection between vitamin C and outcomes in tetanus patients were of poor quality.
See also
Renshaw cell
Tetanized state
References
External links
Tetanus Information from Medline Plus
Tetanus Surveillance -- United States, 1998-2000 (Data and Analysis)
Bacterial diseases
Wikipedia medicine articles ready to translate
Wikipedia emergency medicine articles ready to translate
Vaccine-preventable diseases | Tetanus | [
"Biology"
] | 4,807 | [
"Vaccination",
"Vaccine-preventable diseases"
] |
45,832 | https://en.wikipedia.org/wiki/Gyrocompass | A gyrocompass is a type of non-magnetic compass which is based on a fast-spinning disc and the rotation of the Earth (or another planetary body if used elsewhere in the universe) to find geographical direction automatically. A gyrocompass makes use of one of the seven fundamental ways to determine the heading of a vehicle. A gyroscope is an essential component of a gyrocompass, but they are different devices; a gyrocompass is built to use the effect of gyroscopic precession, which is a distinctive aspect of the general gyroscopic effect. Gyrocompasses, such as the fibre optic gyrocompass, are widely used to provide a heading for navigation on ships. This is because they have two significant advantages over magnetic compasses:
they find true north as determined by the axis of the Earth's rotation, which is different from, and navigationally more useful than, magnetic north, and
they have a greater degree of accuracy because they are unaffected by ferromagnetic materials, such as in a ship's steel hull, which distort the magnetic field.
Aircraft commonly use gyroscopic instruments (but not a gyrocompass) for navigation and attitude monitoring; for details, see flight instruments (specifically the heading indicator) and gyroscopic autopilot.
History
The first, not yet practical, form of gyrocompass was patented in 1885 by Marinus Gerardus van den Bos. A usable gyrocompass was invented in 1906 in Germany by Hermann Anschütz-Kaempfe, and after successful tests in 1908 became widely used in the German Imperial Navy. Anschütz-Kaempfe founded the company Anschütz & Co. in Kiel, to mass produce gyrocompasses; the company is today Raytheon Anschütz GmbH. The gyrocompass was an important invention for nautical navigation because it allowed accurate determination of a vessel’s location at all times regardless of the vessel’s motion, the weather and the amount of steel used in the construction of the ship.
In the United States, Elmer Ambrose Sperry produced a workable gyrocompass system (1908: ), and founded the Sperry Gyroscope Company. The unit was adopted by the U.S. Navy (1911), and played a major role in World War I. The Navy also began using Sperry's "Metal Mike": the first gyroscope-guided autopilot steering system. In the following decades, these and other Sperry devices were adopted by steamships such as the , airplanes, and the warships of World War II. After his death in 1930, the Navy named the after him.
Meanwhile, in 1913, C. Plath (a Hamburg, Germany-based manufacturer of navigational equipment including sextants and magnetic compasses) developed the first gyrocompass to be installed on a commercial vessel. C. Plath sold many gyrocompasses to the Weems’ School for Navigation in Annapolis, MD, and soon the founders of each organization formed an alliance and became Weems & Plath.
Before the success of the gyrocompass, several attempts had been made in Europe to use a gyroscope instead. By 1880, William Thomson (Lord Kelvin) tried to propose a gyrostat to the British Navy. In 1889, Arthur Krebs adapted an electric motor to the Dumoulin-Froment marine gyroscope, for the French Navy. That gave the Gymnote submarine the ability to keep a straight line while underwater for several hours, and it allowed her to force a naval blockade in 1890.
In 1923 Max Schuler published his paper containing his observation that if a gyrocompass possessed Schuler tuning such that it had an oscillation period of 84.4 minutes (which is the orbital period of a notional satellite orbiting around the Earth at sea level), then it could be rendered insensitive to lateral motion and maintain directional stability.
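The 84.4-minute figure is the Schuler period, T = 2π·√(R/g), the orbital period of a notional satellite skimming the Earth's surface at sea level. A quick check (the values of R and g below are standard rounded figures, not taken from the text):

```python
import math

R = 6_371_000  # mean Earth radius in metres (rounded)
g = 9.81       # gravitational acceleration at the surface, m/s^2

# Orbital period of a notional satellite at sea level (Schuler period).
T = 2 * math.pi * math.sqrt(R / g)
print(f"Schuler period: {T / 60:.1f} minutes")  # about 84.4 minutes
```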
Operation
A gyroscope, not to be confused with a gyrocompass, is a spinning wheel mounted on a set of gimbals so that its axis is free to orient itself in any way. When it is spun up to speed with its axis pointing in some direction, due to the law of conservation of angular momentum, such a wheel will normally maintain its original orientation to a fixed point in outer space (not to a fixed point on Earth). Since the Earth rotates, it appears to a stationary observer on Earth that a gyroscope's axis is completing a full rotation once every 24 hours. Such a rotating gyroscope is used for navigation in some cases, for example on aircraft, where it is known as heading indicator or directional gyro, but cannot ordinarily be used for long-term marine navigation. The crucial additional ingredient needed to turn a gyroscope into a gyrocompass, so it would automatically position to true north, is some mechanism that results in an application of torque whenever the compass's axis is not pointing north.
One method uses friction to apply the needed torque: the gyroscope in a gyrocompass is not completely free to reorient itself; if for instance a device connected to the axis is immersed in a viscous fluid, then that fluid will resist reorientation of the axis. This friction force caused by the fluid results in a torque acting on the axis, causing the axis to turn in a direction orthogonal to the torque (that is, to precess) along a line of longitude. Once the axis points toward the celestial pole, it will appear to be stationary and won't experience any more frictional forces. This is because true north (or true south) is the only direction for which the gyroscope's axis can stay fixed relative to the Earth's surface as the planet rotates. This axis orientation is considered to be a point of minimum potential energy.
Another, more practical, method is to use weights to force the axis of the compass to remain horizontal (perpendicular to the direction of the center of the Earth), but otherwise allow it to rotate freely within the horizontal plane. In this case, gravity will apply a torque forcing the compass's axis toward true north. Because the weights will confine the compass's axis to be horizontal with respect to the Earth's surface, the axis can never align with the Earth's axis (except on the Equator) and must realign itself as the Earth rotates. But with respect to the Earth's surface, the compass will appear to be stationary and pointing along the Earth's surface toward the true North Pole.
Since the gyrocompass's north-seeking function depends on the rotation around the axis of the Earth that causes torque-induced gyroscopic precession, it will not orient itself correctly to true north if it is moved very fast in an east to west direction, thus negating the Earth's rotation. However, aircraft commonly use heading indicators or directional gyros, which are not gyrocompasses and do not align themselves to north via precession, but are periodically aligned manually to magnetic north.
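To put a number on that limitation: the eastward speed of a point on the Earth's surface is Ω·R·cos(latitude), so a westward ground speed of that magnitude cancels the rotation the compass relies on. A rough sketch (the constants are standard rounded values; the function name is illustrative, not from the text):

```python
import math

OMEGA = 2 * math.pi / 86_164  # Earth's sidereal rotation rate, rad/s
R = 6_371_000                 # mean Earth radius, m

def cancelling_speed(latitude_deg):
    """Westward ground speed that negates the Earth's rotation locally."""
    return OMEGA * R * math.cos(math.radians(latitude_deg))

print(f"At the equator: {cancelling_speed(0):.0f} m/s")   # ~465 m/s
print(f"At 60 degrees:  {cancelling_speed(60):.0f} m/s")  # ~232 m/s
```

Such speeds are reachable by jet aircraft but not by ships, which is consistent with the note above that aircraft rely on periodically realigned directional gyros instead.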
Errors
A gyrocompass is subject to certain errors. These include steaming error, where rapid changes in course, speed and latitude cause deviation before the gyro can adjust itself. On most modern ships the GPS or other navigational aids feed data to the gyrocompass allowing a small computer to apply a correction.
Alternatively a design based on a strapdown architecture (including a triad of fibre optic gyroscopes, ring laser gyroscopes or hemispherical resonator gyroscopes and a triad of accelerometers) will eliminate these errors, as they do not depend upon mechanical parts to determine rates of rotation.
Mathematical model
We consider a gyrocompass as a gyroscope which is free to rotate about one of its symmetry axes, while the whole rotating gyroscope is free to rotate on the horizontal plane about the local vertical. Therefore there are two independent local rotations. In addition to these rotations we consider the rotation of the Earth about its north-south (NS) axis, and we model the planet as a perfect sphere. We neglect friction and also the rotation of the Earth about the Sun.
In this case a non-rotating observer located at the center of the Earth can be approximated as being an inertial frame. We establish cartesian coordinates for such an observer (whom we name as 1-O), and the barycenter of the gyroscope is located at a distance from the center of the Earth.
First time-dependent rotation
Consider another (non-inertial) observer (the 2-O) located at the center of the Earth but rotating about the NS-axis. We establish coordinates attached to this observer as
so that the unit versor is mapped to the point . For the 2-O neither the Earth nor the barycenter of the gyroscope is moving. The rotation of 2-O relative to 1-O is performed with angular velocity . We suppose that the axis denotes points with zero longitude (the prime, or Greenwich, meridian).
Second and third fixed rotations
We now rotate about the axis, so that the -axis has the longitude of the barycenter. In this case we have
With the next rotation (about the axis of an angle , the co-latitude) we bring the axis along the local zenith (-axis) of the barycenter. This can be achieved by the following orthogonal matrix (with unit determinant)
so that the versor is mapped to the point
Constant translation
We now choose another coordinate basis whose origin is located at the barycenter of the gyroscope. This can be performed by the following translation along the zenith axis
so that the origin of the new system, is located at the point and is the radius of the Earth. Now the -axis points towards the south direction.
Fourth time-dependent rotation
Now we rotate about the zenith -axis so that the new coordinate system is attached to the structure of the gyroscope, so that for an observer at rest in this coordinate system, the gyrocompass is only rotating about its own axis of symmetry. In this case we find
The axis of symmetry of the gyrocompass is now along the -axis.
Last time-dependent rotation
The last rotation is a rotation on the axis of symmetry of the gyroscope as in
Dynamics of the system
Since the height of the gyroscope's barycenter does not change (and the origin of the coordinate system is located at this same point), its gravitational potential energy is constant. Therefore its Lagrangian corresponds to its kinetic energy only. We have
where is the mass of the gyroscope, and
is the squared inertial speed of the origin of the coordinates of the final coordinate system (i.e. the center of mass). This constant term does not affect the dynamics of the gyroscope and it can be neglected. On the other hand, the tensor of inertia is given by
and
Therefore we find
The Lagrangian can be rewritten as
where
is the part of the Lagrangian responsible for the dynamics of the system. Then, since , we find
Since the angular momentum of the gyrocompass is given by we see that the constant is the component of the angular momentum about the axis of symmetry. Furthermore, we find the equation of motion for the variable as
or
Particular case: the poles
At the poles we find and the equations of motion become
This simple solution implies that the gyroscope is uniformly rotating with constant angular velocity in both the vertical and symmetrical axis.
The general and physically relevant case
Let us suppose now that and that , that is, the axis of the gyroscope is approximately along the north-south line, and let us find the parameter space (if it exists) for which the system admits stable small oscillations about this same line. If this situation occurs, the gyroscope will always be approximately aligned along the north-south line, giving direction. In this case we find
Consider the case that
and, further, we allow for fast gyro-rotations, that is
Therefore, for fast spinning rotations, implies In this case, the equations of motion further simplify to
Therefore we find small oscillations about the north-south line, as , where the angular velocity of this harmonic motion of the axis of symmetry of the gyrocompass about the north-south line is given by
which corresponds to a period for the oscillations given by
Therefore is proportional to the geometric mean of the Earth's and the spinning angular velocities. In order to have small oscillations we have required , so that north is located along the right-hand-rule direction of the spinning axis, that is, along the negative direction of the -axis, the axis of symmetry. As a side result, on measuring (and knowing ), one can deduce the local co-latitude.
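For reference, the standard small-oscillation result for a fast-spinning symmetric gyrocompass can be sketched as follows. The notation here is an assumption, since the original inline formulas did not survive: take $A$ and $C$ as the transverse and axial moments of inertia, $\omega$ the spin rate of the rotor, $\Omega$ the Earth's angular velocity, $\phi$ the local latitude, and $\alpha$ the small deviation of the symmetry axis from the north-south line.

```latex
% Hedged sketch of the standard result (assumed notation, see lead-in):
\ddot{\alpha} + \frac{C}{A}\,\omega\,\Omega\cos\phi\;\alpha = 0
\quad\Longrightarrow\quad
\tilde{\omega} = \sqrt{\frac{C}{A}\,\omega\,\Omega\cos\phi},
\qquad
T = \frac{2\pi}{\tilde{\omega}} = 2\pi\sqrt{\frac{A}{C\,\omega\,\Omega\cos\phi}}
```

This is consistent with the surrounding prose: $\tilde{\omega}\propto\sqrt{\omega\,\Omega}$ is the geometric mean of the spinning and Earth angular velocities, and the restoring coefficient vanishes at the poles ($\cos\phi = 0$), matching the pole case treated above, where the gyroscope simply rotates uniformly and no oscillation occurs.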
See also
Acronyms and abbreviations in avionics
Heading indicator, also known as direction indicator, a lightweight gyroscope (not a gyrocompass) used on aircraft
HRG gyrocompass
Fluxgate compass
Fibre optic gyrocompass
Inertial navigation system, a more complex system that also incorporates accelerometers
Schuler tuning
Binnacle
Notes
References
Bibliography
"Gyroscopic compass" by E. A. Sperry, filed June 1911; issued September 1918
External links
Feynman's Tips on Physics - The gyrocompass
Case Files: Elmer A. Sperry at the Franklin Institute contains records concerning his 1914 Franklin Award for the gyroscopic compass
Britannica - Gyrocompass
1908 introductions
Navigational equipment
Avionics
Aircraft instruments
German inventions | Gyrocompass | [
"Technology",
"Engineering"
] | 2,925 | [
"Avionics",
"Aircraft instruments",
"Measuring instruments"
] |
45,839 | https://en.wikipedia.org/wiki/Drum%20machine | A drum machine is an electronic musical instrument that creates percussion sounds, drum beats, and patterns. Drum machines may imitate drum kits or other percussion instruments, or produce unique sounds, such as synthesized electronic tones. A drum machine often has pre-programmed beats and patterns for popular genres and styles, such as pop music, rock music, and dance music. Most modern drum machines made in the 2010s and 2020s also allow users to program their own rhythms and beats. Drum machines may create sounds using analog synthesis or play prerecorded samples.
While a distinction is generally made between drum machines (which can play back pre-programmed or user-programmed beats or patterns) and electronic drums (which have pads that can be struck and played like an acoustic drum kit), there are some drum machines that have buttons or pads that allow the performer to play drum sounds "live", either on top of a programmed drum beat or as a standalone performance. Drum machines have a range of capabilities, which go from playing a short beat pattern in a loop, to being able to program or record complex song arrangements with changes of meter and style.
Drum machines have had a lasting impact on popular music in the 20th century. The Roland TR-808, introduced in 1980, significantly influenced the development of dance music, especially electronic dance music, and hip hop. Its successor, the TR-909, introduced in 1983, heavily influenced techno and house music. The first drum machine to use samples of real drum kits, the Linn LM-1, was introduced in 1980 and was adopted by rock and pop artists including Prince and Michael Jackson. In the late 1990s, software emulations began to overtake the popularity of physical drum machines housed in a separate plastic or metal chassis.
History
Rhythmicon (1930–1932)
In 1930–32, the innovative and hard-to-use Rhythmicon was developed by Léon Theremin at the request of Henry Cowell, who wanted an instrument that could play compositions with multiple rhythmic patterns, based on the overtone series, that were far too hard to perform on existing keyboard instruments. The invention could produce sixteen different rhythms, each associated with a particular pitch, either individually or in any combination, including en masse, if desired. Received with considerable interest when it was publicly introduced in 1932, the Rhythmicon was soon set aside by Cowell.
Chamberlin Rhythmate (1957)
In 1957, Harry Chamberlin, an engineer from Iowa, created the Chamberlin Rhythmate, which allowed users to select between 14 tape loops of drum kits and percussion instruments performing various beats. Like the Chamberlin keyboard, the Rhythmate was intended for family singalongs. Around 100 units were sold.
Wurlitzer Side Man (1959)
In 1959, Wurlitzer released the Side Man, which generated sounds mechanically via a rotating disc, similar to a music box. A slider controlled the tempo (between 34 and 150 beats per minute), and sounds could also be triggered individually through buttons on a control panel. The Side Man was a success and drew criticism from the American Federation of Musicians, which ruled in 1961 that its local jurisdictions could not prohibit Side Man use, though it could not be used for dancing. Wurlitzer ceased production of the Side Man in 1969.
Raymond Scott (1960–1963)
In 1960, Raymond Scott constructed the Rhythm Synthesizer and, in 1963, a drum machine called Bandito the Bongo Artist. Scott's machines were used for recording his album Soothing Sounds for Baby series (1964).
First fully transistorized drum machines – Seeburg/Gulbransen (1964)
During the 1960s, rhythm machines evolved from early electro-mechanical designs with vacuum tubes into fully solid-state (transistorized) units, and shrank from floor-standing cabinets to desktop size. In the early 1960s, the home organ manufacturer Gulbransen (later acquired by Fender) cooperated with Seeburg Corporation, a manufacturer of automatic musical equipment, to release early compact rhythm machines such as the Rhythm Prince (PRP); these were still as large as a small guitar amplifier head, owing to their bulky electro-mechanical pattern generators. Then in 1964, Seeburg invented a compact electronic rhythm pattern generator using a "diode matrix" (patented in 1967), and a fully transistorized electronic rhythm machine with pre-programmed patterns, the Select-A-Rhythm (SAR1), was released. Thanks to their robustness and compact size, these rhythm machines were gradually installed in electronic organs as accompaniment for organists, and eventually spread widely.
Keio-Giken (Korg), Nippon Columbia, and Ace Tone (1963–1967)
In the early 1960s, a nightclub owner in Tokyo, Tsutomu Katoh, was consulted by a notable accordion player, Tadashi Osanai, about the rhythm machine Osanai used for accompaniment in the club, a Wurlitzer Side Man. Osanai, a graduate of the Department of Mechanical Engineering at the University of Tokyo, convinced Katoh to finance his efforts to build a better one. In 1963, their new company Keio-Giken (later Korg) released their first rhythm machine, the Donca-Matic DA-20, using vacuum tube circuits for sounds and a mechanical wheel for rhythm patterns. It was a floor-type machine with a built-in speaker, and featured a keyboard for manual play in addition to the multiple automatic rhythm patterns. Its price was comparable to the average annual income in Japan at that time.
Next, their effort was focused on the improvement of reliability and performance, along with size and cost reductions. Unstable vacuum tube circuits were replaced with reliable transistor circuits on the Donca-Matic DC-11 in the mid-1960s. In 1966, the bulky mechanical wheel was also replaced with a compact transistor circuit on the Donca-Matic DE-20 and DE-11. In 1967, the Mini Pops MP-2 was developed as an option for the Yamaha Electone (electric organ), and Mini Pops was established as a series of compact desktop rhythm machines. In the United States, Mini Pops MP-3, MP-7, etc. were sold under the Univox brand by the distributor at that time, Unicord Corporation.
In 1965, Nippon Columbia filed a patent for an automatic rhythm instrument. It described it as an "automatic rhythm player which is simple but capable of electronically producing various rhythms in the characteristic tones of a drum, a piccolo and so on." It has some similarities to Seeburg's slightly earlier 1964 patent.
In 1967, Ace Tone founder Ikutaro Kakehashi (later founder of Roland Corporation) developed a preset rhythm-pattern generator using a diode matrix circuit, which has some similarities to the earlier Seeburg and Nippon Columbia patents. Kakehashi's patent describes his device as a "plurality of inverting circuits and/or clipper circuits" which "are connected to a counting circuit to synthesize the output signal of the counting circuit" where the "synthesized output signal becomes a desired rhythm."
Ace Tone commercialized its preset rhythm machine, called the FR-1 Rhythm Ace, in 1967. It offered 16 preset patterns, and four buttons to manually play each instrument sound (cymbal, claves, cowbell and bass drum). The rhythm patterns could also be cascaded together by pushing multiple rhythm buttons simultaneously, and more than a hundred combinations of rhythm patterns were possible (on the later models of Rhythm Ace, the individual volumes of each instrument could be adjusted with small knobs or faders). The FR-1 was adopted by the Hammond Organ Company for incorporation within their latest organ models. In the US, the units were also marketed under the Multivox brand by Peter Sorkin Music Company, and in the UK, under the Bentley Rhythm Ace brand.
Early preset drum machine users
A number of other preset drum machines were released in the 1970s, but early examples of their use can be found on The United States of America's eponymous album from 1967–68. The first major pop song to use a drum machine was "Saved by the Bell" by Robin Gibb, which reached #2 in Britain in 1969. Drum machine tracks were also heavily used on the Sly & the Family Stone album There's a Riot Goin' On, released in 1971. Sly & the Family Stone was the first group to have a number-one pop single that used a drum machine: that single was "Family Affair".
The German krautrock band Can also used a drum machine on their songs "Peking O" and "Spoon". The 1972 Timmy Thomas single "Why Can't We Live Together"/"Funky Me" featured a distinctive use of a drum machine and keyboard arrangement on both tracks. Another early example of electronic drums used by a rock band is Obscured by Clouds by Pink Floyd in 1972. The first album on which a drum machine produced all the percussion was Kingdom Come's Journey, recorded in November 1972 using a Bentley Rhythm Ace. French singer-songwriter Léo Ferré mixed a drum machine with a symphonic orchestra in the song "Je t'aimais bien, tu sais..." in his album L'Espoir, released in 1974. Miles Davis' live band began to use a drum machine in 1974 (played by percussionist James Mtume), which can be heard on Dark Magus (1977). Osamu Kitajima's progressive psychedelic rock album Benzaiten (1974) also used drum machines.
Programmable drum machines
In 1972, Eko released the ComputeRhythm, which was one of the first programmable drum machines. It had a 6-row push-button matrix that allowed the user to enter a pattern manually. The user could also push punch cards with pre-programmed rhythms through a reader slot on the unit.
Another stand-alone drum machine released in 1975, the PAiA Programmable Drum Set was also one of the first programmable drum machines, and was sold as a kit with parts and instructions which the buyer would use to build the machine.
In 1975, Ace Tone released the Rhythm Producer FR-15, which enabled modification of the pre-programmed rhythm patterns. In 1978, Roland released the Roland CR-78, the first microprocessor-based programmable rhythm machine, with memory storage for four user patterns. In 1979, a simpler version with four sounds, the Boss DR-55, was released.
Drum sound synthesis
A key difference between such early machines and more modern equipment is that they use sound synthesis rather than digital sampling in order to generate their sounds. For example, a snare drum or maraca sound would typically be created using a burst of white noise whereas a bass drum sound would be made using sine waves or other basic waveforms. This meant that while the resulting sound was not very close to that of the real instrument, each model tended to have a unique character. For this reason, many of these early machines have achieved a certain "cult status" and are now sought after by producers for use in production of modern electronic music, most notably the Roland TR-808.
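The two synthesis recipes mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not any specific machine's circuit: the sample rate, decay constants, and function names are assumptions chosen for the example.

```python
import math
import random

SAMPLE_RATE = 44100  # samples per second (assumed CD-quality rate)

def bass_drum(duration=0.3, freq=60.0):
    """Analog-style kick: a sine wave whose amplitude decays exponentially."""
    n = int(SAMPLE_RATE * duration)
    return [math.exp(-6.0 * i / n) * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def snare_drum(duration=0.15):
    """Analog-style snare: a burst of white noise with an exponential decay."""
    n = int(SAMPLE_RATE * duration)
    return [math.exp(-8.0 * i / n) * random.uniform(-1.0, 1.0) for i in range(n)]
```

Because the snare is random noise shaped only by an envelope, every triggering sounds slightly different, while the kick is fully deterministic; neither resembles a real drum closely, which is exactly the "unique character" the text describes.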
Digital sampling
The Linn LM-1 Drum Computer, released in 1980 at $4,995, was the first drum machine to use digital samples. It also featured rhythmic concepts such as swing factors, shuffle, accent, and real-time programming. Only about 500 were ever made, but its effect on the music industry was extensive. Its distinctive sound almost defines 1980s pop, and it can be heard on hundreds of hit records from the era, including The Human League's Dare, Gary Numan's Dance, Devo's New Traditionalists, and Ric Ocasek's Beatitude. Prince bought one of the first LM-1s and used it on nearly all of his most popular albums, including 1999 and Purple Rain.
Many of the drum sounds on the LM-1 were composed of two chips that were triggered at the same time, and each voice was individually tunable with individual outputs. Due to memory limitations, a crash cymbal sound was not available except as an expensive third-party modification. A cheaper version of the LM-1 was released in 1982 called the LinnDrum. Priced at $2,995, not all of its voices were tunable, but crash cymbal was included as a standard sound. Like its predecessor the LM-1, it featured swappable sound chips. The LinnDrum can be heard on records such as The Cars' Heartbeat City and Giorgio Moroder's soundtrack for the film Scarface.
It was feared the LM-1 would put every session drummer in Los Angeles out of work and it caused many of L.A.'s top session drummers (Jeff Porcaro is one example) to purchase their own drum machines and learn to program them themselves in order to stay employed. Linn even marketed the LinnDrum specifically to drummers.
Following the success of the LM-1, Oberheim introduced the DMX, which also featured digitally sampled sounds and a "swing" feature similar to the one found on the Linn machines. It became very popular in its own right, becoming a staple of the nascent hip-hop scene.
Other manufacturers soon began to produce machines, e.g. the Sequential Circuits Drumtraks and Tom, the E-mu Drumulator and the Yamaha RX11.
In 1986, the SpecDrum by Cheetah Marketing, an inexpensive 8-bit sampling drum external module for the ZX Spectrum, was introduced, with a price less than £30, when similar models cost around £250.
Roland TR-808 and TR-909
In 1980, the Roland Corporation launched the TR-808 Rhythm Composer. It was one of the earliest programmable drum machines, with which users could create their own rhythms rather than having to use preset patterns. Unlike the more expensive LM-1, the 808 is completely analog, meaning its sounds are generated non-digitally via hardware rather than samples (prerecorded sounds). Launched when electronic music had yet to become mainstream, the 808 received mixed reviews for its unrealistic drum sounds and was a commercial failure. Having built approximately 12,000 units, Roland discontinued the 808 after its semiconductors became impossible to restock.
Over the course of the 1980s, the 808 attracted a cult following among underground musicians for its affordability on the used market, ease of use, and idiosyncratic sounds, particularly its deep, "booming" bass drum. It became a cornerstone of the emerging electronic, dance, and hip hop genres, popularized by early hits such as Marvin Gaye's "Sexual Healing" and Afrika Bambaataa and the Soulsonic Force's "Planet Rock". The 808 was eventually used on more hit records than any other drum machine; its popularity with hip hop in particular has made it one of the most influential inventions in popular music, comparable to the Fender Stratocaster's influence on rock. Its sounds continue to be used as samples included with music software and modern drum machines.
The 808 was followed in 1983 by the TR-909, the first Roland drum machine to use MIDI, which synchronizes devices built by different manufacturers. It was also the first Roland drum machine to use samples for some sounds. Like the 808, the 909 was a commercial failure, but had a lasting influence on popular music after cheap units circulated on the used market; alongside the Roland TB-303 bass synthesizer, it influenced the development of electronic genres such as techno, house and acid.
Later machines
By 2000, standalone drum machines had become less common, partly supplanted by general-purpose hardware samplers controlled by sequencers (built-in or external), software-based sequencing and sampling and the use of loops, and music workstations with integrated sequencing and drum sounds. TR-808 and other digitized drum machine sounds can be found in archives on the Internet. However, traditional drum machines are still being made by companies such as Roland Corporation (under the name Boss), Zoom, Korg and Alesis, whose SR-16 drum machine has remained popular since it was introduced in 1991.
There are percussion-specific sound modules that can be triggered by pickups, trigger pads, or through MIDI. These are called drum modules; the Alesis D4 and Roland TD-8 are popular examples. Unless such a sound module also features a sequencer, it is, strictly speaking, not a drum machine.
In the 2010s a revival of interest in analogue synthesis resulted in a new wave of analogue drum machines, ranging from the budget-priced Korg Volca Beats (released in 2013) and Akai Rhythm Wolf to the mid-priced Arturia DrumBrute, and the high-end MFB Tanzbär and Dave Smith Instruments Tempest. Roland's TR-08 and TR-09 Rhythm Composers were digital recreations of the original TR-808 and 909, while Behringer released an analogue clone of the 808 as the Behringer RD-8 Rhythm Designer.
Programming
Programming of drum machines varies from product to product. On most products it can be done in real time: the user creates drum patterns by pressing the trigger pads as though a drum kit were being played. Alternatively, with step-sequencing, the pattern is built up by placing individual sounds at certain points along a 16-step bar, as on the TR-808 and TR-909. For example, a generic four-on-the-floor dance pattern could be made by placing a closed hi-hat on the 3rd, 7th, 11th, and 15th steps, a kick drum on the 1st, 5th, 9th, and 13th steps, and a clap or snare on the 5th and 13th. This pattern could be varied in a multitude of ways to obtain fills, breakdowns and other elements the programmer sees fit, which in turn could be arranged into a song sequence: the drum machine plays back the programmed patterns from memory in an order the programmer has chosen. The machine will quantize entries that are slightly off-beat in order to make them exactly in time.
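The 16-step scheme described above can be modeled as a simple grid of on/off flags per instrument. This is a hedged sketch: the names `make_pattern` and `play` are hypothetical helpers for illustration, not part of any real drum machine's firmware.

```python
STEPS = 16  # one bar of sixteenth notes, as on the TR-808/909 step row

def make_pattern(hits):
    """Return a 16-step row with 1s at the given 1-based step numbers."""
    return [1 if step + 1 in hits else 0 for step in range(STEPS)]

# The generic four-on-the-floor pattern described in the text:
pattern = {
    "kick": make_pattern({1, 5, 9, 13}),
    "hat":  make_pattern({3, 7, 11, 15}),
    "clap": make_pattern({5, 13}),
}

def play(pattern):
    """Step through the bar, collecting which sounds fire on each step."""
    return [[name for name, row in pattern.items() if row[step]]
            for step in range(STEPS)]
```

Stepping through `play(pattern)` yields, for instance, only the kick on step 1 and the kick plus clap on step 5; a song sequence is then just an ordered list of such pattern dictionaries played back to back.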
If the drum machine has MIDI connectivity, then one could program the drum machine with a computer or another MIDI device.
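At the wire level, triggering a drum sound over MIDI is a three-byte note-on message. The sketch below builds such a message by hand; the status byte 0x99 (note-on on channel 10, the General MIDI drum channel) and the percussion key numbers come from the MIDI and General MIDI specifications, while the function name is an illustrative assumption.

```python
def drum_note_on(note, velocity):
    """Raw MIDI note-on bytes for channel 10 (the General MIDI drum channel).
    Status 0x99 = note-on (0x90) | channel 9 (channel 10, zero-based).
    Data bytes are masked to 7 bits, as MIDI requires."""
    return bytes([0x99, note & 0x7F, velocity & 0x7F])

# General MIDI percussion key numbers (from the GM Level 1 spec):
BASS_DRUM, SNARE, CLOSED_HAT = 36, 38, 42

msg = drum_note_on(BASS_DRUM, 100)  # 3 bytes: 0x99, 0x24, 0x64
```

A sequencer on a computer emits streams of such messages, timed by its own clock, and any MIDI-equipped drum machine will respond by playing the corresponding sounds.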
Comparison with live drumming
While drum machines have been used widely in popular music since the 1980s, "...scientific studies show there are certain aspects of human-created rhythm that machines cannot replicate, or can only replicate poorly", such as the "feel" of human drumming and the ability of a human drummer to respond to changes in a song as it is being played live onstage. Human drummers also have the ability to make slight variations in their playing, such as playing "ahead of the beat" or "behind the beat" for sections of a song, in contrast to a drum machine that plays a pre-programmed rhythm. They also play a "tremendously wide variety of rhythmic variations" that drum machines cannot reproduce.
Labor costs
Increasingly, drum machines and drum programming are used by major record labels to avoid the expense of hiring studio drummers.
See also
Electronic drum
Groovebox (generic groove machines)
Music sequencer
References
External links
http://drum-machines-history.blogspot.co.uk
Music sequencers
Electronic musical instruments
Drums | Drum machine | [
"Engineering"
] | 4,008 | [
"Music sequencers",
"Automation"
] |
45,856 | https://en.wikipedia.org/wiki/Monastery | A monastery is a building or complex of buildings comprising the domestic quarters and workplaces of monastics, monks or nuns, whether living in communities or alone (hermits). A monastery generally includes a place reserved for prayer which may be a chapel, church, or temple, and may also serve as an oratory, or in the case of communities anything from a single building housing only one senior and two or three junior monks or nuns, to vast complexes and estates housing tens or hundreds. A monastery complex typically comprises a number of buildings which include a church, dormitory, cloister, refectory, library, balneary and infirmary and outlying granges. Depending on the location, the monastic order and the occupation of its inhabitants, the complex may also include a wide range of buildings that facilitate self-sufficiency and service to the community. These may include a hospice, a school, and a range of agricultural and manufacturing buildings such as a barn, a forge, or a brewery.
In English usage, the term monastery is generally used to denote the buildings of a community of monks. In modern usage, convent tends to be applied only to institutions of female monastics (nuns), particularly communities of teaching or nursing religious sisters. Historically, a convent denoted a house of friars (reflecting the Latin), now more commonly called a friary. Various religions may apply these terms in more specific ways.
Etymology
The word monastery comes from the Greek word μοναστήριον, neut. of μοναστήριος – monasterios from μονάζειν – monazein "to live alone" from the root μόνος – monos "alone" (originally all Christian monks were hermits); the suffix "-terion" denotes a "place for doing something". The earliest extant use of the term monastērion is by the 1st century AD Jewish philosopher Philo in On The Contemplative Life, ch. III.
In England, the word monastery was also applied to the habitation of a bishop and the cathedral clergy who lived apart from the lay community. Most cathedrals were not monasteries, and were served by secular canons, who lived communally but were not monastic. However, some were run by monastic orders, such as Durham Cathedral. Westminster Abbey was for a short time a cathedral, and was a Benedictine monastery until the Reformation; its Chapter preserves elements of the Benedictine tradition. See the entry cathedral. Cathedrals are also to be distinguished from collegiate churches, such as St George's Chapel, Windsor.
Terms
The term monastery is used generically to refer to any of a number of types of religious community. In the Roman Catholic religion and to some extent in certain branches of Buddhism, there is a somewhat more specific definition of the term and many related terms.
Buddhist monasteries are generally called vihara (from the Pali vihāra). Viharas may be occupied by men or women, and in keeping with common English usage, a vihara populated by females may often be called a nunnery or a convent. However, vihara can also refer to a temple. In Tibetan Buddhism, monasteries are often called gompa. In Cambodia, Laos and Thailand, a monastery is called a wat. In Burma, a monastery is called a kyaung.
A Christian monastery may be an abbey (i.e., under the rule of an abbot), or a priory (under the rule of a prior), or conceivably a hermitage (the dwelling of a hermit). It may be a community of men (monks) or of women (nuns). A charterhouse is any monastery belonging to the Carthusian order. In Eastern Christianity, a very small monastic community can be called a skete, and a very large or important monastery can be given the dignity of a lavra.
The communal life of a Christian monastery is called cenobitic, as opposed to the anchoretic (or anchoritic) life of an anchorite and the eremitic life of a hermit. There has also been, mostly under the Ottoman occupation of Greece and Cyprus, an "idiorrhythmic" lifestyle in which monks come together but are able to own things individually and are not obliged to work for the common good.
In Hinduism monasteries are called matha, mandir, koil, or most commonly an ashram.
Jains use the Buddhist term vihara.
Monastic life
In most religions, life inside monasteries is governed by community rules that stipulate the gender of the inhabitants and require them to remain celibate and own little or no personal property. The degree to which life inside a particular monastery is socially separate from the surrounding populace can also vary widely; some religious traditions mandate isolation for purposes of contemplation removed from the everyday world, in which case members of the monastic community may spend most of their time isolated even from each other. Others focus on interacting with the local communities to provide services, such as teaching, medical care, or evangelism. Some monastic communities are only occupied seasonally, depending both on the traditions involved and the local climate, and people may be part of a monastic community for periods ranging from a few days at a time to almost an entire lifetime.
Life within the walls of a monastery may be supported in several ways: by manufacturing and selling goods, often agricultural products; by donations or alms; by rental or investment incomes; and by funds from other organizations within the religion, which in the past formed the traditional support of monasteries. There has been a long tradition of Christian monasteries providing hospitable, charitable and hospital services. Monasteries have often been associated with the provision of education and the encouragement of scholarship and research, which has led to the establishment of schools and colleges and the association with universities. Monastic life has adapted to modern society by offering computer services, accounting services and management as well as modern hospital and educational administration.
Buddhism
Buddhist monasteries, known as vihāra in Pali and in Sanskrit, emerged sometime around the fourth century BCE from the practice of vassa, a retreat undertaken by Buddhist monastics during the South Asian wet season. To prevent wandering monks and nuns from disturbing new plant-growth or becoming stranded in inclement weather, they were instructed to remain in a fixed location for the roughly three-month period typically beginning in mid-July.
These early fixed vassa retreats took place in pavilions and parks that wealthy supporters had donated to the sangha. Over the years, the custom of staying on property held in common by the sangha as a whole during the vassa retreat evolved into cenobitic monasticism, in which monks and nuns resided year-round in monasteries.
In India, Buddhist monasteries gradually developed into centres of learning where philosophical principles were developed and debated; this tradition continues in the monastic universities of Vajrayana Buddhists, as well as in religious schools and universities founded by religious orders across the Buddhist world. In modern times, living a settled life in a monastery setting has become the most common lifestyle for Buddhist monks and nuns across the globe.
Whereas early monasteries are considered to have been held in common by the entire sangha, in later years this tradition diverged in a number of countries. Despite vinaya prohibitions on possessing wealth, many monasteries became large landowners, much like monasteries in medieval Christian Europe. In Chinese Buddhism, peasant families worked monastic-owned land in exchange for paying a portion of their yearly crop to the resident monks in the monastery, just as they would to a feudal landlord. In Sri Lanka and in Tibetan Buddhism, the ownership of a monastery often became vested in a single monk, who would often keep the property within the family by passing it on to a nephew ordained as a monk. In Japan, where civil authorities permitted Buddhist monks to marry, the position of head of a temple or monastery sometimes became hereditary, passed from father to son over many generations.
Forest monasteries – most commonly found in the Theravada traditions of Southeast Asia and Sri Lanka – are monasteries dedicated primarily to the study and cultivation of Buddhist meditation, rather than to scholarship or ceremonial duties. Forest monasteries often function like early Christian monasteries, with small groups of monks living an essentially hermit-like life gathered loosely around a respected elder teacher. While the wandering lifestyle practised by the Buddha and by his disciples continues to be the ideal model for forest-tradition monks in Thailand, Myanmar, Sri Lanka and elsewhere, practical concerns (shrinking wilderness areas, lack of access to lay supporters, dangerous wildlife, and dangerous border conflicts) dictate that increasing numbers of "meditation" monks live in monasteries, rather than wandering.
Tibetan Buddhist monasteries or gompas are sometimes known as lamaseries, with their monks sometimes (mistakenly) known as lamas. Helena Blavatsky's Theosophical Society named its initial New York City meeting-place "the Lamasery".
Famous Buddhist monasteries include:
Donglin Temple, Jiangxi, China
Jetavana, Shravasti (India)
Nalanda, India
Shaolin Monastery, China
Tengboche Monastery, Nepal
For a further list of Buddhist monasteries see list of Buddhist temples.
Trends
Buddhist monasteries include some of the largest in the world. Drepung Monastery in Tibet housed around 10,000 monks prior to the Chinese invasion in 1950–1951; today, the relocated monastery in India houses around 8,000.
Christianity
According to tradition, Christian monasticism began in Egypt with Anthony the Great. Originally, all Christian monks were hermits, seldom encountering other people.
A transitional form of monasticism was later created by Ammonas in which "solitary" monks lived close enough to one another to offer mutual support as well as gathering together on Sundays for common services.
It was Pachomius the Great who developed the idea of cenobitic monasticism: having renunciates live together and worship together under the same roof. Some attribute his mode of communal living to the barracks of the Roman army, in which Pachomius served as a young man. Soon the Egyptian desert blossomed with monasteries, especially around Nitria (Wadi El Natrun), which was called the "Holy City". Estimates are that upwards of 50,000 monks lived in this area at any one time. Eremitism never died out, though; it was reserved for those advanced monks who had worked out their problems within a cenobitic monastery.
The idea caught on, and other places followed:
Upon his return from the Council of Serdica, Athanasius of Alexandria established the first Christian monastery in Europe circa 344 near modern-day Chirpan in Bulgaria.
Mar Awgin founded a monastery on Mount Izla above Nusaybin in Mesopotamia (~350), and from this monastery the cenobitic tradition spread in Mesopotamia, Persia, Armenia, Georgia and even India and China.
Mar Saba organized the monks of the Judaean Desert in a monastery close to Bethlehem (483), and this is considered the mother of all monasteries of Eastern Orthodoxy.
Benedict of Nursia founded the monastery of Monte Cassino in Italy (529), which was the seed of Roman Catholic monasticism in general, and of the Order of Saint Benedict in particular.
The Carthusians were founded by Bruno of Cologne at the Grande Chartreuse, from which the religious Order takes its name, in the eleventh century as an eremitical community, and remains the motherhouse of the Order.
Jerome and Paula of Rome decided to go and live a hermit's life in Bethlehem and founded several monasteries in the Holy Land. This way of life inspired the foundation of the Hieronymites in Spain and Portugal. The Monastery of Santa María del Parral in Segovia is the motherhouse of the Order.
Western Medieval Europe
The life of prayer and communal living was one of rigorous schedules and self-sacrifice. Prayer was their work, and the Office prayers took up much of a monk's waking hours – Matins, Lauds, Prime, Terce, daily Mass, Sext, None, Vespers, and Compline. In between prayers, monks were allowed to sit in the cloister and work on their projects of writing, copying, or decorating books. These would have been assigned based on a monk's abilities and interests. The non-scholastic types were assigned to physical labour of varying degrees.
The main meal of the day took place around noon, often taken at a refectory table, and consisted of the simplest and blandest foods, e.g., poached fish or boiled oats. While they ate, scripture would be read from a pulpit above them. Since no other words were allowed to be spoken, monks developed communicative gestures. Abbots and notable guests were honoured with a seat at the high table, while everyone else sat perpendicular to that in order of seniority. This practice remained when some monasteries became universities after the first millennium, and can still be seen at Oxford University and Cambridge University.
Monasteries were important contributors to the surrounding community. They were centres of intellectual progression and education. They welcomed aspiring priests to come and study and learn, allowing them even to challenge doctrine in dialogue with superiors. The earliest forms of musical notation are attributed to a monk named Notker of St Gall, and were spread to musicians throughout Europe by way of the interconnected monasteries. Since monasteries offered respite for weary pilgrim travellers, monks were also obligated to care for their injuries or emotional needs. Over time, lay people started to make pilgrimages to monasteries instead of just using them as a stopover. By this time, they had sizeable libraries that attracted learned tourists. Families would donate a son in return for blessings. During the plagues, monks helped to till the fields and provide food for the sick.
A Warming House is a common part of a medieval monastery, where monks went to warm themselves. It was often the only room in the monastery where a fire was lit.
Catholic
A number of distinct monastic orders developed within Roman Catholicism:
Camaldolese monks
Canons Regular of the Order of the Holy Cross, priests and brothers, all of whom live together like monks according to the Rule of St. Augustine;
Carmelite hermits and Carmelite nuns (from the Ancient Observance and Discalced branch);
Cistercian Order, with monks and nuns (both of the Original Observance and of the Trappist reform);
Monks and Sisters of Bethlehem
Order of Minims, founded by Francis of Paola
Order of Saint Benedict, known as the Benedictine monks and nuns, founded by Benedict of Nursia with Scholastica, stresses manual labour in self-subsistent monasteries. See also: Cluniac Reforms;
Order of Saint Claire, best known as the Poor Clares (of all the observances);
Order of Saint Jerome, inspired by Jerome and Paula of Rome, known as the Hieronymite monks and nuns;
Order of Saint Paul the First Hermit, known as the Pauline Fathers;
Order of the Annunciation of the Blessed Virgin Mary, also known as Sisters of the Annunciation or Annociades, founded by Joan of France;
Order of the Carthusians, a hermitical religious order founded by Bruno of Cologne;
Order of the Immaculate Conception, also known as the Conceptionists, founded by Beatrice of Silva;
Order of the Most Holy Annunciation, also known as Turchine Nuns or Blue Nuns, founded by Maria Vittoria De Fornari Strata;
Order of the Most Holy Savior, known as Bridgettine nuns and monks, founded by Bridget of Sweden;
Order of the Visitation of Holy Mary, known as the Visitandine nuns, founded by Francis de Sales and Jane Frances de Chantal;
Passionists
Premonstratensian canons ("The White Canons")
Tironensian monks ("The Grey Monks")
Valliscaulian monks
While in English most mendicant Orders use the monastic terms of monastery or priory, in the Latin languages the term used by the friars for their houses is convent, from the Latin conventus, meaning "gathering place". The Franciscans rarely use the term "monastery" at present, preferring to call their house a "friary".
Eastern Orthodox
In the Eastern Orthodox Church and Eastern Catholic Church, both monks and nuns follow a similar ascetic discipline, and even their religious habit is the same (though nuns wear an extra veil, called the apostolnik). Unlike Roman Catholic monasticism, the Eastern Orthodox do not have distinct religious orders, but a single monastic form throughout the Eastern Orthodox Church. Monastics, male or female, live away from the world, in order to pray for the world.
Monasteries vary from the very large to the very small. There are three types of monastic houses in the Eastern Orthodox Church:
A cenobium is a monastic community where monks live together, work together, and pray together, following the directions of an abbot and the elder monks. The concept of the cenobitic life is that when many men (or women) live together in a monastic context, like rocks with sharp edges, their "sharpness" becomes worn away and they become smooth and polished. The largest monasteries can hold many thousands of monks and are called lavras. In the cenobium the daily office, work and meals are all done in common.
A skete is a small monastic establishment that usually consists of one elder and two or three disciples. In the skete most prayer and work are done in private, with the members coming together on Sundays and feast days. Thus skete life has elements of both solitude and community, and for this reason it is called the "middle way".
A hermit is a monk who practises asceticism but lives in solitude rather than in a monastic community.
One of the great centres of Eastern Orthodox monasticism is Mount Athos in Greece, which, like Vatican City, is self-governing. It is located on an isolated peninsula and is administered by the heads of the 20 monasteries. Today the population of the Holy Mountain is around 2,200 men, and it can be visited only by men with special permission granted by both the Greek government and the government of the Holy Mountain itself.
Oriental Orthodox
The Oriental Orthodox churches, distinguished by their Miaphysite beliefs, consist of the Armenian Apostolic Church, Coptic Orthodox Church of Alexandria (whose Patriarch is considered first among equals for the following churches), Ethiopian Orthodox Tewahedo Church, Eritrean Orthodox Tewahedo Church, Indian Orthodox Church, and Syriac Orthodox Church of Antioch.
The monasteries of St. Macarius (Deir Abu Makaria) and St. Anthony (Deir Mar Antonios) are the oldest monasteries in the world and under the patronage of the Patriarch of the Coptic Orthodox Church.
Others
The last years of the 18th century marked in the Christian Church the beginnings of a growth of monasticism among Protestant denominations. The center of this movement was in the United States and Canada, beginning with the Shaker Church, which was founded in England and then moved to the United States. In the 19th century many of these monastic societies were founded as Utopian communities based on the monastic model. Aside from the Shakers, there were the Amana colonists, the Anabaptists, and others. Many allowed marriage, but most had a policy of celibacy and of communal life in which members shared all things and disavowed personal ownership.
In the 19th century monasticism was revived in the Church of England, leading to the foundation of such institutions as the House of the Resurrection, Mirfield (Community of the Resurrection), Nashdom Abbey (Benedictine), Cleeve Priory (Community of the Glorious Ascension) and Ewell Monastery (Cistercian), as well as Benedictine orders, Franciscan orders, the Order of the Holy Cross, and the Order of St. Helena. Other Protestant Christian denominations also engage in monasticism, particularly Lutherans in Europe and North America. For example, the Benedictine order of the Holy Cross at St Augustine's House in Michigan is a Lutheran order of monks, and there are Lutheran religious communities in Sweden and Germany. In the 1960s experimental monastic groups were formed in which both men and women were members of the same house and were also permitted to marry and have children; these were operated in a communal form.
Trends
There is a growing Christian neo-monasticism, particularly among evangelical Christians.
Hinduism
Advaita Vedanta
In Hinduism, monks have existed for a long time, and with them, their respective monasteries, called mathas. Important among them are the chatur-amnaya mathas established by Adi Shankara, which formed the nodal centres under whose guidance the ancient Order of Advaitin monks was re-organised under the ten names of the Dashanami Sampradaya.
Ramakrishna Math
Sri Vaishnava
Ramanuja heralded a new era in the world of Hinduism by reviving the lost faith in it and gave a firm doctrinal basis to the Vishishtadvaita philosophy which had existed since time immemorial. He ensured the establishment of a number of mathas of his Sri Vaishnava creed at different important centres of pilgrimage.
Emar Matha at Puri
Sriranga Narayana Jeeyar Mutt at Srirangam
Tirumala Pedda Jeeyangar Mutt at Tirupati
Later on, other famous Sri Vaishnava theologians and religious heads established various important mathas such as
Vanamamalai Mutt
Parakala Mutt
Ahobila Mutt
Nimbarka Vaishnava
Nimbarka Sampradaya of Nimbarkacharya is popular in North, West and East India and has several important Mathas.
Nimbarakacharya Peeth at Salemabad, Rajasthan
Kathia Baba ka Sthaan at Vrindavan
Ukhra Mahanta Asthal at Ukhra in West Bengal
Howrah Nimbarka Ashram at Howrah
Dvaita Vedanta
Ashta matha (eight monasteries) of Udupi were founded by Madhvacharya (Madhva acharya), a Dvaita philosopher.
Gaud Saraswat Math
Kashi Math at Varanasi, Uttar Pradesh
Gokarna Math at Canacona, Goa
Jainism
Jainism, founded by Mahavira, has had its own monasteries since the 5th century BC.
Sufism
Islam discourages monasticism, which is referred to in the Quran as "an invention". However, the term "Sufi" is applied to Muslim mystics who, as a means of achieving union with Allah, adopted ascetic practices, including wearing a garment made of coarse wool called "ṣūf". The term "Sufism" comes from "ṣūf" and originally designated the person who wears ṣūf; in the course of time, however, "Sufi" has come to designate all Muslim believers in mystic union.
Monasteries in literature
Matthew Lewis's 1796 Gothic novel The Monk has as parts of its setting both a fictional monastery and a nunnery in Spain at the time of the Inquisition. Many have interpreted Lewis's novel as a critique of Catholicism. Jane Austen sets the latter half of her 1818 novel Northanger Abbey in a disused monastery, reflecting on Henry VIII's abolition of monasticism in England and the contemporary abolition of monasticism in France in the wake of the French Revolution. Convents for female monastics, or nunneries, were often portrayed as punishments for women unable or unwilling to marry.
In the 1880 novel The Brothers Karamazov, Fyodor Dostoyevsky was heavily inspired by real-life accounts of Orthodox monasticism. Parts of the novel focus in particular on the controversy surrounding the institution of "elderhood" in Orthodox monasticism. Dostoyevsky's understanding of the tradition of elderhood is taken largely from Life of Elder Leonid of Optina by Father Kliment Zedergol'm, from which he quotes directly in chapter 5, book 1 of The Brothers Karamazov.
See also
Dissolution of the monasteries
Ecovillage
Intentional community
Khanqah
Krishnapura matha
List of abbeys and priories
List of Buddhist temples
List of monasteries of the Ukrainian Orthodox Church (Moscow Patriarchate)
Monasticism
Mount Athos
New Monasticism
Pilgrimage
Religious order
Rota (architecture)
Shivalli
Taoism
Thomas Merton
Vihara
Wudangshan
Zawiya
References
External links
Public Domain photographs and texts, and information regarding medieval monasteries.
Monastery Italy
Monasteries Search (UOC Synod Commission for Monasteries)
Google map (UOC Synod Commission for Monasteries)
Religious buildings and structures
Total institutions
Hurwitz polynomial
In mathematics, a Hurwitz polynomial (named after German mathematician Adolf Hurwitz) is a polynomial whose roots (zeros) are located in the left half-plane of the complex plane or on the imaginary axis, that is, the real part of every root is zero or negative. Such a polynomial must have coefficients that are positive real numbers. The term is sometimes restricted to polynomials whose roots have real parts that are strictly negative, excluding the imaginary axis (i.e., a Hurwitz stable polynomial).
A polynomial function P(s) of a complex variable s is said to be Hurwitz if the following conditions are satisfied:
P(s) is real when s is real.
The roots of P(s) have real parts which are zero or negative.
Hurwitz polynomials are important in control systems theory, because they represent the characteristic equations of stable linear systems. Whether a polynomial is Hurwitz can be determined by solving the equation P(s) = 0 to find the roots, or from the coefficients without solving the equation by the Routh–Hurwitz stability criterion.
Examples
A simple example of a Hurwitz polynomial is:
x² + 2x + 1
The only real solution is −1, because it factors as
(x + 1)².
In general, all quadratic polynomials with positive coefficients are Hurwitz.
This follows directly from the quadratic formula:
x = (−b ± √(b² − 4ac)) / (2a)
where, if the discriminant b² − 4ac is less than zero, then the polynomial will have two complex-conjugate solutions with real part −b/2a, which is negative for positive a and b.
If the discriminant is equal to zero, there will be two coinciding real solutions at −b/2a. Finally, if the discriminant is greater than zero, there will be two real negative solutions, because √(b² − 4ac) < b for positive a, b and c.
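The three discriminant cases above are easy to check numerically; the following is a minimal sketch in Python (the helper name and the sample coefficients are illustrative, not from the article):

```python
import cmath

def quadratic_roots(a, b, c):
    # Roots of a*x^2 + b*x + c by the quadratic formula;
    # cmath.sqrt handles a negative discriminant transparently.
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# Any quadratic with positive coefficients should be Hurwitz,
# i.e. both roots have a negative real part:
for a, b, c in [(1, 2, 1), (1, 1, 1), (2, 5, 1)]:
    for root in quadratic_roots(a, b, c):
        assert root.real < 0
```

The three sample triples cover zero, negative, and positive discriminants, matching the case analysis in the text.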
Properties
For a polynomial to be Hurwitz, it is necessary but not sufficient that all of its coefficients be positive (for quadratic polynomials, positive coefficients are also sufficient). A necessary and sufficient condition for a polynomial to be Hurwitz is that it pass the Routh–Hurwitz stability criterion. A given polynomial can be tested efficiently using the Routh continued-fraction expansion technique.
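The Routh-array form of the test mentioned above can be sketched in a few lines of Python. This is an illustrative pure-Python implementation of the strict (open left half-plane) test under the assumption of positive coefficients; the function name is made up for this sketch, and it does not handle the degenerate zero-row cases that a full Routh–Hurwitz treatment covers:

```python
def is_strictly_hurwitz(coeffs):
    """Routh-array test: True iff all roots lie strictly in the open
    left half-plane. `coeffs` lists coefficients from the highest
    degree down, e.g. [1, 4, 6, 4] for s^3 + 4s^2 + 6s + 4."""
    n = len(coeffs) - 1                      # polynomial degree
    if n < 1 or any(c <= 0 for c in coeffs):
        return False                         # positive coefficients are necessary
    width = n // 2 + 1
    row0 = (list(coeffs[0::2]) + [0.0] * width)[:width]
    row1 = (list(coeffs[1::2]) + [0.0] * width)[:width]
    rows = [row0, row1]
    for _ in range(n - 1):
        prev, cur = rows[-2], rows[-1]
        if cur[0] == 0:
            return False                     # zero pivot: not strictly Hurwitz
        nxt = [(cur[0] * prev[i + 1] - prev[0] * cur[i + 1]) / cur[0]
               for i in range(width - 1)] + [0.0]
        rows.append(nxt)
    # Strictly Hurwitz iff the first column of the Routh array is all positive.
    return all(row[0] > 0 for row in rows)

print(is_strictly_hurwitz([1, 4, 6, 4]))   # True: roots are -2 and -1 +/- i
print(is_strictly_hurwitz([1, 1, 2, 8]))   # False: a complex pair lies in the RHP
```

The number of sign changes in the first column of the array equals the number of right-half-plane roots, which is why a single non-positive entry is enough to reject the polynomial here.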
Polynomials
National Center for Supercomputing Applications
The National Center for Supercomputing Applications (NCSA) is a state-federal partnership to develop and deploy national-scale cyberinfrastructure that advances research, science and engineering based in the United States. NCSA operates as a unit of the University of Illinois Urbana-Champaign,
and provides high-performance computing resources to researchers across the country. Support for NCSA comes from the National Science Foundation,
the state of Illinois, the University of Illinois, business and industry partners, and other federal agencies.
NCSA provides leading-edge computing, data storage, and visualization resources. NCSA's computational and data environment implements a multi-architecture hardware strategy, deploying both clusters and shared-memory systems to support high-end users and communities on the architectures best suited to their requirements. Nearly 1,360 scientists, engineers and students have used the computing and data systems at NCSA to support research in more than 830 projects.
NCSA is led by Professor Bill Gropp.
History
NCSA is one of the five original centers in the National Science Foundation's Supercomputer Centers Program. The idea for NCSA and the four other supercomputer centers arose from the frustration of its founder, Larry Smarr, who wrote an influential paper, "The Supercomputer Famine in American Universities", in 1982, after having to travel to Europe in summertime to access supercomputers and conduct his research.
Smarr wrote a proposal to address the future needs of scientific research. Seven other University of Illinois professors joined as co-principal investigators, and many others provided descriptions of what could be accomplished if the proposal were accepted. Known as the Black Proposal (after the color of its cover), it was submitted to the NSF in 1983. It met the NSF's mandate and its contents immediately generated excitement. However, the NSF had no organization in place to support it, and the proposal itself did not contain a clearly defined home for its implementation.
The NSF established an Office of Scientific Computing in 1984 and, with strong congressional support, it announced a national competition that would fund a set of supercomputer centers like the one described in the Black Proposal.
The result was that four supercomputer centers would be chartered (Cornell, Illinois, Princeton, and San Diego), with a fifth (Pittsburgh) added later.
The Black Proposal was approved in 1985 and marked the foundation of NCSA, with $42,751,000 in funding from 1 January 1985 through 31 December 1989. This was also noteworthy in that the NSF's action of approving an unsolicited proposal was unprecedented. NCSA opened its doors in January 1986.
In 2007, NCSA was awarded a grant from the National Science Foundation to build "Blue Waters", a supercomputer capable of performing quadrillions of calculations per second, a level of performance known as petascale.
Black Proposal
The 'Black Proposal' was a short, ten-page proposal for the creation of a supercomputing center that eventually led to funding from the National Science Foundation (NSF) to create supercomputing centers, including the National Center for Supercomputing Applications (NCSA) at the University of Illinois. In this sense, the significant role played by the U.S. Government in funding the center, and the first widely popular web browser (NCSA's Mosaic), cannot be denied.
The Black Proposal described the limitations on any scientific research that required computer capabilities, and it described a future world of productive scientific collaboration, centered on universal computer access, in which technical limitations on scientific research would not exist. Significantly, it expressed a clear vision of how to get from the present to the future. The proposal was titled "A Center for Scientific and Engineering Supercomputing", and was ten pages long.
The elements of the proposal's vision of the computing future were then unusual or non-existent, but many are now commonplace, such as visualization, workstations, high-speed I/O, data storage, software engineering, and close collaboration with the multi-disciplinary user community.
Modern readers of the Black Proposal may gain insight into a world that no longer exists. Today's computers are easy to use, and the web is omnipresent. Employees in high-tech endeavors are given supercomputer accounts simply because they are employees. Computers are universally available and can be used by almost anyone of any age, applicable to almost anything.
At the time the proposal was written, computers were available to almost no one. For scientists who needed computers in their research, access was difficult if available at all. The effect on research was crippling. Reading publications from that time gives no hint that scientists were required to learn the arcane technical details of whatever computer facilities were available to them, a time-consuming limitation on their research, and an exceedingly tedious distraction from their professional interests.
The implementation of the Black Proposal had a primary role in shaping the computer technology of today, and its impact on research (both scientific and otherwise) has been profound. The proposal's description of the leading edge of scientific research may be sobering, and the limitations on computer usage at major universities may be surprising. A comprehensive list of the world's supercomputers shows the best resources that were then available. The thrust of the proposal may seem obvious now, but was then novel.
The National Science Foundation announced funding for the supercomputer centers in 1985; the first supercomputer at NCSA came online in January 1986.
NCSA quickly came to the attention of the worldwide scientific community with the release of NCSA Telnet in 1986. A number of other tools followed, and like NCSA Telnet, all were made available to everyone at no cost. In 1993, NCSA released the Mosaic web browser, the first popular graphical Web browser, which played an important part in expanding the growth of the World Wide Web. NCSA Mosaic was written by Marc Andreessen and Eric Bina, who went on to develop the Netscape Web browser. Mosaic was later licensed to Spyglass, Inc. which provided the foundation for Internet Explorer. The server-complement was called NCSA HTTPd, which later became known as Apache HTTP Server.
Other notable contributions by NCSA were the black hole simulations supporting the development of LIGO in 1992, the tracking of Comet Hale–Bopp in 1997, the creation of a PlayStation 2 Cluster in 2003, and the monitoring of the COVID-19 pandemic and creation of a COVID-19 vaccine.
Facilities
Initially, NCSA's administrative offices were in the Water Resources Building and employees were scattered across the campus. NCSA is now headquartered within its own building directly north of the Siebel Center for Computer Science, on the site of a former baseball field, Illini Field. NCSA's supercomputers are at the National Petascale Computing Facility.
The latest supercomputing system at NCSA is DeltaAI, funded by the National Science Foundation.
Movies and visualization
NCSA's visualization department is internationally well-known. Donna Cox, leader of the Advanced Visualization Laboratory at NCSA and a professor in the School of Art and Design at the University of Illinois Urbana-Champaign, and her team created visualizations for the Oscar-nominated IMAX film "Cosmic Voyage", the PBS NOVA episodes "Hunt for the Supertwister" and "Runaway Universe", as well as Discovery Channel documentaries and pieces for CNN and NBC Nightly News. Cox and NCSA worked with the American Museum of Natural History to produce high-resolution visualizations for the Hayden Planetarium's 2000 Millennium show, "Passport to the Universe", and for "The Search for Life: Are We Alone?" She produced visualizations for the Hayden's "Big Bang Theatre" and worked with the Denver Museum of Nature and Science to produce high-resolution data-driven visualizations of terabytes of scientific data for "Black Holes: The Other Side of Infinity", a digital dome program on black holes.
Private business partners
Referred to as the Industrial Partners program when it began in 1986, NCSA's collaboration with major corporations ensured that its expertise and emerging technologies would be relevant to major challenges outside of the academic world, as those challenges arose. Business partners had no control over research or the disposition of its results, but they were well-situated to be early adopters of any benefits of the research. This program is now called NCSA Industry.
Past and current business partners include:
Abaqus
Abbvie
Allstate
American Airlines
AT&T
Boeing Phantom Works
Caterpillar
Dell
Dow Chemical
Eastman Kodak
Eli Lilly and Company
ExxonMobil
FMC Corporation
Ford
IBM
Illinois Rocstar
Innerlink
John Deere
JPMorgan Chase
Kellogg's
McDonnell Douglas (now part of Boeing)
Motorola
Nielsen Corporation
Phillips Petroleum Company
Schlumberger
Sears
Shell
State Farm
Tribune Media Company
United Technologies
Notable NCSA scientists (sorted by last names)
Donna Cox
Bill Gropp
Larry Smarr
Ed Seidel
See also
Beckman Institute for Advanced Science and Technology
Coordinated Science Laboratory
Cyberinfrastructure
NCSA Brown Dog
References
External links
Buildings and structures of the University of Illinois Urbana-Champaign
University of Illinois Urbana-Champaign centers and institutes
Cyberinfrastructure
E-Science
History of the Internet
National Science Foundation
Research institutes established in 1986
Supercomputer sites
1986 establishments in the United States
Computer science institutes in the United States
Loudspeaker
A loudspeaker (commonly referred to as a speaker or, more fully, a speaker system) is a combination of one or more speaker drivers, an enclosure, and electrical connections (possibly including a crossover network). The speaker driver is an electroacoustic transducer that converts an electrical audio signal into a corresponding sound.
The driver is a linear motor connected to a diaphragm, which transmits the motor's movement to produce sound by moving air. An audio signal, typically originating from a microphone, recording, or radio broadcast, is electronically amplified to a power level sufficient to drive the motor, reproducing the sound corresponding to the original unamplified signal. This process functions as the inverse of a microphone. In fact, the dynamic speaker driver—the most common type—shares the same basic configuration as a dynamic microphone, which operates in reverse as a generator.
The dynamic speaker was invented in 1925 by Edward W. Kellogg and Chester W. Rice. When the electrical current from an audio signal passes through its voice coil—a coil of wire capable of moving axially in a cylindrical gap containing a concentrated magnetic field produced by a permanent magnet—the coil is forced to move rapidly back and forth due to Faraday's law of induction; this attaches to a diaphragm or speaker cone (as it is usually conically shaped for sturdiness) in contact with air, thus creating sound waves. In addition to dynamic speakers, several other technologies are possible for creating sound from an electrical signal, a few of which are in commercial use.
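The motor action just described is often summarized by the force law F = B·l·i: the gap's flux density times the length of voice-coil wire in the gap times the signal current. A back-of-the-envelope calculation (all figures below are hypothetical, not from any real driver):

```python
def coil_force_newtons(b_tesla, wire_length_m, current_a):
    # Force on a current-carrying voice coil in a radial magnetic
    # field: F = B * l * i (the "Bl product" times the current).
    return b_tesla * wire_length_m * current_a

# Hypothetical driver: 1.0 T gap field, 8 m of wire in the gap, 2 A drive
print(coil_force_newtons(1.0, 8.0, 2.0), "N")  # 16.0 N
```

Because the force is proportional to the signal current, the cone's motion tracks the audio waveform, which is the linear-motor behavior described above.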
For a speaker to efficiently produce sound, especially at lower frequencies, the speaker driver must be baffled so that the sound emanating from its rear does not cancel out the (intended) sound from the front; this generally takes the form of a speaker enclosure or speaker cabinet, an often rectangular box made of wood, but sometimes metal or plastic. The enclosure's design plays an important acoustic role thus determining the resulting sound quality. Most high fidelity speaker systems (picture at right) include two or more sorts of speaker drivers, each specialized in one part of the audible frequency range. The smaller drivers capable of reproducing the highest audio frequencies are called tweeters, those for middle frequencies are called mid-range drivers and those for low frequencies are called woofers. Sometimes the reproduction of the very lowest frequencies (20–~50 Hz) is augmented by a so-called subwoofer often in its own (large) enclosure. In a two-way or three-way speaker system (one with drivers covering two or three different frequency ranges) there is a small amount of passive electronics called a crossover network which helps direct components of the electronic signal to the speaker drivers best capable of reproducing those frequencies. In a so-called powered speaker system, the power amplifier actually feeding the speaker drivers is built into the enclosure itself; these have become more and more common especially as computer speakers.
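To make the crossover idea concrete, the classic first-order two-way passive design places a series capacitor ahead of the tweeter (high-pass) and a series inductor ahead of the woofer (low-pass), with C = 1/(2πfR) and L = R/(2πf). The sketch below is illustrative only: the 2 kHz crossover point and 8-ohm nominal impedance are assumed values, and real designs must account for each driver's actual impedance curve:

```python
import math

def first_order_crossover(f_c_hz, r_ohm):
    """Series-capacitor (tweeter) and series-inductor (woofer) values
    for a first-order two-way crossover at f_c into a resistive load."""
    c_farads = 1.0 / (2 * math.pi * f_c_hz * r_ohm)
    l_henries = r_ohm / (2 * math.pi * f_c_hz)
    return c_farads, l_henries

c, l = first_order_crossover(2000.0, 8.0)   # assumed 2 kHz point, 8-ohm load
print(f"C = {c * 1e6:.2f} uF, L = {l * 1e3:.3f} mH")  # C = 9.95 uF, L = 0.637 mH
```

Each branch rolls off at 6 dB per octave beyond the crossover frequency; steeper (higher-order) networks add more components per branch.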
Smaller speakers are found in devices such as radios, televisions, portable audio players, personal computers (computer speakers), headphones, and earphones. Larger, louder speaker systems are used for home hi-fi systems (stereos), electronic musical instruments, sound reinforcement in theaters and concert halls, and in public address systems.
Terminology
The term loudspeaker may refer to individual transducers (also known as drivers) or to complete speaker systems consisting of an enclosure and one or more drivers.
To adequately and accurately reproduce a wide range of frequencies with even coverage, most loudspeaker systems employ more than one driver, particularly for higher sound pressure level (SPL) or maximum accuracy. Individual drivers are used to reproduce different frequency ranges. The drivers are named subwoofers (for very low frequencies); woofers (low frequencies); mid-range speakers (middle frequencies); tweeters (high frequencies); and sometimes supertweeters, for the highest audible frequencies and beyond. The terms for different speaker drivers differ, depending on the application. In two-way systems there is no mid-range driver, so the task of reproducing the mid-range sounds is divided between the woofer and tweeter. When multiple drivers are used in a system, a filter network, called an audio crossover, separates the incoming signal into different frequency ranges and routes them to the appropriate driver. A loudspeaker system with n separate frequency bands is described as n-way speakers: a two-way system will have a woofer and a tweeter; a three-way system employs a woofer, a mid-range, and a tweeter. Loudspeaker drivers of the type pictured are termed dynamic (short for electrodynamic) to distinguish them from other sorts including moving iron speakers, and speakers using piezoelectric or electrostatic systems.
History
Johann Philipp Reis installed an electric loudspeaker in his telephone in 1861; it was capable of reproducing clear tones, but later revisions could also reproduce muffled speech. Alexander Graham Bell patented his first electric loudspeaker (a moving iron type capable of reproducing intelligible speech) as part of his telephone in 1876, which was followed in 1877 by an improved version from Ernst Siemens. During this time, Thomas Edison was issued a British patent for a system using compressed air as an amplifying mechanism for his early cylinder phonographs, but he ultimately settled for the familiar metal horn driven by a membrane attached to the stylus. In 1898, Horace Short patented a design for a loudspeaker driven by compressed air; he then sold the rights to Charles Parsons, who was issued several additional British patents before 1910. A few companies, including the Victor Talking Machine Company and Pathé, produced record players using compressed-air loudspeakers. Compressed-air designs are significantly limited by their poor sound quality and their inability to reproduce sound at low volume. Variants of the design were used for public address applications, and more recently, other variations have been used to test space-equipment resistance to the very loud sound and vibration levels that the launching of rockets produces.
Moving-coil
The first experimental moving-coil (also called dynamic) loudspeaker was invented by Oliver Lodge in 1898. The first practical moving-coil loudspeakers were manufactured by Danish engineer Peter L. Jensen and Edwin Pridham in 1915, in Napa, California. Like previous loudspeakers these used horns to amplify the sound produced by a small diaphragm. Jensen was denied patents. Being unsuccessful in selling their product to telephone companies, in 1915 they changed their target market to radios and public address systems, and named their product Magnavox. Jensen was, for years after the invention of the loudspeaker, a part owner of The Magnavox Company.
The moving-coil principle commonly used today in speakers was patented in 1925 by Edward W. Kellogg and Chester W. Rice. The key difference between previous attempts and the patent by Rice and Kellogg is the adjustment of mechanical parameters to provide a reasonably flat frequency response.
These first loudspeakers used electromagnets, because large, powerful permanent magnets were generally not available at a reasonable price. The coil of an electromagnet, called a field coil, was energized by a current through a second pair of connections to the driver. This winding usually served a dual role, acting also as a choke coil, filtering the power supply of the amplifier that the loudspeaker was connected to. AC ripple in the current was attenuated by the action of passing through the choke coil. However, AC line frequencies tended to modulate the audio signal going to the voice coil and added to the audible hum. In 1930 Jensen introduced the first commercial fixed-magnet loudspeaker; however, the large, heavy iron magnets of the day were impractical and field-coil speakers remained predominant until the widespread availability of lightweight alnico magnets after World War II.
First loudspeaker systems
In the 1930s, loudspeaker manufacturers began to combine two and three drivers or sets of drivers each optimized for a different frequency range in order to improve frequency response and increase sound pressure level. In 1937, the first film industry-standard loudspeaker system, "The Shearer Horn System for Theatres", a two-way system, was introduced by Metro-Goldwyn-Mayer. It used four 15" low-frequency drivers, a crossover network set for 375 Hz, and a single multi-cellular horn with two compression drivers providing the high frequencies. John Kenneth Hilliard, James Bullough Lansing, and Douglas Shearer all played roles in creating the system. At the 1939 New York World's Fair, a very large two-way public address system was mounted on a tower at Flushing Meadows. The eight 27" low-frequency drivers were designed by Rudy Bozak in his role as chief engineer for Cinaudagraph. High-frequency drivers were likely made by Western Electric.
Altec Lansing introduced the 604, which became their most famous coaxial Duplex driver, in 1943. It incorporated a high-frequency horn that sent sound through a hole in the pole piece of a 15-inch woofer for near-point-source performance. Altec's "Voice of the Theatre" loudspeaker system was first sold in 1945, offering better coherence and clarity at the high output levels necessary in movie theaters. The Academy of Motion Picture Arts and Sciences immediately began testing its sonic characteristics; they made it the film house industry standard in 1955.
In 1954, Edgar Villchur developed the acoustic suspension principle of loudspeaker design. This allowed for better bass response than previously obtainable from drivers mounted in larger cabinets. He and his partner Henry Kloss formed the Acoustic Research company to manufacture and market speaker systems using this principle. Subsequently, continuous developments in enclosure design and materials led to significant audible improvements.
The most notable improvements to date in modern dynamic drivers, and the loudspeakers that employ them, are improvements in cone materials, the introduction of higher-temperature adhesives, improved permanent magnet materials, improved measurement techniques, computer-aided design, and finite element analysis. At low frequencies, Thiele/Small parameters electrical network theory has been used to optimize bass driver and enclosure synergy since the early 1970s.
Driver design: dynamic loudspeakers
Speaker systems
Speaker system design involves subjective perceptions of timbre and sound quality, measurements and experiments. Adjusting a design to improve performance is done using a combination of magnetic, acoustic, mechanical, electrical, and materials science theory, and tracked with high-precision measurements and the observations of experienced listeners. A few of the issues speaker and driver designers must confront are distortion, acoustic lobing, phase effects, off-axis response, and crossover artifacts. Designers can use an anechoic chamber to ensure the speaker can be measured independently of room effects, or any of several electronic techniques that, to some extent, substitute for such chambers. Some developers eschew anechoic chambers in favor of specific standardized room setups intended to simulate real-life listening conditions.
Individual electrodynamic drivers provide their best performance within a limited frequency range. Multiple drivers (e.g. subwoofers, woofers, mid-range drivers, and tweeters) are generally combined into a complete loudspeaker system to provide performance beyond that constraint. The three most commonly used sound radiation systems are the cone, dome and horn-type drivers.
Full-range drivers
A full- or wide-range driver is a speaker driver designed to be used alone to reproduce an audio channel without the help of other drivers, and therefore must cover the audio frequency range required by the application. These drivers are typically small in diameter to permit reasonable high-frequency response, and carefully designed to give low-distortion output at low frequencies, though with reduced maximum output level. Full-range drivers are found, for instance, in public address systems, in televisions, small radios, intercoms, and some computer speakers.
In hi-fi speaker systems, the use of wide-range drivers can avoid undesirable interactions between multiple drivers caused by non-coincident driver location or crossover network issues, but may also limit frequency response and output capability (most especially at low frequencies). Hi-fi speaker systems built with wide-range drivers may require large, elaborate, or expensive enclosures to approach optimum performance.
Full-range drivers often employ an additional cone called a whizzer: a small, light cone attached to the joint between the voice coil and the primary cone. The whizzer cone extends the high-frequency response of the driver and broadens its high-frequency directivity, which would otherwise be greatly narrowed due to the outer diameter cone material failing to keep up with the central voice coil at higher frequencies. The main cone in a whizzer design is manufactured so as to flex more in the outer diameter than in the center. The result is that the main cone delivers low frequencies and the whizzer cone contributes most of the higher frequencies. Since the whizzer cone is smaller than the main diaphragm, output dispersion at high frequencies is improved relative to an equivalent single larger diaphragm.
Limited-range drivers, also used alone, are typically found in computers, toys, and clock radios. These drivers are less elaborate and less expensive than wide-range drivers, and they may be severely compromised to fit into very small mounting locations. In these applications, sound quality is a low priority.
Subwoofer
A subwoofer is a woofer driver used only for the lowest-pitched part of the audio spectrum: typically below 200 Hz for consumer systems, below 100 Hz for professional live sound, and below 80 Hz in THX-approved systems. Because the intended range of frequencies is limited, subwoofer system design is usually simpler in many respects than for conventional loudspeakers, often consisting of a single driver enclosed in a suitable enclosure. Since sound in this frequency range can easily bend around corners by diffraction, the speaker aperture does not have to face the audience, and subwoofers can be mounted in the bottom of the enclosure, facing the floor. This is eased by the limitations of human hearing at low frequencies: such sounds cannot be located in space, because their wavelengths are large compared to those of higher frequencies, whose differential effects between the ears, produced by shadowing by the head and diffraction around it, provide the cues we rely upon for localization.
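The diffraction behavior described above follows directly from the wavelengths involved. A minimal sketch, assuming the usual speed of sound of about 343 m/s in air at room temperature:

```python
# Minimal sketch: wavelength of sound in air, assuming c ~ 343 m/s (20 C).

SPEED_OF_SOUND = 343.0  # m/s

def wavelength(frequency_hz: float) -> float:
    """Return the wavelength in metres for a given frequency in hertz."""
    return SPEED_OF_SOUND / frequency_hz

for f in (40, 80, 200, 1000, 10000):
    print(f"{f:>5} Hz -> {wavelength(f):8.3f} m")
```

At 80 Hz the wavelength exceeds 4 m, far larger than the spacing of human ears or the dimensions of a head, which is why subwoofer output bends around obstacles and resists localization.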
To accurately reproduce very low bass notes, subwoofer systems must be solidly constructed and properly braced to avoid unwanted sounds from cabinet vibrations. As a result, good subwoofers are typically quite heavy. Many subwoofer systems include integrated power amplifiers and electronic subsonic-filters, with additional controls relevant to low-frequency reproduction (e.g. a crossover knob and a phase switch). These variants are known as active or powered subwoofers. In contrast, passive subwoofers require external amplification.
In typical installations, subwoofers are physically separated from the rest of the speaker cabinets. Because of propagation delay and positioning, their output may be out of phase with the rest of the sound. Consequently, a subwoofer's power amplifier often has a phase-delay adjustment which may be used to improve the performance of the system as a whole. Subwoofers are widely used in large concert and mid-sized venue sound reinforcement systems. Subwoofer cabinets are often built with a bass reflex port, a design feature which, if properly engineered, improves bass performance and increases efficiency.
Woofer
A woofer is a driver that reproduces low frequencies. The driver works with the characteristics of the speaker enclosure to produce suitable low frequencies. Some loudspeaker systems use a woofer for the lowest frequencies, sometimes well enough that a subwoofer is not needed. Additionally, some loudspeakers use the woofer to handle middle frequencies, eliminating the mid-range driver.
Mid-range driver
A mid-range speaker is a loudspeaker driver that reproduces a band of frequencies generally between 1 and 6 kHz, otherwise known as the mid frequencies (between the woofer and tweeter). Mid-range driver diaphragms can be made of paper or composite materials, and the drivers can be direct radiators (rather like smaller woofers) or compression drivers (rather like some tweeter designs). If the mid-range driver is a direct radiator, it can be mounted on the front baffle of a loudspeaker enclosure, or, if a compression driver, mounted at the throat of a horn for added output level and control of radiation pattern.
Tweeter
A tweeter is a high-frequency driver that reproduces the highest frequencies in a speaker system. A major problem in tweeter design is achieving wide angular sound coverage (off-axis response), since high-frequency sound tends to leave the speaker in narrow beams. Soft-dome tweeters are widely found in home stereo systems, and horn-loaded compression drivers are common in professional sound reinforcement. Ribbon tweeters have gained popularity as the output power of some designs has been increased to levels useful for professional sound reinforcement, and their output pattern is wide in the horizontal plane, a pattern that has convenient applications in concert sound.
Coaxial drivers
A coaxial driver is a loudspeaker driver with two or more combined concentric drivers. Coaxial drivers have been produced by Altec, Tannoy, Pioneer, KEF, SEAS, B&C Speakers, BMS, Cabasse and Genelec.
System design
Crossover
Used in multi-driver speaker systems, the crossover is an assembly of filters that separate the input signal into different frequency bands according to the requirements of each driver. Hence the drivers receive power only in the sound frequency range they were designed for, thereby reducing distortion in the drivers and interference between them. Crossovers can be passive or active.
A passive crossover is an electronic circuit that uses a combination of one or more resistors, inductors and capacitors. These components are combined to form a filter network and are most often placed between the full frequency-range power amplifier and the loudspeaker drivers to divide the amplifier's signal into the necessary frequency bands before being delivered to the individual drivers. Passive crossover circuits need no external power beyond the audio signal itself, but have some disadvantages: they may require larger inductors and capacitors due to power handling requirements. Unlike active crossovers which include a built-in amplifier, passive crossovers have an inherent attenuation within the passband, typically leading to a reduction in damping factor before the voice coil.
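As an illustration of the passive approach, the textbook first-order (6 dB/octave) network reduces to a single series capacitor feeding the tweeter and a single series inductor feeding the woofer. The sketch below computes those values for an assumed 2 kHz crossover into a nominal 8 Ω load; real driver impedances vary with frequency, so these are only starting points, not a finished design:

```python
import math

def first_order_crossover(crossover_hz: float, impedance_ohms: float):
    """Component values for a first-order (6 dB/octave) passive crossover
    into a nominal resistive load: a series capacitor for the tweeter
    (high-pass) and a series inductor for the woofer (low-pass)."""
    capacitance = 1.0 / (2 * math.pi * crossover_hz * impedance_ohms)
    inductance = impedance_ohms / (2 * math.pi * crossover_hz)
    return capacitance, inductance

# Assumed example: 2 kHz crossover into a nominal 8-ohm load
cap, ind = first_order_crossover(2000.0, 8.0)
print(f"series capacitor: {cap * 1e6:.2f} uF")
print(f"series inductor:  {ind * 1e3:.3f} mH")
```

Higher-order networks add shunt components and interact more strongly with the drivers' reactive impedances, which is one reason commercial passive crossovers are considerably more complex than this sketch.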
An active crossover is an electronic filter circuit that divides the signal into individual frequency bands before power amplification, thus requiring at least one power amplifier for each band. Passive filtering may also be used in this way before power amplification, but it is an uncommon solution, being less flexible than active filtering. Any technique that uses crossover filtering followed by amplification is commonly known as bi-amping, tri-amping, quad-amping, and so on, depending on the minimum number of amplifier channels.
Some loudspeaker designs use a combination of passive and active crossover filtering, such as a passive crossover between the mid- and high-frequency drivers and an active crossover for the low-frequency driver.
Passive crossovers are commonly installed inside speaker boxes and are by far the most common type of crossover for home and low-power use. In car audio systems, passive crossovers may be in a separate box, necessary to accommodate the size of the components used. Passive crossovers may be simple for low-order filtering, or complex to allow steep slopes such as 18 or 24 dB per octave. Passive crossovers can also be designed to compensate for undesired characteristics of driver, horn, or enclosure resonances, and can be tricky to implement, due to component interaction. Passive crossovers, like the driver units that they feed, have power handling limits, have insertion losses, and change the load seen by the amplifier. The changes are matters of concern for many in the hi-fi world. When high output levels are required, active crossovers may be preferable. Active crossovers may be simple circuits that emulate the response of a passive network or may be more complex, allowing extensive audio adjustments. Some active crossovers, usually digital loudspeaker management systems, may include electronics and controls for precise alignment of phase and time between frequency bands, equalization, dynamic range compression and limiting.
Enclosures
Most loudspeaker systems consist of drivers mounted in an enclosure, or cabinet. The role of the enclosure is to prevent sound waves emanating from the back of a driver from interfering destructively with those from the front. The sound waves emitted from the back are 180° out of phase with those emitted forward, so without an enclosure they typically cause cancellations which significantly degrade the level and quality of sound at low frequencies.
The simplest driver mount is a flat panel (baffle) with the drivers mounted in holes in it. However, in this approach, sound frequencies with a wavelength longer than the baffle dimensions are canceled out because the antiphase radiation from the rear of the cone interferes with the radiation from the front. With an infinitely large panel, this interference could be entirely prevented. A sufficiently large sealed box can approach this behavior.
Since panels of infinite dimensions are impossible, most enclosures function by containing the rear radiation from the moving diaphragm. A sealed enclosure prevents transmission of the sound emitted from the rear of the loudspeaker by confining the sound in a rigid and airtight box. Techniques used to reduce the transmission of sound through the walls of the cabinet include thicker cabinet walls, internal bracing and lossy wall material.
However, a rigid enclosure reflects sound internally, which can then be transmitted back through the loudspeaker diaphragm—again resulting in degradation of sound quality. This can be reduced by internal absorption using absorptive materials such as glass wool, wool, or synthetic fiber batting, within the enclosure. The internal shape of the enclosure can also be designed to reduce this by reflecting sounds away from the loudspeaker diaphragm, where they may then be absorbed.
Other enclosure types alter the rear sound radiation so it can add constructively to the output from the front of the cone. Designs that do this (including bass reflex, passive radiator, transmission line, etc.) are often used to extend the effective low-frequency response and increase the low-frequency output of the driver.
To make the transition between drivers as seamless as possible, system designers have attempted to time align the drivers by moving one or more driver mounting locations forward or back so that the acoustic center of each driver is in the same vertical plane. This may also involve tilting the driver back, providing a separate enclosure mounting for each driver, or using electronic techniques to achieve the same effect. These attempts have resulted in some unusual cabinet designs.
The speaker mounting scheme (including cabinets) can also cause diffraction, resulting in peaks and dips in the frequency response. The problem is usually greatest at higher frequencies, where wavelengths are similar to, or smaller than, cabinet dimensions.
Horn loudspeakers
Horn loudspeakers are the oldest form of loudspeaker system. The use of horns as voice-amplifying megaphones dates at least to the 17th century, and horns were used in mechanical gramophones as early as 1877. Horn loudspeakers use a shaped waveguide in front of or behind the driver to increase the directivity of the loudspeaker and to transform a small-diameter, high-pressure condition at the driver cone surface into a large-diameter, low-pressure condition at the mouth of the horn. This improves the acoustic and electromechanical impedance match between the driver and ambient air, increasing efficiency and focusing the sound over a narrower area.
The size of the throat and mouth, the length of the horn, and the area expansion rate along it must all be carefully chosen to match the driver and properly provide this transforming function over a range of frequencies. The length and cross-sectional mouth area required for a bass or sub-bass horn dictate a horn many feet long. Folded horns can reduce the total size, but compel designers to make compromises and accept increased cost and construction complications. Some horn designs not only fold the low-frequency horn but use the walls of a room corner as an extension of the horn mouth. In the late 1940s, horns whose mouths took up much of a room wall were not unknown among hi-fi fans. Room-sized installations became much less acceptable when two or more were required.
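For the common exponential flare, the transforming function described above can be quantified: the horn loads the driver only above a cutoff frequency set by the flare constant. A sketch with invented dimensions (throat, mouth, and length are illustrative, not taken from any design in the text):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def exponential_horn(throat_area_m2: float, mouth_area_m2: float,
                     length_m: float):
    """Flare constant m and cutoff frequency fc of an exponential horn,
    where the cross-section grows as S(x) = S_throat * exp(m * x).
    Below fc the horn no longer loads the driver effectively."""
    m = math.log(mouth_area_m2 / throat_area_m2) / length_m
    fc = m * SPEED_OF_SOUND / (4 * math.pi)
    return m, fc

# Invented dimensions: 10 cm^2 throat, 0.5 m^2 mouth, 1.5 m axial length
m, fc = exponential_horn(0.001, 0.5, 1.5)
print(f"flare constant = {m:.2f} 1/m, cutoff = {fc:.0f} Hz")
```

The roughly 113 Hz result illustrates why full bass horns are so large: lowering the cutoff requires a slower flare, and therefore a longer horn and a bigger mouth.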
A horn-loaded speaker can have a sensitivity as high as 110 dB at 2.83 volts (1 watt at 8 ohms) at 1 meter. This is a hundredfold increase in output compared to a speaker rated at 90 dB sensitivity and is invaluable in applications where high sound levels are required or amplifier power is limited.
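The hundredfold figure follows directly from the decibel scale, on which every 10 dB corresponds to a tenfold ratio of acoustic power. A one-line sketch:

```python
def db_to_power_ratio(delta_db: float) -> float:
    """Acoustic power ratio corresponding to a level difference in decibels."""
    return 10 ** (delta_db / 10.0)

# 110 dB vs 90 dB sensitivity: a 20 dB difference is a 100x power ratio
print(db_to_power_ratio(110 - 90))  # 100.0
```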
Transmission line loudspeaker
A transmission line loudspeaker is a loudspeaker enclosure design that uses an acoustic transmission line within the cabinet, compared to the simpler enclosure-based designs. Instead of reverberating in a fairly simple damped enclosure, sound from the back of the bass speaker is directed into a long (generally folded) damped pathway within the speaker enclosure, which allows greater control and efficient use of speaker energy.
Wiring connections
Most home hi-fi loudspeakers use two wiring points to connect to the source of the signal (for example, to the audio amplifier or receiver). To accept the wire connection, the loudspeaker enclosure may have binding posts, spring clips, or a panel-mount jack. If the wires for a pair of speakers are not connected with respect to the proper electrical polarity, the loudspeakers are said to be out of phase or more properly out of polarity. Given identical signals, motion in the cone of an out of polarity loudspeaker is in the opposite direction of the others. This typically causes monophonic material in a stereo recording to be canceled out, reduced in level, and made more difficult to localize, all due to destructive interference of the sound waves. The cancellation effect is most noticeable at frequencies where the loudspeakers are separated by a quarter wavelength or less; low frequencies are affected the most. This type of miswiring error does not damage speakers, but is not optimal for listening.
With sound reinforcement, PA, and instrument amplifier speaker enclosures, cables and some type of jack or connector are typically used. Lower- and mid-priced sound system and instrument speaker cabinets often use 1/4" jacks. Higher-priced and higher-powered sound system and instrument speaker cabinets often use Speakon connectors. Speakon connectors are considered safer for high-wattage amplifiers, because the connector is designed so that users cannot touch the contacts.
Wireless speakers
Wireless speakers are similar to wired powered speakers, but they receive audio signals using radio frequency (RF) waves rather than over audio cables. There is an amplifier integrated in the speaker's cabinet because the RF waves alone are not enough to drive the speaker. Wireless speakers still need power, so require a nearby AC power outlet, or onboard batteries. Only the wire for the audio is eliminated.
Specifications
Speaker specifications generally include:
Speaker or driver type (individual units only) – full-range, woofer, tweeter, or mid-range.
Size of individual drivers. For cone drivers, the quoted size is generally the outside diameter of the basket. However, it may less commonly also be the diameter of the cone surround, measured apex to apex, or the distance from the center of one mounting hole to its opposite. Voice-coil diameter may also be specified. If the loudspeaker has a compression horn driver, the diameter of the horn throat may be given.
Rated power – the continuous power, and the peak power, that a loudspeaker can handle. A driver may be damaged at much less than its rated power if driven past its mechanical limits at lower frequencies. In some jurisdictions, power handling has a legal meaning, allowing comparisons between loudspeakers under consideration. Elsewhere, the variety of meanings for power handling capacity can be quite confusing.
Impedance – typically 4 Ω (ohms), 8 Ω, etc.
Baffle or enclosure type (enclosed systems only) – Sealed, bass reflex, etc.
Number of drivers (complete speaker systems only) – two-way, three-way, etc.
Class of loudspeaker:
Class 1: maximum SPL 110-119 dB, the type of loudspeaker used for reproducing a person speaking in a small space or for background music; mainly used as fill speakers for Class 2 or Class 3 speakers; typically small 4" or 5" woofers and dome tweeters
Class 2: maximum SPL 120-129 dB, the type of medium power-capable loudspeaker used for reinforcement in small to medium spaces or as fill speakers for Class 3 or Class 4 speakers; typically 5" to 8" woofers and dome tweeters
Class 3: maximum SPL 130-139 dB, high power-capable loudspeakers used in main systems in small to medium spaces; also used as fill speakers for class 4 speakers; typically 6.5" to 12" woofers and 2" or 3" compression drivers for high frequencies
Class 4: maximum SPL 140 dB and higher, very high power-capable loudspeakers used as mains in medium to large spaces (or for fill speakers for these medium to large spaces); 10" to 15" woofers and 3" compression drivers
and optionally:
Crossover frequency(ies) (multi-driver systems only) – The nominal frequency boundaries of the division between drivers.
Frequency response – The measured, or specified, output over a specified range of frequencies for a constant input level varied across those frequencies. It sometimes includes a variance limit, such as within "± 2.5 dB."
Thiele/Small parameters (individual drivers only) – these include the driver's Fs (resonance frequency), Qts (a driver's Q; more or less, its damping factor at resonant frequency), Vas (the equivalent air compliance volume of the driver), etc.
Sensitivity – The sound pressure level produced by a loudspeaker in a non-reverberant environment, often specified in dB and measured at 1 meter with an input of 1 watt (2.83 V RMS into 8 Ω), typically at one or more specified frequencies. Manufacturers often use this rating in marketing material.
Maximum sound pressure level – The highest output the loudspeaker can manage, short of damage or not exceeding a particular distortion level. Manufacturers often use this rating in marketing material—commonly without reference to frequency range or distortion level.
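The Thiele/Small parameters listed above feed directly into enclosure design. For the classic sealed (acoustic suspension) box, for example, the system resonance and the total Q both rise by the same factor as the box volume shrinks relative to Vas. A sketch using an invented driver (Fs = 30 Hz, Qts = 0.35, Vas = 60 L) in a 30 L box:

```python
import math

def sealed_box(fs_hz: float, qts: float, vas_litres: float,
               box_litres: float):
    """Closed-box (acoustic suspension) alignment: both the system
    resonance Fc and the total Q (Qtc) rise by sqrt(1 + Vas/Vb)
    as the enclosure volume Vb shrinks relative to Vas."""
    factor = math.sqrt(1.0 + vas_litres / box_litres)
    return fs_hz * factor, qts * factor

# Invented driver: Fs = 30 Hz, Qts = 0.35, Vas = 60 L, in a 30 L box
fc, qtc = sealed_box(30.0, 0.35, 60.0, 30.0)
print(f"Fc = {fc:.1f} Hz, Qtc = {qtc:.2f}")
```

Here the 30 L box raises the resonance to about 52 Hz with a Qtc near 0.61, illustrating the familiar trade-off between enclosure size and bass extension.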
Electrical characteristics of dynamic loudspeakers
To make sound, a loudspeaker is driven by a modulated electric current (produced by an amplifier) that passes through a voice coil, which, acting as an electromagnet, creates a magnetic field around the coil. The electric current variations that pass through the speaker are thus converted into a varying magnetic field, whose interaction with the permanent magnetic field of the driver moves the speaker diaphragm, producing air motion that follows the original signal from the amplifier.
The load that a driver presents to an amplifier consists of a complex electrical impedance—a combination of resistance and both capacitive and inductive reactance, which combines properties of the driver, its mechanical motion, the effects of crossover components (if any are in the signal path between amplifier and driver), and the effects of air loading on the driver as modified by the enclosure and its environment. Most amplifiers' output specifications are given at a specific power into an ideal resistive load; however, a loudspeaker does not have a constant impedance across its frequency range. Instead, the voice coil is inductive, the driver has mechanical resonances, the enclosure changes the driver's electrical and mechanical characteristics, and a passive crossover between the drivers and the amplifier contributes its own variations. The result is a load impedance that varies widely with frequency, and usually a varying phase relationship between voltage and current as well, also changing with frequency. Some amplifiers can cope with the variation better than others can.
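The frequency-dependent load described above can be made concrete with the standard lumped-element model of a moving-coil driver: voice-coil resistance and inductance in series with the mechanical resonance reflected through the motor strength (Bl)². All parameter values below are invented for illustration:

```python
import math

def driver_impedance(f_hz, Re=6.0, Le=0.5e-3, Bl=7.0,
                     Rms=1.5, Mms=0.02, Cms=1.2e-3):
    """Electrical input impedance of a moving-coil driver from the standard
    lumped-element model: voice-coil resistance Re and inductance Le in
    series with the mechanical branch reflected through the motor (Bl)^2.
    All parameter values here are invented for illustration."""
    w = 2 * math.pi * f_hz
    z_mech = Rms + 1j * (w * Mms - 1.0 / (w * Cms))  # mechanical impedance
    return Re + 1j * w * Le + Bl ** 2 / z_mech

for f in (20.0, 32.5, 100.0, 1000.0):  # 32.5 Hz is near this model's resonance
    print(f"{f:7.1f} Hz: |Z| = {abs(driver_impedance(f)):6.2f} ohm")
```

The magnitude peaks near the mechanical resonance (around 39 Ω here, against a 6 Ω DC resistance) and varies elsewhere with frequency, which is exactly the kind of non-constant, partly reactive load that taxes some amplifiers.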
Electromechanical measurements
Examples of typical loudspeaker measurements are: amplitude and phase characteristics vs. frequency; impulse response under one or more conditions (e.g. square waves, sine wave bursts, etc.); directivity vs. frequency (e.g. horizontally, vertically, spherically, etc.); harmonic and intermodulation distortion vs. sound pressure level (SPL) output, using any of several test signals; stored energy (i.e. ringing) at various frequencies; impedance vs. frequency; and small-signal vs. large-signal performance. Most of these measurements require sophisticated and often expensive equipment to perform. The sound pressure level (SPL) a loudspeaker produces is measured in decibels (dB SPL).
Efficiency vs. sensitivity
Loudspeaker efficiency is defined as the sound power output divided by the electrical power input. Most loudspeakers are inefficient transducers; only about 1% of the electrical energy sent by an amplifier to a typical home loudspeaker is converted to acoustic energy. The remainder is converted to heat, mostly in the voice coil and magnet assembly. The main reason for this is the difficulty of achieving proper impedance matching between the acoustic impedance of the drive unit and the air it radiates into. The efficiency of loudspeaker drivers varies with frequency as well. For instance, the output of a woofer driver decreases as the input frequency decreases because of the increasingly poor impedance match between air and the driver.
Driver ratings based on the SPL for a given input are called sensitivity ratings and are notionally similar to efficiency. Sensitivity is usually defined as the SPL in decibels at 1 W electrical input, measured at 1 meter, often at a single frequency. The voltage used is often 2.83 VRMS, which results in 1 watt into a nominal 8 Ω speaker impedance. Measurements taken with this reference are quoted as dB with 2.83 V @ 1 m.
The sound pressure output is measured at (or mathematically scaled to be equivalent to a measurement taken at) one meter from the loudspeaker and on-axis (directly in front of it), under the condition that the loudspeaker is radiating into an infinitely large space and mounted on an infinite baffle. Clearly then, sensitivity does not correlate precisely with efficiency, as it also depends on the directivity of the driver being tested and the acoustic environment in front of the actual loudspeaker. For example, a cheerleader's horn produces more sound output in the direction it is pointed by concentrating sound waves from the cheerleader in one direction, thus focusing them. The horn also improves impedance matching between the voice and the air, which produces more acoustic power for a given speaker power. In some cases, improved impedance matching (via careful enclosure design) lets the speaker produce more acoustic power.
Typical home loudspeakers have sensitivities of about 85 to 95 dB for 1 W @ 1 m—an efficiency of 0.5–4%. Sound reinforcement and public address loudspeakers have sensitivities of perhaps 95 to 102 dB for 1 W @ 1 m—an efficiency of 4–10%. Rock concert, stadium PA, marine hailing, etc. speakers generally have higher sensitivities of 103 to 110 dB for 1 W @ 1 m—an efficiency of 10–20%.
Since sensitivity and power handling are largely independent properties, a driver with a higher maximum power rating cannot necessarily be driven to louder levels than a lower-rated one. In the example that follows, assume (for simplicity) that the drivers being compared have the same electrical impedance, are operated at the same frequency within both drivers' respective passbands, and that power compression and distortion are insignificant. A speaker 3 dB more sensitive than another produces double the sound power (is 3 dB louder) for the same electrical power input. Thus, a 100 W driver (A) rated at 92 dB for 1 W @ 1 m sensitivity puts out twice as much acoustic power as a 200 W driver (B) rated at 89 dB for 1 W @ 1 m when both are driven with 100 W of electrical power. In this example, when driven at 100 W, speaker A produces the same SPL, or loudness, as speaker B would produce with 200 W input. Thus, a 3 dB increase in the sensitivity of the speaker means that it needs half the amplifier power to achieve a given SPL. This translates into a smaller, less complex power amplifier, and often a reduced overall system cost.
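The arithmetic in this comparison is just the logarithmic power relation SPL = sensitivity + 10·log10(P). A sketch that reproduces the figures above:

```python
import math

def spl_at_power(sensitivity_db_1w_1m: float, power_watts: float) -> float:
    """On-axis SPL at 1 m for a given input power, ignoring power
    compression and distortion (as the idealized comparison assumes)."""
    return sensitivity_db_1w_1m + 10 * math.log10(power_watts)

print(spl_at_power(92, 100))  # speaker A at 100 W -> 112.0 dB
print(spl_at_power(89, 200))  # speaker B at 200 W -> ~112.0 dB
```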
It is typically not possible to combine high efficiency (especially at low frequencies) with compact enclosure size and adequate low-frequency response. One can, for the most part, choose only two of the three parameters when designing a speaker system. So, for example, if extended low-frequency performance and small box size are important, one must accept low efficiency. This rule of thumb is sometimes called Hofmann's Iron Law (after J.A. Hofmann, the H in KLH).
Listening environment
The interaction of a loudspeaker system with its environment is complex and is largely out of the loudspeaker designer's control. Most listening rooms present a more or less reflective environment, depending on size, shape, volume, and furnishings. This means the sound reaching a listener's ears consists not only of sound directly from the speaker system, but also the same sound delayed by traveling to and from (and being modified by) one or more surfaces. These reflected sound waves, when added to the direct sound, cause cancellation and addition at assorted frequencies (e.g. from resonant room modes), thus changing the timbre and character of the sound at the listener's ears. The human brain is sensitive to small variations in reflected sound, and this is part of the reason why a loudspeaker system sounds different at different listening positions or in different rooms.
A significant factor in the sound of a loudspeaker system is the amount of absorption and diffusion present in the environment. Clapping one's hands in a typical empty room, without draperies or carpet, produces a zippy, fluttery echo due to a lack of absorption and diffusion.
Placement
In a typical rectangular listening room, the hard, parallel surfaces of the walls, floor and ceiling cause primary acoustic resonance nodes in each of the three dimensions: left-right, up-down and forward-backward. Furthermore, there are more complex resonance modes involving up to all six boundary surfaces combining to create standing waves. This is called speaker boundary interference response (SBIR). Low frequencies excite these modes the most, since long wavelengths are not much affected by furniture compositions or placement. The mode spacing is critical, especially in small and medium-sized rooms like recording studios, home theaters and broadcast studios. The proximity of the loudspeakers to room boundaries affects how strongly the resonances are excited as well as affecting the relative strength at each frequency. The location of the listener is critical, too, as a position near a boundary can have a great effect on the perceived balance of frequencies. This is because standing wave patterns are most easily heard in these locations and at lower frequencies, below the Schroeder frequency—typically around 200–300 Hz, depending on room size.
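The axial (single-dimension) resonances described above follow the standard standing-wave relation f_n = n·c/(2L), which is not stated explicitly in the text but is implicit in the discussion; a hypothetical sketch (room dimensions chosen arbitrarily):

```python
def axial_modes(length_m, c=343.0, count=4):
    """First few axial standing-wave frequencies (Hz) for one room
    dimension: f_n = n * c / (2 * L), with c the speed of sound in m/s."""
    return [n * c / (2 * length_m) for n in range(1, count + 1)]

# A 5 m x 4 m x 2.5 m room: one set of axial modes per dimension.
# Tangential and oblique modes (involving two or more surface pairs)
# are not modeled here.
modes = {dim: axial_modes(dim) for dim in (5.0, 4.0, 2.5)}
```

For the 5 m dimension this gives modes near 34, 69, 103 and 137 Hz, all well below a typical small-room Schroeder frequency, which is consistent with low frequencies exciting room modes most strongly.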
Directivity
Acousticians, in studying the radiation of sound sources have developed some concepts important to understanding how loudspeakers are perceived. The simplest possible radiating source is a point source, sometimes called a simple source. An ideal point source is an infinitesimally small point radiating sound. It may be easier to imagine a tiny pulsating sphere, uniformly increasing and decreasing in diameter, sending out sound waves in all directions equally, independent of frequency.
Any object radiating sound, including a loudspeaker system, can be thought of as being composed of combinations of such simple point sources. The radiation pattern of a combination of point sources is not the same as for a single source, but depends on the distance and orientation between the sources, the position relative to them from which the listener hears the combination, and the frequency of the sound involved. Using geometry and calculus, some simple combinations of sources are easily solved; others are not.
One simple combination is two simple sources separated by a distance and vibrating out of phase, one miniature sphere expanding while the other is contracting. The pair is known as a doublet, or dipole, and the radiation of this combination is similar to that of a very small dynamic loudspeaker operating without a baffle. The directivity of a dipole is a figure 8 shape with maximum output along a vector that connects the two sources and minimums to the sides when the observing point is equidistant from the two sources, where the sum of the positive and negative waves cancel each other. While most drivers are dipoles, depending on the enclosure to which they are attached, they may radiate as monopoles, dipoles (or bipoles). If mounted on a finite baffle, and these out-of-phase waves are allowed to interact, dipole peaks and nulls in the frequency response result. When the rear radiation is absorbed or trapped in a box, the diaphragm becomes a monopole radiator. Bipolar speakers, made by mounting in-phase monopoles (both moving out of or into the box in unison) on opposite sides of a box, are a method of approaching omnidirectional radiation patterns.
In real life, individual drivers are complex 3D shapes such as cones and domes, and they are placed on a baffle for various reasons. A mathematical expression for the directivity of a complex shape, based on modeling combinations of point sources, is usually not possible, but in the far field, the directivity of a loudspeaker with a circular diaphragm is close to that of a flat circular piston, so it can be used as an illustrative simplification for discussion. As a simple example of the mathematical physics involved, consider the following:
the formula for the far-field directivity of a flat circular piston in an infinite baffle is
p(θ) = p0 · 2J1(ka·sin θ)/(ka·sin θ),
where p0 is the pressure on axis, a is the piston radius, λ is the wavelength, k is the wavenumber (i.e. k = 2π/λ), θ is the angle off axis and J1 is the Bessel function of the first kind.
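A self-contained numerical sketch of this directivity function follows; J1 is computed from its power series so that no external math library is needed (function names are ours):

```python
import math

def bessel_j1(x, terms=30):
    """Bessel function of the first kind, J1, from its power series:
    J1(x) = sum_m (-1)^m / (m! (m+1)!) * (x/2)^(2m+1)."""
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + 1))
               * (x / 2) ** (2 * m + 1) for m in range(terms))

def piston_directivity(theta, radius, wavelength):
    """Far-field pressure relative to on-axis for a flat circular piston
    in an infinite baffle: 2*J1(ka*sin(theta)) / (ka*sin(theta)),
    with k = 2*pi/wavelength."""
    ka_sin = (2 * math.pi / wavelength) * radius * math.sin(theta)
    if abs(ka_sin) < 1e-12:
        return 1.0  # on-axis limit
    return 2 * bessel_j1(ka_sin) / ka_sin
```

At low frequencies (wavelength much larger than the piston) the response is nearly omnidirectional; as ka grows, output narrows onto the axis, matching the behavior described in the text.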
A planar source radiates sound uniformly at low frequencies, whose wavelengths are longer than the dimensions of the source; as frequency increases, the sound from such a source focuses into an increasingly narrow angle. The smaller the driver, the higher the frequency at which this narrowing of directivity occurs. Even if the diaphragm is not perfectly circular, this effect occurs such that larger sources are more directive. Several loudspeaker designs approximate this behavior; most are electrostatic or planar magnetic designs.
Various manufacturers use different driver mounting arrangements to create a specific type of sound field in the space for which they are designed. The resulting radiation patterns may be intended to more closely simulate the way sound is produced by real instruments, or simply create a controlled energy distribution from the input signal (some using this approach are called monitors, as they are useful in checking the signal just recorded in a studio). An example of the first is a room corner system with many small drivers on the surface of a 1/8 sphere. A system design of this type was patented and produced commercially by Professor Amar Bose—the 2201. Later Bose models have deliberately emphasized production of both direct and reflected sound by the loudspeaker itself, regardless of its environment. The designs are controversial in high fidelity circles, but have proven commercially successful. Several other manufacturers' designs follow similar principles.
Directivity is an important issue because it affects the frequency balance of sound a listener hears, and also the interaction of the speaker system with the room and its contents. A very directive (sometimes termed beamy) speaker, i.e. one that radiates most of its energy along the axis perpendicular to the speaker face, may result in a reverberant field lacking in high frequencies, giving the impression the speaker is deficient in treble even though it measures well on axis (e.g. flat across the entire frequency range). Speakers with very wide, or rapidly increasing, directivity at high frequencies can give the impression that there is too much treble (if the listener is on axis) or too little (if the listener is off axis). This is part of the reason why on-axis frequency response measurement is not a complete characterization of the sound of a given loudspeaker.
Other speaker designs
While dynamic cone speakers remain the most popular choice, many other speaker technologies exist.
With a diaphragm
Moving-iron loudspeakers
The original loudspeaker design was the moving iron. Unlike the newer dynamic (moving coil) design, a moving-iron speaker uses a stationary coil to vibrate a magnetized piece of metal (called the iron, reed, or armature). The metal is either attached to the diaphragm or is the diaphragm itself. This design originally appeared in the early telephone.
Moving iron drivers are inefficient and can only produce a small band of sound. They require large magnets and coils to increase force.
Balanced armature drivers (a type of moving iron driver) use an armature that moves like a see-saw or diving board. Since they are not damped, they are highly efficient, but they also produce strong resonances. They are still used today for high-end earphones and hearing aids, where small size and high efficiency are important.
Piezoelectric speakers
Piezoelectric speakers are frequently used as beepers in watches and other electronic devices, and are sometimes used as tweeters in less-expensive speaker systems, such as computer speakers and portable radios. Piezoelectric speakers have several advantages over conventional loudspeakers: they are resistant to overloads that would normally destroy most high-frequency drivers, and they can be used without a crossover due to their electrical properties. There are also disadvantages: some amplifiers can oscillate when driving capacitive loads like most piezoelectrics, which results in distortion or damage to the amplifier. Additionally, their frequency response, in most cases, is inferior to that of other technologies. This is why they are generally used in single-frequency (beeper) or non-critical applications.
Piezoelectric speakers can have extended high-frequency output, and this is useful in some specialized circumstances; for instance, sonar applications in which piezoelectric variants are used as both output devices (generating underwater sound) and as input devices (acting as the sensing components of underwater microphones). They have advantages in these applications, not the least of which is simple and solid-state construction that resists seawater better than a ribbon or cone-based device would.
In 2013, Kyocera introduced ultra-thin medium-size piezoelectric film speakers, only 1 millimeter thick and 7 grams in weight, for its 55-inch OLED televisions, and it hopes the speakers will also be used in PCs and tablets. Besides the medium size, there are also large and small sizes, all of which produce relatively the same quality of sound and volume within 180 degrees. The highly responsive speaker material provides better clarity than traditional TV speakers.
Magnetostatic loudspeakers
Instead of a voice coil driving a speaker cone, a magnetostatic speaker uses an array of metal strips bonded to a large film membrane. The magnetic field produced by signal current flowing through the strips interacts with the field of permanent bar magnets mounted behind them. The force produced moves the membrane and so the air in front of it. Typically, these designs are less efficient than conventional moving-coil speakers.
Magnetostrictive speakers
Magnetostrictive transducers, based on magnetostriction, have been predominantly used as sonar ultrasonic sound wave radiators, but their use has spread also to audio speaker systems. Magnetostrictive speaker drivers have some special advantages: they can provide greater force (with smaller excursions) than other technologies; low excursion can avoid distortions from large excursion as in other designs; the magnetizing coil is stationary and therefore more easily cooled; they are robust because delicate suspensions and voice coils are not required. Magnetostrictive speaker modules have been produced by Fostex and FeONIC and subwoofer drivers have also been produced.
Electrostatic loudspeakers
Electrostatic loudspeakers use a high-voltage electric field (rather than a magnetic field) to drive a thin statically charged membrane. Because they are driven over the entire membrane surface rather than from a small voice coil, they ordinarily provide a more linear and lower-distortion motion than dynamic drivers. They also have a relatively narrow dispersion pattern that can make for precise sound-field positioning. However, their optimum listening area is small and they are not very efficient speakers. They have the disadvantage that the diaphragm excursion is severely limited because of practical construction limitations—the further apart the stators are positioned, the higher the voltage must be to achieve acceptable efficiency. This increases the tendency for electrical arcs as well as increasing the speaker's attraction of dust particles. Arcing remains a potential problem with current technologies, especially when the panels are allowed to collect dust or dirt and are driven with high signal levels.
Electrostatics are inherently dipole radiators and due to the thin flexible membrane are less suited for use in enclosures to reduce low-frequency cancellation as with common cone drivers. Due to this and the low excursion capability, full-range electrostatic loudspeakers are large by nature, and the bass rolls off at a frequency corresponding to a quarter wavelength of the narrowest panel dimension. To reduce the size of commercial products, they are sometimes used as a high-frequency driver in combination with a conventional dynamic driver that handles the bass frequencies effectively.
Electrostatics are usually driven through a step-up transformer that multiplies the voltage swings produced by the power amplifier. This transformer also multiplies the capacitive load that is inherent in electrostatic transducers, which means the effective impedance presented to the power amplifiers varies widely by frequency. A speaker that is nominally 8 ohms may actually present a load of 1 ohm at higher frequencies, which is challenging to some amplifier designs.
Ribbon and planar magnetic loudspeakers
A ribbon speaker consists of a thin metal-film ribbon suspended in a magnetic field. The electrical signal is applied to the ribbon, which moves with it to create the sound. The advantage of a ribbon driver is that the ribbon has very little mass; thus, it can accelerate very quickly, yielding a very good high-frequency response. Ribbon loudspeakers are often very fragile. Most ribbon tweeters emit sound in a dipole pattern. A few have backings that limit the dipole radiation pattern. Above and below the ends of the more or less rectangular ribbon, there is less audible output due to phase cancellation, but the precise amount of directivity depends on the ribbon length. Ribbon designs generally require exceptionally powerful magnets, which makes them costly to manufacture. Ribbons have a very low resistance that most amplifiers cannot drive directly. As a result, a step down transformer is typically used to increase the current through the ribbon. The amplifier sees a load that is the ribbon's resistance times the transformer turns ratio squared. The transformer must be carefully designed so that its frequency response and parasitic losses do not degrade the sound, further increasing cost and complication relative to conventional designs.
Planar magnetic speakers (having printed or embedded conductors on a flat diaphragm) are sometimes described as ribbons, but are not truly ribbon speakers. The term planar is generally reserved for speakers with roughly rectangular flat surfaces that radiate in a bipolar (i.e. front and back) manner. Planar magnetic speakers consist of a flexible membrane with a voice coil printed or mounted on it. The current flowing through the coil interacts with the magnetic field of carefully placed magnets on either side of the diaphragm, causing the membrane to vibrate more or less uniformly and without much bending or wrinkling. The driving force covers a large percentage of the membrane surface and reduces resonance problems inherent in coil-driven flat diaphragms.
Bending wave loudspeakers
Bending wave transducers use a diaphragm that is intentionally flexible. The rigidity of the material increases from the center to the outside. Short wavelengths radiate primarily from the inner area, while longer waves reach the edge of the speaker. To prevent reflections from the outside back into the center, long waves are absorbed by a surrounding damper. Such transducers can cover a wide frequency range (80 Hz to 35,000 Hz) and have been promoted as being close to an ideal point sound source. This uncommon approach is being taken by only a very few manufacturers, in very different arrangements.
The Ohm Walsh loudspeakers use a unique driver designed by Lincoln Walsh, who had been a radar development engineer in WWII. He became interested in audio equipment design and his last project was a unique, one-way speaker using a single driver. The cone faced down into a sealed, airtight enclosure. Rather than move back and forth as conventional speakers do, the cone rippled and created sound in a manner known in RF electronics as a "transmission line". The new speaker created a cylindrical sound field. Lincoln Walsh died before his speaker was released to the public. The Ohm Acoustics firm has produced several loudspeaker models using the Walsh driver design since then. German Physiks, an audio equipment firm in Germany, also produces speakers using this approach.
The German firm Manger has designed and produced a bending wave driver that at first glance appears conventional. In fact, the round panel attached to the voice coil bends in a carefully controlled way to produce full-range sound. Josef W. Manger was awarded with the Rudolf-Diesel-Medaille for extraordinary developments and inventions by the German institute of inventions.
Flat panel loudspeakers
There have been many attempts to reduce the size of speaker systems, or alternatively to make them less obvious. One such attempt was the development of exciter transducer coils mounted to flat panels to act as sound sources, most accurately called exciter/panel drivers. These can then be made in a neutral color and hung on walls where they are less noticeable than many speakers, or can be deliberately painted with patterns, in which case they can function decoratively. There are two related problems with flat panel techniques: first, a flat panel is necessarily more flexible than a cone shape in the same material, and therefore moves as a single unit even less, and second, resonances in the panel are difficult to control, leading to considerable distortions. Some progress has been made using such lightweight, rigid, materials such as Styrofoam, and there have been several flat panel systems commercially produced in recent years.
Heil air motion transducers
Oskar Heil invented the air motion transducer in the 1960s. In this approach, a pleated diaphragm is mounted in a magnetic field and forced to close and open under control of a music signal. Air is forced from between the pleats in accordance with the imposed signal, generating sound. The drivers are less fragile than ribbons and considerably more efficient (and able to produce higher absolute output levels) than ribbon, electrostatic, or planar magnetic tweeter designs. ESS, a California manufacturer, licensed the design, employed Heil, and produced a range of speaker systems using his tweeters during the 1970s and 1980s. Lafayette Radio, a large US retail store chain, also sold speaker systems using such tweeters for a time. There are several manufacturers of these drivers (at least two in Germany—one of which produces a range of high-end professional speakers using tweeters and mid-range drivers based on the technology) and the drivers are increasingly used in professional audio. Martin Logan produces several AMT speakers in the US and GoldenEar Technologies incorporates them in its entire speaker line.
Transparent ionic conduction speaker
In 2013, a research team introduced a transparent ionic conduction speaker which has two sheets of transparent conductive gel and a layer of transparent rubber in between to make high voltage and high actuation work to reproduce good sound quality. The speaker is suitable for robotics, mobile computing and adaptive optics fields.
Digital speakers
Digital speakers have been the subject of experiments performed by Bell Labs as far back as the 1920s. The design is simple; each bit controls a driver, which is either fully 'on' or 'off'. Problems with this design have led manufacturers to abandon it as impractical for the present. First, for a reasonable number of bits (required for adequate sound reproduction quality), the physical size of a speaker system becomes very large. Secondly, due to inherent analog-to-digital conversion problems, the effect of aliasing is unavoidable, so that the audio output is reflected at equal amplitude in the frequency domain, on the other side of the Nyquist limit (half the sampling frequency), causing an unacceptably high level of ultrasonics to accompany the desired output. No workable scheme has been found to adequately deal with this.
Without a diaphragm
Plasma arc speakers
Plasma arc loudspeakers use electrical plasma as a radiating element. Since plasma has minimal mass, but is charged and therefore can be manipulated by an electric field, the result is a very linear output at frequencies far higher than the audible range. Problems of maintenance and reliability for this approach tend to make it unsuitable for mass market use. In 1978 Alan E. Hill of the Air Force Weapons Laboratory in Albuquerque, NM, designed the Plasmatronics Hill Type I, a tweeter whose plasma was generated from helium gas. This avoided the ozone and NOx produced by RF decomposition of air in an earlier generation of plasma tweeters made by the pioneering DuKane Corporation, who produced the Ionovac (marketed as the Ionofane in the UK) during the 1950s.
A less expensive variation on this theme is the use of a flame for the driver, as flames contain ionized (electrically charged) gases.
Thermoacoustic speakers
In 2008, researchers of Tsinghua University demonstrated a thermoacoustic loudspeaker (or thermophone) of carbon nanotube thin film, whose working mechanism is a thermoacoustic effect. Sound frequency electric currents are used to periodically heat the CNT and thus result in sound generation in the surrounding air. The CNT thin film loudspeaker is transparent, stretchable and flexible.
In 2013, researchers of Tsinghua University further present a thermoacoustic earphone of carbon nanotube thin yarn and a thermoacoustic surface-mounted device. They are both fully integrated devices and compatible with Si-based semiconducting technology.
Rotary woofers
A rotary woofer is essentially a fan with blades that constantly change their pitch, allowing them to easily push the air back and forth. Rotary woofers are able to efficiently reproduce subsonic frequencies, which are difficult or impossible to achieve with a traditional speaker with a diaphragm. They are often employed in movie theaters to recreate rumbling bass effects, such as explosions.
See also
Audio power
Audiophile
Bandwidth extension
Digital speaker
Directional sound
Earphone
Echo cancellation
Electronics
Ferrofluid#Heat transfer
Guitar speaker
Headphones
High-end audio
Isobaric loudspeaker
List of loudspeaker manufacturers
Loudspeaker acoustics
Long Range Acoustic Device (LRAD)
Music center
Parabolic loudspeaker
Phase plug
Planephones
Rotary woofer
Shelf stereo
Sound from ultrasound
Soundbar
Speaker driver
Speaker stands
Speaker wire
Speakerphone
Studio monitor
Super tweeter
Surround sound
Notes
References
External links
Conversion of sensitivity to energy efficiency in percent for passive loudspeakers
Article on sensitivity and efficiency of loudspeakers (PDF)
American inventions
Audio engineering
Audiovisual introductions in 1924
Consumer electronics
Music technology
Exponential distribution
In probability theory and statistics, the exponential distribution or negative exponential distribution is the probability distribution of the distance between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate; the distance parameter could be any meaningful mono-dimensional measure of the process, such as time between production errors, or length along a roll of fabric in the weaving manufacturing process. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson point processes it is found in various other contexts.
The exponential distribution is not the same as the class of exponential families of distributions. This is a large class of probability distributions that includes the exponential distribution as one of its members, but also includes many other distributions, like the normal, binomial, gamma, and Poisson distributions.
Definitions
Probability density function
The probability density function (pdf) of an exponential distribution is
f(x; λ) = λe^(−λx) for x ≥ 0, and f(x; λ) = 0 for x < 0.
Here λ > 0 is the parameter of the distribution, often called the rate parameter. The distribution is supported on the interval [0, ∞). If a random variable X has this distribution, we write X ~ Exp(λ).
The exponential distribution exhibits infinite divisibility.
Cumulative distribution function
The cumulative distribution function is given by
F(x; λ) = 1 − e^(−λx) for x ≥ 0, and F(x; λ) = 0 for x < 0.
Alternative parametrization
The exponential distribution is sometimes parametrized in terms of the scale parameter β = 1/λ, which is also the mean:
f(x; β) = (1/β)e^(−x/β) for x ≥ 0.
Properties
Mean, variance, moments, and median
The mean or expected value of an exponentially distributed random variable X with rate parameter λ is given by
E[X] = 1/λ.
In light of the examples given below, this makes sense; a person who receives an average of two telephone calls per hour can expect that the time between consecutive calls will be 0.5 hour, or 30 minutes.
The variance of X is given by
Var(X) = 1/λ²,
so the standard deviation is equal to the mean.
The moments of X, for n = 1, 2, 3, ..., are given by
E[X^n] = n!/λ^n.
The central moments of X, for n = 1, 2, 3, ..., are given by
μ_n = !n/λ^n,
where !n is the subfactorial of n.
The median of X is given by
m[X] = ln(2)/λ,
where ln refers to the natural logarithm. Thus the absolute difference between the mean and median is
|E[X] − m[X]| = (1 − ln 2)/λ < 1/λ = σ[X],
in accordance with the median-mean inequality.
Memorylessness property of exponential random variable
An exponentially distributed random variable T obeys the relation
Pr(T > s + t | T > s) = Pr(T > t), for all s, t ≥ 0.
This can be seen by considering the complementary cumulative distribution function:
Pr(T > s + t | T > s) = Pr(T > s + t)/Pr(T > s) = e^(−λ(s+t))/e^(−λs) = e^(−λt) = Pr(T > t).
When T is interpreted as the waiting time for an event to occur relative to some initial time, this relation implies that, if T is conditioned on a failure to observe the event over some initial period of time s, the distribution of the remaining waiting time is the same as the original unconditional distribution. For example, if an event has not occurred after 30 seconds, the conditional probability that occurrence will take at least 10 more seconds is equal to the unconditional probability of observing the event more than 10 seconds after the initial time.
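The cancellation that underlies this property can be verified directly from the complementary CDF; a minimal Python check (the numeric values are chosen arbitrarily):

```python
import math

def survival(t, lam):
    """Complementary CDF Pr(T > t) for T ~ Exp(lam)."""
    return math.exp(-lam * t)

lam, s, t = 0.5, 3.0, 2.0
# Pr(T > s + t | T > s) = Pr(T > s + t) / Pr(T > s)
conditional = survival(s + t, lam) / survival(s, lam)
# ...which memorylessness says equals the unconditional Pr(T > t)
unconditional = survival(t, lam)
```

The ratio e^(−λ(s+t))/e^(−λs) collapses to e^(−λt) for any s, which is exactly the memorylessness relation.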
The exponential distribution and the geometric distribution are the only memoryless probability distributions.
The exponential distribution is consequently also necessarily the only continuous probability distribution that has a constant failure rate.
Quantiles
The quantile function (inverse cumulative distribution function) for Exp(λ) is
F^(−1)(p; λ) = −ln(1 − p)/λ, for 0 ≤ p < 1.
The quartiles are therefore:
first quartile: ln(4/3)/λ
median: ln(2)/λ
third quartile: ln(4)/λ
And as a consequence the interquartile range is ln(3)/λ.
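These quantile expressions translate directly into code; a small sketch (the function name is ours):

```python
import math

def exp_quantile(p, lam):
    """Inverse CDF of Exp(lam): F^-1(p) = -ln(1 - p) / lam, for 0 <= p < 1."""
    return -math.log(1.0 - p) / lam

lam = 2.0
q1, med, q3 = (exp_quantile(p, lam) for p in (0.25, 0.5, 0.75))
iqr = q3 - q1  # equals ln(3)/lam, since ln(4) - ln(4/3) = ln(3)
```

The computed quartiles reproduce the closed forms ln(4/3)/λ, ln(2)/λ and ln(4)/λ given above.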
Conditional Value at Risk (Expected Shortfall)
The conditional value at risk (CVaR), also known as the expected shortfall or superquantile, for Exp(λ) is
ES_α(X) = (−ln(1 − α) + 1)/λ,
i.e. the value at risk −ln(1 − α)/λ plus the mean excess 1/λ (which follows from memorylessness).
Buffered Probability of Exceedance (bPOE)
The buffered probability of exceedance is one minus the probability level at which the CVaR equals the threshold x. Setting ES_α(X) = x and solving for 1 − α gives
bPOE_x(X) = e^(1 − λx), for x > 1/λ.
Kullback–Leibler divergence
The directed Kullback–Leibler divergence in nats of Exp(λ) (the "approximating" distribution) from Exp(λ0) (the "true" distribution) is given by
Δ(λ0 ‖ λ) = ln(λ0/λ) + λ/λ0 − 1.
Maximum entropy distribution
Among all continuous probability distributions with support and mean μ, the exponential distribution with λ = 1/μ has the largest differential entropy. In other words, it is the maximum entropy probability distribution for a random variate X which is greater than or equal to zero and for which E[X] is fixed.
Distribution of the minimum of exponential random variables
Let X1, ..., Xn be independent exponentially distributed random variables with rate parameters λ1, ..., λn. Then
is also exponentially distributed, with parameter
λ = λ1 + ... + λn.
This can be seen by considering the complementary cumulative distribution function:
Pr(min{X1, ..., Xn} > x) = Pr(X1 > x, ..., Xn > x) = Pr(X1 > x) × ... × Pr(Xn > x) = exp(−x(λ1 + ... + λn)).
The index of the variable which achieves the minimum is distributed according to the categorical distribution
Pr(k | Xk = min{X1, ..., Xn}) = λk/(λ1 + ... + λn).
A proof can be seen by letting I = argmin_i Xi. Then,
Note that
max{X1, ..., Xn}
is not exponentially distributed for n ≥ 2.
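Both facts about the minimum can be checked numerically from the closed forms; a brief sketch:

```python
import math

rates = [0.5, 1.0, 1.5]
x = 0.7

# Survival of the minimum: the product of the individual survivals...
surv_min = math.prod(math.exp(-lam * x) for lam in rates)
# ...equals the survival of a single Exp(lambda_1 + ... + lambda_n).
surv_sum = math.exp(-sum(rates) * x)

# Probability that variable k achieves the minimum: lambda_k / sum(rates).
p_min_index = [lam / sum(rates) for lam in rates]
```

The index probabilities sum to one, as required of a categorical distribution, and here the fastest-rate variable (λ = 1.5) is the most likely to be the minimum.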
Joint moments of i.i.d. exponential order statistics
Let be independent and identically distributed exponential random variables with rate parameter λ.
Let denote the corresponding order statistics.
For , the joint moment of the order statistics and is given by
This can be seen by invoking the law of total expectation and the memoryless property:
The first equation follows from the law of total expectation.
The second equation exploits the fact that once we condition on , it must follow that . The third equation relies on the memoryless property to replace with .
Sum of two independent exponential random variables
The probability density function (PDF) of a sum of two independent random variables is the convolution of their individual PDFs. If X and Y are independent exponential random variables with respective rate parameters λ1 and λ2 (with λ1 ≠ λ2), then the probability density of Z = X + Y is given by
f_Z(z) = (λ1λ2/(λ1 − λ2))(e^(−λ2 z) − e^(−λ1 z)), for z > 0.
The entropy of this distribution is available in closed form: assuming (without loss of generality), then
where is the Euler-Mascheroni constant, and is the digamma function.
In the case of equal rate parameters, the result is an Erlang distribution with shape 2 and rate parameter λ, which in turn is a special case of the gamma distribution.
The sum of n independent Exp(λ) exponential random variables is Gamma(n, λ) distributed.
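For distinct rates the convolution has the closed form (λ1λ2/(λ1 − λ2))(e^(−λ2 z) − e^(−λ1 z)); the sketch below cross-checks that expression against a direct numerical convolution (helper names are ours):

```python
import math

def sum_density(z, lam1, lam2):
    """Closed-form density of X + Y for independent X ~ Exp(lam1),
    Y ~ Exp(lam2), assuming lam1 != lam2."""
    return (lam1 * lam2 / (lam1 - lam2)) * (math.exp(-lam2 * z) - math.exp(-lam1 * z))

def sum_density_numeric(z, lam1, lam2, steps=20000):
    """Same density via a Riemann-sum approximation of the convolution
    integral of the two exponential pdfs over [0, z]."""
    dx = z / steps
    return sum(lam1 * math.exp(-lam1 * x) * lam2 * math.exp(-lam2 * (z - x)) * dx
               for x in (i * dx for i in range(steps)))
```

The two evaluations agree to within the discretization error of the Riemann sum.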
Related distributions
If X ~ Laplace(μ, β−1), then |X − μ| ~ Exp(β).
If X ~ U(0, 1) then −log(X) ~ Exp(1).
If X ~ Pareto(1, λ), then log(X) ~ Exp(λ).
If X ~ SkewLogistic(θ), then .
If Xi ~ U(0, 1) then n·min{X1, ..., Xn} converges in distribution to Exp(1) as n → ∞.
The exponential distribution is a limit of a scaled beta distribution:
The exponential distribution is a special case of type 3 Pearson distribution.
The exponential distribution is the special case of a Gamma distribution with shape parameter 1.
If X ~ Exp(λ), then:
kX ~ Exp(λ/k) for k > 0; closure under scaling by a positive factor.
1 + X ~ BenktanderWeibull(λ, 1), which reduces to a truncated exponential distribution.
ke^X ~ Pareto(k, λ).
e^(−λX) ~ U(0, 1).
e^(−X) ~ Beta(λ, 1).
e ~ PowerLaw(k, λ)
, the Rayleigh distribution
, the Weibull distribution
.
⌊X⌋ ~ Geometric(1 − e^(−λ)), a geometric distribution on 0, 1, 2, 3, ...
⌈X⌉ ~ Geometric(1 − e^(−λ)), a geometric distribution on 1, 2, 3, 4, ...
If also Y ~ Erlang(n, λ) or then
If also λ ~ Gamma(k, θ) (shape, scale parametrisation) then the marginal distribution of X is Lomax(k, 1/θ), the gamma mixture
λX − λY ~ Laplace(0, 1).
min{X1, ..., Xn} ~ Exp(λ1 + ... + λn).
If also all the rates are equal to λ, then:
Erlang(k, λ) = Gamma(k, λ^(−1)) = Gamma(k, λ) (in the (k, θ) and (α, β) parametrizations, respectively) with an integer shape parameter k.
If , then .
X1 − X2 ~ Laplace(0, λ^(−1)).
If also X1, X2, ... are independent, then:
X1/(X1 + X2) ~ U(0, 1)
has probability density function . This can be used to obtain a confidence interval for .
If also λ = 1:
, the logistic distribution
μ − σ log(X) ~ GEV(μ, σ, 0).
Further if then (K-distribution)
If also λ = 1/2 then X ~ χ²(2); i.e., X has a chi-squared distribution with 2 degrees of freedom. Hence:
If X ~ Exp(λ) and Y | X ~ Poisson(X) then Y ~ Geometric(λ/(1 + λ)) (geometric distribution)
The Hoyt distribution can be obtained from exponential distribution and arcsine distribution
The exponential distribution is a limit of the κ-exponential distribution in the case.
Exponential distribution is a limit of the κ-Generalized Gamma distribution in the and cases:
Other related distributions:
Hyper-exponential distribution – the distribution whose density is a weighted sum of exponential densities.
Hypoexponential distribution – the distribution of a general sum of exponential random variables.
exGaussian distribution – the sum of an exponential distribution and a normal distribution.
Statistical inference
Below, suppose random variable X is exponentially distributed with rate parameter λ, and x1, ..., xn are n independent samples from X, with sample mean x̄.
Parameter estimation
The maximum likelihood estimator for λ is constructed as follows.
The likelihood function for λ, given an independent and identically distributed sample x = (x1, ..., xn) drawn from the variable, is:
L(λ) = λ exp(−λx1) × ... × λ exp(−λxn) = λ^n exp(−λnx̄),
where:
x̄ = (x1 + ... + xn)/n
is the sample mean.
The derivative of the likelihood function's logarithm is:
(d/dλ) ln L(λ) = n/λ − nx̄,
which is positive for λ < 1/x̄ and negative for λ > 1/x̄. Consequently, the maximum likelihood estimate for the rate parameter is:
λ̂ = 1/x̄ = n/(x1 + ... + xn)
This is not an unbiased estimator of λ, although x̄ is an unbiased MLE estimator of 1/λ and of the distribution mean.
The bias of λ̂ is equal to
b ≡ E[λ̂] − λ = λ/(n − 1),
which yields the bias-corrected maximum likelihood estimator
λ̂* = (1 − 1/n)·λ̂ = (n − 1) / ∑i xi.
An approximate minimizer of mean squared error (see also: bias–variance tradeoff) can be found, assuming a sample size greater than two, with a correction factor to the MLE:
λ̂ = ((n − 2)/n)·(1/x̄) = (n − 2) / ∑i xi
This is derived from the mean and variance of the inverse-gamma distribution, Inv-Gamma(n, λ).
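The estimators above are easy to check numerically. The following Python sketch (not part of the original article; the true rate 2.0 and the sample size are arbitrary illustrative choices) compares the plain MLE with the bias-corrected version:

```python
import random

def exp_mle(samples):
    """Maximum likelihood estimate of the rate: 1 / (sample mean)."""
    return len(samples) / sum(samples)

def exp_mle_bias_corrected(samples):
    """Bias-corrected estimate: (n - 1) / sum(x_i)."""
    n = len(samples)
    return (n - 1) / sum(samples)

random.seed(1)
true_rate = 2.0  # arbitrary illustrative value
samples = [random.expovariate(true_rate) for _ in range(100_000)]

lam_hat = exp_mle(samples)                  # close to 2.0 for large n
lam_star = exp_mle_bias_corrected(samples)  # slightly smaller than lam_hat
```

For large samples the correction factor (n − 1)/n is negligible, as the run above illustrates; it matters for small n.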
Fisher information
The Fisher information, denoted I(λ), for an estimator of the rate parameter λ is given as:
I(λ) = E[(∂/∂λ log f(x; λ))² | λ]
Plugging in the distribution and solving gives:
I(λ) = ∫₀^∞ (1/λ − x)²·λe^(−λx) dx = 1/λ².
This determines the amount of information each independent sample of an exponential distribution carries about the unknown rate parameter .
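The per-observation Fisher information of the exponential distribution is 1/λ², and this can be checked by Monte Carlo by averaging the squared score over simulated draws. A Python sketch (the rate 1.5 is an arbitrary choice, not from the article):

```python
import random

def score(x, lam):
    # derivative of log(lam * exp(-lam * x)) with respect to lam
    return 1.0 / lam - x

random.seed(0)
lam = 1.5  # arbitrary illustrative rate
xs = [random.expovariate(lam) for _ in range(200_000)]

# Fisher information is the expected squared score; for Exp(lam) it is 1/lam^2
empirical_info = sum(score(x, lam) ** 2 for x in xs) / len(xs)
theoretical_info = 1.0 / lam ** 2
```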
Confidence intervals
An exact 100(1 − α)% confidence interval for the rate parameter of an exponential distribution is given by:
χ²(α/2, 2n) / (2 ∑i xi) < λ < χ²(1 − α/2, 2n) / (2 ∑i xi),
which is also equal to
χ²(α/2, 2n) / (2n·x̄) < λ < χ²(1 − α/2, 2n) / (2n·x̄),
where χ²(p, v) is the 100p percentile of the chi-squared distribution with v degrees of freedom, n is the number of observations and x̄ is the sample average. A simple approximation to the exact interval endpoints can be derived using a normal approximation to the chi-squared distribution. This approximation gives the following values for a 95% confidence interval:
λ_lower = λ̂·(1 − 1.96/√n),  λ_upper = λ̂·(1 + 1.96/√n),
where λ̂ = 1/x̄.
This approximation may be acceptable for samples containing at least 15 to 20 elements.
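A minimal Python sketch of the normal-approximation interval described above (the rate 3.0 and sample size 1000 are illustrative choices, not from the article):

```python
import math
import random

def exp_rate_ci_95(samples):
    """Normal-approximation 95% interval: lam_hat * (1 +/- 1.96 / sqrt(n))."""
    n = len(samples)
    lam_hat = n / sum(samples)
    half_width = 1.96 / math.sqrt(n)
    return lam_hat * (1 - half_width), lam_hat * (1 + half_width)

random.seed(7)
samples = [random.expovariate(3.0) for _ in range(1000)]
lo, hi = exp_rate_ci_95(samples)  # brackets the point estimate 1/x-bar
```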
Bayesian inference
The conjugate prior for the exponential distribution is the gamma distribution (of which the exponential distribution is a special case). The following parameterization of the gamma probability density function is useful:
Gamma(λ; α, β) = (β^α / Γ(α))·λ^(α−1)·exp(−λβ).
The posterior distribution p can then be expressed in terms of the likelihood function defined above and a gamma prior:
p(λ | x) ∝ L(λ)·Gamma(λ; α, β) ∝ λ^(α+n−1)·exp(−λ(β + n·x̄)).
Now the posterior density p has been specified up to a missing normalizing constant. Since it has the form of a gamma pdf, this can easily be filled in, and one obtains:
p(λ | x) = Gamma(λ; α + n, β + n·x̄).
Here the hyperparameter α can be interpreted as the number of prior observations, and β as the sum of the prior observations.
The posterior mean here is:
E[λ | x] = (α + n)/(β + n·x̄).
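The conjugate update is a one-line computation. A Python sketch with hypothetical prior hyperparameters and invented data:

```python
def gamma_posterior(alpha, beta, samples):
    """Conjugate update for an exponential likelihood with a
    Gamma(alpha, beta) prior on the rate: the posterior is
    Gamma(alpha + n, beta + sum of the observations)."""
    return alpha + len(samples), beta + sum(samples)

def posterior_mean(alpha, beta):
    return alpha / beta

# hypothetical prior: "2 prior observations summing to 1.0"
a0, b0 = 2.0, 1.0
data = [0.5, 0.2, 0.9, 0.4]  # invented observations
a1, b1 = gamma_posterior(a0, b0, data)  # a1 = 6.0, b1 close to 3.0
mean = posterior_mean(a1, b1)           # close to 2.0
```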
Occurrence and applications
Occurrence of events
The exponential distribution occurs naturally when describing the lengths of the inter-arrival times in a homogeneous Poisson process.
The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a discrete process to change state. In contrast, the exponential distribution describes the time for a continuous process to change state.
In real-world scenarios, the assumption of a constant rate (or probability per unit time) is rarely satisfied. For example, the rate of incoming phone calls differs according to the time of day. But if we focus on a time interval during which the rate is roughly constant, such as from 2 to 4 p.m. during work days, the exponential distribution can be used as a good approximate model for the time until the next phone call arrives. Similar caveats apply to the following examples which yield approximately exponentially distributed variables:
The time until a radioactive particle decays, or the time between clicks of a Geiger counter
The time between receiving one telephone call and the next
The time until default (on payment to company debt holders) in reduced-form credit risk modeling
Exponential variables can also be used to model situations where certain events occur with a constant probability per unit length, such as the distance between mutations on a DNA strand, or between roadkills on a given road.
In queuing theory, the service times of agents in a system (e.g. how long it takes for a bank teller etc. to serve a customer) are often modeled as exponentially distributed variables. (The arrival of customers for instance is also modeled by the Poisson distribution if the arrivals are independent and distributed identically.) The length of a process that can be thought of as a sequence of several independent tasks follows the Erlang distribution (which is the distribution of the sum of several independent exponentially distributed variables).
Reliability theory and reliability engineering also make extensive use of the exponential distribution. Because of the memoryless property of this distribution, it is well-suited to model the constant hazard rate portion of the bathtub curve used in reliability theory. It is also very convenient because it is so easy to add failure rates in a reliability model. The exponential distribution is however not appropriate to model the overall lifetime of organisms or technical devices, because the "failure rates" here are not constant: more failures occur for very young and for very old systems.
In physics, if you observe a gas at a fixed temperature and pressure in a uniform gravitational field, the heights of the various molecules also follow an approximate exponential distribution, known as the Barometric formula. This is a consequence of the entropy property mentioned below.
In hydrology, the exponential distribution is used to analyze extreme values of such variables as monthly and annual maximum values of daily rainfall and river discharge volumes.
The blue picture illustrates an example of fitting the exponential distribution to ranked annual maximum one-day rainfalls, also showing the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.
In operating-rooms management, the exponential distribution models the distribution of surgery durations for a category of surgeries with no typical work content (as in an emergency room, which encompasses all types of surgeries).
Prediction
Having observed a sample of n data points from an unknown exponential distribution a common task is to use these samples to make predictions about future data from the same source. A common predictive distribution over future samples is the so-called plug-in distribution, formed by plugging a suitable estimate for the rate parameter λ into the exponential density function. A common choice of estimate is the one provided by the principle of maximum likelihood, and using this yields the predictive density over a future sample xn+1, conditioned on the observed samples x = (x1, ..., xn), given by
p_ML(xn+1 | x1, ..., xn) = (1/x̄)·exp(−xn+1/x̄).
The Bayesian approach provides a predictive distribution which takes into account the uncertainty of the estimated parameter, although this may depend crucially on the choice of prior.
A predictive distribution free of the issues of choosing priors that arise under the subjective Bayesian approach is
p(xn+1 | x1, ..., xn) = (n^(n+1)·x̄^n) / ((n·x̄ + xn+1)^(n+1)),
which can be considered as
a frequentist confidence distribution, obtained from the distribution of the pivotal quantity xn+1/x̄;
a profile predictive likelihood, obtained by eliminating the parameter λ from the joint likelihood of xn+1 and λ by maximization;
an objective Bayesian predictive posterior distribution, obtained using the non-informative Jeffreys prior 1/λ;
the Conditional Normalized Maximum Likelihood (CNML) predictive distribution, from information theoretic considerations.
The accuracy of a predictive distribution may be measured using the distance or divergence between the true exponential distribution with rate parameter, λ0, and the predictive distribution based on the sample x. The Kullback–Leibler divergence is a commonly used, parameterisation free measure of the difference between two distributions. Letting Δ(λ0||p) denote the Kullback–Leibler divergence between an exponential with rate parameter λ0 and a predictive distribution p it can be shown that
Δ(λ0 || p_ML) = ψ(n) + 1/(n − 1) − log(n)
Δ(λ0 || p_CNML) = ψ(n) + 1/n − log(n)
where the expectation is taken with respect to the exponential distribution with rate parameter λ0, and ψ(·) is the digamma function. It is clear that the CNML predictive distribution is strictly superior to the maximum likelihood plug-in distribution in terms of average Kullback–Leibler divergence for all sample sizes n > 0.
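The two predictive densities can be compared numerically. The Python sketch below (with an invented five-point sample, not from the article) evaluates the plug-in density and the CNML density, and checks by crude numerical integration that each is a proper density:

```python
import math

def plug_in_density(x_next, samples):
    """Maximum-likelihood plug-in predictive density."""
    lam_hat = len(samples) / sum(samples)
    return lam_hat * math.exp(-lam_hat * x_next)

def cnml_density(x_next, samples):
    """CNML / Jeffreys-posterior predictive density."""
    n = len(samples)
    xbar = sum(samples) / n
    return n ** (n + 1) * xbar ** n / (n * xbar + x_next) ** (n + 1)

samples = [0.8, 1.1, 0.5, 2.0, 0.9]  # invented sample
step = 0.001
mass_ml = sum(plug_in_density(k * step, samples) * step for k in range(200_000))
mass_cnml = sum(cnml_density(k * step, samples) * step for k in range(200_000))
# both Riemann sums come out close to 1
```

The CNML density has a heavier (power-law) tail than the plug-in exponential, reflecting the remaining uncertainty about the rate.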
Random variate generation
A conceptually very simple method for generating exponential variates is based on inverse transform sampling: Given a random variate U drawn from the uniform distribution on the unit interval (0, 1), the variate
T = F⁻¹(U)
has an exponential distribution, where F⁻¹ is the quantile function, defined by
F⁻¹(p) = −ln(1 − p)/λ.
Moreover, if U is uniform on (0, 1), then so is 1 − U. This means one can generate exponential variates as follows:
T = −ln(U)/λ.
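A Python sketch of this inverse-transform recipe, using the standard library's uniform generator (the rate 0.5 is an arbitrary illustrative choice):

```python
import math
import random

def exp_variate(lam):
    """Inverse transform sampling: T = -ln(U) / lam."""
    u = 1.0 - random.random()  # uniform on (0, 1], so log(u) is finite
    return -math.log(u) / lam

random.seed(42)
lam = 0.5  # arbitrary illustrative rate
draws = [exp_variate(lam) for _ in range(100_000)]
sample_mean = sum(draws) / len(draws)  # should approach 1/lam = 2
```

(Python's `random.expovariate` implements the same idea directly.)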
Other methods for generating exponential variates are discussed by Knuth and Devroye.
A fast method for generating a set of ready-ordered exponential variates without using a sorting routine is also available.
See also
Dead time – an application of exponential distribution to particle detector analysis.
Laplace distribution, or the "double exponential distribution".
Relationships among probability distributions
Marshall–Olkin exponential distribution
45,938 | https://en.wikipedia.org/wiki/General%20equilibrium%20theory | In economics, general equilibrium theory attempts to explain the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that the interaction of demand and supply will result in an overall general equilibrium. General equilibrium theory contrasts with the theory of partial equilibrium, which analyzes a specific part of an economy while its other factors are held constant.
General equilibrium theory both studies economies using the model of equilibrium pricing and seeks to determine in which circumstances the assumptions of general equilibrium will hold. The theory dates to the 1870s, particularly the work of French economist Léon Walras in his pioneering 1874 work Elements of Pure Economics. The theory reached its modern form with the work of Lionel W. McKenzie (Walrasian theory), Kenneth Arrow and Gérard Debreu (Hicksian theory) in the 1950s.
Overview
Broadly speaking, general equilibrium tries to give an understanding of the whole economy using a "bottom-up" approach, starting with individual markets and agents. Therefore, general equilibrium theory has traditionally been classified as part of microeconomics. The difference is not as clear as it used to be, since much of modern macroeconomics has emphasized microeconomic foundations, and has constructed general equilibrium models of macroeconomic fluctuations. General equilibrium macroeconomic models usually have a simplified structure that only incorporates a few markets, like a "goods market" and a "financial market". In contrast, general equilibrium models in the microeconomic tradition typically involve a multitude of different goods markets. They are usually complex and require computers to calculate numerical solutions.
In a market system the prices and production of all goods, including the price of money and interest, are interrelated. A change in the price of one good, say bread, may affect another price, such as bakers' wages. If bakers don't differ in tastes from others, the demand for bread might be affected by a change in bakers' wages, with a consequent effect on the price of bread. Calculating the equilibrium price of just one good, in theory, requires an analysis that accounts for all of the millions of different goods that are available. It is often assumed that agents are price takers, and under that assumption two common notions of equilibrium exist: Walrasian, or competitive equilibrium, and its generalization: a price equilibrium with transfers.
Walrasian equilibrium
The first attempt in neoclassical economics to model prices for a whole economy was made by Léon Walras. Walras' Elements of Pure Economics provides a succession of models, each taking into account more aspects of a real economy (two commodities, many commodities, production, growth, money). Some think Walras was unsuccessful and that the later models in this series are inconsistent.
In particular, Walras's model was a long-run model in which prices of capital goods are the same whether they appear as inputs or outputs and in which the same rate of profits is earned in all lines of industry. This is inconsistent with the quantities of capital goods being taken as data. But when Walras introduced capital goods in his later models, he took their quantities as given, in arbitrary ratios. (In contrast, Kenneth Arrow and Gérard Debreu continued to take the initial quantities of capital goods as given, but adopted a short run model in which the prices of capital goods vary with time and the own rate of interest varies across capital goods.)
Walras was the first to lay down a research program widely followed by 20th-century economists. In particular, the Walrasian agenda included the investigation of when equilibria are unique and stable; Walras' Lesson 7 shows that neither uniqueness, nor stability, nor even existence of an equilibrium is guaranteed. Walras also proposed a dynamic process by which general equilibrium might be reached, that of the tâtonnement or groping process.
The tâtonnement process is a model for investigating stability of equilibria. Prices are announced (perhaps by an "auctioneer"), and agents state how much of each good they would like to offer (supply) or purchase (demand). No transactions and no production take place at disequilibrium prices. Instead, prices are lowered for goods with positive prices and excess supply. Prices are raised for goods with excess demand. The question for the mathematician is under what conditions such a process will terminate in equilibrium where demand equates to supply for goods with positive prices and demand does not exceed supply for goods with a price of zero. Walras was not able to provide a definitive answer to this question (see Unresolved Problems in General Equilibrium below).
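The tâtonnement process can be illustrated with a deliberately tiny example. The Python sketch below is a hypothetical two-good, two-consumer Cobb–Douglas exchange economy (the expenditure shares 0.3 and 0.6 and all other numbers are invented for illustration): the auctioneer raises the price of the non-numeraire good when its excess demand is positive and lowers it otherwise.

```python
def excess_demand_good1(p1, a=0.3, b=0.6):
    """Excess demand for good 1. Consumer 1 owns one unit of good 1 and
    spends share a of wealth on it; consumer 2 owns one unit of good 2
    (the numeraire, price 1) and spends share b on good 1."""
    demand = a * p1 / p1 + b * 1.0 / p1  # Cobb-Douglas share-of-wealth demands
    return demand - 1.0                  # total supply of good 1 is 1

def tatonnement(p1=2.0, eta=0.2, tol=1e-10, max_iter=10_000):
    """Raise the price when excess demand is positive, lower it otherwise."""
    for _ in range(max_iter):
        z = excess_demand_good1(p1)
        if abs(z) < tol:
            break
        p1 += eta * z
    return p1

p_star = tatonnement()  # analytic equilibrium price: b / (1 - a) = 6/7
```

In this toy economy excess demand is decreasing in the price, so the process converges; as the surrounding text notes, convergence is not guaranteed in general.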
Marshall and Sraffa
In partial equilibrium analysis, the determination of the price of a good is simplified by just looking at the price of one good, and assuming that the prices of all other goods remain constant. The Marshallian theory of supply and demand is an example of partial equilibrium analysis. Partial equilibrium analysis is adequate when the first-order effects of a shift in the demand curve do not shift the supply curve. Anglo-American economists became more interested in general equilibrium in the late 1920s and 1930s after Piero Sraffa's demonstration that Marshallian economists cannot account for the forces thought to account for the upward-slope of the supply curve for a consumer good.
If an industry uses little of a factor of production, a small increase in the output of that industry will not bid the price of that factor up. To a first-order approximation, firms in the industry will experience constant costs, and the industry supply curves will not slope up. If an industry uses an appreciable amount of that factor of production, an increase in the output of that industry will exhibit increasing costs. But such a factor is likely to be used in substitutes for the industry's product, and an increased price of that factor will have effects on the supply of those substitutes. Consequently, Sraffa argued, the first-order effects of a shift in the demand curve of the original industry under these assumptions includes a shift in the supply curve of substitutes for that industry's product, and consequent shifts in the original industry's supply curve. General equilibrium is designed to investigate such interactions between markets.
Continental European economists made important advances in the 1930s. Walras' arguments for the existence of general equilibrium often were based on the counting of equations and variables. Such arguments are inadequate for non-linear systems of equations and do not imply that equilibrium prices and quantities cannot be negative, a meaningless solution for his models. The replacement of certain equations by inequalities and the use of more rigorous mathematics improved general equilibrium modeling.
Modern concept of general equilibrium in economics
The modern conception of general equilibrium is provided by the Arrow–Debreu–McKenzie model, developed jointly by Kenneth Arrow, Gérard Debreu, and Lionel W. McKenzie in the 1950s. Debreu presents this model in Theory of Value (1959) as an axiomatic model, following the style of mathematics promoted by Nicolas Bourbaki. In such an approach, the interpretation of the terms in the theory (e.g., goods, prices) are not fixed by the axioms.
Three important interpretations of the terms of the theory have been often cited. First, suppose commodities are distinguished by the location where they are delivered. Then the Arrow-Debreu model is a spatial model of, for example, international trade.
Second, suppose commodities are distinguished by when they are delivered. That is, suppose all markets equilibrate at some initial instant of time. Agents in the model purchase and sell contracts, where a contract specifies, for example, a good to be delivered and the date at which it is to be delivered. The Arrow–Debreu model of intertemporal equilibrium contains forward markets for all goods at all dates. No markets exist at any future dates.
Third, suppose contracts specify states of nature which affect whether a commodity is to be delivered: "A contract for the transfer of a commodity now specifies, in addition to its physical properties, its location and its date, an event on the occurrence of which the transfer is conditional. This new definition of a commodity allows one to obtain a theory of [risk] free from any probability concept..."
These interpretations can be combined. So the complete Arrow–Debreu model can be said to apply when goods are identified by when they are to be delivered, where they are to be delivered and under what circumstances they are to be delivered, as well as their intrinsic nature. So there would be a complete set of prices for contracts such as "1 ton of Winter red wheat, delivered on 3rd of January in Minneapolis, if there is a hurricane in Florida during December". A general equilibrium model with complete markets of this sort seems to be a long way from describing the workings of real economies, however, its proponents argue that it is still useful as a simplified guide as to how real economies function.
Some of the recent work in general equilibrium has in fact explored the implications of incomplete markets, which is to say an intertemporal economy with uncertainty, where there do not exist sufficiently detailed contracts that would allow agents to fully allocate their consumption and resources through time. While it has been shown that such economies will generally still have an equilibrium, the outcome may no longer be Pareto optimal. The basic intuition for this result is that if consumers lack adequate means to transfer their wealth from one time period to another and the future is risky, there is nothing to necessarily tie any price ratio down to the relevant marginal rate of substitution, which is the standard requirement for Pareto optimality. Under some conditions the economy may still be constrained Pareto optimal, meaning that a central authority limited to the same type and number of contracts as the individual agents may not be able to improve upon the outcome; what is needed is the introduction of a full set of possible contracts. Hence, one implication of the theory of incomplete markets is that inefficiency may be a result of underdeveloped financial institutions or credit constraints faced by some members of the public. Research still continues in this area.
Properties and characterization of general equilibrium
Basic questions in general equilibrium analysis are concerned with the conditions under which an equilibrium will be efficient, which efficient equilibria can be achieved, when an equilibrium is guaranteed to exist and when the equilibrium will be unique and stable.
First Fundamental Theorem of Welfare Economics
The First Fundamental Welfare Theorem asserts that market equilibria are Pareto efficient. In other words, the allocation of goods in the equilibria is such that there is no reallocation which would leave a consumer better off without leaving another consumer worse off. In a pure exchange economy, a sufficient condition for the first welfare theorem to hold is that preferences be locally nonsatiated. The first welfare theorem also holds for economies with production regardless of the properties of the production function. Implicitly, the theorem assumes complete markets and perfect information. In an economy with externalities, for example, it is possible for equilibria to arise that are not efficient.
The first welfare theorem is informative in the sense that it points to the sources of inefficiency in markets. Under the assumptions above, any market equilibrium is tautologically efficient. Therefore, when equilibria arise that are not efficient, the market system itself is not to blame, but rather some sort of market failure.
Second Fundamental Theorem of Welfare Economics
Even if every equilibrium is efficient, it may not be that every efficient allocation of resources can be part of an equilibrium. However, the second theorem states that every Pareto efficient allocation can be supported as an equilibrium by some set of prices. In other words, all that is required to reach a particular Pareto efficient outcome is a redistribution of initial endowments of the agents after which the market can be left alone to do its work. This suggests that the issues of efficiency and equity can be separated and need not involve a trade-off. The conditions for the second theorem are stronger than those for the first, as consumers' preferences and production sets now need to be convex (convexity roughly corresponds to the idea of diminishing marginal rates of substitution i.e. "the average of two equally good bundles is better than either of the two bundles").
Existence
Even though every equilibrium is efficient, neither of the above two theorems say anything about the equilibrium existing in the first place. To guarantee that an equilibrium exists, it suffices that consumer preferences be strictly convex. With enough consumers, the convexity assumption can be relaxed both for existence and the second welfare theorem. Similarly, but less plausibly, convex feasible production sets suffice for existence; convexity excludes economies of scale.
Proofs of the existence of equilibrium traditionally rely on fixed-point theorems such as Brouwer fixed-point theorem for functions (or, more generally, the Kakutani fixed-point theorem for set-valued functions). See Competitive equilibrium#Existence of a competitive equilibrium. The proof was first due to Lionel McKenzie, and Kenneth Arrow and Gérard Debreu. In fact, the converse also holds, according to Uzawa's derivation of Brouwer's fixed point theorem from Walras's law. Following Uzawa's theorem, many mathematical economists consider proving existence a deeper result than proving the two Fundamental Theorems.
Another method of proof of existence, global analysis, uses Sard's lemma and the Baire category theorem; this method was pioneered by Gérard Debreu and Stephen Smale.
Nonconvexities in large economies
Starr (1969) applied the Shapley–Folkman–Starr theorem to prove that even without convex preferences there exists an approximate equilibrium. The Shapley–Folkman–Starr results bound the distance from an "approximate" economic equilibrium to an equilibrium of a "convexified" economy, when the number of agents exceeds the dimension of the goods. Following Starr's paper, the Shapley–Folkman–Starr results were "much exploited in the theoretical literature", according to Guesnerie, who wrote the following:
some key results obtained under the convexity assumption remain (approximately) relevant in circumstances where convexity fails. For example, in economies with a large consumption side, nonconvexities in preferences do not destroy the standard results of, say Debreu's theory of value. In the same way, if indivisibilities in the production sector are small with respect to the size of the economy, [ . . . ] then standard results are affected in only a minor way.
To this text, Guesnerie appended the following footnote:
The derivation of these results in general form has been one of the major achievements of postwar economic theory.
In particular, the Shapley-Folkman-Starr results were incorporated in the theory of general economic equilibria and in the theory of market failures and of public economics.
Uniqueness
Although generally (assuming convexity) an equilibrium will exist and will be efficient, the conditions under which it will be unique are much stronger. The Sonnenschein–Mantel–Debreu theorem, proven in the 1970s, states that the aggregate excess demand function inherits only certain properties of individuals' demand functions, and that these (continuity, homogeneity of degree zero, Walras' law and boundary behavior when prices are near zero) are the only real restrictions one can expect from an aggregate excess demand function. Any such function can represent the excess demand of an economy populated with rational utility-maximizing individuals.
There has been much research on conditions when the equilibrium will be unique, or which at least will limit the number of equilibria. One result states that under mild assumptions the number of equilibria will be finite (see regular economy) and odd (see index theorem). Furthermore, if an economy as a whole, as characterized by an aggregate excess demand function, has the revealed preference property (which is a much stronger condition than revealed preferences for a single individual) or the gross substitute property then likewise the equilibrium will be unique. All methods of establishing uniqueness can be thought of as establishing that each equilibrium has the same positive local index, in which case by the index theorem there can be but one such equilibrium.
Determinacy
Given that equilibria may not be unique, it is of some interest to ask whether any particular equilibrium is at least locally unique. If so, then comparative statics can be applied as long as the shocks to the system are not too large. As stated above, in a regular economy equilibria will be finite, hence locally unique. One reassuring result, due to Debreu, is that "most" economies are regular.
Work by Michael Mandler (1999) has challenged this claim. The Arrow–Debreu–McKenzie model is neutral between models of production functions as continuously differentiable and as formed from (linear combinations of) fixed coefficient processes. Mandler accepts that, under either model of production, the initial endowments will not be consistent with a continuum of equilibria, except for a set of Lebesgue measure zero. However, endowments change with time in the model and this evolution of endowments is determined by the decisions of agents (e.g., firms) in the model. Agents in the model have an interest in equilibria being indeterminate:
Indeterminacy, moreover, is not just a technical nuisance; it undermines the price-taking assumption of competitive models. Since arbitrary small manipulations of factor supplies can dramatically increase a factor's price, factor owners will not take prices to be parametric.
When technology is modeled by (linear combinations) of fixed coefficient processes, optimizing agents will drive endowments to be such that a continuum of equilibria exist:
The endowments where indeterminacy occurs systematically arise through time and therefore cannot be dismissed; the Arrow-Debreu-McKenzie model is thus fully subject to the dilemmas of factor price theory.
Some have questioned the practical applicability of the general equilibrium approach based on the possibility of non-uniqueness of equilibria.
Stability
In a typical general equilibrium model the prices that prevail "when the dust settles" are simply those that coordinate the demands of various consumers for various goods. But this raises the question of how these prices and allocations have been arrived at, and whether any (temporary) shock to the economy will cause it to converge back to the same outcome that prevailed before the shock. This is the question of stability of the equilibrium, and it can be readily seen that it is related to the question of uniqueness. If there are multiple equilibria, then some of them will be unstable. Then, if an equilibrium is unstable and there is a shock, the economy will wind up at a different set of allocations and prices once the convergence process terminates. However, stability depends not only on the number of equilibria but also on the type of the process that guides price changes (for a specific type of price adjustment process see Walrasian auction). Consequently, some researchers have focused on plausible adjustment processes that guarantee system stability, i.e., that guarantee convergence of prices and allocations to some equilibrium. When more than one stable equilibrium exists, where one ends up will depend on where one begins. The theorems that have been most conclusive regarding the stability of a typical general equilibrium model are closely related to local stability.
Unresolved problems in general equilibrium
Research building on the Arrow–Debreu–McKenzie model has revealed some problems with the model. The Sonnenschein–Mantel–Debreu results show that, essentially, any restrictions on the shape of excess demand functions are stringent. Some think this implies that the Arrow–Debreu model lacks empirical content. Therefore, an unsolved problem is
Are Arrow–Debreu–McKenzie equilibria stable and unique?
A model organized around the tâtonnement process has been said to be a model of a centrally planned economy, not a decentralized market economy. Some research has tried to develop general equilibrium models with other processes. In particular, some economists have developed models in which agents can trade at out-of-equilibrium prices and such trades can affect the equilibria to which the economy tends. Particularly noteworthy are the Hahn process, the Edgeworth process and the Fisher process.
The data determining Arrow-Debreu equilibria include initial endowments of capital goods. If production and trade occur out of equilibrium, these endowments will be changed, further complicating the picture.
In a real economy, however, trading, as well as production and consumption, goes on out of equilibrium. It follows that, in the course of convergence to equilibrium (assuming that occurs), endowments change. In turn this changes the set of equilibria. Put more succinctly, the set of equilibria is path dependent... [This path dependence] makes the calculation of equilibria corresponding to the initial state of the system essentially irrelevant. What matters is the equilibrium that the economy will reach from given initial endowments, not the equilibrium that it would have been in, given initial endowments, had prices happened to be just right. – (Franklin Fisher).
The Arrow–Debreu model in which all trade occurs in futures contracts at time zero requires a very large number of markets to exist. It is equivalent under complete markets to a sequential equilibrium concept in which spot markets for goods and assets open at each date-state event (they are not equivalent under incomplete markets); market clearing then requires that the entire sequence of prices clears all markets at all times. A generalization of the sequential market arrangement is the temporary equilibrium structure, where market clearing at a point in time is conditional on expectations of future prices which need not be market clearing ones.
Although the Arrow–Debreu–McKenzie model is set out in terms of some arbitrary numéraire, the model does not encompass money. Frank Hahn, for example, has investigated whether general equilibrium models can be developed in which money enters in some essential way. One of the essential questions he introduces, often referred to as the Hahn's problem is: "Can one construct an equilibrium where money has value?" The goal is to find models in which existence of money can alter the equilibrium solutions, perhaps because the initial position of agents depends on monetary prices.
Some critics of general equilibrium modeling contend that much research in these models constitutes exercises in pure mathematics with no connection to actual economies. In a 1979 article, Nicholas Georgescu-Roegen complains: "There are endeavors that now pass for the most desirable kind of economic contributions although they are just plain mathematical exercises, not only without any economic substance but also without any mathematical value." He cites as an example a paper that assumes more traders in existence than there are points in the set of real numbers.
Although modern models in general equilibrium theory demonstrate that under certain circumstances prices will indeed converge to equilibria, critics hold that the assumptions necessary for these results are extremely strong. As well as stringent restrictions on excess demand functions, the necessary assumptions include perfect rationality of individuals; complete information about all prices both now and in the future; and the conditions necessary for perfect competition. However, some results from experimental economics suggest that even in circumstances where there are few, imperfectly informed agents, the resulting prices and allocations may wind up resembling those of a perfectly competitive market (although certainly not a stable general equilibrium in all markets).
Frank Hahn defends general equilibrium modeling on the grounds that it serves a negative function: general equilibrium models show what the economy would have to be like for an unregulated economy to be Pareto efficient.
Computing general equilibrium
Until the 1970s general equilibrium analysis remained theoretical. With advances in computing power and the development of input–output tables, it became possible to model national economies, or even the world economy, and attempts were made to solve for general equilibrium prices and quantities empirically.
Applied general equilibrium (AGE) models were pioneered by Herbert Scarf in 1967, and offered a method for solving the Arrow–Debreu general equilibrium system numerically. This was first implemented by John Shoven and John Whalley (students of Scarf at Yale) in 1972 and 1973, and AGE models remained a popular method through the 1970s. In the 1980s, however, AGE models faded from popularity due to their inability to provide a precise solution and their high cost of computation.
Computable general equilibrium (CGE) models surpassed and replaced AGE models in the mid-1980s, as the CGE model was able to provide relatively quick and large computable models for a whole economy, and was the preferred method of governments and the World Bank. CGE models are heavily used today, and while "AGE" and "CGE" are used interchangeably in the literature, Scarf-type AGE models have not been constructed since the mid-1980s, and the current CGE literature is not based on Arrow–Debreu and general equilibrium theory as discussed in this article. CGE models, and what are today referred to as AGE models, are based on static, simultaneously solved, macro balancing equations (from the standard Keynesian macro model), giving a precise and explicitly computable result.
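The numerical idea behind such models can be illustrated with a toy example (my own sketch, not Scarf's algorithm or any production CGE system): in a two-consumer, two-good Cobb-Douglas exchange economy, the market-clearing relative price can be found by bisection on the aggregate excess-demand function, since by Walras's law clearing one market clears the other.

```python
# Toy illustration: numerically finding the market-clearing relative price
# in a two-good, two-consumer Cobb-Douglas exchange economy.

# Each consumer has utility x**a * y**(1-a) and an endowment (ex, ey).
consumers = [
    {"a": 0.6, "ex": 1.0, "ey": 0.0},
    {"a": 0.3, "ex": 0.0, "ey": 1.0},
]

def excess_demand_x(p):
    """Aggregate excess demand for good x at relative price p (good y is numeraire)."""
    # Cobb-Douglas demand: spend fraction a of wealth p*ex + ey on good x.
    demand = sum(c["a"] * (p * c["ex"] + c["ey"]) / p for c in consumers)
    supply = sum(c["ex"] for c in consumers)
    return demand - supply

# Bisection: excess demand is positive for very small p, negative for very large p.
lo, hi = 1e-6, 1e6
for _ in range(200):
    mid = (lo + hi) / 2
    if excess_demand_x(mid) > 0:
        lo = mid
    else:
        hi = mid

p_star = (lo + hi) / 2
print(round(p_star, 6))  # equilibrium relative price; here 0.75
# By Walras's law, clearing the x market clears the y market as well.
```

The endowments and preference parameters here are arbitrary illustrative values; real AGE/CGE systems solve for many goods and agents simultaneously rather than by one-dimensional bisection.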
Other schools
General equilibrium theory is a central point of contention and influence between the neoclassical school and other schools of economic thought, and different schools have varied views on general equilibrium theory. Some, such as the Keynesian and Post-Keynesian schools, strongly reject general equilibrium theory as "misleading" and "useless". Disequilibrium macroeconomics and different non-equilibrium approaches were developed as alternatives. Other schools, such as new classical macroeconomics, developed from general equilibrium theory.
Keynesian and Post-Keynesian
Keynesian and Post-Keynesian economists, and their underconsumptionist predecessors, criticize general equilibrium theory specifically, and as part of criticisms of neoclassical economics generally. They argue that general equilibrium theory is neither accurate nor useful: economies are not in equilibrium, equilibrium may be slow and painful to achieve, and modeling by equilibrium is "misleading", so the resulting theory is not a useful guide, particularly for understanding economic crises.
Robert Clower and others have argued for a reformulation of theory toward disequilibrium analysis, to incorporate how monetary exchange fundamentally alters an economy, so that it cannot be represented as though it were a barter system.
New classical macroeconomics
While general equilibrium theory and neoclassical economics generally were originally microeconomic theories, new classical macroeconomics builds a macroeconomic theory on these bases. In new classical models, the macroeconomy is assumed to be at its unique equilibrium, with full employment and potential output, and this equilibrium is assumed always to have been achieved via price and wage adjustment (market clearing). The best-known such model is real business-cycle theory, in which business cycles are considered to be largely due to changes in the real economy; unemployment is not due to the failure of the market to achieve potential output, but to equilibrium potential output having fallen and equilibrium unemployment having risen.
Socialist economics
Within socialist economics, a sustained critique of general equilibrium theory (and neoclassical economics generally) is given in Anti-Equilibrium, based on the experiences of János Kornai with the failures of Communist central planning, although Michael Albert and Robin Hahnel later based their Parecon model on the same theory.
New structural economics
The structural equilibrium model is a matrix-form computable general equilibrium model in new structural economics.
This model is an extension of John von Neumann's general equilibrium model (see Computable general equilibrium for details). Its computation can be performed using the R package GE.
The structural equilibrium model can be used for intertemporal equilibrium analysis, where time is treated as a label that differentiates between types of commodities and firms, meaning commodities are distinguished by when they are delivered and firms by when they produce. The model can include factors such as taxes, money, endogenous production functions, and endogenous institutions. The structural equilibrium model can include excess tax burdens, meaning that the equilibrium in the model may not be Pareto optimal. When production functions and/or economic institutions are treated as endogenous variables, the general equilibrium is referred to as a structural equilibrium.
See also
General equilibrium theorists (category)
Cobweb model
Decision theory
Game theory
Mechanism design
Notes
Further reading
Microeconomic theories
Welfare economics
Game theory
Swastika
The swastika (卐 or 卍) is a symbol used in various Eurasian religions and cultures, and it is also seen in some African and American ones. In the Western world, it is more widely recognized as a symbol of the German Nazi Party, which appropriated it for its party insignia starting in the early 20th century. The appropriation continues with its use by neo-Nazis around the world. The swastika was and continues to be used as a symbol of divinity and spirituality in Indian religions, including Hinduism, Buddhism, and Jainism. It generally takes the form of a cross, the arms of which are of equal length and perpendicular to the adjacent arms, each bent midway at a right angle.
The word swastika comes from , meaning 'conducive to well-being'. In Hinduism, the right-facing symbol (clockwise) () is called , symbolizing ('sun'), prosperity and good luck, while the left-facing symbol (counter-clockwise) () is called , symbolising night or tantric aspects of Kali. In Jain symbolism, it is part of the Jain flag, representing Suparshvanatha, the seventh of 24 Tirthankaras (spiritual teachers and saviours), while in Buddhist symbolism it represents the auspicious footprints of the Buddha. In the different Indo-European traditions, the swastika symbolises fire, lightning bolts, and the sun. The symbol is found in the archaeological remains of the Indus Valley civilisation and Samarra, as well as in early Byzantine and Christian artwork.
Although used for the first time as a symbol of international antisemitism by far-right Romanian politician A. C. Cuza prior to World War I, it was a symbol of auspiciousness and good luck for most of the Western world until the 1930s, when the German Nazi Party adopted the swastika as an emblem of the Aryan race. As a result of World War II and the Holocaust, in the West it continues to be strongly associated with Nazism, antisemitism, white supremacism, or simply evil. As a consequence, its use in some countries, including Germany, is prohibited by law. However, the swastika remains a symbol of good luck and prosperity in Hindu, Buddhist and Jain countries such as Nepal, India, Thailand, Mongolia, Sri Lanka, China and Japan, and carries various other meanings for peoples around the world, such as the Akan, Hopi, Navajo, and Tlingit peoples. It is also commonly used in Hindu marriage ceremonies and Dipavali celebrations.
Etymology and nomenclature
The word swastika is derived from the Sanskrit root , which is composed of 'good, well' and 'is; it is; there is'. The word occurs frequently in the Vedas as well as in classical literature, meaning 'health, luck, success, prosperity', and it was commonly used as a greeting. The final is a common suffix that could have multiple meanings.
According to Monier-Williams, a majority of scholars consider the swastika to originally be a solar symbol. The sign implies well-being, something fortunate, lucky, or auspicious. It is alternatively spelled in contemporary texts as svastika, and other spellings were occasionally used in the 19th and early 20th century, such as suastika. It was derived from the Sanskrit term (Devanagari ), which transliterates to under the commonly used IAST transliteration system, but is pronounced closer to swastika when letters are used with their English values.
The earliest known use of the word swastika is in Pāṇini's Aṣṭādhyāyī, which uses it to explain one of the Sanskrit grammar rules, in the context of a type of identifying mark on a cow's ear. Most scholarship suggests that Pāṇini lived in or before the 4th century BCE, possibly in 6th or 5th century BCE.
An important early use of the word swastika in a European text was in 1871 with the publications of Heinrich Schliemann, who discovered more than 1,800 ancient samples of swastika symbols and variants thereof while digging the Hisarlik mound near the Aegean Sea coast for the history of Troy. Schliemann linked his findings to the Sanskrit .
By the 19th century, the term swastika was adopted into the English lexicon, replacing the previous gammadion from Greek . In 1878, Irish scholar Charles Graves used swastika as the common English name for the symbol, after defining it as equivalent to the French term a cross with arms shaped like the Greek letter gamma (Γ). Shortly thereafter, British antiquarians Edward Thomas and Robert Sewell separately published their studies about the symbol, using swastika as the common English term.
The concept of a "reversed" swastika was probably first made among European scholars by Eugène Burnouf in 1852 and taken up by Schliemann in Ilios (1880), based on a letter from Max Müller that quotes Burnouf. The term is used in the sense of 'backward swastika' by Eugène Goblet d'Alviella (1894): "In India it [the gammadion] bears the name of , when its arms are bent towards the right, and when they are turned in the other direction."
Other names for the symbol include:
(Greek: ) or cross gammadion (; French: ), as each arm resembles the Greek letter Γ ()
hooked cross (German: ), angled cross (), or crooked cross ()
cross cramponned, cramponnée, or cramponny in heraldry, as each arm resembles a crampon or angle-iron ()
fylfot, chiefly in heraldry and architecture
(Greek: ), literally meaning 'four-legged', especially when composed of four conjoined legs (compare triskelion/triskele [Greek: ])
(Latvian for 'fire cross', 'cross of fire'; other names include ('cross of thunder', 'thunder cross'), cross of Perun or of Perkūnas, cross of branches, and cross of Laima)
whirling logs (Navajo): can denote abundance, prosperity, healing, and luck
In various European languages, it is known as the fylfot, , , or (a term in Anglo-Norman heraldry); German: ; French: ; Italian: ; Latvian: . In Mongolian it is called () and mainly used in seals. In Chinese it is called 卍字 (), pronounced in Japanese, (만자) in Korean and or in Vietnamese. In Balti/Tibetan language it is called .
Appearance
All swastikas are bent crosses based on a chiral symmetry, but they appear with different geometric details: as compact crosses with short legs, as crosses with large arms and as motifs in a pattern of unbroken lines. Chirality describes an absence of reflective symmetry, with the existence of two versions that are mirror images of each other. The mirror-image forms are typically described as left-facing or left-hand (卍) and right-facing or right-hand (卐).
The compact swastika can be seen as a chiral irregular icosagon (20-sided polygon) with fourfold (90°) rotational symmetry. Such a swastika proportioned on a 5×5 square grid and with the broken portions of its legs shortened by one unit can tile the plane by translation alone. The main Nazi flag swastika used a 5×5 diagonal grid, but with the legs unshortened.
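The grid construction can be sketched in a few lines of Python (my own illustration, not taken from any source on the symbol): building the compact figure as one arm repeated under 90° rotation shows the fourfold rotational symmetry directly, and comparing it with its mirror image confirms the chirality described above.

```python
# Sketch: a compact swastika on a 5x5 grid, built from one arm repeated
# under 90-degree rotation, then checked for fourfold rotational symmetry
# and for chirality (absence of mirror symmetry).

def rot90(cells):
    """Rotate a set of (row, col) offsets about the origin by 90 degrees."""
    return {(c, -r) for r, c in cells}

def mirror(cells):
    """Reflect a set of (row, col) offsets across the vertical axis."""
    return {(r, -c) for r, c in cells}

# One arm as offsets from the centre cell: the centre, two cells upward,
# then a two-cell hook to the left.
arm = {(0, 0), (-1, 0), (-2, 0), (-2, -1), (-2, -2)}

# Union of the arm under all four 90-degree rotations.
swastika = set()
cells = arm
for _ in range(4):
    swastika |= cells
    cells = rot90(cells)

# Fourfold rotational symmetry holds by construction...
assert rot90(swastika) == swastika
# ...but the mirror image is a different cell set: the figure is chiral.
assert mirror(swastika) != swastika

# Render on the 5x5 grid (rows and columns -2..2).
for r in range(-2, 3):
    print("".join("X" if (r, c) in swastika else "." for c in range(-2, 3)))
```

The exact arm shape here is one plausible choice; other compact variants (longer hooks, shortened legs as in the tiling described above) follow the same rotational construction.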
Written characters
The swastika was adopted as a standard character in Chinese, "" (), and as such entered various other East Asian languages that use Chinese script. In Japanese the symbol is called or .
The swastika is included in the Unicode character sets of two languages. In the Chinese block it is U+534D 卍 (left-facing) and U+5350 for the swastika 卐 (right-facing); the latter has a mapping in the original Big5 character set, but the former does not (although it is in Big5+). In Unicode 5.2, two swastika symbols and two sauwastika symbols were added to the Tibetan block: swastikas and , and sauwastikas and .
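The CJK code points cited above can be checked with Python's standard library (a minimal sketch; the printed names are the derived Unicode names for CJK unified ideographs):

```python
# Sketch: verifying the Unicode code points the text cites for the
# CJK swastika characters, using only the standard library.
import unicodedata

left_facing = "\u534D"   # 卍
right_facing = "\u5350"  # 卐

assert ord(left_facing) == 0x534D
assert ord(right_facing) == 0x5350

# Both are CJK unified ideographs, so their formal Unicode names are
# derived from the code point, and both are East-Asian "Wide" characters.
print(unicodedata.name(left_facing))   # CJK UNIFIED IDEOGRAPH-534D
print(unicodedata.name(right_facing))  # CJK UNIFIED IDEOGRAPH-5350
print(unicodedata.east_asian_width(left_facing))  # W
```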
Origin
European uses of swastikas are often treated in conjunction with cross symbols in general, such as the sun cross of Bronze Age religion. The symbol also has a certain presence in "proto-writing" symbol systems, such as the Vinča script, which appeared during the Neolithic.
North pole
According to René Guénon, the swastika represents the north pole, and the rotational movement around a centre or immutable axis (), and only secondly it represents the Sun as a reflected function of the north pole. As such it is a symbol of life, of the vivifying role of the supreme principle of the universe, the absolute God, in relation to the cosmic order. It represents the activity (the Hellenic , the Hindu , the Chinese , 'Great One') of the principle of the universe in the formation of the world. According to Guénon, the swastika in its polar value has the same meaning of the yin and yang symbol of the Chinese tradition, and of other traditional symbols of the working of the universe, including the letters Γ (gamma) and G, symbolising the Great Architect of the Universe of Masonic thought.
According to the scholar Reza Assasi, the swastika represents the north ecliptic pole centred in ζ Draconis, with the constellation Draco as one of its beams. He argues that this symbol was later attested as the four-horse chariot of Mithra in ancient Iranian culture. The ancient Iranians believed the cosmos was pulled by four heavenly horses who revolved around a fixed centre in a clockwise direction. He suggests that this notion later flourished in Roman Mithraism, as the symbol appears in Mithraic iconography and astronomical representations.
According to the Russian archaeologist Gennady Zdanovich, who studied some of the oldest examples of the symbol in the Sintashta culture, the swastika symbolises the universe, representing the spinning constellations of the celestial north pole centred in α Ursae Minoris, specifically the Little and Big Dipper (or Chariots), or Ursa Minor and Ursa Major. Likewise, according to René Guénon, the swastika is drawn by visualising the Big Dipper/Great Bear in the four phases of revolution around the pole star.
Comet
In their 1985 book Comet, Carl Sagan and Ann Druyan argue that the appearance of a rotating comet with a four-pronged tail as early as 2000 BCE could explain why the swastika is found in the cultures of both the Old World and the New World. The Han dynasty Book of Silk (2nd century BCE) depicts such a comet with a swastika-like symbol.
Bob Kobres, in a 1992 paper, contends that the swastika-like comet on the Han-dynasty manuscript was labelled a "long tailed pheasant star" (dixing) because of its resemblance to a bird's foot or footprint. Similar comparisons had been made by J.F. Hewitt in 1907, as well as a 1908 article in Good Housekeeping. Kobres goes on to suggest an association of mythological birds and comets also outside of China.
Four winds
In Native American culture, particularly among the Pima people of Arizona, the swastika is a symbol of the four winds. Anthropologist Frank Hamilton Cushing noted that among the Pima the symbol of the four winds is made from a cross with the four curved arms (similar to a broken sun cross) and concludes "the right-angle swastika is primarily a representation of the circle of the four wind gods standing at the head of their trails, or directions."
Historical uses
Prehistory
The earliest known swastikas are from 10,000 BCE, part of "an intricate meander pattern of joined-up swastikas" found on a late paleolithic figurine of a bird, carved from mammoth ivory, found in Mezine, Ukraine. However, the age of 10,000 BCE is a conservative estimate, and the true age may be as old as 17,000 BCE. It has been suggested that this swastika may be a stylised picture of a stork in flight. As the carving was found near phallic objects, this may also support the idea that the pattern was a fertility symbol.
In the mountains of Iran, there are swastikas or spinning wheels inscribed on stone walls, which are estimated to be more than 7,000 years old. One instance is in Khorashad, Birjand, on the holy wall Lakh Mazar.
Mirror-image swastikas (clockwise and counter-clockwise) have been found on ceramic pottery in the Devetashka cave, Bulgaria, dated to 6,000 BCE.
In Asia, swastika symbols first appear in the archaeological record around 3000 BCE in the Indus Valley Civilisation. It also appears in the Bronze and Iron Age cultures around the Black Sea and the Caspian Sea. In all these cultures, swastika symbols do not appear to occupy any marked position or significance, appearing as just one form of a series of similar symbols of varying complexity. In the Zoroastrian religion of Persia, the swastika was a symbol of the revolving sun, infinity, or continuing creation. It is one of the most common symbols on Mesopotamian coins.
Some researchers put forth the hypothesis that the swastika moved westward from the Indian subcontinent to Finland, Scandinavia, the Scottish Highlands and other parts of Europe. In England, neolithic or Bronze Age stone carvings of the symbol have been found on Ilkley Moor, such as the Swastika Stone.
Swastikas have also been found on pottery in archaeological digs in Africa, in the area of Kush and on pottery at the Jebel Barkal temples, in Iron Age designs of the northern Caucasus (Koban culture), and in Neolithic China in the Majiayao culture.
Swastikas are also seen in Egypt during the Coptic period. Textile number T.231-1923 held at the V&A Museum in London includes small swastikas in its design. This piece was found at Qau-el-Kebir, near Asyut, and is dated between 300 and 600 CE.
The Tierwirbel (the German for "animal whorl" or "whirl of animals") is a characteristic motif in Bronze Age Central Asia, the Eurasian Steppe, and later also in Iron Age Scythian and European (Baltic and Germanic) culture, showing rotational symmetric arrangement of an animal motif, often four birds' heads. Even wider diffusion of this "Asiatic" theme has been proposed to the Pacific and even North America (especially Moundville).
Caucasus
In Armenia the swastika is called the "arevakhach" and "kerkhach" () and is the ancient symbol of eternity and eternal light (i.e. God). Swastikas in Armenia were found on petroglyphs from the copper age, predating the Bronze Age. During the Bronze Age it was depicted on cauldrons, belts, medallions and other items.
Swastikas can also be seen on early Medieval churches and fortresses, including the principal tower in Armenia's historical capital city of Ani. The same symbol can be found on Armenian carpets, cross-stones (khachkar) and in medieval manuscripts, as well as on modern monuments as a symbol of eternity.
Old petroglyphs of four-beam and other swastikas were recorded in Dagestan, in particular, among the Avars. According to Vakhushti of Kartli, the tribal banner of the Avar khans depicted a wolf with a standard with a double-spiral swastika.
Petroglyphs with swastikas were depicted on medieval Vainakh tower architecture (see sketches by scholar Bruno Plaetschke from the 1920s). Thus, a rectangular swastika was made in engraved form on the entrance of a residential tower in the settlement Khimoy, Chechnya.
Europe
Iron Age attestations of swastikas can be associated with Indo-European cultures such as the Illyrians, Indo-Iranians, Celts, Greeks, Germanic peoples and Slavs. A great concentration of some of the oldest swastika patterns has been found in the Sintashta culture's "Country of Towns", ancient Indo-European settlements in southern Russia.
Swastika shapes have been found on numerous artefacts from Iron Age Europe.
The swastika shape (also called a fylfot) appears on various Germanic Migration Period and Viking Age artifacts, such as the 3rd-century Værløse Fibula from Zealand, Denmark, the Gothic spearhead from Brest-Litovsk, today in Belarus, the 9th-century Snoldelev Stone from Ramsø, Denmark, and numerous Migration Period bracteates drawn left-facing or right-facing.
The pagan Anglo-Saxon ship burial at Sutton Hoo, England, contained numerous items bearing swastikas, now housed in the collection of the Cambridge Museum of Archaeology and Anthropology. A swastika is clearly marked on a hilt and sword belt found at Bifrons in Kent, in a grave of about the 6th century.
Hilda Ellis Davidson theorised that the swastika symbol was associated with Thor, possibly representing his Mjolnir, symbolic of thunder, and possibly being connected to the Bronze Age sun cross. Davidson cites "many examples" of swastika symbols from Anglo-Saxon graves of the pagan period, with particular prominence on cremation urns from the cemeteries of East Anglia. Some of the swastikas on the items, on display at the Cambridge Museum of Archaeology and Anthropology, are depicted with such care and art that, according to Davidson, the symbol must have possessed special significance as a funerary symbol. The runic inscription on the 8th-century Sæbø sword has been taken as evidence of the swastika as a symbol of Thor in Norse paganism.
The bronze frontispiece of a ritual pre-Christian () shield found in the River Thames near Battersea Bridge (hence "Battersea Shield") is embossed with 27 swastikas in bronze and red enamel. An Ogham stone found in Aglish, County Kerry, Ireland (CIIC 141) was modified into an early Christian gravestone, and was decorated with a cross pattée and two swastikas. The Book of Kells () contains swastika-shaped ornamentation. At the Northern edge of Ilkley Moor in West Yorkshire, there is a swastika-shaped pattern engraved in a stone known as the Swastika Stone. A number of swastikas have been found embossed in Galician metal pieces and carved in stones, mostly from the Castro culture period, although there also are contemporary examples (imitating old patterns for decorative purposes).
The ancient Baltic thunder cross symbol (pērkona krusts (cross of Perkons); also fire cross, ugunskrusts) is a swastika symbol used to decorate objects, traditional clothing and in archaeological excavations.
According to painter Stanisław Jakubowski, the "little sun" (Polish: słoneczko) is an Early Slavic pagan symbol of the Sun; he claimed it was engraved on wooden monuments built near the final resting places of fallen Slavs to represent eternal life. The symbol was first seen in his collection of Early Slavic symbols and architectural features, which he named Prasłowiańskie motywy architektoniczne (Polish: Early Slavic Architectural Motifs). His work was published in 1923.
The Boreyko coat of arms with a red swastika was used by several noble families in the Polish–Lithuanian Commonwealth.
According to Boris Kuftin, the Russians often used swastikas as a decorative element and as the basis of the ornament on traditional weaving products. Many can be seen on a women's folk costume from the Meshchera Lowlands.
According to some authors, Russian names popularly associated with the swastika include veterok ("breeze"), ognevtsi ("little flames"), "geese", "hares" (a towel with a swastika was called a towel with "hares"), or "little horses". The similar word "koleso" ("wheel") was used for rosette-shaped amulets, such as a hexafoil ("thunder wheel") in folklore, particularly in the Russian North.
An object very much like a hammer or a double axe is depicted among the magical symbols on the drums of Sami noaidi, used in their religious ceremonies before Christianity was established. The name of the Sami thunder god was Horagalles, thought to derive from "Old Man Thor" (Þórr karl). Sometimes on the drums, a male figure with a hammer-like object in either hand is shown, and sometimes it is more like a cross with crooked ends, or a swastika.
Southern and eastern Asia
The icon has been of spiritual significance to Indian religions such as Hinduism, Buddhism and Jainism. The swastika is a sacred symbol in the Bön religion, native to Tibet.
Hinduism
The swastika is an important Hindu symbol. The swastika symbol is commonly used before entrances or on doorways of homes or temples, to mark the starting page of financial statements, and mandalas constructed for rituals such as weddings or welcoming a newborn.
The swastika has a particular association with Diwali, being drawn in rangoli (coloured sand) or formed with deepak lights on the floor outside Hindu houses and on wall hangings and other decorations.
In the diverse traditions within Hinduism, both the clockwise and counterclockwise swastika are found, with different meanings. The clockwise or right hand icon is called swastika, while the counterclockwise or left hand icon is called sauwastika or sauvastika. The clockwise swastika is a solar symbol (Surya), suggesting the motion of the Sun in India (the northern hemisphere), where it appears to enter from the east, then ascend to the south at midday, exiting to the west. The counterclockwise sauwastika is less used; it connotes the night, and in tantric traditions it is an icon for the goddess Kali, the terrifying form of Devi Durga. The symbol also represents activity, karma, motion, wheel, and in some contexts the lotus. According to Norman McClelland its symbolism for motion and the Sun may be from shared prehistoric cultural roots.
Buddhism
In Buddhism, the swastika is considered to symbolise the auspicious footprints of the Buddha. The left-facing sauwastika is often imprinted on the chest, feet or palms of Buddha images. It is an aniconic symbol for the Buddha in many parts of Asia and homologous with the dharma wheel. The shape symbolises eternal cycling, a theme found in the samsara doctrine of Buddhism.
The swastika symbol is common in esoteric tantric traditions of Buddhism, along with Hinduism, where it is found with chakra theories and other meditative aids. The clockwise symbol is more common, and contrasts with the counterclockwise version common in the Tibetan Bon tradition and locally called yungdrung.
In East Asia, the swastika is prevalent in Buddhist monasteries and communities. It is commonly found in Buddhist temples, religious artifacts, texts related to Buddhism and schools founded by Buddhist religious groups. It also appears as a design or motif (singularly or woven into a pattern) on textiles, architecture and various decorative objects as a symbol of luck and good fortune. The icon is also found as a sacred symbol in the Bon tradition, but in the left-facing orientation.
Jainism
In Jainism, it is a symbol of the seventh tīrthaṅkara, Suparśvanātha. In the Śvētāmbara tradition, it is also one of the aṣṭamaṅgala or eight auspicious symbols. All Jain temples and holy books must contain the swastika and ceremonies typically begin and end with creating a swastika mark several times with rice around the altar. Jains use rice to make a swastika in front of statues and then put an offering on it, usually a ripe or dried fruit, a sweet ( ), or a coin or currency note. The four arms of the swastika symbolise the four places where a soul could be reborn in samsara, the cycle of birth and death: svarga "heaven", naraka "hell", manushya "humanity" or tiryancha "as flora or fauna", before the soul attains moksha "salvation" as a siddha, having ended the cycle of birth and death and become omniscient.
Prevalence in southern Asia
In Bhutan, India, Nepal and Sri Lanka, the swastika is common. Temples, businesses and other organisations, such as the Buddhist libraries, Ahmedabad Stock Exchange and the Nepal Chamber of Commerce, use the swastika in reliefs or logos. Swastikas are ubiquitous in Indian and Nepalese communities, located on shops, buildings, transport vehicles, and clothing. The swastika remains prominent in Hindu ceremonies such as weddings. The left facing sauwastika symbol is found in tantric rituals.
Musaeus College in Colombo, Sri Lanka, a Buddhist girls' school, has a left-facing swastika in its school logo.
In India, Swastik and Swastika, with their spelling variants, are first names for males and females respectively, for instance with Swastika Mukherjee. The Emblem of Bihar contains two swastikas.
In Bhutan, swastika motifs are found in architecture, fabrics and religious ceremonies.
Among the predominantly Hindu population of Bali, in Indonesia, swastikas are common in temples, homes and public spaces. Similarly, the swastika is a common icon associated with Buddha's footprints in Theravada Buddhist communities of Myanmar, Thailand and Cambodia.
The Tantra-based new religious movement Ananda Marga (Devanagari: आनन्द मार्ग, meaning 'Path of Bliss') uses a motif similar to the Raëlians, but in their case the apparent star of David is defined as intersecting triangles with no specific reference to Jewish culture.
Spread to eastern Asia
The swastika is an auspicious symbol in China, where it was introduced from India with Buddhism. In 693, during the Tang dynasty, it was declared "the source of all good fortune" and was so named by Wu Zetian, becoming a Chinese word. The Chinese character for () is similar to a swastika in shape and has two different variations:《卐》and《卍》. As the Chinese character ( or ) is a homonym of the Chinese word for "ten thousand" () and "infinity", the character is itself a symbol of immortality and infinity. It was also a representation of longevity.
The Chinese character could be used as a stand-alone《》or《》, or in pairs《 》, in Chinese visual arts, decorative arts, and clothing due to its auspicious connotation.
Adding the character ( or ) to other auspicious Chinese symbols or patterns can multiply that wish by 10,000 times. It can be combined with other Chinese characters, such as the Chinese character 《壽》for longevity where it is sometimes even integrated into the Chinese character to augment the meaning of longevity.
The paired swastika symbols 卐 and 卍 have been included, at least since the Liao dynasty (907–1125 CE), in the Chinese writing system as variant characters for 《萬》 or 《万》 (wàn in Mandarin, man (만) in Korean, man in Cantonese and Japanese, vạn in Vietnamese), meaning "myriad".
The character can also be stylised in the form of Chinese auspicious clouds (xiangyun).
When the Chinese writing system was introduced to Japan in the 8th century, the swastika was adopted into the Japanese language and culture. It is commonly referred to as the manji (lit. "10,000-character"). Since the Middle Ages, it has been used as a mon by various Japanese families, such as the Tsugaru clan, the Hachisuka clan, and around 60 clans belonging to the Tokugawa clan.
The city of Hirosaki in Aomori Prefecture designates this symbol as its official flag, which stemmed from its use in the emblem of the Tsugaru clan, the lords of Hirosaki Domain during the Edo period.
In Japan, the swastika is also used as a map symbol; it is designated by the Survey Act and related governmental rules to denote a Buddhist temple. Japan has considered changing this because of occasional controversy and misunderstanding among foreigners. The symbol is sometimes censored in international versions of Japanese works, such as anime. Censorship of the symbol in Japan and in Japanese media abroad has itself been the subject of occasional controversy related to freedom of speech, with critics arguing that it respects neither history nor freedom of speech.
In Chinese and Japanese art, swastikas are often found as part of a repeating pattern. One common pattern, called sayagata in Japanese, comprises left- and right-facing swastikas joined by lines. As the negative space between the lines has a distinctive shape, the sayagata pattern is sometimes called the key fret motif in English.
Many Chinese religions make use of swastika symbols, including Guiyidao and Shanrendao. The Red Swastika Society, formed in China in 1922 as the philanthropic branch of Guiyidao, became the largest supplier of emergency relief in China during World War II, in the same manner as the Red Cross in the rest of the world. The Red Swastika Society abandoned mainland China in 1954, settling first in Hong Kong then in Taiwan. They continue to use the red swastika as their symbol.
The Falun Gong qigong movement, founded in China in the early 1990s, uses a symbol that features a large swastika surrounded by four smaller (and rounded) ones, interspersed with yin-and-yang symbols.
Classical Europe
Ancient Greek architectural, clothing and coin designs are replete with single or interlinking swastika motifs, and gold plate fibulae from the 8th century BCE are decorated with engraved swastikas. Related symbols in classical Western architecture include the cross, the three-legged triskele or triskelion, and the rounded lauburu. In these contexts the swastika is also known by a number of other names, especially gammadion or tetragammadion, from its being seen as made up of four Greek gamma (Γ) letters.
In Greco-Roman art and architecture, and in Romanesque and Gothic art in the West, isolated swastikas are relatively rare, and the swastika is more commonly found as a repeated element in a border or tessellation. Swastikas often represented perpetual motion, reflecting the design of a rotating windmill or watermill. A meander of connected swastikas makes up the large band that surrounds the Augustan Ara Pacis.
A design of interlocking swastikas is one of several tessellations on the floor of the cathedral of Amiens, France. A border of linked swastikas was a common Roman architectural motif, and can be seen in more recent buildings as a neoclassical element. A swastika border is one form of meander, and the individual swastikas in such a border are sometimes called Greek keys. There have also been swastikas found on the floors of Pompeii.
Swastikas were widespread among the Illyrians, symbolising the Sun and the fire. The Sun cult was the main Illyrian cult; a swastika in clockwise motion is interpreted in particular as a representation of the movement of the Sun.
The swastika has been preserved by the Albanians since Illyrian times as a pagan symbol commonly found in a variety of contexts of Albanian folk art, including traditional tattooing, grave art, jewellery, clothes, and house carvings. The swastika ( or , "hooked cross") and other crosses in Albanian tradition represent the Sun (Dielli) and the fire (zjarri, evidently called with the theonym Enji). In Albanian paganism fire is regarded as the offspring of the Sun and fire calendar rituals are practiced in order to give strength to the Sun and to ward off evil.
Medieval and early modern Europe
Middle Ages
In Christianity, the swastika is used as a hooked version of the Christian Cross, the symbol of Christ's victory over death. Some Christian churches built in the Romanesque and Gothic eras are decorated with swastikas, carrying over earlier Roman designs. Swastikas are prominently displayed in a mosaic in the St. Sophia church of Kyiv, Ukraine dating from the 12th century. They also appear as a repeating ornamental motif on a tomb in the Basilica of St. Ambrose in Milan.
A ceiling painted in 1910 in the church of St Laurent in Grenoble has many swastikas. It can be visited today because the church became the archaeological museum of the city. A proposed direct link between it and a swastika floor mosaic in the Cathedral of Our Lady of Amiens, which was built on top of a pagan site at Amiens, France in the 13th century, is considered unlikely. The stole worn by a priest in the 1445 painting of the Seven Sacraments by Rogier van der Weyden presents the swastika form simply as one way of depicting the cross.
Swastikas also appear in art and architecture during the Renaissance and Baroque era. The fresco The School of Athens shows an ornament made out of swastikas, and the symbol can also be found on the facade of the Santa Maria della Salute, a Roman Catholic church and minor basilica located at Punta della Dogana in the Dorsoduro sestiere of the city of Venice.
In the Polish First Republic, swastika symbols were also popular with the nobility. Several noble houses, e.g. Boreyko, Borzym, and Radziechowski from Ruthenia, had swastikas in their coats of arms. The family reached its greatness in the 14th and 15th centuries, and its crest can be seen in many heraldry books produced at that time.
Such heraldic swastikas, for example on the Boreyko coat of arms, were used by noblemen in Poland and Ukraine. In the 19th century a swastika was one of the Russian Empire's symbols and was used on coinage as a backdrop to the Russian eagle.
Rediscovery by Heinrich Schliemann
At Troy near the Dardanelles, Heinrich Schliemann's 1871–1875 archaeological excavations discovered objects decorated with swastikas. Hearing of this, the director of the French School at Athens, Émile-Louis Burnouf, wrote to Schliemann in 1872, stating "the Swastika should be regarded as a sign of the Aryan race". Burnouf told Schliemann that "It should also be noted that the Jews have completely rejected it". Accordingly, Schliemann believed the Trojans to have been Aryans: "The primitive Trojans, therefore, belonged to the Aryan race, which is further sufficiently proved by the symbols on the round terra-cottas". Schliemann accepted Burnouf's interpretation.
Schliemann believed that use of swastikas spread widely across Eurasia.
Schliemann established a link between the swastika and Germany. He connected objects he excavated at Troy to objects bearing swastikas found in Germany near Königswalde on the Oder.
Sarah Boxer, in an article in 2000 in The New York Times, described this as a "fateful link". According to Steven Heller, "Schliemann presumed that the swastika was a religious symbol of his German ancestors which linked ancient Teutons, Homeric Greeks and Vedic India". According to Bernard Mees, "Of all of the pre-runic symbols, the swastika has always been the most popular among scholars" and "The origin of swastika studies must be traced to the excitement generated by the archaeological finds of Heinrich Schliemann at Troy".
After his excavations at Troy, Schliemann began digging at Mycenae. According to Cathy Gere, "Having burdened the swastika symbol with such cultural, religious and racial significance in Troy and Its Remains, it was incumbent on Schliemann to find the symbol repeated at Mycenae, but its occurrence turned out to be disappointingly infrequent". Gere writes that "He did his best with what he had".
Gere points out that although Schliemann wrote that the motif "may often be seen", his 1878 book Mycenæ did not have illustrations of any examples. Schliemann described "a small and thick terra-cotta disk" on which "are engraved a number of 卍's, the sign which occurs so frequently in the ruins of Troy", but as Gere notes, he did not publish an illustration.
Among the gold grave goods of Grave Circles A and B was a repoussé roundel in grave III of Grave Circle A, whose ornamentation Schliemann thought was "derived" from the swastika.
According to Gere, this motif is "completely dissimilar" to the swastika, and that Schliemann was "straining desperately after the same connection". Nevertheless, the Mycenaean Greeks and the Trojan people both came to be identified as representatives of the Aryan race: "Despite the difficulties with linking the symbolism of Troy and Mycenae, the common Aryan roots of the two peoples became something of a truism".
The house Schliemann had had built in Panepistimiou Street in Athens by 1880, Iliou Melathron, is decorated with swastika symbols and motifs in numerous places, including the ironwork railing and gates, the window bars, the ceiling fresco of the entrance hall, and the entire floor of one room.
Following Schliemann, academic studies on the swastika were published by , Michał Żmigrodzki, Eugène Goblet d'Alviella, Thomas Wilson, Oscar Montelius and Joseph Déchelette.
German occultism and pan-German nationalism
On 24 June 1875, Guido von List commemorated the 1500th anniversary of the German victory over the Roman Empire at the Battle of Carnuntum by burying a swastika-shaped arrangement of eight wine bottles beneath the Heidentor ("pagan gate") in the ruins of Carnuntum. In 1891, List began to claim that heraldry's division of the field was derived from the shapes of runes. He claimed that the medieval German Vehmic courts were a survival of the pre-Christian Armanist priest-kings, and that the cryptic letters "SSGG" inscribed on vehmic knives represented a double sig rune followed by two swastikas.
In 1897, Max Ferdinand Sebaldt von Werth published two works which, according to Nicholas Goodrick-Clarke in The Occult Roots of Nazism, "described the sexual-religion of the Aryans, a sacred practice of eugenics designed to maintain the purity of the race". Both works were "illustrated with the magical curved-armed swastika". Influenced by Sebaldt, List published an article claiming that the swastika was a sacred symbol of the Aryans representing the "fire-whisk" with which the creator deity had begun the world. In September 1903, List published an article in a Viennese occult journal discussing the creation of the universe, the "old-Aryan sexual religion", reincarnation, karma, "Wotanism", and "Armanism" from his theosophical viewpoint, illustrated with triskelions and various swastikas. According to Goodrick-Clarke, "This article marked the first stage in List's articulation of a Germanic occult religion, the principal concern of which was racial purity".
Between 1905 and 1907, List published articles arguing that the swastika, the triskelion, and the sun-wheel were all "Armanist" occult symbols (Armanen runes) concealed in German heraldry, and in 1908 he argued that the swastika, or the Armanen rune "Gibor", was represented in blazons including different heraldic crosses and kinked versions of the ordinaries pale, bend, and fess. List further argued that the swastika, triskelion, and other Armanen runes had been concealed in 15th-century rose windows and in the curvilinear tracery of late Gothic architecture.
List's 1908 book had chapter headings decorated with triskelions, swastikas, and other symbols. The work laid out his belief in an ancient priesthood of Wotanist initiates and identified the "Ario-Germans" as a "race" identical with Helena Blavatsky's theosophical fifth "root race". A work of 1910 discussed Yuga cycles and the Kali Yuga. Another work of the same year connected Blavatsky's Hindu-inspired cosmic cycles (kalpas) with the realms of Muspelheim, Asgard, Vanaheimr, and Midgard, each with a corresponding symbol. Blavatsky's first Astral and second Hyperborean races List connected with the descendants of Ymir and Orgelmir; her third Lemurian race was his race of Thrudgelmir, her fourth Atlantean race his descendants of Bergelmir, and her fifth root race List identified as the "Ario-Germans". According to Goodrick-Clarke, List again argued that the clockwise swastika was a holy symbol of the "Ario-Germans".
List's 1914 work adopted the geological ideas of the theosophist William Scott-Elliot and claimed that fragments of Atlantis remained part of Europe, pointing to rocking stones in Lower Austria and European megaliths as evidence. From Jörg Lanz von Liebenfels, List took on occult ideas about the Aryan homeland Arktogäa (a lost polar continent), the struggle between the Ario-German master races and the non-Aryan slave races, and the Knights Templar. List believed that the Templars had been adepts of "Armanism" during the Christian ascendancy of the Middle Ages, and that they had been suppressed for worshipping the Maltese cross, which List believed to be derived from superimposed clockwise and anti-clockwise swastikas and which he identified with Baphomet. Members of the inner circle of the Guido von List Society, the Hoher Armanen-Orden (HAO), expressed their membership of the occult priesthood with swastikas. Heinrich Winter, Friedrich Oskar Wannieck, and Georg Hauerstein senior's first wife all had their graves decorated with swastikas.

Lanz, a former Cistercian, established the Order of the New Templars (Ordo Novi Templi, ONT) in imitation of the Knights Templar, whose monastic rule had been written by the Cistercian Bernard of Clairvaux, and whom Lanz believed had aimed to establish "a Greater Germanic order-state, which would encompass the entire Mediterranean area and extend its sphere of influence deep into the Middle East"; their eventual suppression had, in his view, been a triumph of racial inferiority over the "Ario-Christian" eugenics practised by the Templars. As the headquarters of his revived Templar Order and as a museum of Aryan anthropology, Lanz bought Burg Werfenstein on the Danube, where on Christmas Day 1907 he hoisted his heraldic banner (gules, an eagle's wing argent) and the flag of the ONT: a swastika gules surrounded by four fleurs-de-lis azure on a field or.
Post-Schliemann popularity
The swastika (gammadion, fylfot) became a popular symbol in the Western world in the early 20th century and was often used for ornamentation.
The Benedictine choir school at Lambach Abbey, Upper Austria, which Hitler attended for several months as a boy, had by 1868 a swastika chiseled into the monastery portal and into the wall above the spring grotto in the courtyard. Its origin was the personal coat of arms of Theoderich Hagn, abbot of the monastery at Lambach, which bore a golden swastika with slanted points on a blue field.
The British author and poet Rudyard Kipling used the symbol on the cover art of a number of his works, including The Five Nations, 1903, which has it twinned with an elephant. Once Adolf Hitler and the Nazis came to power, Kipling ordered that swastikas should no longer adorn his books. In 1927, a red swastika defaced by a Union Jack was proposed as a flag for the Union of South Africa.
The logo of H/f. Eimskipafjelag Íslands was a swastika, called "Thor's hammer", from the company's founding in 1914 until the Second World War, when it was discontinued and replaced by a logo reading only the letters Eimskip.
The swastika was also used by the women's paramilitary organisation Lotta Svärd, which was banned in 1944 in accordance with the Moscow Armistice between Finland and the allied Soviet Union and Britain.
The insignias of the Cross of Liberty, designed by Gallen-Kallela in 1918, also feature swastikas. The 3rd class Cross of Liberty is depicted in the upper left corner of the standard of the President of Finland, who is also the grand master of the order.
Latvia adopted the swastika for its Air Force in 1918/1919 and continued its use until the Soviet occupation in 1940. The cross itself was maroon on a white background, mirroring the colours of the Latvian flag. Earlier versions pointed counter-clockwise, while later versions pointed clockwise and eliminated the white background. Various other Latvian Army units and the Latvian War College (the predecessor of the National Defence Academy) also adopted the symbol in their battle flags and insignia during the Latvian War of Independence. A stylised fire cross forms the base of the Order of Lāčplēsis, the highest Latvian military decoration for participants of the War of Independence. The Pērkonkrusts, an ultra-nationalist political organisation active in the 1930s, also used the fire cross as one of its symbols.
The swastika symbol (Lithuanian: sūkurėlis) is a traditional Baltic ornament, found on relics dating from at least the 13th century. The sūkurėlis for Lithuanians represents the history and memory of their Lithuanian ancestors as well as the Baltic people at large. There are monuments in Lithuania such as the Freedom Monument in Rokiškis where swastikas can be found.
Starting in 1917, Mikal Sylten's staunchly antisemitic periodical, Nationalt Tidsskrift, took up the swastika as a symbol, three years before Adolf Hitler chose to do so.
The left-handed swastika was a favourite sign of the last Russian Empress, Alexandra Feodorovna. She wore a talisman in the form of a swastika and placed it everywhere for luck, including in her letters from Tobolsk; she later drew it in pencil on the wall and in the window opening of her room in the Ipatiev House, the place of the royal family's final imprisonment, and on the wallpaper above the bed.
The Russian Provisional Government of 1917 printed a number of new banknotes with right-facing, diagonally rotated swastikas in their centres. The design was initially intended for the Mongolian national bank but was re-purposed for Russian rubles after the February Revolution. Swastikas were also depicted on some early Soviet banknotes (sovznaks), printed from the same plates (clichés), which circulated in 1918–1922.
During the Russian Civil War, swastikas were present in the symbolism of the uniform of some units of the White Army Asiatic Cavalry Division of Baron Ungern in Siberia and Bogd Khanate of Mongolia, which is explained by the significant number of Buddhists within it. The Red Army's ethnic Kalmyk units wore distinct armbands featuring a swastika with "РСФСР" (Roman: "RSFSR") inscriptions on them.
New religious movements
Besides its use as a religious symbol in Hinduism, Buddhism and Jainism, which can be traced back to pre-modern traditions, the swastika was also incorporated into a large number of new religious movements which were established in the West in the modern period.
In the 1880s, the Theosophical Society, founded in the United States, adopted a swastika as part of its seal, along with an Om, a hexagram or Star of David, an ankh, and an ouroboros. Unlike the much more recent Raëlian movement, the Theosophical Society's symbol has been free from controversy, and the seal is still used. The current seal also includes the text "There is no religion higher than truth."
The Raëlian Movement, whose adherents believe extraterrestrials created all life on Earth, uses a symbol that is often the source of considerable controversy: an interlaced Star of David and swastika. The Raëlians say the Star of David represents infinity in space, whereas the swastika represents infinity in time: no beginning and no end, with everything being cyclic. In 1991, the symbol was changed to remove the swastika, out of respect for the victims of the Holocaust, but as of 2007 it has been restored to its original form.
The swastika is a holy symbol in neopagan Germanic Heathenry, along with the hammer of Thor and runes. This tradition, which is found in Scandinavia, Germany, and elsewhere, considers the swastika to be derived from a Norse symbol for the sun. Their use of the symbol has led people to accuse them of being neo-Nazi groups.
World War II
Use in Nazism
The swastika was widely used in Europe at the start of the 20th century. It symbolised many things to Europeans, the most common symbolism being good luck and auspiciousness. Before the Nazis, the swastika was already in use as a symbol of völkisch German nationalist movements.
In the wake of widespread popular usage, in post-World War I Germany, the newly established Nazi Party formally adopted the swastika in 1920. The Nazi Party emblem was a black swastika rotated 45 degrees on a white circle on a red background. This insignia was used on the party's flag, badge, and armband. Hitler also designed his personal standard using a black swastika sitting flat on one arm, not rotated.
In his 1925 work Mein Kampf, Adolf Hitler writes: "I myself, meanwhile, after innumerable attempts, had laid down a final form; a flag with a red background, a white disk, and a black hooked cross in the middle. After long trials I also found a definite proportion between the size of the flag and the size of the white disk, as well as the shape and thickness of the hooked cross."
When Hitler created a flag for the Nazi Party, he sought to incorporate both the swastika and "those revered colours expressive of our homage to the glorious past and which once brought so much honour to the German nation". (Red, white, and black were the colours of the flag of the old German Empire.) He also stated: "As National Socialists, we see our program in our flag. In red, we see the social idea of the movement; in white, the nationalistic idea; in the hooked cross, the mission of the struggle for the victory of the Aryan man, and, by the same token, the victory of the idea of creative work."
The swastika was also understood as "the symbol of the creating, effecting life" and as the "race emblem of Germanism".
The concepts of racial hygiene and scientific racism were central to Nazism. High-ranking Nazi theorist Alfred Rosenberg noted that the Indo-Aryan peoples were both a model to be imitated and a warning of the dangers of the spiritual and racial "confusion" that, he believed, arose from the proximity of races. The Nazis co-opted the swastika as a symbol of the Aryan master race.
On 14 March 1933, shortly after Hitler's appointment as Chancellor of Germany, the NSDAP flag was hoisted alongside Germany's national colours. As part of the Nuremberg Laws, the NSDAP flag, with the swastika slightly offset from centre, was adopted as the sole national flag of Germany on 15 September 1935.
Use by the Allies
During World War II it was common to use small swastikas to mark air-to-air victories on the sides of Allied aircraft, and at least one British fighter pilot inscribed a swastika in his logbook for each German plane he shot down.
Americas
The swastika has been used in the art and iconography of multiple indigenous peoples of North America, including the Hopi, Navajo, and Tlingit. Swastikas were found on pottery from the Mississippi valley, on copper objects in the Hopewell Mounds in Ross County, Ohio, and on objects associated with the Southeastern Ceremonial Complex (S.E.C.C.). To the Hopi it represents the wandering Hopi clan. The Navajo symbol, called tsin náálwołí ("whirling log"), represents humanity and life, and is used in healing rituals.
A brightly coloured First Nations saddle featuring swastika designs is on display at the Royal Saskatchewan Museum in Canada.
Before the 1930s, the symbol for the 45th Infantry Division of the United States Army was a red diamond with a yellow swastika, a tribute to the large Native American population in the southwestern United States. It was later replaced with a thunderbird symbol.
In the early 20th century, traders encouraged Native American artists to use the symbol in their crafts. The symbol lost popularity in the 1930s because of its association with Nazi Germany. In 1940, partly at the government's encouragement, community leaders from several Native American tribes issued a statement promising to no longer use the symbol. Nevertheless, Native American groups have continued to use it, both in reference to the original symbol and as a memorial to the 45th Division, despite external objections. The symbol was also used on state road signs in Arizona from the 1920s until the 1940s.
The town of Swastika, Ontario, Canada, and the hamlet of Swastika, New York were named after the symbol.
From 1909 to 1916, the K-R-I-T automobile, manufactured in Detroit, Michigan, used a right-facing swastika as its trademark.
The flag of the Guna people (also "Kuna" or "Guna Yala") of Panama, adopted in 1925, bears a swastika symbol that they call Naa Ukuryaa. According to one explanation, this ancestral symbol represents the octopus that created the world, its tentacles pointing to the four cardinal points.
In 1942, a ring was added to the centre of the flag to differentiate it from the symbol of the Nazi Party (this version subsequently fell into disuse).
Africa
Swastikas can be seen in various African cultures. In Ethiopia a swastika is carved in the window of the famous 12th-century Biete Maryam, one of the Rock-Hewn Churches, Lalibela. In Ghana, the adinkra symbol nkontim, used by the Akan people to represent loyalty, takes the form of a swastika. Nkontim symbols could be found on Ashanti gold weights and clothing.
Modern adoptions
A ugunskrusts ('fire cross') is used by the Baltic neopaganism movements Dievturība in Latvia and Romuva in Lithuania.
In the early 1990s, the former dissident and co-founder of Russian neo-paganism Alexey Dobrovolsky first gave the name "kolovrat" (коловрат, literally "spinning wheel") to a four-beam swastika identical to the Nazi symbol, and later transferred this name to an eight-beam rectangular swastika. According to the historian and religious scholar Roman Shizhensky, Dobrovolsky took the idea of the swastika from "The Chronicle of Oera Linda" by the Nazi ideologist Herman Wirth, the first head of the Ahnenerbe.
Dobrovolsky introduced the eight-beam "kolovrat" as a symbol of "resurgent paganism." He considered this version of the Kolovrat a pagan sign of the sun and, in 1996, declared it a symbol of the uncompromising "national liberation struggle" against the "Zhyd yoke". According to Dobrovolsky, the meaning of the "kolovrat" completely coincides with the meaning of the Nazi swastika.
The kolovrat is the most commonly used religious symbol within neopagan Slavic Native Faith (a.k.a. Rodnovery).
In 2005, authorities in Tajikistan called for the widespread adoption of the swastika as a national symbol. President Emomali Rahmonov declared the swastika an Aryan symbol, and 2006 "the year of Aryan culture", which would be a time to "study and popularise Aryan contributions to the history of the world civilisation, raise a new generation (of Tajiks) with the spirit of national self-determination, and develop deeper ties with other ethnicities and cultures".
Modern controversy
Post-World War II stigmatisation
Because of its use by Nazi Germany, the swastika since the 1930s has been largely associated with Nazism. In the aftermath of World War II, it has been considered a symbol of hate in the West, and of white supremacy in many Western countries.
As a result, all use of it, or its use as a Nazi or hate symbol, is prohibited in some countries, including Germany. In other countries, such as the United States (in the 2003 case Virginia v. Black), the highest courts have ruled that local governments can prohibit the use of the swastika, along with other symbols such as cross burning, if the intent of the use is to intimidate others.
Germany
The German and Austrian postwar criminal codes make the public display of the swastika, the sig rune, the Celtic cross (specifically the variations used by white power activists), the wolfsangel, the odal rune and the Totenkopf skull illegal, except for certain enumerated exemptions. The swastika is also censored from reprints of 1930s railway timetables published by the Reichsbahn. The swastikas on Hindu, Buddhist, and Jain temples are exempt, as religious symbols cannot be banned in Germany.
A controversy was stirred by the decision of several police departments to open inquiries against anti-fascists. In late 2005, police raided the offices of the punk rock label and mail-order store "Nix Gut Records" and confiscated merchandise depicting crossed-out swastikas and fists smashing swastikas. In 2006, a police department opened an inquiry against anti-fascist youths who had used a placard depicting a person dumping a swastika into a rubbish bin. The placard was displayed in opposition to the campaign of right-wing nationalist parties in local elections.
On Friday, 17 March 2006, Claudia Roth, a member of the German Bundestag, reported herself to the German police for displaying a crossed-out swastika in multiple demonstrations against neo-Nazis, and subsequently got the Bundestag to suspend her immunity from prosecution. She intended to show the absurdity of charging anti-fascists with using fascist symbols: "We don't need prosecution of non-violent young people engaging against right-wing extremism." On 15 March 2007, the Federal Court of Justice of Germany (Bundesgerichtshof) held that the crossed-out symbols were "clearly directed against a revival of national-socialist endeavours", thereby settling the dispute for the future.
On 9 August 2018, Germany lifted the ban on the usage of swastikas and other Nazi symbols in video games. "Through the change in the interpretation of the law, games that critically look at current affairs can for the first time be given a USK age rating," USK managing director Elisabeth Secker told CTV. "This has long been the case for films and with regards to the freedom of the arts, this is now rightly also the case with computer and videogames."
Legislation in other European countries
Until 2013 in Hungary, it was a criminal misdemeanour to publicly display "totalitarian symbols", including the swastika, the SS insignia, and the Arrow Cross, punishable by custodial arrest. Display for academic, educational, artistic or journalistic reasons was allowed at the time. The communist symbols of hammer and sickle and the red star were also regarded as totalitarian symbols and had the same restriction by Hungarian criminal law until 2013.
In Latvia, the public display of Nazi and Soviet symbols, including the Nazi swastika, has been prohibited at public events since 2013. However, in a 2007 court case a regional court in Riga held that the swastika can be used as an ethnographic symbol, in which case the ban does not apply.
In Lithuania, public display of Nazi and Soviet symbols, including the Nazi swastika, is an administrative offence, punishable by a fine from 150 to 300 euros. According to judicial practice, display of a non-Nazi swastika is legal.
In Poland, public display of Nazi symbols, including the Nazi swastika, is a criminal offence punishable by up to eight years of imprisonment. The use of the swastika as a religious symbol is legal.
In Geneva, Switzerland, a new constitution article banning the use of hate symbols, emblems, and other hateful images was passed in June 2024, which included banning the use of the swastika.
The European Union's Executive Commission proposed a European Union-wide anti-racism law in 2001, but European Union states failed to agree on the balance between prohibiting racism and freedom of expression. An attempt to ban the swastika across the EU in early 2005 failed after objections from the British Government and others. In early 2007, while Germany held the European Union presidency, Berlin proposed that the European Union should follow German criminal law and criminalise the denial of the Holocaust and the display of Nazi symbols including the swastika, based on Germany's Ban on the Symbols of Unconstitutional Organisations Act. This led to an opposition campaign by Hindu groups across Europe against a ban on the swastika. They pointed out that the swastika has been around for 5,000 years as a symbol of peace. The proposal to ban the swastika was dropped by Berlin from the proposed European Union-wide anti-racism laws on 29 January 2007.
Outside Europe
The manufacture, distribution or broadcasting of a swastika, with the intent to propagate Nazism, is a crime in Brazil as dictated by article 20, paragraph 1, of federal statute 7.716, passed in 1989. The penalty is a two to five years prison term and a fine.
The public display of Nazi-era German flags (or any other flags) is protected by the First Amendment to the United States Constitution, which guarantees the right to freedom of speech. The Nazi Reichskriegsflagge has also been seen on display at white supremacist events within United States borders, side by side with the Confederate battle flag.
In 2010, the Anti-Defamation League (ADL) downgraded the swastika from its status as a Jewish hate symbol, saying "We know that the swastika has, for some, lost its meaning as the primary symbol of Nazism and instead become a more generalised symbol of hate." The ADL notes on their website that the symbol is often used as "shock graffiti" by juveniles, rather than by individuals who hold white supremacist beliefs, but it is still a predominant symbol among American white supremacists (particularly as a tattoo design) and used with antisemitic intention.
In 2022, Victoria became the first Australian state to ban the display of the Nazi swastika. People who intentionally break this law face a one-year jail sentence, a fine of 120 penalty units ($23,077.20 AUD as of 2023, equivalent to £12,076.66 or US$15,385.57), or both.
Media
In 2010, Microsoft officially spoke out against use of the swastika by players of the first-person shooter Call of Duty: Black Ops. In Black Ops, players are allowed to customise their name tags to represent whatever they want. The swastika can be created and used, but Stephen Toulouse, director of Xbox Live policy and enforcement, said players with the symbol on their name tag will be banned (if someone reports it as inappropriate) from Xbox Live.
In the Indiana Jones Stunt Spectacular at Disney's Hollywood Studios in Orlando, Florida, the swastikas on German trucks, aircraft and actor uniforms in the reenactment of a scene from Raiders of the Lost Ark were removed in 2004. The swastikas have been replaced by a stylised Greek cross.
Use by neo-Nazis
As with many neo-Nazi groups across the world, the American Nazi Party used the swastika as part of its flag before its first dissolution in 1967. The symbol was chosen by the organisation's founder, George Lincoln Rockwell. It was "re-used" by successor organisations in 1983, without the publicity Rockwell's organisation enjoyed.
The swastika, in various iconographic forms, is one of the hate symbols identified in use as graffiti in US schools, and is described as such in a 1999 US Department of Education document, "Responding to Hate at School: A Guide for Teachers, Counsellors and Administrators", edited by Jim Carnes, which provides advice to educators on how to support students targeted by such hate symbols and address hate graffiti. Examples given show that it is often used alongside other white supremacist symbols, such as those of the Ku Klux Klan, and note a "three-bladed" variation used by skinheads, white supremacists, and "some South African extremist groups".
The neo-Nazi Russian National Unity group's branch in Estonia is officially registered under the name "Kolovrat" and published an extremist newspaper in 2001 under the same name. A criminal investigation found the paper included an array of racial epithets. One Narva resident was sentenced to one year in jail for distribution of Kolovrat. The Kolovrat has since been used by the Rusich Battalion, a Russian militant group known for its operation during the war in Donbas. In 2014 and 2015, members of the Ukrainian Azov Regiment were seen with swastika tattoos.
Western misinterpretation of Asian use
Since the end of the 20th century, and through the early 21st century, confusion and controversy have occurred when personal-use goods bearing traditional Jain, Buddhist, or Hindu symbols have been exported to the West, notably to North America and Europe, and have been interpreted by purchasers as bearing a Nazi symbol. This has resulted in several such products being boycotted or pulled from shelves.
When a ten-year-old boy in Lynbrook, New York, bought a set of Pokémon cards imported from Japan in 1999, two of the cards contained the left-facing Buddhist swastika. The boy's parents misinterpreted the symbol as the right-facing Nazi swastika and filed a complaint to the manufacturer. Nintendo of America announced that the cards would be discontinued, explaining that what was acceptable in one culture was not necessarily so in another; their action was welcomed by the Anti-Defamation League who recognised that there was no intention to offend, but said that international commerce meant that "Isolating [the swastika] in Asia would just create more problems."
In 2002, Christmas crackers containing plastic toy red pandas sporting swastikas were pulled from shelves after complaints from customers in Canada. The manufacturer, based in China, said the symbol was presented in a traditional sense and not as a reference to the Nazis, and apologised to the customers for the cross-cultural mix-up.
In 2020, the retailer Shein pulled a necklace featuring a left-facing swastika pendant from its website after receiving backlash on social media. The retailer apologized for the lack of sensitivity but noted that the swastika was a Buddhist symbol.
Swastika as distinct from Hakenkreuz debate
Beginning in the early 2000s, partially as a reaction to the publication of a book titled The Swastika: Symbol Beyond Redemption? by Steven Heller, there has been a movement by Hindus, Buddhists, and Native Americans to "reclaim" the swastika as a sacred symbol. These groups argue that the swastika is distinct from the Nazi symbol. However, Hitler said that the Nazi symbol was the same as the Oriental symbol. On 13 August 1920, speaking to his followers in the Hofbräuhaus am Platzl of Munich, Hitler said that the Nazi symbol was shared by various cultures around the world, and could be seen "as far as India and Japan, carved in the temple pillars."
The main barrier to the effort to "reclaim", "restore", or "reassess" the swastika comes from the decades of extremely negative association in the Western world following the Nazi Party's adoption of it in the 1920s. As well, white supremacist groups still cling to the symbol as an icon of power and identity.
Many media organizations in the West also continue to describe neo-Nazi usage of the symbol as a swastika, or sometimes with the "Nazi" adjective written as "Nazi swastika". Groups that oppose this media terminology do not wish to censor such usage, but rather to shift coverage of antisemitic and hateful events to describe the symbol in this context as a "Hakenkreuz" or "hooked cross".
See also
Z (military symbol) – sometimes called a Zwastika
Notes
References
Sources
Further reading
External links
History of the Swastika—United States Holocaust Memorial Museum
The Origins of the Swastika—BBC News
Latvian signs, swastikas, and mittens
Buddhist symbols
Cross symbols
Crosses in heraldry
Hindu symbols
Jain symbols
Magic symbols
Nazi symbolism
Religious symbols
Rotational symmetry
Symbols of Indian religions
Symbols of Nazi Germany
Talismans
Visual motifs
Mutual assured destruction (MAD) is a doctrine of military strategy and national security policy which posits that a full-scale use of nuclear weapons by an attacker on a nuclear-armed defender with second-strike capabilities would result in the complete annihilation of both the attacker and the defender. It is based on the theory of rational deterrence, which holds that the threat of using strong weapons against the enemy prevents the enemy's use of those same weapons. The strategy is a form of Nash equilibrium in which, once armed, neither side has any incentive to initiate a conflict or to disarm.
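The Nash-equilibrium framing can be made concrete with a toy two-player game. The payoff numbers below are illustrative assumptions, not figures from the source: 0 stands for the status quo and -100 for annihilation, with assured second-strike retaliation meaning that any strike ends in mutual destruction.

```python
# Toy game sketching MAD as a Nash equilibrium.
# Payoffs are hypothetical: 0 = status quo, -100 = annihilation.
# Assured retaliation means any strike ends in (-100, -100) for both.
STRATEGIES = ("refrain", "strike")
PAYOFFS = {
    ("refrain", "refrain"): (0, 0),
    ("refrain", "strike"):  (-100, -100),
    ("strike",  "refrain"): (-100, -100),
    ("strike",  "strike"):  (-100, -100),
}

def is_nash(a, b):
    """True if neither player gains by unilaterally changing strategy."""
    pa, pb = PAYOFFS[(a, b)]
    a_stays = all(PAYOFFS[(alt, b)][0] <= pa for alt in STRATEGIES)
    b_stays = all(PAYOFFS[(a, alt)][1] <= pb for alt in STRATEGIES)
    return a_stays and b_stays

print(is_nash("refrain", "refrain"))  # mutual restraint is an equilibrium
```

Under these assumed payoffs a unilateral first strike only lowers the striker's payoff from 0 to -100, so mutual restraint is stable, which is the sense in which neither armed side has an incentive to initiate a conflict.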
The result may be a nuclear peace, in which the presence of nuclear weapons decreases the risk of crisis escalation, since parties will seek to avoid situations that could lead to the use of nuclear weapons. Proponents of nuclear peace theory therefore believe that controlled nuclear proliferation may be beneficial for global stability. Critics argue that nuclear proliferation increases the chance of nuclear war through either deliberate or inadvertent use of nuclear weapons, as well as the likelihood of nuclear material falling into the hands of violent non-state actors.
The term "mutual assured destruction", commonly abbreviated "MAD", was coined by Donald Brennan, a strategist working in Herman Kahn's Hudson Institute in 1962. Brennan conceived the acronym cynically, spelling out the English word "mad" to argue that holding weapons capable of destroying society was irrational.
Theory
Under MAD, each side has enough nuclear weaponry to destroy the other side. Either side, if attacked for any reason by the other, would retaliate with equal or greater force. The expected result is an immediate, irreversible escalation of hostilities resulting in both combatants' mutual, total, and assured destruction. The doctrine requires that neither side construct shelters on a massive scale. If one side constructed such a system of shelters, it would violate the MAD doctrine and destabilize the situation, because it would have less to fear from a second strike. The same principle is invoked against missile defense.
The doctrine further assumes that neither side will dare to launch a first strike because the other side would launch on warning (also called fail-deadly) or with surviving forces (a second strike), resulting in unacceptable losses for both parties. The payoff of the MAD doctrine was and still is expected to be a tense but stable global peace. However, many have argued that mutually assured destruction is unable to deter conventional war that could later escalate. Emerging domains of cyber-espionage, proxy-state conflict, and high-speed missiles threaten to circumvent MAD as a deterrent strategy.
The primary application of this doctrine started during the Cold War (1940s to 1991), in which MAD was seen as helping to prevent any direct full-scale conflicts between the United States and the Soviet Union while they engaged in smaller proxy wars around the world. MAD was also responsible for the arms race, as both nations struggled to keep nuclear parity, or at least retain second-strike capability. Although the Cold War ended in the early 1990s, the MAD doctrine continues to be applied.
Proponents of MAD as part of the US and USSR strategic doctrine believed that nuclear war could best be prevented if neither side could expect to survive a full-scale nuclear exchange as a functioning state. Since the credibility of the threat is critical to such assurance, each side had to invest substantial capital in their nuclear arsenals even if they were not intended for use. In addition, neither side could be expected or allowed to adequately defend itself against the other's nuclear missiles. This led both to the hardening and diversification of nuclear delivery systems (such as nuclear missile silos, ballistic missile submarines, and nuclear bombers kept at fail-safe points) and to the Anti-Ballistic Missile Treaty.
This MAD scenario is often referred to as rational nuclear deterrence.
Theory of mutually assured destruction
When the possibility of nuclear warfare between the United States and the Soviet Union started to become a reality, theorists began to think that mutual assured destruction would be sufficient to deter the other side from launching a nuclear weapon. Kenneth Waltz, an American political scientist, argued that nuclear forces were indeed useful, but most useful in deterring other nuclear powers from using their own weapons, on the logic of mutual assured destruction. The view of mutual assured destruction as a reliable deterrent was carried further by the argument that nuclear weapons intended for winning a war were impractical, too dangerous, and too risky. Even with the Cold War ending in 1991, deterrence through mutual assured destruction is still said to be the safest course for avoiding nuclear warfare.
A study published in the Journal of Conflict Resolution in 2009 quantitatively evaluated the nuclear peace hypothesis and found support for the existence of the stability-instability paradox. The study determined that nuclear weapons promote strategic stability and prevent large-scale wars but simultaneously allow for more low intensity conflicts. If a nuclear monopoly exists between two states, and one state has nuclear weapons and its opponent does not, there is a greater chance of war. In contrast, if there is mutual nuclear weapon ownership with both states possessing nuclear weapons, the odds of war drop precipitously.
History
Pre-1945
The concept of MAD had been discussed in the literature for nearly a century before the invention of nuclear weapons. One of the earliest references comes from the English author Wilkie Collins, writing at the time of the Franco-Prussian War in 1870: "I begin to believe in only one civilizing influence—the discovery one of these days of a destructive agent so terrible that War shall mean annihilation and men's fears will force them to keep the peace." The concept was also described in 1863 by Jules Verne in his novel Paris in the Twentieth Century, though it was not published until 1994. The book is set in 1960 and describes "the engines of war", which have become so efficient that war is inconceivable and all countries are at a perpetual stalemate.
MAD has been invoked by more than one weapons inventor. For example, Richard Jordan Gatling patented his namesake Gatling gun in 1862 with the partial intention of illustrating the futility of war. Likewise, after his 1867 invention of dynamite, Alfred Nobel stated that "the day when two army corps can annihilate each other in one second, all civilized nations, it is to be hoped, will recoil from war and discharge their troops." In 1937, Nikola Tesla published The Art of Projecting Concentrated Non-dispersive Energy through the Natural Media, a treatise concerning charged particle beam weapons. Tesla described his device as a "superweapon that would put an end to all war."
The March 1940 Frisch–Peierls memorandum, the earliest technical exposition of a practical nuclear weapon, anticipated deterrence as the principal means of combating an enemy with nuclear weapons.
Early Cold War
In August 1945, the United States became the first nuclear power after the nuclear attacks on Hiroshima and Nagasaki. Four years later, on August 29, 1949, the Soviet Union detonated its own nuclear device. At the time, both sides lacked the means to effectively use nuclear devices against each other. However, with the development of aircraft like the American Convair B-36 and the Soviet Tupolev Tu-95, both sides were gaining a greater ability to deliver nuclear weapons into the interior of the opposing country. The official policy of the United States became one of "Instant Retaliation", as coined by Secretary of State John Foster Dulles, which called for massive atomic attack against the Soviet Union if they were to invade Europe, regardless of whether it was a conventional or a nuclear attack.
By the time of the 1962 Cuban Missile Crisis, both the United States and the Soviet Union had developed the capability of launching a nuclear-tipped missile from a submerged submarine, which completed the "third leg" of the nuclear triad weapons strategy necessary to fully implement the MAD doctrine. Having a three-branched nuclear capability eliminated the possibility that an enemy could destroy all of a nation's nuclear forces in a first-strike attack; this, in turn, ensured the credible threat of a devastating retaliatory strike against the aggressor, increasing a nation's nuclear deterrence.
Campbell Craig and Sergey Radchenko argue that Nikita Khrushchev (Soviet leader 1953 to 1964) decided that policies that facilitated nuclear war were too dangerous to the Soviet Union. His approach did not greatly change his foreign policy or military doctrine but is apparent in his determination to choose options that minimized the risk of war.
Strategic Air Command
Beginning in 1955, the United States Strategic Air Command (SAC) kept one-third of its bombers on alert, with crews ready to take off within fifteen minutes and fly to designated targets inside the Soviet Union and destroy them with nuclear bombs in the event of a Soviet first-strike attack on the United States. In 1961, President John F. Kennedy increased funding for this program and raised the commitment to 50 percent of SAC aircraft.
During periods of increased tension in the early 1960s, SAC kept part of its B-52 fleet airborne at all times, to allow an extremely fast retaliatory strike against the Soviet Union in the event of a surprise attack on the United States. This program continued until 1969. Between 1954 and 1992, bomber wings had approximately one-third to one-half of their assigned aircraft on quick reaction ground alert and were able to take off within a few minutes. SAC also maintained the National Emergency Airborne Command Post (NEACP, pronounced "kneecap"), also known as "Looking Glass", which consisted of several EC-135s, one of which was airborne at all times from 1961 through 1990. During the Cuban Missile Crisis the bombers were dispersed to several different airfields, and sixty-five B-52s were airborne at all times.
During the height of the tensions between the US and the USSR in the 1960s, two popular films were made dealing with what could go terribly wrong with the policy of keeping nuclear-bomb-carrying airplanes at the ready: Dr. Strangelove (1964) and Fail Safe (1964).
Retaliation capability (second strike)
The strategy of MAD was fully declared in the early 1960s, primarily by United States Secretary of Defense Robert McNamara. In McNamara's formulation, there was the very real danger that a nation with nuclear weapons could attempt to eliminate another nation's retaliatory forces with a surprise, devastating first strike and theoretically "win" a nuclear war relatively unharmed. The true second-strike capability could be achieved only when a nation had a guaranteed ability to fully retaliate after a first-strike attack.
The United States had achieved an early form of second-strike capability by fielding continual patrols of strategic nuclear bombers, with a large number of planes always in the air, on their way to or from fail-safe points close to the borders of the Soviet Union. This meant the United States could still retaliate, even after a devastating first-strike attack. The tactic was expensive and problematic because of the high cost of keeping enough planes in the air at all times and the possibility they would be shot down by Soviet anti-aircraft missiles before reaching their targets. In addition, as the idea of a missile gap existing between the US and the Soviet Union developed, there was increasing priority being given to ICBMs over bombers.
It was only with the advent of nuclear-powered ballistic missile submarines, starting with the George Washington class in 1959, that a genuine survivable nuclear force became possible and a retaliatory second strike capability guaranteed.
The deployment of fleets of ballistic missile submarines established a guaranteed second-strike capability because of their stealth and by the number fielded by each Cold War adversary—it was highly unlikely that all of them could be targeted and preemptively destroyed (in contrast to, for example, a missile silo with a fixed location that could be targeted during a first strike). Given their long-range, high survivability and ability to carry many medium- and long-range nuclear missiles, submarines were credible and effective means for full-scale retaliation even after a massive first strike.
This deterrence strategy and the program have continued into the 21st century, with nuclear submarines carrying Trident II ballistic missiles as one leg of the US strategic nuclear deterrent and as the sole deterrent of the United Kingdom. The other elements of the US deterrent are intercontinental ballistic missiles (ICBMs) on alert in the continental United States, and nuclear-capable bombers. Ballistic missile submarines are also operated by the navies of China, France, India, and Russia.
The US Department of Defense anticipates a continued need for a sea-based strategic nuclear force. The first of the current Ohio-class SSBNs are expected to be retired by 2029, meaning that a replacement platform must already be seaworthy by that time. A replacement may cost over $4 billion per unit, compared to the USS Ohio's $2 billion. The USN's follow-on class of SSBN is the Columbia class, which began construction in 2021 and is expected to enter service in 2031.
ABMs threaten MAD
In the 1960s both the Soviet Union (A-35 anti-ballistic missile system) and the United States (LIM-49 Nike Zeus) developed anti-ballistic missile systems. Had such systems been able to effectively defend against a retaliatory second strike, MAD would have been undermined. However, multiple scientific studies showed technological and logistical problems in these systems, including the inability to distinguish between real and decoy weapons.
MIRVs
MIRVs as counter against ABM
The multiple independently targetable re-entry vehicle (MIRV) was another weapons system designed specifically to aid the MAD nuclear deterrence doctrine. With a MIRV payload, one ICBM could hold many separate warheads. MIRVs were first created by the United States in order to counterbalance the Soviet A-35 anti-ballistic missile systems around Moscow. Since each defensive missile could be counted on to destroy only one incoming warhead, giving each offensive missile, for example, three warheads (as with early MIRV systems) meant that three times as many defensive missiles were needed for each offensive missile. This made defending against missile attacks more costly and difficult. One of the largest US MIRVed missiles, the LGM-118A Peacekeeper, could hold up to 10 warheads; all together, an explosive payload equivalent to about 230 Hiroshima-type bombs. The multiple warheads made defense untenable with the available technology, leaving the threat of retaliatory attack as the only viable defensive option. MIRVed land-based ICBMs tend to put a premium on striking first. The START II agreement was proposed to ban this type of weapon, but it never entered into force.
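The cost-exchange logic in the paragraph above is simple arithmetic. A minimal sketch, under the text's simplifying assumption that each defensive missile reliably destroys exactly one incoming warhead:

```python
def interceptors_needed(offensive_missiles, warheads_per_missile):
    """Interceptors required if each one can stop exactly one warhead."""
    return offensive_missiles * warheads_per_missile

# MIRVing multiplies the defender's burden at the attacker's marginal cost.
print(interceptors_needed(100, 1))   # 100 interceptors for single-warhead missiles
print(interceptors_needed(100, 3))   # 300 for early three-warhead MIRVs
print(interceptors_needed(100, 10))  # 1000 for Peacekeeper-scale MIRVs
```

The attacker adds warheads to an existing missile; the defender must buy whole new interceptors, which is why MIRVs made active defense economically untenable.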
In the event of a Soviet conventional attack on Western Europe, NATO planned to use tactical nuclear weapons. The Soviet Union countered this threat by issuing a statement that any use of nuclear weapons (tactical or otherwise) against Soviet forces would be grounds for a full-scale Soviet retaliatory strike (massive retaliation). Thus it was generally assumed that any combat in Europe would end with apocalyptic conclusions.
Land-based MIRVed ICBMs threaten MAD
MIRVed land-based ICBMs are generally considered suitable for a first strike (inherently counterforce) or a counterforce second strike, due to:
Their high accuracy (low circular error probable), compared to submarine-launched ballistic missiles, which used to be less accurate and more prone to defects;
Their fast response time, compared to bombers which are considered too slow;
Their ability to carry multiple MIRV warheads at once, useful for destroying a whole missile field or several cities with one missile.
Unlike a decapitation strike or a countervalue strike, a counterforce strike might result in a potentially more constrained retaliation. Though the Minuteman III of the mid-1960s was MIRVed with three warheads, heavily MIRVed vehicles threatened to upset the balance; these included the SS-18 Satan which was deployed in 1976, and was considered to threaten Minuteman III silos, which led some neoconservatives to conclude a Soviet first strike was being prepared for. This led to the development of the aforementioned Pershing II, the Trident I and Trident II, as well as the MX missile, and the B-1 Lancer.
MIRVed land-based ICBMs are considered destabilizing because they tend to put a premium on striking first. When a missile is MIRVed, it is able to carry many warheads (up to eight in existing US missiles, limited by New START, though the Trident II is capable of carrying up to 12) and deliver them to separate targets. If it is assumed that each side has 100 missiles, with five warheads each, and further that each side has a 95 percent chance of neutralizing the opponent's missiles in their silos by firing two warheads at each silo, then the attacking side can reduce the enemy ICBM force from 100 missiles to about five by firing 40 missiles with 200 warheads and keeping the remaining 60 missiles in reserve. As such, this type of weapon was intended to be banned under the START II agreement; however, START II never entered into force.
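The illustrative exchange above can be worked through directly. The figures are the text's own hypothetical assumptions: 100 missiles per side, five warheads each, and a 95 percent per-silo kill probability when two warheads are aimed at a silo.

```python
# Worked version of the text's hypothetical MIRV first-strike arithmetic.
missiles_per_side = 100
warheads_per_missile = 5
p_silo_killed = 0.95   # per silo, when targeted by two warheads

warheads_required = 2 * missiles_per_side                    # two per enemy silo
missiles_fired = warheads_required // warheads_per_missile   # missiles expended
missiles_in_reserve = missiles_per_side - missiles_fired
expected_silos_surviving = missiles_per_side * (1 - p_silo_killed)

print(missiles_fired, missiles_in_reserve, round(expected_silos_surviving))
```

Firing 40 of 100 missiles thus reduces the opponent to about five survivable ICBMs while 60 missiles stay in reserve, which is exactly why heavily MIRVed silo-based forces reward whoever shoots first.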
Late Cold War
The original US MAD doctrine was modified on July 25, 1980, with US President Jimmy Carter's adoption of countervailing strategy with Presidential Directive 59. According to its architect, Secretary of Defense Harold Brown, "countervailing strategy" stressed that the planned response to a Soviet attack was no longer to bomb Soviet population centers and cities primarily, but first to kill the Soviet leadership, then attack military targets, in the hope of a Soviet surrender before total destruction of the Soviet Union (and the United States). This modified version of MAD was seen as a winnable nuclear war, while still maintaining the possibility of assured destruction for at least one party. This policy was further developed by the Reagan administration with the announcement of the Strategic Defense Initiative (SDI, nicknamed "Star Wars"), the goal of which was to develop space-based technology to destroy Soviet missiles before they reached the United States.
SDI was criticized by both the Soviets and many of America's allies (including Prime Minister of the United Kingdom Margaret Thatcher) because, were it ever operational and effective, it would have undermined the "assured destruction" required for MAD. If the United States had a guarantee against Soviet nuclear attacks, its critics argued, it would have first-strike capability, which would have been a politically and militarily destabilizing position. Critics further argued that it could trigger a new arms race, this time to develop countermeasures for SDI. Despite its promise of nuclear safety, SDI was described by many of its critics (including Soviet nuclear physicist and later peace activist Andrei Sakharov) as being even more dangerous than MAD because of these political implications. Some supporters also argued that SDI could trigger a new arms race, forcing the USSR to spend an increasing proportion of GDP on defense, something which has been claimed to have been an indirect cause of the eventual collapse of the Soviet Union. Gorbachev himself announced that "the continuation of the S.D.I. program will sweep the world into a new stage of the arms race and would destabilize the strategic situation."
Proponents of ballistic missile defense (BMD) argue that MAD is exceptionally dangerous in that it essentially offers a single course of action in the event of a nuclear attack: full retaliatory response. The fact that nuclear proliferation has increased the number of nations in the "nuclear club", including nations of questionable stability (e.g. North Korea), and that a nuclear nation might be hijacked by a despot or others who might use nuclear weapons without sane regard for the consequences, presents a strong case for proponents of BMD, who seek a policy which both protects against attack and does not require escalation into what might become global nuclear war. Russia continues to have a strong public distaste for Western BMD initiatives, presumably because proprietary operative BMD systems could exceed their technical and financial resources and therefore degrade their larger military standing and sense of security in a post-MAD environment. Russian refusal to accept invitations to participate in NATO BMD may be indicative of the lack of an alternative to MAD in current Russian war-fighting strategy due to the dilapidation of conventional forces after the breakup of the Soviet Union.
Proud Prophet
Proud Prophet was a series of war games played out by various American military officials. The simulations revealed that MAD made the use of nuclear weapons virtually impossible without total nuclear annihilation, regardless of how nuclear weapons were incorporated into war plans. These results essentially ruled out the possibility of a limited nuclear strike: every time one was attempted, it resulted in a complete expenditure of nuclear weapons by both the United States and the USSR. Proud Prophet marked a shift in American strategy; afterwards, American rhetoric about strategies involving the use of nuclear weapons dissipated, and American war plans were changed to emphasize the use of conventional forces.
TTAPS Study
In 1983, a group of researchers including Carl Sagan released the TTAPS study (named for the initials of its authors), which predicted that large-scale use of nuclear weapons would cause a "nuclear winter". The study predicted that smoke and debris from nuclear bombings would be lofted into the atmosphere and diminish sunlight worldwide, lowering land temperatures to −15 °C to −25 °C. These findings led to the theory that MAD would still hold with far fewer weapons than were possessed by either the United States or the USSR at the height of the Cold War. As such, nuclear winter was used as an argument for significant reductions in nuclear weapons, since MAD would occur anyway.
Post-Cold War
After the fall of the Soviet Union, the Russian Federation emerged as a sovereign entity encompassing most of the territory of the former USSR. Relations between the United States and Russia were, at least for a time, less tense than they had been with the Soviet Union.
While MAD has become less applicable for the US and Russia, it has been argued as a factor behind Israel's acquisition of nuclear weapons. Similarly, diplomats have warned that Japan may be pressured to nuclearize by the presence of North Korean nuclear weapons. The ability to launch a nuclear attack against an enemy city is a relevant deterrent strategy for these powers.
The administration of US President George W. Bush withdrew from the Anti-Ballistic Missile Treaty in June 2002, claiming that the limited national missile defense system which they proposed to build was designed only to prevent nuclear blackmail by a state with limited nuclear capability and was not planned to alter the nuclear posture between Russia and the United States.
While relations have improved and an intentional nuclear exchange is more unlikely, the decay in Russian nuclear capability in the post–Cold War era may have had an effect on the continued viability of the MAD doctrine. A 2006 article by Keir Lieber and Daryl Press stated that the United States could carry out a nuclear first strike on Russia and would "have a good chance of destroying every Russian bomber base, submarine, and ICBM." This was attributed to reductions in Russian nuclear stockpiles and the increasing inefficiency and age of that which remains. Lieber and Press argued that the MAD era is coming to an end and that the United States is on the cusp of global nuclear primacy.
However, in a follow-up article in the same publication, others criticized the analysis, including Peter Flory, the US Assistant Secretary of Defense for International Security Policy, who began by writing "The essay by Keir Lieber and Daryl Press contains so many errors, on a topic of such gravity, that a Department of Defense response is required to correct the record." Regarding reductions in Russian stockpiles, another response stated that "a similarly one-sided examination of [reductions in] U.S. forces would have painted a similarly dire portrait".
A situation in which the United States might actually be expected to carry out a "successful" attack is perceived as a disadvantage for both countries. The strategic balance between the United States and Russia is becoming less stable, and the objective, technical possibility of a first strike by the United States is increasing. At a time of crisis, this instability could lead to an accidental nuclear war. For example, if Russia feared a US nuclear attack, Moscow might make rash moves (such as putting its forces on alert) that would provoke a US preemptive strike.
An outline of current US nuclear strategy toward both Russia and other nations was published as the document "Essentials of Post–Cold War Deterrence" in 1995.
In November 2020, the US successfully destroyed a dummy ICBM outside the atmosphere with another missile. Bloomberg Opinion writes that this defense ability "ends the era of nuclear stability".
India and Pakistan
MAD does not entirely apply to all nuclear-armed rivals. India and Pakistan are an example: because of the superiority of conventional Indian armed forces over their Pakistani counterparts, Pakistan may be forced out of desperation to use its nuclear weapons on invading Indian forces, regardless of an Indian retaliatory strike. As such, any large-scale attack on Pakistan by India could precipitate Pakistani use of nuclear weapons, rendering MAD inapplicable. However, MAD remains applicable insofar as it may deter Pakistan from an offensive, “suicidal” nuclear attack, as opposed to a defensive nuclear strike.
North Korea
Since the emergence of North Korea as a nuclear state, military action has not been an option for handling the instability surrounding North Korea, because North Korea could retaliate with nuclear weapons against any conventional attack; this leaves non-nuclear neighboring states such as South Korea and Japan incapable of resolving North Korea's destabilizing effect by military force. MAD may not apply to the situation in North Korea, because the theory relies on rational consideration of the use and consequences of nuclear weapons, which may not be the case for potential North Korean deployment.
Official policy
Whether MAD was the officially accepted doctrine of the United States military during the Cold War is largely a matter of interpretation. The United States Air Force, for example, has retrospectively contended that it never advocated MAD as a sole strategy, and that this form of deterrence was seen as one of numerous options in US nuclear policy. Former officers have emphasized that they never felt as limited by the logic of MAD (and were prepared to use nuclear weapons in smaller-scale situations than "assured destruction" allowed), and did not deliberately target civilian cities (though they acknowledge that the result of a "purely military" attack would certainly devastate the cities as well). However, according to a declassified 1959 Strategic Air Command study, US nuclear weapons plans specifically targeted the populations of Beijing, Moscow, Leningrad, East Berlin, and Warsaw for systematic destruction. MAD was implied in several US policies and used in the political rhetoric of leaders in both the United States and the USSR during many periods of the Cold War.
The doctrine of MAD was officially at odds with that of the USSR, which had, contrary to MAD, insisted that survival was possible. The Soviets believed they could win not only a strategic nuclear war, which they planned to absorb with their extensive civil defense planning, but also the conventional war that they predicted would follow after their strategic nuclear arsenal had been depleted. Official Soviet policy, though, may have had internal critics towards the end of the Cold War, including some in the USSR's own leadership.
Other evidence of this comes from the Soviet minister of defense, Dmitriy Ustinov, who wrote that "A clear appreciation by the Soviet leadership of what a war under contemporary conditions would mean for mankind determines the active position of the USSR." The Soviet doctrine, although being seen as primarily offensive by Western analysts, fully rejected the possibility of a "limited" nuclear war by 1975.
Criticism
Deterrence theory has been criticized by numerous scholars for various reasons. A prominent strain of criticism argues that rational deterrence theory is contradicted by frequent deterrence failures, which may be attributed to misperceptions. Critics have also argued that leaders do not behave in ways that are consistent with the predictions of nuclear deterrence theory. For example, it has been argued that it is inconsistent with the logic of rational deterrence theory that states continue to build nuclear arsenals once they have reached the second-strike threshold.
Additionally, many scholars have advanced philosophical objections against the principles of deterrence theory on purely ethical grounds. Included in this group is Robert L. Holmes who observes that mankind's reliance upon a system of preventing war which is based exclusively upon the threat of waging war is inherently irrational and must be considered immoral according to fundamental deontological principles. In addition, he questions whether it can be conclusively demonstrated that such a system has in fact served to prevent warfare in the past and may actually serve to increase the probability of waging war in the future due to its reliance upon the continuous development of new generations of technologically advanced nuclear weapons.
Challengeable assumptions
Second-strike capability
A first strike must not be capable of preventing a retaliatory second strike or else mutual destruction is not assured. In this case, a state would have nothing to lose with a first strike, or might try to preempt the development of an opponent's second-strike capability with a first strike of its own. To avoid this, countries may design their nuclear forces to make a decapitation strike almost impossible, by dispersing launchers over wide areas and using a combination of sea-based, air-based, underground, and mobile land-based launchers.
Another method of ensuring second-strike capability is the use of a dead man's switch, or "fail-deadly" mechanism: in the absence of ongoing action from a functional command structure—such as would occur after suffering a successful decapitation strike—an automatic system defaults to launching a nuclear strike upon some target. A particular example is the Soviet (now Russian) Dead Hand system, which has been described as a semi-automatic "version of Dr. Strangelove's Doomsday Machine" which, once activated, can launch a second strike without human intervention. The purpose of the Dead Hand system is to ensure a second strike even if Russia were to suffer a decapitation attack, thus maintaining MAD.
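In software terms, fail-deadly inverts the usual fail-safe watchdog: the destructive action is the default, and only a continuous stream of "all clear" signals suppresses it. The sketch below is a toy illustration of that inversion only; all class and method names are invented for this example and have no connection to any real system.

```python
class FailDeadlySwitch:
    """Toy dead man's switch: an action fires by default unless it is
    continually suppressed by periodic 'all clear' signals."""

    def __init__(self, timeout, action):
        self.timeout = timeout        # seconds of silence tolerated
        self.action = action          # callback fired on timeout
        self.last_signal = 0.0

    def all_clear(self, now):
        # A functioning command structure resets the timer.
        self.last_signal = now

    def check(self, now):
        # Silence longer than the timeout is treated as decapitation:
        # the system defaults to acting, not to standing down.
        if now - self.last_signal > self.timeout:
            self.action()

fired = []
switch = FailDeadlySwitch(timeout=60.0, action=lambda: fired.append("launch"))
switch.all_clear(now=0.0)
switch.check(now=30.0)    # within the timeout: nothing happens
switch.check(now=120.0)   # prolonged silence: the default action fires
```

A fail-safe design would do the opposite: require a positive signal before acting and default to inaction.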
Perfect detection
No false positives (errors) in the equipment and/or procedures that must identify a launch by the other side. The implication of this is that an accident could lead to a full nuclear exchange. During the Cold War there were several instances of false positives, as in the case of Stanislav Petrov.
Perfect attribution. If there is a launch from the Sino-Russian border, it could be difficult to distinguish which nation is responsible—both Russia and China have the capability—and, hence, against which nation retaliation should occur. A launch from a nuclear-armed submarine could also be difficult to attribute.
Perfect rationality
No rogue commanders will have the ability to corrupt the launch decision process. Such an incident very nearly occurred during the Cuban Missile Crisis when an argument broke out aboard a nuclear-armed submarine cut off from radio communication. The second-in-command, Vasili Arkhipov, refused to launch despite an order from Captain Savitsky to do so.
All leaders with launch capability seem to care about the survival of their citizens. Winston Churchill is quoted as saying that any strategy will not "cover the case of lunatics or dictators in the mood of Hitler when he found himself in his final dugout."
Inability to defend
No fallout shelter networks of sufficient capacity to protect large segments of the population and/or industry.
No development of anti-missile technology or deployment of remedial protective gear.
Inherent instability
Another reason is that deterrence has an inherent instability. As Kenneth Boulding said: "If deterrence were really stable... it would cease to deter." If decision-makers were perfectly rational, they would never order the large-scale use of nuclear weapons, and the credibility of the nuclear threat would be low.
However, this perfect-rationality criticism is addressed by, and indeed consistent with, current deterrence policy. In Essentials of Post-Cold War Deterrence, the authors explicitly advocate ambiguity regarding "what is permitted" for other nations and endorse "irrationality", or more precisely the perception of it, as an important tool in deterrence and foreign policy. The document claims that the capacity of the United States, in exercising deterrence, would be hurt by portraying US leaders as fully rational and cool-headed.
Terrorism
The threat of foreign and domestic nuclear terrorism has been a criticism of MAD as a defensive strategy. Deterrent strategies are ineffective against those who attack without regard for their life. Furthermore, the doctrine of MAD has been critiqued in regard to terrorism and asymmetrical warfare. Critics contend that a retaliatory strike would not be possible in this case because of the decentralization of terrorist organizations, which may be operating in several countries and dispersed among civilian populations. A misguided retaliatory strike made by the targeted nation could even advance terrorist goals in that a contentious retaliatory strike could drive support for the terrorist cause that instigated the nuclear exchange.
However Robert Gallucci, the president of the John D. and Catherine T. MacArthur Foundation, argues that although traditional deterrence is not an effective approach toward terrorist groups bent on causing a nuclear catastrophe, "the United States should instead consider a policy of expanded deterrence, which focuses not solely on the would-be nuclear terrorists but on those states that may deliberately transfer or inadvertently lead nuclear weapons and materials to them. By threatening retaliation against those states, the United States may be able to deter that which it cannot physically prevent."
Graham Allison makes a similar case and argues that the key to expanded deterrence is coming up with ways of tracing nuclear material to the country that forged the fissile material: "After a nuclear bomb detonates, nuclear forensic cops would collect debris samples and send them to a laboratory for radiological analysis. By identifying unique attributes of the fissile material, including its impurities and contaminants, one could trace the path back to its origin." The process is analogous to identifying a criminal by fingerprints: "The goal would be twofold: first, to deter leaders of nuclear states from selling weapons to terrorists by holding them accountable for any use of their own weapons; second, to give leaders every incentive to tightly secure their nuclear weapons and materials."
Space weapons
Strategic analysts have criticized the doctrine of MAD for its inability to respond to the proliferation of space weaponry. First, military space systems have unequal dependence across countries, meaning that less-dependent countries may find it beneficial to attack a more-dependent country's space assets, which complicates deterrence. This is especially true for countries like North Korea, which has extensive ballistic missiles that could strike space-based systems. Second, even across countries with similar dependence, anti-satellite weapons (ASATs) have the ability to remove the command and control of nuclear weapons, encouraging crisis instability and pre-emptive nuclear-disabling strikes. Third, there is a risk from asymmetrical challengers: countries that fall behind in space-weapon development may turn to using chemical or biological weapons, which may heighten the risk of escalation, bypassing any deterrent effect of nuclear weapons.
Entanglements
Cold War bipolarity is no longer applicable to the global balance of power. The complex modern alliance system ties allies and enemies to one another, so that action by one country to deter another could threaten the safety of a third country. Such "security trilemmas" could increase tension even during mundane acts of cooperation, complicating MAD.
Emerging hypersonic weapons
Hypersonic ballistic or cruise missiles threaten the retaliatory backbone of mutual assured destruction. The high precision and speed of these weapons may allow for "decapitation" strikes that remove another nation's ability to mount a nuclear response. In addition, the secretive nature of these weapons' development can make deterrence more asymmetrical.
Failure to retaliate
If it was known that a country's leader would not resort to nuclear retaliation, adversaries may be emboldened. Edward Teller, a member of the Manhattan Project, echoed these concerns as early as 1985 when he said that "The MAD policy as a deterrent is totally ineffective if it becomes known that in case of attack, we would not retaliate against the aggressor."
See also
References
External links
"The Rise of U.S. Nuclear Primacy" from Foreign Affairs, March/April 2006
First Strike and Mutual Deterrence from the Dean Peter Krogh Foreign Affairs Digital Archives
Herman Kahn's Doomsday Machine
Robert McNamara's "Mutual Deterrence" speech from 1967
Getting MAD: Nuclear Mutual Assured Destruction
Center for Arms Control and Non-Proliferation
Council for a Livable World
Nuclear Files.org Mutual Assured Destruction
John G. Hines et al. Soviet Intentions 1965–1985. BDM, 1995.
Cold War policies
Nuclear strategy
Nuclear weapons
Nuclear warfare
English phrases
Military doctrines
Cold War terminology
Nuclear doomsday
Theories of history

Nuclear strategy involves the development of doctrines and strategies for the production and use of nuclear weapons.
As a sub-branch of military strategy, nuclear strategy attempts to match nuclear weapons as means to political ends. In addition to the actual use of nuclear weapons whether in the battlefield or strategically, a large part of nuclear strategy involves their use as a bargaining tool.
Some of the issues considered within nuclear strategy include:
Conditions which serve a nation's interest to develop nuclear weapons
Types of nuclear weapons to be developed
How and when weapons are to be used
Many strategists argue that nuclear strategy differs from other forms of military strategy. The immense and terrifying power of the weapons makes their use, in seeking victory in a traditional military sense, impossible.
Perhaps counterintuitively, an important focus of nuclear strategy has been determining how to prevent and deter their use, a crucial part of mutually assured destruction.
In the context of nuclear proliferation and maintaining the balance of power, states also seek to prevent other states from acquiring nuclear weapons as part of nuclear strategy.
Nuclear deterrent composition
The doctrine of mutual assured destruction (MAD) assumes that a nuclear deterrent force must be credible and survivable. That is, each deterrent force must survive a first strike with sufficient capability to effectively destroy the other country in a second strike. Therefore, a first strike would be suicidal for the launching country.
In the late 1940s and 1950s as the Cold War developed, the United States and Soviet Union pursued multiple delivery methods and platforms to deliver nuclear weapons. Three types of platforms proved most successful and are collectively called a "nuclear triad". These are air-delivered weapons (bombs or missiles), ballistic missile submarines (usually nuclear-powered and called SSBNs), and intercontinental ballistic missiles (ICBMs), usually deployed in land-based hardened missile silos or on vehicles.
Although not considered part of the deterrent forces, all of the nuclear powers deployed large numbers of tactical nuclear weapons in the Cold War. These could be delivered by virtually all platforms capable of delivering large conventional weapons.
During the 1970s there was growing concern that the combined conventional forces of the Soviet Union and the Warsaw Pact could overwhelm the forces of NATO. It seemed unthinkable to respond to a Soviet/Warsaw Pact incursion into Western Europe with strategic nuclear weapons, inviting a catastrophic exchange. Thus, technologies were developed to greatly reduce collateral damage while being effective against advancing conventional military forces. Some of these were low-yield neutron bombs, which were lethal to tank crews, especially with tanks massed in tight formation, while producing relatively little blast, thermal radiation, or radioactive fallout. Other technologies were so-called "suppressed radiation devices," which produced mostly blast with little radioactivity, making them much like conventional explosives, but with much more energy.
See also
Assured destruction
Bernard Brodie
Counterforce, Countervalue
Decapitation strike
Deterrence theory
Doctrine for Joint Nuclear Operations
Dr. Strangelove (1964), a film by Stanley Kubrick, satirizing nuclear strategy.
Fail-deadly
Pre-emptive nuclear strike, Second strike
Force de frappe
Game theory, wargaming
Herman Kahn
Madman theory
Massive retaliation
Military strategy
Minimal deterrence
Mutual assured destruction (MAD)
No first use
National Security Strategy of the United States
Nuclear blackmail
Nuclear proliferation
Nuclear utilization target selection (NUTS)
Nuclear weapons debate
Single Integrated Operational Plan (SIOP)
Strategic bombing
Tactical nuclear weapons
Thomas Schelling
Bibliography
Early texts
Brodie, Bernard. The Absolute Weapon. Freeport, N.Y.: Books for Libraries Press, 1946.
Brodie, Bernard. Strategy in the Missile Age. Princeton: Princeton University Press, 1959.
Dunn, Lewis A. Deterrence Today – Roles, Challenges, and Responses Paris: IFRI Proliferation Papers n° 19, 2007.
Kahn, Herman. On Thermonuclear War. 2nd ed. Princeton, N.J.: Princeton University Press, 1961.
Kissinger, Henry A. Nuclear Weapons and Foreign Policy. New York: Harper, 1957.
Schelling, Thomas C. Arms and Influence. New Haven: Yale University Press, 1966.
Wohlstetter, Albert. "The Delicate Balance of Terror." Foreign Affairs 37, 211 (1958): 211–233.
Secondary literature
Baylis, John, and John Garnett. Makers of Nuclear Strategy. London: Pinter, 1991.
Buzan, Barry, and Herring, Eric. The Arms Dynamic in World Politics. London: Lynne Rienner Publishers, 1998.
Freedman, Lawrence. The Evolution of Nuclear Strategy. 2nd ed. New York: St. Martin's Press, 1989.
Heuser, Beatrice. NATO, Britain, France and the FRG: Nuclear Strategies and Forces for Europe, 1949–2000 (London: Macmillan, hardback 1997, paperback 1999), 256p.
Heuser, Beatrice. Nuclear Mentalities? Strategies and Belief Systems in Britain, France and the FRG (London: Macmillan, July 1998), 277p., Index, Tables.
Heuser, Beatrice. "Victory in a Nuclear War? A Comparison of NATO and WTO War Aims and Strategies", Contemporary European History Vol. 7 Part 3 (November 1998), pp. 311–328.
Heuser, Beatrice. "Warsaw Pact Military Doctrines in the 70s and 80s: Findings in the East German Archives", Comparative Strategy Vol. 12 No. 4 (Oct.–Dec. 1993), pp. 437–457.
Kaplan, Fred M. The Wizards of Armageddon. New York: Simon and Schuster, 1983.
Rai Chowdhuri, Satyabrata. Nuclear Politics: Towards A Safer World, Ilford: New Dawn Press, 2004.
Rosenberg, David. "The Origins of Overkill: Nuclear Weapons and American Strategy, 1945–1960." International Security 7, 4 (Spring, 1983): 3–71.
Schelling, Thomas C. The Strategy of Conflict. Cambridge: Harvard University Press, 1960.
Smoke, Richard. National Security and the Nuclear Dilemma. 3rd ed. New York: McGraw–Hill, 1993.
References
Nuclear warfare

A cloaking device is a hypothetical or fictional stealth technology that can cause objects, such as spaceships or individuals, to be partially or wholly invisible to parts of the electromagnetic (EM) spectrum. Fictional cloaking devices have been used as plot devices in various media for many years.
Developments in scientific research show that real-world cloaking devices can obscure objects from at least one wavelength of EM emissions. Scientists already use artificial materials called metamaterials to bend light around an object. However, over the entire spectrum, a cloaked object scatters more than an uncloaked object.
Fictional origins
Cloaks with magical powers of invisibility appear from the earliest days of story-telling. Since the advent of modern science fiction, many variations on the theme with a proposed basis in reality have been imagined. Star Trek screenwriter Paul Schneider, inspired in part by the 1958 film Run Silent, Run Deep, and in part by The Enemy Below, which had been released in 1957, imagined cloaking as a space-travel analog of a submarine submerging, and employed it in the 1966 Star Trek episode "Balance of Terror", in which he introduced the Romulan species, whose space vessels employ cloaking devices extensively. (He likewise predicted, in the same episode, that invisibility, "selective bending of light" as described above, would have an enormous power requirement.) Another Star Trek screenwriter, D.C. Fontana, coined the term "cloaking device" for the 1968 episode "The Enterprise Incident", which also featured Romulans.
Star Trek placed a limit on use of this device: a space vessel cannot fire weapons, employ defensive shields, or operate transporters while cloaked; thus it must "decloak" to fire—essentially like a submarine needing to "surface" in order to launch torpedoes.
Writers and game designers have since incorporated cloaking devices into many other science-fiction narratives, including Doctor Who, Star Wars, and Stargate.
Scientific experimentation
An operational, non-fictional cloaking device might be an extension of the basic technologies used by stealth aircraft, such as radar-absorbing dark paint, optical camouflage, cooling the outer surface to minimize electromagnetic emissions (usually infrared), or other techniques to minimize other EM emissions, and to minimize particle emissions from the object. The use of certain devices to jam and confuse remote sensing devices would greatly aid in this process, but is more properly referred to as "active camouflage". Alternatively, metamaterials provide the theoretical possibility of making electromagnetic radiation pass freely around the 'cloaked' object.
Metamaterial research
Optical metamaterials have featured in several proposals for invisibility schemes. "Metamaterials" refers to materials that owe their refractive properties to the way they are structured, rather than the substances that compose them. Using transformation optics it is possible to design the optical parameters of a "cloak" so that it guides light around some region, rendering it invisible over a certain band of wavelengths.
These spatially varying optical parameters do not correspond to any natural material, but may be implemented using metamaterials. There are several theories of cloaking, giving rise to different types of invisibility.
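As a concrete illustration of this design recipe, a standard textbook result (due to Pendry, Schurig, and Smith, and not taken from the text above) describes an ideal cylindrical cloak of inner radius $a$ and outer radius $b$: the map $r' = a + r(b-a)/b$ compresses the region $r < b$ into the annulus $a < r' < b$, and transformation optics translates this into the anisotropic material parameters

```latex
\varepsilon_{r'} = \mu_{r'} = \frac{r'-a}{r'}, \qquad
\varepsilon_{\theta'} = \mu_{\theta'} = \frac{r'}{r'-a}, \qquad
\varepsilon_{z'} = \mu_{z'} = \left(\frac{b}{b-a}\right)^{2}\frac{r'-a}{r'}
```

No natural material has these spatially varying, anisotropic values (note that $\varepsilon_{\theta'}$ diverges at the inner boundary), which is why metamaterials are needed to approximate them.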
In 2014, scientists demonstrated good cloaking performance in murky water, demonstrating that an object shrouded in fog can disappear completely when appropriately coated with metamaterial. This is due to the random scattering of light, such as that which occurs in clouds, fog, milk, frosted glass, etc., combined with the properties of the metamaterial coating. When light is diffused, a thin coat of metamaterial around an object can make it essentially invisible under a range of lighting conditions.
Active camouflage
Active camouflage (or adaptive camouflage) is a group of camouflage technologies which would allow an object (usually military in nature) to blend into its surroundings by use of panels or coatings capable of changing color or luminosity. Active camouflage can be seen as having the potential to become the perfection of the art of camouflaging things from visual detection.
Optical camouflage is a kind of active camouflage in which one wears a fabric which has an image of the scene directly behind the wearer projected onto it, so that the wearer appears invisible. The drawback to this system is that, when the cloaked wearer moves, a visible distortion is often generated as the 'fabric' catches up with the object's motion. The concept exists for now only in theory and in proof-of-concept prototypes, although many experts consider it technically feasible.
It has been reported that the British Army has tested an invisible tank.
Plasma stealth
Plasma at certain density ranges absorbs certain bandwidths of broadband waves, potentially rendering an object invisible. However, generating plasma in air is too expensive; a feasible alternative is generating plasma between thin membranes instead. The Defense Technical Information Center is also following research on plasma-based radar cross-section (RCS) reduction technologies. A plasma cloaking device was patented in 1991.
Metascreen
A prototype metascreen is a claimed cloaking device that is just a few micrometres thick and can, to a limited extent, hide 3D objects from microwaves in their natural environment, in their natural positions, in all directions, and from all of the observer's positions. It was prepared at the University of Texas at Austin by Professor Andrea Alù.
The metascreen consisted of a 66-micrometre-thick polycarbonate film supporting an arrangement of 20-micrometre-thick copper strips that resembled a fishing net. In the experiment, when the metascreen was hit by 3.6 GHz microwaves, it re-radiated microwaves of the same frequency that were out of phase, thus cancelling out reflections from the object being hidden. The device only cancelled out the scattering of microwaves to first order. The same researchers had published a paper on "plasmonic cloaking" the previous year.
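The cancellation relies on ordinary destructive interference: a re-radiated wave of equal amplitude but opposite phase nulls the reflected wave. The following minimal numeric sketch illustrates only that principle (the 3.6 GHz figure comes from the experiment above; the amplitudes and sampling are illustrative, not a model of the device):

```python
import math

FREQ_HZ = 3.6e9                 # microwave frequency used in the experiment
OMEGA = 2 * math.pi * FREQ_HZ   # angular frequency

def net_field(t, amplitude=1.0):
    reflected = amplitude * math.cos(OMEGA * t)             # wave scattered off the object
    reradiated = amplitude * math.cos(OMEGA * t + math.pi)  # screen's wave, 180 degrees out of phase
    return reflected + reradiated

# Sample roughly one nanosecond of the superposed field: it stays at
# zero (up to floating-point rounding) at every instant.
peak = max(abs(net_field(n * 1e-12)) for n in range(1000))
```

If the re-radiated amplitude or phase deviates from this ideal, the cancellation is only partial, which is consistent with the first-order limitation noted above.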
Howell/Choi cloaking device
University of Rochester physics professor John Howell and graduate student Joseph Choi have announced a scalable cloaking device which uses common optical lenses to achieve visible light cloaking on the macroscopic scale, known as the "Rochester Cloak". The device consists of a series of four lenses which direct light rays around objects which would otherwise occlude the optical pathway.
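In the paraxial (small-angle) regime, this behavior can be checked with ray-transfer (ABCD) matrices. The sketch below assumes the lens-spacing condition reported for the Rochester Cloak, t1 = f1 + f2 and t2 = 2*f2*(f1 + f2)/(f1 - f2); under that assumption the four-lens train multiplies out to the matrix of empty space, meaning rays exit exactly as if the device (and anything hidden in the off-axis regions inside it) were absent. The specific focal lengths are illustrative values, not necessarily those of the demonstrated device.

```python
def lens(f):
    # Thin-lens ray-transfer matrix.
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def space(d):
    # Free-space propagation over distance d.
    return [[1.0, d], [0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rochester_cloak_matrix(f1, f2):
    t1 = f1 + f2
    t2 = 2.0 * f2 * (f1 + f2) / (f1 - f2)
    train = [lens(f1), space(t1), lens(f2), space(t2),
             lens(f2), space(t1), lens(f1)]
    m = [[1.0, 0.0], [0.0, 1.0]]
    for element in train:
        m = matmul(element, m)   # each new element acts after everything so far
    return m, 2.0 * t1 + t2      # matrix, total device length

# With f1 = 200 mm and f2 = 75 mm the result should equal free-space
# propagation over the full length: [[1, 880], [0, 1]] (lengths in mm).
m, total_length = rochester_cloak_matrix(200.0, 75.0)
```

Because the net matrix is that of empty space, the angular magnification is unity: background objects appear at the correct position and size through the cloak.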
Cloaking in mechanics
The concepts of cloaking are not limited to optics but can also be transferred to other fields of physics. For example, it has been possible to cloak sound at certain frequencies, and even to cloak objects from touch in mechanics. This renders an object "invisible" to sound, or hides it from being felt.
See also
Cloak
Invisibility
Cloak of invisibility
Metamaterial
Philadelphia Experiment
Stealth technology
References
"MSNBC: Can objects be turned invisible?"
Optical Camouflage by the Tachi Lab in Japan
Space Daily - Engineers Create Optical Cloaking Design For Invisibility, April, 2007
External links
University of Texas at Austin, Cockrell School of Engineering, Researchers at UT Austin Create an Ultrathin Invisibility Cloak, 26 March 2013.
New Journal of Physics, "Demonstration of an ultralow profile cloak for scattering suppression of a finite-length rod in free space", by JC Soric, PY Chen, A Kerkhoff, D Rainwater, K Melin, and Andrea Alù, March 2013.
New Journal of Physics, "Experimental verification of three-dimensional plasmonic cloaking in free-space", by D Rainwater, A Kerkhoff, K Melin, J C Soric, G Moreno and Andrea Alù, January 2012.
Physical Review X, "Do Cloaked Objects Really Scatter Less", by Francesco Monticone and Andrea Alù, October 2013.
Fictional technology
Hypothetical technology
Invisibility
Science fiction themes
Star Trek devices
Theoretical physics
Fiction about invisibility
Stealth technology

A building or edifice is an enclosed structure with a roof and walls, usually standing permanently in one place, such as a house or factory. Buildings come in a variety of sizes, shapes, and functions, and have been adapted throughout history for numerous factors, from building materials available, to weather conditions, land prices, ground conditions, specific uses, prestige, and aesthetic reasons. To better understand the concept, see Nonbuilding structure for contrast.
Buildings serve several societal needs – occupancy, primarily as shelter from weather, security, living space, privacy, to store belongings, and to comfortably live and work. A building as a shelter represents a physical separation of the human habitat (a place of comfort and safety) from the outside (a place that may be harsh and harmful at times).
Ever since the first cave paintings, buildings have been objects or canvasses of much artistic expression. In recent years, interest in sustainable planning and building practices has become an intentional part of the design process of many new buildings and other structures, usually green buildings.
Definition
A building is 'a structure that has a roof and walls and stands more or less permanently in one place'; "there was a three-storey building on the corner"; "it was an imposing edifice". In the broadest interpretation a fence or wall is a building. However, the word structure is used more broadly than building, to include natural and human-made formations and ones that do not have walls; structure is more often used for a fence. Sturgis' Dictionary noted that building "differs from architecture in excluding all idea of artistic treatment; and it differs from construction in the idea of excluding scientific or highly skilful treatment."
Structural height in technical usage is the height to the highest architectural detail on the building from street level. Spires and masts may or may not be included in this height, depending on how they are classified. Spires and masts used as antennas are not generally included. The distinction between a low-rise and high-rise building is a matter of debate, but generally three stories or less is considered low-rise.
History
There is clear evidence of homebuilding from around 18,000 BC. Buildings became common during the Neolithic period.
Types
Residential
Single-family residential buildings are most often called houses or homes. Multi-family residential buildings containing more than one dwelling unit are called duplexes or apartment buildings. Condominiums are apartments that occupants own rather than rent. Houses may be built in pairs (semi-detached) or in terraces, where all but two of the houses have others on either side. Apartments may be built round courtyards or as rectangular blocks surrounded by plots of ground. Houses built as single dwellings may later be divided into apartments or bedsitters, or converted to other uses (e.g., offices or shops). Hotels, especially of the extended-stay variety (apartels), can be classed as residential.
Building types may range from huts to multimillion-dollar high-rise apartment blocks able to house thousands of people. Increasing settlement density in buildings (and smaller distances between buildings) is usually a response to high ground prices resulting from the desire of many people to live close to their places of employment or similar attractors.
Terms for residential buildings reflect such characteristics as function (e.g., holiday cottage (vacation home) or timeshare if occupied seasonally); size (cottage or great house); value (shack or mansion); manner of construction (log home or mobile home); architectural style (castle or Victorian); and proximity to geographical features (earth shelter, stilt house, houseboat, or floating home). For residents in need of special care, or those society considers dangerous enough to deprive of liberty, there are institutions (nursing homes, orphanages, psychiatric hospitals, and prisons) and group housing (barracks and dormitories).
Historically, many people lived in communal buildings called longhouses, smaller dwellings called pit-houses, and houses combined with barns, sometimes called housebarns.
Common building materials include brick, concrete, stone, and combinations thereof. Buildings are defined to be substantial, permanent structures. Such forms as yurts and motorhomes are therefore considered dwellings but not buildings.
Commercial
A commercial building is one in which at least one business is based and in which people do not live. Examples include stores, restaurants, and hotels.
Industrial
Industrial buildings are those in which heavy industry is done, such as manufacturing. These edifices include warehouses and factories.
Agricultural
Agricultural buildings are the outbuildings located on farms, such as barns.
Mixed use
Some buildings incorporate multiple uses, most commonly commercial and residential.
Complex
Sometimes a group of inter-related (and possibly inter-connected) buildings is referred to as a complex – for example a housing complex, educational complex, hospital complex, etc.
Creation
The practice of designing, constructing, and operating buildings is most usually a collective effort of different groups of professionals and trades. Depending on the size, complexity, and purpose of a particular building project, the project team may include:
A real estate developer who secures funding for the project;
One or more financial institutions or other investors that provide the funding
Local planning and code authorities
A surveyor who performs ALTA/ACSM and construction surveys throughout the project;
Construction managers who coordinate the effort of different groups of project participants;
Licensed architects and engineers who provide building design and prepare construction documents;
The principal design engineering disciplines, which would normally include civil, structural, mechanical building services or HVAC (heating, ventilation and air conditioning), electrical building services, and plumbing and drainage. Other specialist design engineers may also be involved, such as fire (prevention), acoustic, façade, building physics, telecoms, AV (audio-visual) and BMS (building management systems) / automatic-controls engineers. These design engineers also prepare construction documents, which are issued to specialist contractors both to obtain a price for the works and to follow for the installations.
Landscape architects;
Interior designers;
Other consultants;
Contractors who provide construction services and install building systems such as climate control, electrical, plumbing, decoration, fire protection, security and telecommunications;
Marketing or leasing agents;
Facility managers who are responsible for operating the building.
Regardless of their size or intended use, all buildings in the US must comply with zoning ordinances, building codes and other regulations such as fire codes, life safety codes and related standards.
Vehicles—such as trailers, caravans, ships and passenger aircraft—are treated as "buildings" for life safety purposes.
Ownership and funding
Mortgage loan
Real estate developer
Environmental impacts
Building services
Physical plant
Any building requires a certain amount of internal infrastructure to function, including elements such as heating / cooling, power and telecommunications, and water and wastewater. Especially in commercial buildings (such as offices or factories), these can be extremely intricate systems taking up large amounts of space (sometimes located in separate areas or double floors / false ceilings) and constituting a large part of the regular maintenance required.
Conveying systems
Systems for transport of people within buildings:
Elevator
Escalator
Moving sidewalk (horizontal and inclined)
Systems for transport of people between interconnected buildings:
Skyway
Underground city
Building damage
Buildings may be damaged during construction or during maintenance. They may be damaged by accidents involving storms, explosions, subsidence caused by mining, water withdrawal or poor foundations and landslides. Buildings may suffer fire damage and flooding. They may become dilapidated through lack of proper maintenance, or alteration work improperly carried out.
See also
Autonomous building
Commercial modular construction
Earthquake engineering
Float glass
Hurricane-proof building
List of largest buildings
List of tallest buildings
Lists of buildings and structures
Natural building
Natural disaster and earthquake
Skyscraper
Steel building
Tent
References
External links | Building | [
"Engineering"
] | 1,567 | [
"Construction",
"Building",
"Buildings and structures",
"Architecture"
] |
46,016 | https://en.wikipedia.org/wiki/Psion%20%28company%29 | Psion PLC was a designer and manufacturer of mobile handheld computers for commercial and industrial uses. The company was headquartered in London, England, with major operations in Mississauga, Ontario, Canada, and other company offices in Europe, the United States, Asia, Latin America, and the Middle East. It was a public company listed on the London Stock Exchange and was once a constituent of the FTSE 100 Index.
Psion's operational business was formed in September 2000 from a merger of Psion and Canadian-based Teklogix Inc., and was a global provider of solutions for mobile computing and wireless data collection. The Group's products and services included rugged mobile hardware, secure software and wireless networks, professional services, and support programs. Psion worked with its clients in the area of burgeoning technologies, including imaging, voice recognition, and radio-frequency identification (RFID). They had operations worldwide in 14 countries, and customers in more than 80 countries.
Formed in 1980, Psion first achieved success as a consumer hardware company that developed the Psion Organiser and a wide range of more sophisticated clamshell personal digital assistants (PDAs). Psion either closed or disposed of all its prior operations and then focused on rugged mobile computing systems. It withdrew from the consumer device market in 2001. Motorola Solutions announced in June 2012 that it had agreed to acquire Psion for $200 million.
History
Beginnings (1980–1984)
Psion was established in 1980 as a software house with a close relationship with Sinclair Research. The company developed games and other software for the ZX81 and ZX Spectrum home computers, released under the Sinclair/Psion brand. Psion's games for the ZX Spectrum included Chess, Chequered Flag, Flight Simulation and the Horace series. Psion Chess was later ported to other platforms, including the early Macintosh in 1984.
Early software releases for the ZX Spectrum included titles such as VU-Calc, VU-File and VU-3D, along with dozens of other titles.
The company name is an acronym standing for "Potter Scientific Instruments", after the company's founder, David Potter. The acronym PSI was already in use elsewhere in the world so "ON" was added to make the name unique. Potter remained managing director until 1999 and was chairman of the company until late 2009.
In early 1983, Sinclair approached Psion regarding the development of a suite of office applications for the forthcoming Sinclair QL personal computer. Psion were already working on a project in this area, and when the QL was launched in 1984 it was bundled with Quill, Archive, Abacus and Easel; respectively a word processor, database, spreadsheet, and business graphics application. These were later ported to DOS and made available for the IBM PC and ACT's Sirius and Apricot computers, collectively called PC-Four, or Xchange in an enhanced version. Xchange was also available for ICL's One Per Desk computer, which was based on the QL.
Psion Organiser (1984)
In 1984, Psion first entered the hardware market with the Psion Organiser, an early handheld computer resembling a pocket calculator with an alphanumeric keyboard. In 1986, the vastly improved Psion Organiser II was released, assembled by Speedboard Assembly Services. Its success led the company into a decade-long period of computer and operating system development at Psion. The Organiser II included the simple-to-use Open Programming Language (OPL) for database programming, which sparked a large independent software market.
EPOC (1987)
In 1987, Psion began developing its SIxteen Bit Organiser (SIBO) family of devices and its own new multitasking operating system, named EPOC, to run its third-generation products: laptops (MC), industrial handhelds (HC and Workabout) and PDAs (Series 3).
It is often rumoured that EPOC stands for "Electronic Piece Of Cheese"; however, Colly Myers, who was Symbian's CEO from its founding until 2002, said in an interview that it stood for 'epoch' and nothing more. This development effort produced the clamshell QWERTY-based Psion Series 3 palmtops (1993–98), which sold in the hundreds of thousands, and the Psion MC-series laptops, which sold poorly compared to the DOS-based laptops of the era.
A second effort, dubbed Project Protea, produced the Psion Series 5 for sale in 1997, a completely new product from the 32-bit hardware upwards through the OS, UI, and applications. It is still remembered for its high quality keyboard which, despite its size, allowed for touch-typing. However, the new feel of the product, and the removal of certain familiar quirks, alienated loyal Series 3 users, who tended to stick with their PDAs rather than upgrade.
In 1999, Psion released the Psion Series 7, which was much like a larger version of the Series 5, but with a double-size VGA-resolution screen that featured 256 colours (the Series 5 had a half-VGA screen with 16 grey shades). It was followed by the very similar Psion netBook.
Psion was being challenged by the arrival of cheaper PDAs such as the Palm Pilot, and PocketPCs running Microsoft's Windows CE, and in 2003, Psion released a Netbook Pro running Windows CE .NET 4.2 instead of EPOC.
Symbian and telephony (1998)
The 32-bit EPOC developed by Project Protea resulted in the eventual formation of Symbian Ltd. in June 1998 in conjunction with Nokia, Ericsson and Motorola. The OS was renamed the Symbian Operating System and was envisioned as the base for a new range of smartphones. Psion transferred 130 key staff to the new company and retained a 31% shareholding in the spun-out business. By 2007, the Symbian operating system powered around 125 million mobile phones, including many Nokia models and the Sony Ericsson P900 series.
Psion had previously sought to expand into mobile telephony itself, having engaged in talks to acquire Amstrad – mainly for its Dancall subsidiary – in 1996. Although Amstrad's owner and founder, Alan Sugar, had reportedly been seeking to sell the entire business, no agreement could apparently be made on a price or on "a plan for the disposal of the other parts of the Amstrad Group". This setback left Psion promising "to introduce GSM-based products during 1997". Meanwhile, Psion did license EPOC to Digital Equipment Corporation so that the system could be ported to Digital's StrongARM processor.
The development of new and updated products by Psion slowed after the Symbian spin-off. Other products failed or had limited success; these included Psion Siemens' GSM device, a Series 5 based set-top box, the Wavefinder DAB radio, and an attempt to add Dragon's speech recognition software to a PDA. Ericsson cancelled a Series 5MX derived smartphone project in 2001.
Psion had sold its sole manufacturing plant in 1999 and started to withdraw from its PDA markets in late 2001, shedding 250 of 1,200 staff and writing off £40 million. The PDA, once a niche product, had become part of a global horizontal marketplace in which it was difficult for Psion to compete. The final blow for Psion's Organiser and PDA business came in January 2001 when Motorola pulled out of a joint project with Psion, Samsung, and Parthus to create "Odin", an ARM-based PDA-phone.
In 2000, Psion acquired Teklogix of Canada for £240 million, and merged its business-to-business division, Psion Enterprise, with the newly acquired company. Teklogix was rebranded Psion Teklogix, and this division formed the core of Psion Plc's business.
In 2002, Psion launched the Psion Software division. This business developed push email solutions for Symbian smartphones, Microsoft Exchange and Lotus Notes. This business was sold to Visto of the United States in 2003.
In 2004, Psion disposed of the company's remaining Symbian shareholding to Nokia, as they no longer regarded it as a core part of their strategy.
Last years (2010–2012)
In its last years, Psion made tailored and customized modular variants of its products through its online community, Ingenuity Working. Launched in March 2010, Ingenuity Working had more than 35,000 visitors per month within its first six months.
In January 2011, the company launched a new logo, simultaneously removing Teklogix from its operating company name.
Motorola Solutions announced in June 2012 that it had agreed to acquire Psion for $200 million.
Netbook trademark litigation
Psion registered the trademark Netbook in various territories, including the European Union and the United States, where it was applied for on 18 December 1996 and registered by the USPTO on 21 November 2000. They used this trademark for the Psion netBook product, discontinued in November 2003, and from October 2003, the NETBOOK PRO, later also discontinued.
Intel started using the term netbook in March 2008 as a generic term to describe "small laptops that are designed for wireless communication and access to the Internet", believing they were "not offering a branded line of computers here" and "see no naming conflict".
In response to the growing use of the term, on 23 December 2008 Psion Teklogix sent cease and desist letters to various parties including enthusiast website(s) demanding they no longer use the term "netbook".
In early 2009, Intel sued Psion Teklogix (US & Canada) and Psion (UK) in the Federal Court, seeking a cancellation of the trademark and an order enjoining Psion from asserting any trademark rights in the term "netbook", a declarative judgement regarding their use of the term, attorneys' fees, costs and disbursements and "such other and further relief as the Court deems just and proper". The suit was settled out of court, and on June 2, 2009, Psion announced that the company was withdrawing all of its trademark registrations for the term "Netbook" and that Psion agreed to "waive all its rights against third parties in respect of past, current or future use" of the term.
Similar marks were rejected by the USPTO citing a "likelihood of confusion" under section 2(d), including 'G NETBOOK' (rejected 31 October 2008), Micro-Star International's (MSI) 'WIND NETBOOK', and Coby Electronics' 'COBY NETBOOK' (rejected 13 January 2009).
Integration with Linux
Psion had a lengthy, but distant, interest in Linux as an operating system on its electronic devices. In 1998, it supported the Linux7K project that had been initiated by Ed Bailey at Red Hat, which was to port Linux to its Series 5 personal computer. The project was named after the Cirrus Logic PS-7110 chip of the Series 5. Although this project was one of the earliest attempts to port Linux to a handheld computer, it did not come to fruition for Psion. The project soon transitioned to an informal open-source software project at Calcaria.net that kept the name Linux7K. After the project transitioned again to sourceforge.net, the project's name was changed to a more general name PsiLinux, and later to OpenPsion. The project has developed Linux kernels and file systems for the Revo, Series 5 and 5MX, and Series 7 and netBook.
In 2003–4, Psion Teklogix and its founder David Potter expressed interest in Linux as the operating system for its devices as it divested from Symbian. However, the only result of that interest was Linux as the operating system on a limited number of custom NetBook Pros designed for a hospital setting.
The Embeddable Linux Kernel Subset project has produced a small subset of Linux that runs on Psion Series 3 PDAs.
PDAs
Psion Organiser and Psion Organiser II
Psion HC
Psion Series 3, 3a, 3c & 3mx
Psion Siena
Psion Series 5, 5mx & 5mx Pro
Psion Revo
Psion netBook
Psion Netpad
Psion Series 7
Psion Teklogix Netbook Pro (Windows CE)
Psion Workabout
Psion iKon
All these PDAs except the Psion netpad have a small keyboard, which excepting the Organiser, HC and Workabout was of the standard QWERTY layout, or a regional variation thereof.
Laptops
Psion MC 200
Psion MC 400
Psion MC 400 WORD
Psion MC 600 (DOS)
See also
Gemini (PDA)
References
External links
A Brief History Of Psion's Machines
A detailed history of Psion around the time of the Series 5
Abandoned Psion software collected
OpenPsion: A project to port Linux to Psion Handhelds!
Psion shareware library and tips/articles
Psion website
Psion's online community - ingenuityworking.com
The History of Psion
Unofficial Psion F.A.Q
OpenPsion
Personal digital assistants
Defunct companies based in London
Software companies of Canada
Companies established in 1980
Radio-frequency identification
Radio-frequency identification companies | Psion (company) | [
"Engineering"
] | 2,775 | [
"Radio-frequency identification",
"Radio electronics"
] |
46,024 | https://en.wikipedia.org/wiki/Abraham%20Robinson | Abraham Robinson (born Robinsohn; October 6, 1918 – April 11, 1974) was a mathematician who is most widely known for development of nonstandard analysis, a mathematically rigorous system whereby infinitesimal and infinite numbers were reincorporated into modern mathematics. Nearly half of Robinson's papers were in applied mathematics rather than in pure mathematics.
Biography
He was born to a Jewish family with strong Zionist beliefs, in Waldenburg, Germany, which is now Wałbrzych, in Poland. In 1933, he emigrated to the British Mandate of Palestine, where he earned a first degree from the Hebrew University. Robinson was in France when the Nazis invaded during World War II, and escaped by train and on foot, being alternately questioned by French soldiers suspicious of his German passport and asked by them to share his map, which was more detailed than theirs. While in London, he joined the Free French Air Force and contributed to the war effort by teaching himself aerodynamics and becoming an expert on the airfoils used in the wings of fighter planes.
After the war, Robinson worked in London, Toronto, and Jerusalem, but ended up at the University of California, Los Angeles in 1962.
Work in model theory
He became known for his approach of using the methods of mathematical logic to attack problems in analysis and abstract algebra. He "introduced many of the fundamental notions of model theory". Using these methods, he found a way of using formal logic to show that there are self-consistent nonstandard models of the real number system that include infinite and infinitesimal numbers. Others, such as Wilhelmus Luxemburg, showed that the same results could be achieved using ultrafilters, which made Robinson's work more accessible to mathematicians who lacked training in formal logic. Robinson's book Non-standard Analysis was published in 1966. Robinson was strongly interested in the history and philosophy of mathematics, and often remarked that he wanted to get inside the head of Leibniz, the first mathematician to attempt to articulate clearly the concept of infinitesimal numbers.
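As an illustration (a standard fact about nonstandard analysis drawn from general knowledge, not stated in this article), the derivative of a real function can be recovered in Robinson's framework by taking the standard part of a difference quotient with an infinitesimal increment:

```latex
% For a nonzero infinitesimal \varepsilon and the standard-part map st(.),
% which sends each finite hyperreal to the unique real infinitely close to it:
f'(x) \;=\; \operatorname{st}\!\left(\frac{f(x+\varepsilon)-f(x)}{\varepsilon}\right)
```

The point of Robinson's construction is that such manipulations with infinitesimals, familiar from Leibniz, become fully rigorous.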
While at UCLA his colleagues remember him as working hard to accommodate PhD students of all levels of ability by finding them projects of the appropriate difficulty. He was courted by Yale, and after some initial reluctance, he moved there in 1967. In the Spring of 1973 he was a member of the Institute for Advanced Study. He died of pancreatic cancer in 1974.
See also
Notes
Publications
References
External links
Kutateladze S.S., Abraham Robinson, the creator of nonstandard analysis
1918 births
1974 deaths
20th-century American mathematicians
Alumni of the University of London
20th-century German mathematicians
Jewish emigrants from Nazi Germany to Mandatory Palestine
Jews who emigrated to escape Nazism
German emigrants to the United States
People from Wałbrzych
University of California, Los Angeles faculty
Yale University faculty
Brouwer Medalists
Mathematical logicians
Model theorists
Institute for Advanced Study visiting scholars
Yale Sterling Professors | Abraham Robinson | [
"Mathematics"
] | 596 | [
"Model theorists",
"Mathematical logic",
"Model theory",
"Mathematical logicians"
] |
46,065 | https://en.wikipedia.org/wiki/G%C3%A1bor%20Szeg%C5%91 | Gábor Szegő () (January 20, 1895 – August 7, 1985) was a Hungarian-American mathematician. He was one of the foremost mathematical analysts of his generation and made fundamental contributions to the theory of orthogonal polynomials and Toeplitz matrices building on the work of his contemporary Otto Toeplitz.
Life
Szegő was born in Kunhegyes, Austria-Hungary (today Hungary), into a Jewish family as the son of Adolf Szegő and Hermina Neuman. He married the chemist Anna Elisabeth Neményi in 1919, with whom he had two children.
In 1912 he started studies in mathematical physics at the University of Budapest, with summer visits to the University of Berlin and the University of Göttingen, where he attended lectures by Frobenius and Hilbert, amongst others. In Budapest he was taught mainly by Fejér, Beke, Kürschák and Bauer and made the acquaintance of his future collaborators George Pólya and Michael Fekete. His studies were interrupted in 1915 by World War I, in which he served in the infantry, artillery and air corps.
In 1918, while stationed in Vienna, he was awarded a doctorate by the University of Vienna for his work on Toeplitz determinants. He became a Privatdozent at the University of Berlin in 1921, where he stayed until being appointed as successor to Knopp at the University of Königsberg in 1926. Intolerable working conditions during the Nazi regime resulted in a temporary position at Washington University in St. Louis, Missouri in 1936, before his appointment as chairman of the mathematics department at Stanford University in 1938, where he helped build up the department until his retirement in 1966. He died in Palo Alto, California. His doctoral students include Paul Rosenbloom and Joseph Ullman. The Gábor Szegő Prize, Szegő Gábor Primary School, and the Szegő Gábor Matematikaverseny (a mathematics competition at his former school) are all named in his honor.
Works
Szegő's most important work was in analysis. He was one of the foremost analysts of his generation and made fundamental contributions to the theory of Toeplitz matrices and orthogonal polynomials. He wrote over 130 papers in several languages. Each of his four books, several written in collaboration with others, has become a classic in its field. The monograph Orthogonal polynomials, published in 1939, contains much of his research and has had a profound influence in many areas of applied mathematics, including theoretical physics, stochastic processes and numerical analysis.
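One representative result in this area, the first Szegő limit theorem, is stated here from general knowledge rather than from the article: if $D_n(f)$ denotes the determinant of the $(n+1)\times(n+1)$ Toeplitz matrix with positive symbol $f$ on the unit circle (with $\log f$ integrable), then

```latex
\lim_{n\to\infty}\frac{D_n(f)}{D_{n-1}(f)}
  \;=\; \exp\!\left(\frac{1}{2\pi}\int_0^{2\pi}\log f(\theta)\,d\theta\right)
```

that is, the determinant ratios converge to the geometric mean of the symbol.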
Tutoring von Neumann
At the age of 15, the young John von Neumann, recognised as a mathematical prodigy, was sent to study advanced calculus under Szegő. On their first meeting, Szegő was so astounded by von Neumann's mathematical talent and speed that, as recalled by his wife, he came back home with tears in his eyes. Szegő subsequently visited the von Neumann house twice a week to tutor the child prodigy. Some of von Neumann's instant solutions to the problems in calculus posed by Szegő, sketched out on his father's stationery, are now on display at the von Neumann archive at Budapest.
Honours
Amongst the many honours received during his lifetime were:
Julius König Prize of the Hungarian Mathematical Society (1928)
Member of the Königsberger Gelehrten Gesellschaft (1928)
Corresponding member of the Austrian Academy of Sciences in Vienna (1960)
Honorary member of the Hungarian Academy of Sciences (1965)
Bibliography
; 2nd edn. 1955
Selected articles
with A. C. Schaeffer:
with Max Schiffer:
with Albert Edrei:
References
External links
Gábor Szegő: 1895-1985, by Richard Askey and Paul Nevai
1895 births
1985 deaths
20th-century Hungarian mathematicians
American people of Hungarian-Jewish descent
Mathematicians from Austria-Hungary
Hungarian emigrants to the United States
Hungarian Jews
Mathematical analysts
People from Kunhegyes
Stanford University Department of Mathematics faculty
Washington University in St. Louis mathematicians | Gábor Szegő | [
"Mathematics"
] | 815 | [
"Mathematical analysis",
"Mathematical analysts"
] |
46,075 | https://en.wikipedia.org/wiki/Kazimierz%20Kuratowski | Kazimierz Kuratowski (; 2 February 1896 – 18 June 1980) was a Polish mathematician and logician. He was one of the leading representatives of the Warsaw School of Mathematics. He worked as a professor at the University of Warsaw and at the Mathematical Institute of the Polish Academy of Sciences (IM PAN). Between 1946 and 1953, he served as President of the Polish Mathematical Society.
He is primarily known for his contributions to set theory, topology, measure theory and graph theory. Some of the notable mathematical concepts bearing Kuratowski's name include Kuratowski's theorem, Kuratowski closure axioms, Kuratowski-Zorn lemma and Kuratowski's intersection theorem.
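For context, the graph-theoretic result named after him (a standard statement, not quoted from this article) characterizes planarity:

```latex
% Kuratowski's theorem: a finite graph G is planar if and only if it
% contains no subgraph that is a subdivision of K_5 (the complete graph
% on five vertices) or K_{3,3} (the complete bipartite graph on 3+3 vertices).
G \text{ is planar} \iff G \text{ contains no subdivision of } K_5 \text{ or } K_{3,3}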
Life and career
Early life
Kazimierz Kuratowski was born in Warsaw, (then part of Congress Poland controlled by the Russian Empire), on 2 February 1896, into an assimilated Jewish family. He was a son of Marek Kuratow, a barrister, and Róża Karzewska. He completed a Warsaw secondary school, which was named after general Paweł Chrzanowski. In 1913, he enrolled in an engineering course at the University of Glasgow in Scotland, in part because he did not wish to study in Russian; instruction in Polish was prohibited. He completed only one year of study when the outbreak of World War I precluded any further enrolment. In 1915, Russian forces withdrew from Warsaw and Warsaw University was reopened with Polish as the language of instruction. Kuratowski restarted his university education there the same year, this time in mathematics. He obtained his Ph.D. in 1921, in the newly established Second Polish Republic.
Doctoral thesis
In autumn 1921 Kuratowski was awarded the Ph.D. degree for his groundbreaking work. His thesis statement consisted of two parts. One was devoted to an axiomatic construction of topology via the closure axioms. This first part (republished in a slightly modified form in 1922) has been cited in hundreds of scientific articles.
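The closure axioms referred to here, in their standard modern form (stated from general knowledge): a map $\operatorname{cl}$ on the subsets of a set $X$ determines a topology when, for all $A, B \subseteq X$,

```latex
\operatorname{cl}(\varnothing)=\varnothing, \qquad
A \subseteq \operatorname{cl}(A), \qquad
\operatorname{cl}(A\cup B)=\operatorname{cl}(A)\cup\operatorname{cl}(B), \qquad
\operatorname{cl}(\operatorname{cl}(A))=\operatorname{cl}(A)
```

with the closed sets being exactly those $A$ for which $\operatorname{cl}(A)=A$.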
The second part of Kuratowski's thesis was devoted to continua irreducible between two points. This was the subject of a French doctoral thesis written by Zygmunt Janiszewski. Since Janiszewski was deceased, Kuratowski's supervisor was Stefan Mazurkiewicz. Kuratowski's thesis solved certain problems in set theory raised by a Belgian mathematician, Charles-Jean Étienne Gustave Nicolas, Baron de la Vallée Poussin.
Academic career until World War II
Two years later, in 1923, Kuratowski was appointed deputy professor of mathematics at Warsaw University. He was then appointed a full professor of mathematics at Lwów Polytechnic in Lwów, in 1927. He was the head of the Mathematics department there until 1933. Kuratowski was also dean of the department twice. In 1929, Kuratowski became a member of the Warsaw Scientific Society.
While Kuratowski associated with many of the scholars of the Lwów School of Mathematics, such as Stefan Banach and Stanislaw Ulam, and the circle of mathematicians based around the Scottish Café, he kept close connections with Warsaw. Kuratowski left Lwów for Warsaw in 1934, before the famous Scottish Book was begun (in 1935), and hence did not contribute any problems to it. He did, however, collaborate closely with Banach in solving important problems in measure theory.
In 1934 he was appointed the professor at Warsaw University. A year later Kuratowski was nominated as the head of the Mathematics department there. From 1936 to 1939 he was secretary of the Mathematics Committee in The Council of Science and Applied Sciences.
During and after the war
During World War II, he gave lectures at the underground university in Warsaw, since higher education for Poles was forbidden under German occupation.
In February 1945, Kuratowski started to lecture at the reopened Warsaw University. In 1945, he became a member of the Polish Academy of Learning, in 1946 he was appointed vice-president of the Mathematics department at Warsaw University, and from 1949 he was chosen to be the vice-president of Warsaw Scientific Society. In 1952 he became a member of the Polish Academy of Sciences, of which he was the vice-president from 1957 to 1968.
After World War II, Kuratowski was actively involved in the rebuilding of scientific life in Poland. He helped to establish the State Institute of Mathematics, which was incorporated into the Polish Academy of Sciences in 1952. From 1948 until 1967 Kuratowski was director of the Institute of Mathematics of the Polish Academy of Sciences, and was also a long-time chairman of the Polish and International Mathematics Societies. He served as vice-president of the International Mathematical Union (1963–1966) as well as president of the Scientific Council of the State Institute of Mathematics (1968–1980). From 1948 to 1980 he was the head of the topology section. One of his students was Andrzej Mostowski.
Legacy
Kazimierz Kuratowski was one of a celebrated group of Polish mathematicians who would meet at Lwów's Scottish Café. He was a president of the Polish Mathematical Society (PTM) and a member of the Warsaw Scientific Society (TNW). What is more, he was editor-in-chief of "Fundamenta Mathematicae" and of a series of publications in the "Polish Mathematical Society Annals". Furthermore, Kuratowski worked as an editor of the Polish Academy of Sciences Bulletin. He was also one of the writers of the Mathematical monographs, which were created in cooperation with the Institute of Mathematics of the Polish Academy of Sciences (IMPAN). High-quality research monographs by representatives of the Warsaw and Lwów Schools of Mathematics, covering all areas of pure and applied mathematics, were published in these volumes.
Kazimierz Kuratowski was an active member of many scientific societies and foreign academies of science, including the Royal Society of Edinburgh and the academies of Austria, Germany, Hungary, Italy, and the Soviet Union (USSR).
Kazimierz Kuratowski Prize
In 1981, IMPAN, the Polish Mathematical Society, and Kuratowski's daughter Zofia Kuratowska established a prize in his name, the Kuratowski Prize, for achievements in mathematics to people under the age of 30 years. The prize is considered the most prestigious of awards for young Polish mathematicians; past recipients have included Józef H. Przytycki, Mariusz Lemańczyk, Tomasz Łuczak, Mikołaj Bojańczyk, and Wojciech Samotij.
Research
Kuratowski's research mainly focused on abstract topological and metric structures. He introduced the closure axioms (known in mathematical circles as the Kuratowski closure axioms), which were fundamental for the development of topological space theory and the theory of irreducible continua between two points. The most valuable results obtained by Kuratowski after the war concern the relationship between topology and the theory of analytic functions, as well as research on the cutting of Euclidean spaces. Together with Ulam, his most talented student of the Lwów period, he introduced the concept of a so-called quasi-homeomorphism, which opened up a new field in topological studies.
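The closure axioms referred to above characterise a topological space by a single closure operator cl acting on the subsets A, B of a set X; the standard formulation is:

```latex
\begin{align*}
\operatorname{cl}(\varnothing) &= \varnothing, \\
A &\subseteq \operatorname{cl}(A), \\
\operatorname{cl}(\operatorname{cl}(A)) &= \operatorname{cl}(A), \\
\operatorname{cl}(A \cup B) &= \operatorname{cl}(A) \cup \operatorname{cl}(B).
\end{align*}
```

The closed sets of the resulting topology are exactly those A with cl(A) = A.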
Kuratowski's research in the field of measure theory, including research with Banach and Tarski, was continued by many students. Moreover, with Alfred Tarski and Wacław Sierpiński he provided most of the theory concerning Polish spaces (which are indeed named after these mathematicians and their legacy). Knaster and Kuratowski brought a comprehensive and precise study to the theory of connected components, applying it to problems such as the cutting of the plane, with paradoxical examples of connected components.
Kuratowski proved the Kuratowski–Zorn lemma (often called simply Zorn's lemma) in 1922; Zorn gave his application of it in 1935. This result has important connections to many basic theorems. Kuratowski introduced many concepts in set theory and topology, and in many cases established new terminology and symbolism.
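The statement of the lemma is:

```latex
\text{If every chain in a partially ordered set } (P, \le) \text{ has an upper bound in } P,
\text{ then } P \text{ contains at least one maximal element.}
```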
His contributions to mathematics include:
a characterization of topological spaces which are now called the Kuratowski closure axioms;
proof of the Kuratowski–Zorn lemma;
in graph theory, the characterization of planar graphs now known as Kuratowski's theorem;
identification of the ordered pair (a, b) with the set {{a}, {a, b}};
the Kuratowski finite set definition, see Kuratowski-finite;
introduction of the Tarski–Kuratowski algorithm;
Kuratowski's closure-complement problem;
Kuratowski's free set theorem;
Kuratowski's intersection theorem;
Knaster–Kuratowski fan;
Kuratowski–Ulam theorem;
Kuratowski convergence of subsets of metric spaces;
the Kuratowski and Ryll-Nardzewski measurable selection theorem.
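Kuratowski's ordered-pair definition encodes (a, b) as the set {{a}, {a, b}}. A small Python sketch using frozensets illustrates the construction (the function name is illustrative, not from any library):

```python
def kuratowski_pair(a, b):
    """Encode the ordered pair (a, b) as the set {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

# Defining property: (a, b) = (c, d) exactly when a = c and b = d.
assert kuratowski_pair(1, 2) == kuratowski_pair(1, 2)
assert kuratowski_pair(1, 2) != kuratowski_pair(2, 1)        # order is preserved
assert kuratowski_pair(3, 3) == frozenset({frozenset({3})})  # (a, a) collapses to {{a}}
```

The collapse in the degenerate case (a, a) is harmless: the pair can still be decoded unambiguously, which is what makes the definition work inside pure set theory.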
Kuratowski's post-war works were mainly focused on three strands:
The development of homotopy in continuous functions.
The construction of connected space theory in higher dimensions.
The uniform depiction of cutting Euclidean spaces by any of its subsets, based on the properties of continuous transformations of these sets.
Publications
Among over 170 published works are valuable monographs and books, including Topologie (Vol. I, 1933, translated into English and Russian, and Vol. II, 1950) and Introduction to Set Theory and Topology (Vol. I, 1952, translated into English, French, Spanish, and Bulgarian). He authored "A Half Century of Polish Mathematics 1920–1970: Remembrances and Reflections" (1973) and "Notes to his autobiography" (1981). The latter was published posthumously thanks to Kuratowski's daughter Zofia Kuratowska, who prepared his notes for printing. Kazimierz Kuratowski represented Polish mathematics in the International Mathematical Union, where he was vice-president from 1963 to 1966. He also participated in numerous international congresses and lectured at dozens of universities around the world. He received honorary doctorates (doctor honoris causa) from the universities of Glasgow, Prague, Wrocław, and Paris. He received the highest national awards, as well as gold medals of the Czechoslovak Academy of Sciences and the Polish Academy of Sciences. Kuratowski died on 18 June 1980 in Warsaw.
See also
List of Polish mathematicians
Scottish Café
List of things named after Kazimierz Kuratowski
Timeline of Polish science and technology
Notes
References
External links
TOPOLOGIE I, Espaces Métrisables, Espaces Complets Monografie Matematyczne series, vol. 20, Polish Mathematical Society, Warszawa-Lwów, 1948.
TOPOLOGIE II, Espaces Compacts, Espaces Connexes, Plan Euclidien Monografie Matematyczne series, vol. 21, Polish Mathematical Society, Warszawa-Lwów, 1950.
1896 births
1980 deaths
20th-century Polish philosophers
Warsaw School of Mathematics
People from Warsaw Governorate
Fellows of the Royal Society of Edinburgh
Foreign members of the USSR Academy of Sciences
Members of the Polish Academy of Learning
Members of the Polish Academy of Sciences
20th-century Polish Jews
Polish logicians
Polish set theorists
Topologists
University of Warsaw alumni
Academic staff of the University of Warsaw
Institute for Advanced Study visiting scholars
Members of the German Academy of Sciences at Berlin
Recipients of the Medal of the 10th Anniversary of the People's Republic of Poland
Halley's Comet is the only known short-period comet that is consistently visible to the naked eye from Earth, appearing every 72–80 years, though the majority of recorded apparitions (25 of 30) have occurred after intervals of 75–77 years. It last appeared in the inner parts of the Solar System in 1986 and will next appear in mid-2061. Officially designated 1P/Halley, it is also commonly called Comet Halley, or sometimes simply Halley.
Halley's periodic returns to the inner Solar System have been observed and recorded by astronomers around the world since at least 240 BC, but it was not until 1705 that the English astronomer Edmond Halley understood that these appearances were re-appearances of the same comet. As a result of this discovery, the comet is named after Halley.
During its 1986 visit to the inner Solar System, Halley's Comet became the first comet to be observed in detail by a spacecraft, Giotto, providing the first observational data on the structure of a comet nucleus and the mechanism of coma and tail formation. These observations supported a number of longstanding hypotheses about comet construction, particularly Fred Whipple's "dirty snowball" model, which correctly predicted that Halley would be composed of a mixture of volatile ices—such as water, carbon dioxide, ammonia—and dust. The missions also provided data that substantially reformed and reconfigured these ideas; for instance, it is now understood that the surface of Halley is largely composed of dusty, non-volatile materials, and that only a small portion of it is icy.
Pronunciation
Comet Halley is usually pronounced to rhyme with valley, or sometimes to rhyme with daily. As for the surname Halley, Colin Ronan, one of Edmond Halley's biographers, preferred a pronunciation rhyming with crawly. Spellings of Halley's name during his lifetime included Hailey, Haley, Hayley, Halley, Haly, Hawley, and Hawly, so its contemporary pronunciation is uncertain, but the version rhyming with valley seems to be preferred by current bearers of the surname.
Computation of orbit
Halley was the first comet to be recognised as periodic. Until the Renaissance, the philosophical consensus on the nature of comets, promoted by Aristotle, was that they were disturbances in Earth's atmosphere. This idea was disproven in 1577 by Tycho Brahe, who used parallax measurements to show that comets must lie beyond the Moon. Many were still unconvinced that comets orbited the Sun, and assumed instead that they must follow straight paths through the Solar System. In 1687, Sir Isaac Newton published his Philosophiæ Naturalis Principia Mathematica, in which he outlined his laws of gravity and motion. His work on comets was decidedly incomplete. Although he had suspected that two comets that had appeared in succession in 1680 and 1681 were the same comet before and after passing behind the Sun (he was later found to be correct; see Newton's Comet), he was initially unable to completely reconcile comets into his model.
Ultimately, it was Newton's friend, editor and publisher, Edmond Halley, who, in his 1705 Synopsis of the Astronomy of Comets, used Newton's new laws to calculate the gravitational effects of Jupiter and Saturn on cometary orbits. Having compiled a list of 24 comet observations, he calculated that the orbital elements of a second comet that had appeared in 1682 were nearly the same as those of two comets that had appeared in 1531 (observed by Petrus Apianus) and 1607 (observed by Johannes Kepler). Halley thus concluded that all three comets were, in fact, the same object returning about every 76 years, a period that has since been found to vary between 72 and 80 years. After a rough estimate of the perturbations the comet would sustain from the gravitational attraction of the planets, he predicted its return for 1758. While he had personally observed the comet around perihelion in September 1682, Halley died in 1742 before he could observe its predicted return.
Halley's prediction of the comet's return proved to be correct, although it was not seen until 25 December 1758, by Johann Georg Palitzsch, a German farmer and amateur astronomer. Other observers from throughout Europe and its colonies sent in confirmations to Paris after the comet brightened the following spring. In the Americas, John Winthrop lectured at Harvard University to explain the implications of the comet's reappearance for Newtonian mechanics and natural theology.
Another independent recognition that the comet had returned was made by the Jamaican astronomer Francis Williams, but his observations did not reach Europe. A unique portrait commissioned by Williams demonstrates the impact of the comet's return on period astronomers. Williams' hand rests on the page of Newton's Principia with procedures to predict comet sightings. The white smudge in the sky is probably a depiction of Halley's comet relative to the constellations in March 1759, and the chord hanging above the book likely represents the comet's orbit. In 2024, using X-ray imaging, the painting was shown to depict the field of stars in which the comet would have been visible in 1759. Williams likely commissioned the portrait to commemorate his observations.
The comet did not pass through its perihelion until 13 March 1759, the attraction of Jupiter and Saturn having caused a delay of 618 days. This effect was computed before its return (with a one-month error to 13 April) by a team of three French mathematicians, Alexis Clairaut, Joseph Lalande, and Nicole-Reine Lepaute. The confirmation of the comet's return was the first time anything other than planets had been shown to orbit the Sun. It was also one of the earliest successful tests of Newtonian physics, and a clear demonstration of its explanatory power. The comet was first named in Halley's honour by French astronomer Nicolas-Louis de Lacaille in 1759.
Some scholars have proposed that first-century Mesopotamian astronomers already had recognised Halley's Comet as periodic. This theory notes a passage in the Babylonian Talmud, tractate Horayot that refers to "a star which appears once in seventy years that makes the captains of the ships err". Others doubt this idea based on historical considerations about the exact timing of this alleged observation, and suggest it refers to the variable star Mira.
Researchers in 1981 attempting to calculate the past orbits of Halley by numerical integration starting from accurate observations in the seventeenth and eighteenth centuries could not produce accurate results further back than 837 owing to a close approach to Earth in that year. It was necessary to use ancient Chinese comet observations to constrain their calculations.
Orbit and origin
Halley's orbital period has varied between 74 and 80 years since 240 BC. Its orbit around the Sun is highly elliptical, with an orbital eccentricity of 0.967 (with 0 being a circle and 1 being a parabolic trajectory). The perihelion, the point in the comet's orbit when it is nearest the Sun, is about 0.59 au (88 million km). This is between the orbits of Mercury and Venus. Its aphelion, or farthest distance from the Sun, is about 35 au (5.2 billion km), roughly the orbital distance of Pluto. Unlike the overwhelming majority of objects in the Solar System, Halley's orbit is retrograde; it orbits the Sun in the opposite direction to the planets, or, clockwise from above the Sun's north pole. The orbit is inclined by 18° to the ecliptic, with much of it lying south of the ecliptic. This is usually represented as 162°, to account for Halley's retrograde orbit. The 1910 passage was at a relative velocity of 70.56 km/s. Because its orbit comes close to Earth's in two places, Halley is associated with two meteor showers: the Eta Aquariids in early May, and the Orionids in late October.
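The eccentricity, perihelion, aphelion, and period quoted above are tied together by the standard ellipse relations and Kepler's third law. A brief Python check, using the quoted eccentricity and a commonly cited perihelion distance of about 0.586 au (an assumed value for illustration):

```python
# Orbital geometry of Halley's Comet from its eccentricity and perihelion.
e = 0.967          # orbital eccentricity (from the text)
q = 0.586          # perihelion distance in au (commonly cited approximation)

a = q / (1 - e)    # semi-major axis, since q = a * (1 - e)
Q = a * (1 + e)    # aphelion distance, since Q = a * (1 + e)
T = a ** 1.5       # Kepler's third law: period in years, for a in au

# a comes out near 17.8 au, Q near 35 au, and T near 75 years,
# consistent with the figures in the text.
```

The agreement of the derived period (about 75 years) with the observed 74–80-year range is only approximate, since planetary perturbations shift each individual return.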
Halley is classified as a periodic or short-period comet: one with an orbit lasting 200 years or less. This contrasts it with long-period comets, whose orbits last for thousands of years. Periodic comets have an average inclination to the ecliptic of only ten degrees, and an orbital period of just 6.5 years, so Halley's orbit is atypical. Most short-period comets (those with orbital periods shorter than 20 years and inclinations of 30 degrees or less) are called Jupiter-family comets. Those resembling Halley, with orbital periods of between 20 and 200 years and inclinations extending from zero to more than 90 degrees, are called Halley-type comets. To date, 105 Halley-type comets have been observed, compared with 816 identified Jupiter-family comets.
The orbits of the Halley-type comets suggest that they were originally long-period comets whose orbits were perturbed by the gravity of the giant planets and directed into the inner Solar System. If Halley was once a long-period comet, it is likely to have originated in the Oort cloud, a sphere of cometary bodies around 20,000–50,000 au from the Sun. Conversely, the Jupiter-family comets are generally believed to originate in the Kuiper belt, a flat disc of icy debris between 30 au (Neptune's orbit) and 50 au from the Sun (in the scattered disc). Another point of origin for the Halley-type comets was proposed in 2008, when a trans-Neptunian object with a retrograde orbit similar to Halley's was discovered, 2008 KV42, whose orbit takes it from just outside that of Uranus to twice the distance of Pluto. It may be a member of a new population of small Solar System bodies that serves as the source of Halley-type comets.
Halley has probably been in its current orbit for 16,000–200,000 years, although it is not possible to numerically integrate its orbit for more than a few tens of apparitions, and close approaches before 837 AD can only be verified from recorded observations. The non-gravitational effects can be crucial; as Halley approaches the Sun, it expels jets of sublimating gas from its surface, which knock it very slightly off its orbital path. These orbital changes cause delays in its perihelion passage of four days on average.
In 1989, Boris Chirikov and Vitold Vecheslavov performed an analysis of 46 apparitions of Halley's Comet taken from historical records and computer simulations, which showed that its dynamics were chaotic and unpredictable on long timescales. Halley's projected dynamical lifetime is estimated to be about 10 million years. The dynamics of its orbit can be approximately described by a two-dimensional symplectic map, known as the Kepler map, a solution to the restricted three-body problem for highly eccentric orbits. Based on records from the 1910 apparition, David Hughes calculated in 1985 that Halley's nucleus has been reduced in mass by 80 to 90% over the last 2,000 to 3,000 revolutions, and that it will most likely disappear completely after another 2,300 perihelion passages. More recent work suggests that Halley will evaporate, or split in two, within the next few tens of thousands of years, or will be ejected from the Solar System within a few hundred thousand years.
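The Kepler map mentioned above can be illustrated with a minimal sketch of a kicked-rotator-style iteration: each perihelion passage gives the orbital energy a small phase-dependent kick, and the time to the next passage follows Kepler's third law. The kick amplitude mu and the starting values below are purely illustrative, not fitted to Halley:

```python
import math

def kepler_map(w, x, mu=0.002, steps=50):
    """Iterate a simplified Kepler-type map.

    w is the scaled (negative) orbital energy and x the phase of the
    perihelion passage.  Each passage kicks the energy; the phase then
    advances by the orbital period, which scales as w ** -1.5.
    """
    trajectory = [(w, x)]
    for _ in range(steps):
        w = w + mu * math.sin(2 * math.pi * x)  # energy kick at perihelion
        if w <= 0:                              # orbit unbound: the comet escapes
            break
        x = (x + w ** -1.5) % 1.0               # phase advance over one orbit
        trajectory.append((w, x))
    return trajectory

traj = kepler_map(w=0.06, x=0.3)
```

Because the phase advance varies rapidly with the energy, nearby starting conditions diverge after a few iterations, which is the sense in which the comet's long-term motion is chaotic.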
Structure and composition
The Giotto and Vega missions gave planetary scientists their first view of Halley's surface and structure. The nucleus is a conglomerate of ices and dust, often referred to as a "dirty snowball". Like all comets, as Halley nears the Sun, its volatile compounds (those with low boiling points, such as water, carbon monoxide, carbon dioxide and other ices) begin to sublimate from the surface. This causes the comet to develop a coma, or atmosphere, at distances up to 100,000 km from the nucleus. Sublimation of this dirty ice releases dust particles, which travel with the gas away from the nucleus. Gas molecules in the coma absorb solar light and then re-radiate it at different wavelengths, a phenomenon known as fluorescence, whereas dust particles scatter the solar light. Both processes are responsible for making the coma visible. As a fraction of the gas molecules in the coma are ionised by the solar ultraviolet radiation, pressure from the solar wind, a stream of charged particles emitted by the Sun, pulls the coma's ions out into a long tail, which may extend more than 100 million kilometres into space. Changes in the flow of the solar wind can cause disconnection events, in which the tail completely breaks off from the nucleus.
Despite the vast size of its coma, Halley's nucleus is relatively small: barely 15 km long, 8 km wide and perhaps 8 km thick. Based on a reanalysis of images taken by the Giotto and Vega spacecraft, Lamy et al. determined an effective diameter of about 11 km. Its shape has been variously compared to that of a peanut, a potato, or an avocado. Its mass is roughly 2.2×10^14 kg, with an average density of about 0.6 g/cm³. The low density indicates that it is made of a large number of small pieces, held together very loosely, forming a structure known as a rubble pile. Ground-based observations of coma brightness suggested that Halley's rotation period was about 7.4 days. Images taken by the various spacecraft, along with observations of the jets and shell, suggested a period of 52 hours. Given the irregular shape of the nucleus, Halley's rotation is likely to be complex. The flyby images revealed an extremely varied topography, with hills, mountains, ridges, depressions, and at least one crater.
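The quoted mass and density are consistent with the nucleus's size; a rough Python check, modelling the nucleus as a triaxial ellipsoid with the commonly cited dimensions of about 15 × 8 × 8 km and a density of about 0.6 g/cm³ (both assumed approximations, not measured values from this article):

```python
import math

# Semi-axes of an ellipsoid approximating the nucleus, in metres.
a_m, b_m, c_m = 15e3 / 2, 8e3 / 2, 8e3 / 2
density = 600  # kg/m^3, roughly 0.6 g/cm^3

volume = (4 / 3) * math.pi * a_m * b_m * c_m  # ellipsoid volume
mass = volume * density

# mass comes out near 3e14 kg -- the same order of magnitude as the
# commonly quoted 2.2e14 kg, as expected for such a rough model.
```

The factor-of-a-few agreement is all this estimate can claim; the real nucleus is irregular and its density is itself inferred rather than measured directly.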
Halley's day side (the side facing the Sun) is far more active than the night side. Spacecraft observations showed that the gases ejected from the nucleus were 80% water vapour, 17% carbon monoxide and 3–4% carbon dioxide, with traces of hydrocarbons although more recent sources give a value of 10% for carbon monoxide and also include traces of methane and ammonia. The dust particles were found to be primarily a mixture of carbon–hydrogen–oxygen–nitrogen (CHON) compounds common in the outer Solar System, and silicates, such as are found in terrestrial rocks. The dust particles ranged in size down to the limits of detection (≈0.001 μm). The ratio of deuterium to hydrogen in the water released by Halley was initially thought to be similar to that found in Earth's ocean water, suggesting that Halley-type comets may have delivered water to Earth in the distant past. Subsequent observations showed Halley's deuterium ratio to be far higher than that found in Earth's oceans, making such comets unlikely sources for Earth's water.
Giotto provided the first evidence in support of Fred Whipple's "dirty snowball" hypothesis for comet construction; Whipple postulated that comets are icy objects warmed by the Sun as they approach the inner Solar System, causing ices on their surfaces to sublime (change directly from a solid to a gas), and jets of volatile material to burst outward, creating the coma. Giotto showed that this model was broadly correct, though with modifications. Halley's albedo, for instance, is about 4%, meaning that it reflects only 4% of the sunlight hitting it – about what one would expect for coal. Thus, despite astronomers predicting that Halley would have an albedo of about 0.17 (roughly equivalent to bare soil), Halley's Comet is in fact pitch black. The "dirty ices" on the surface sublime at temperatures between about 170 K in sections of higher albedo and 220 K at low albedo; Vega 1 found Halley's surface temperature to be in the range 300–400 K. This suggested that only 10% of Halley's surface was active, and that large portions of it were coated in a layer of dark dust that retained heat. Together, these observations suggested that Halley was in fact predominantly composed of non-volatile materials, and thus more closely resembled a "snowy dirtball" than a "dirty snowball".
History
Before 1066
The first certain appearance of Halley's Comet in the historical record is a description from 240 BC, in the Chinese chronicle Records of the Grand Historian or Shiji, which describes a comet that appeared in the east and moved north. The only surviving record of the 164 BC apparition is found on two fragmentary Babylonian tablets, which were rediscovered in August 1984 in the collection of the British Museum.
The apparition of 87 BC was recorded in Babylonian tablets which state that the comet was seen "day beyond day" for a month. This appearance may be recalled in the representation of Tigranes the Great, an Armenian king who is depicted on coins with a crown that features, according to Vahe Gurzadyan and R. Vardanyan, "a star with a curved tail [that] may represent the passage of Halley's Comet in 87 BC." Gurzadyan and Vardanyan argue that "Tigranes could have seen Halley's Comet when it passed closest to the Sun on August 6 in 87 BC" as the comet would have been a "most recordable event"; for ancient Armenians it could have heralded the New Era of the brilliant King of Kings.
The apparition of 12 BC was recorded in the Book of Han by Chinese astronomers of the Han dynasty who tracked it from August through October. It passed within 0.16 au of Earth. According to the Roman historian Cassius Dio, a comet appeared suspended over Rome for several days portending the death of Marcus Vipsanius Agrippa in that year. Halley's appearance in 12 BC, only a few years distant from the conventionally assigned date of the birth of Jesus Christ, has led some theologians and astronomers to suggest that it might explain the biblical story of the Star of Bethlehem. There are other explanations for the phenomenon, such as planetary conjunctions, and there are also records of other comets that appeared closer to the date of Jesus's birth.
If Yehoshua ben Hananiah's reference to "a star which arises once in seventy years and misleads the sailors" refers to Halley's Comet, he can only have witnessed the 66 AD appearance. Another possible report comes from Jewish historian Josephus, who wrote that in 66 AD "The signs ... were so evident, and did so plainly foretell their future desolation ... there was a star resembling a sword, which stood over the city, and a comet, that continued a whole year". This portent was in reference to the city of Jerusalem and the First Jewish–Roman War.
The 141 AD apparition was recorded in Chinese chronicles, with observations of a bluish-white comet on 27 March and 16, 22 and 23 April. The early Tamil bards of southern India (c. 1st–4th century CE) also describe what may have been the same apparition.
The 374 AD and 607 AD approaches each came within 0.09 au of Earth. The 451 AD apparition was said to herald the defeat of Attila the Hun at the Battle of Chalons.
The 684 AD apparition was reported in Chinese records as the "broom star".
The 760 AD apparition was recorded in the Zuqnin Chronicle's entry for iyyōr 1071 SE (May 760 AD), which called it a "white sign".
In 837 AD, Halley's Comet may have passed as close as 0.03 au (about 5 million km) from Earth, by far its closest approach. Its tail may have stretched 60 degrees across the sky. It was recorded by astronomers in China, Japan, Germany, the Byzantine Empire, and the Middle East; Emperor Louis the Pious observed this appearance and devoted himself to prayer and penance, fearing that "by this token a change in the realm and the death of a prince are made known".
In 912 AD, Halley is recorded in the Annals of Ulster, which states "A dark and rainy year. A comet appeared."
1066
In 1066, the comet was seen in England and thought to be an omen: later that year Harold II of England died at the Battle of Hastings and William the Conqueror claimed the throne. The comet is represented on the Bayeux Tapestry and described in the tituli as a star. Surviving accounts from the period describe it as appearing to be four times the size of Venus, and shining with a light equal to a quarter of that of the Moon. Halley came within 0.10 au of Earth at that time.
This appearance of the comet is also noted in the Anglo-Saxon Chronicle. Eilmer of Malmesbury may have seen Halley in 989 and 1066, as recorded by William of Malmesbury:
Not long after, a comet, portending (they say) a change in governments, appeared, trailing its long flaming hair through the empty sky: concerning which there was a fine saying of a monk of our monastery called Æthelmær. Crouching in terror at the sight of the gleaming star, "You've come, have you?", he said. "You've come, you source of tears to many mothers. It is long since I saw you; but as I see you now you are much more terrible, for I see you brandishing the downfall of my country."
The Irish Annals of the Four Masters recorded the comet as "A star [that] appeared on the seventh of the Calends of May, on Tuesday after Little Easter, than whose light the brilliance or light of The Moon was not greater; and it was visible to all in this manner till the end of four nights afterwards." Chaco Native Americans in New Mexico may have recorded the 1066 apparition in their petroglyphs.
The Italo-Byzantine chronicle of Lupus the Protospatharios mentions that a "comet-star" appeared in the sky in the year 1067 (the chronicle is erroneous, as the event occurred in 1066, and by Robert he means William).
The Emperor Constantine Ducas died in the month of May, and his son Michael received the Empire. And in this year there appeared a comet star, and the Norman count Robert [sic] fought a battle with Harold, King of the English, and Robert was victorious and became king over the people of the English.
1145–1378
The 1145 apparition may have been recorded by the monk Eadwine.
According to legend, Genghis Khan was inspired to turn his conquests toward Europe by the westward-seeming trajectory of the 1222 apparition. In Korea, the comet was reportedly visible during the daylight on 9 September 1222.
The 1301 apparition was visually spectacular, and may be the first that resulted in convincing portraits of a particular comet. The Florentine chronicler Giovanni Villani wrote that the comet left "great trails of fumes behind", and that it remained visible from September 1301 until January 1302. It was seen by the artist Giotto di Bondone, who represented the Star of Bethlehem as a fire-coloured comet in the Nativity section of his Arena Chapel cycle, completed in 1305. Giotto's depiction includes details of the coma, a sweeping tail, and the central condensation. According to the art historian Roberta Olson, it is much more accurate than other contemporary descriptions, and was not equaled in painting until the 19th century. Olson's identification of Halley's Comet in Giotto's Adoration of the Magi is what inspired the European Space Agency to name their mission to the comet Giotto, after the artist.
Halley's 1378 appearance is recorded in the Annales Mediolanenses as well as in East Asian sources.
1456
In 1456, the year of Halley's next apparition, the Ottoman Empire invaded the Kingdom of Hungary, culminating in the siege of Belgrade in July of that year. In a papal bull, Pope Callixtus III ordered special prayers be said for the city's protection. In 1470, the humanist scholar Bartolomeo Platina wrote in his Lives of the Popes that,
A hairy and fiery star having then made its appearance for several days, the mathematicians declared that there would follow grievous pestilence, dearth and some great calamity. Calixtus, to avert the wrath of God, ordered supplications that if evils were impending for the human race He would turn all upon the Turks, the enemies of the Christian name. He likewise ordered, to move God by continual entreaty, that notice should be given by the bells to call the faithful at midday to aid by their prayers those engaged in battle with the Turk.
Platina's account is not mentioned in official records. In the 18th century, a Frenchman further embellished the story, in anger at the Church, by claiming that the Pope had "excommunicated" the comet, though this story was most likely his own invention.
Halley's apparition of 1456 was also witnessed in Kashmir and depicted in great detail by Śrīvara, a Sanskrit poet and biographer to the Sultans of Kashmir. He read the apparition as a cometary portent of doom foreshadowing the imminent fall of Sultan Zayn al-Abidin (AD 1418/1420–1470).
After witnessing a bright light in the sky which most historians have identified as Halley's Comet, Zara Yaqob, Emperor of Ethiopia from 1434 to 1468, founded the city of Debre Berhan (tr. City of Light) and made it his capital for the remainder of his reign.
1531–1759
Petrus Apianus and Girolamo Fracastoro described the comet's visit in 1531, with the former even including graphics in his publication. Through his observations, Apianus was able to prove that a comet's tail always points away from the Sun.
In the Sikh scriptures of the Guru Granth Sahib, the founder of the faith Guru Nanak makes reference to "a long star that has risen" at Ang 1110, and it is believed by some Sikh scholars to be a reference to Halley's appearance in 1531.
Halley's periodic returns have been subject to scientific investigation since the 16th century. The three apparitions from 1531 to 1682 were noted by Edmond Halley, enabling him to predict it would return. One key breakthrough occurred when Halley talked with Newton about his ideas of the laws of motion. Newton also helped Halley get John Flamsteed's data on the 1682 apparition. By studying data on the 1531, 1607, and 1682 comets, he came to the conclusion these were the same comet, and presented his findings in 1696.
One difficulty was accounting for variations in the comet's orbital period, which was over a year longer between 1531 and 1607 than it was between 1607 and 1682. Newton had theorised that such delays were caused by the gravity of other comets, but Halley found that Jupiter and Saturn would cause the appropriate delays. In the decades that followed, more refined calculations were carried out, notably at the Paris Observatory; the work on Halley also lent support to Newton's and Kepler's rules for celestial motions. (See also computation of orbit.)
1835
At Markree Observatory in Ireland, Edward Joshua Cooper used a Cauchoix of Paris lens telescope to sketch Halley's Comet in 1835. The same apparition was sketched by German astronomer Friedrich Wilhelm Bessel. Observations of streams of vapour prompted Bessel to propose that the jet forces of evaporating material could be great enough to significantly alter a comet's orbit.
A person interviewed in 1910, who had been a teenager at the time of the 1835 apparition, described the comet's tail as broader but not as long as that of the comet of 1843, which they had also witnessed.
Famous astronomers across the world made observations starting in August 1835, including Struve at the Dorpat observatory and Sir John Herschel, who made observations from the Cape of Good Hope. In the United States, telescopic observations were made from Yale College. The new observations helped confirm early appearances of this comet, including its 1456 and 1378 apparitions.
At Yale College in Connecticut, the comet was first reported on 31 August 1835 by the astronomers D. Olmstead and E. Loomis. In Canada, reports were made from Newfoundland and Quebec. Sightings came in from many places by late 1835 and were often reported in Canadian newspapers of the time.
Several accounts of the 1835 apparition were made by observers who survived until the 1910 return, when increased interest in the comet led to their being interviewed.
Halley's 1910 return came only 74.42 years after the previous one, one of the shortest intervals between its returns; owing to the gravitational effects of the planets, the period can be as long as 79 years.
At the Paris Observatory, the 1835 apparition of Halley's Comet was observed with a Lerebours telescope by the astronomer François Arago. Arago recorded polarimetric observations of Halley and suggested that the tail might be sunlight reflecting off sparsely distributed material; he had earlier made similar observations of Comet Tralles of 1819.
1910
The 1910 approach, which came into naked-eye view around 10 April and came to perihelion on 20 April, was notable for several reasons: it was the first approach of which photographs exist, and the first for which spectroscopic data were obtained. Furthermore, the comet made a relatively close approach of 0.15 au, making it a spectacular sight. Indeed, on 19 May, Earth actually passed through the tail of the comet. One of the substances discovered in the tail by spectroscopic analysis was the toxic gas cyanogen, which led the press to misquote the astronomer Camille Flammarion as claiming that, when Earth passed through the tail, the gas "would impregnate the atmosphere and possibly snuff out all life on the planet". Despite reassurances from scientists that the gas would not inflict harm on Earth, the damage had already been done, with members of the public panic-buying gas masks and quack "anti-comet pills".
The comet added to the unrest in China on the eve of the Xinhai Revolution that would end the last dynasty in 1911. As James Hutson, a missionary in Sichuan Province at the time, recorded:
"The people believe that it indicates calamity such as war, fire, pestilence, and a change of dynasty. In some places on certain days the doors were unopened for half a day, no water was carried and many did not even drink water as it was rumoured that pestilential vapour was being poured down upon the earth from the comet."
The 1910 visitation coincided with a visit from Hedley Churchward, the first known English Muslim to make the Haj pilgrimage to Mecca. His explanation of its scientific predictability did not meet with favour in the Holy City.
The comet was used in an advertising campaign of Le Bon Marché, a well-known department store in Paris.
The comet was also fertile ground for hoaxes. One that reached major newspapers claimed that the Sacred Followers, a supposed Oklahoma religious group, attempted to sacrifice a virgin to ward off the impending disaster, but were stopped by the police.
American satirist and writer Mark Twain was born on 30 November 1835, exactly two weeks after the comet's perihelion. In his autobiography, published in 1909, he said,
I came in with Halley's comet in 1835. It is coming again next year, and I expect to go out with it. It will be the greatest disappointment of my life if I don't go out with Halley's comet. The Almighty has said, no doubt: "Now here are these two unaccountable freaks; they came in together, they must go out together."
Twain died on 21 April 1910, the day following the comet's subsequent perihelion. The 1985 fantasy film The Adventures of Mark Twain was inspired by the quotation.
Halley's 1910 apparition is distinct from the Great Daylight Comet of 1910, which surpassed Halley in brilliance and was visible in broad daylight for a short period, approximately four months before Halley made its appearance.
1986
The 1986 apparition of Halley's Comet was the least favourable on record. In February 1986, the comet and the Earth were on opposite sides of the Sun, creating the worst possible viewing circumstances for Earth observers during the previous 2,000 years. Halley's closest approach was 0.42 au. Additionally, increased light pollution from urbanisation caused many people to fail in attempts to see the comet. With the help of binoculars, observation from areas outside cities was more successful. Further, the comet appeared brightest when it was almost invisible from the northern hemisphere in March and April 1986, with best opportunities occurring when the comet could be sighted close to the horizon at dawn and dusk, if not obscured by clouds.
The approach of the comet was first detected by astronomers David C. Jewitt and G. Edward Danielson on 16 October 1982 using the 5.1 m Hale Telescope at Mount Palomar and a CCD camera.
The first visual observation of the comet on its 1986 return was by an amateur astronomer, Stephen James O'Meara, on 24 January 1985. O'Meara used a home-built telescope on top of Mauna Kea to detect the magnitude 19.6 comet. The first to observe Halley's Comet with the naked eye during its 1986 apparition were Stephen Edberg (then serving as the coordinator for amateur observations at the NASA Jet Propulsion Laboratory) and Charles Morris on 8 November 1985.
The 1986 apparition gave scientists the opportunity to study the comet closely, and several probes were launched to do so. The Soviet Vega 1 probe began returning images of Halley on 4 March 1986, captured the first-ever image of its nucleus, and made its flyby on 6 March. It was followed by the Vega 2 probe, making its flyby on 9 March. On 14 March, the Giotto space probe, launched by the European Space Agency, made the closest pass of the comet's nucleus. There also were two Japanese probes, Suisei and Sakigake. Unofficially, the numerous probes became known as the Halley Armada.
Based on data retrieved by the largest ultraviolet space telescope of the time, Astron, in December 1985, a group of Soviet scientists developed a model of the comet's coma. The comet was also observed from space by the International Cometary Explorer (ICE). Originally launched as the International Sun-Earth Explorer 3, the spacecraft was renamed and departed the Sun-Earth Lagrangian point in 1982 in order to intercept the comets 21P/Giacobini-Zinner and Halley. ICE flew through the tail of Halley's Comet, making its closest approach to the nucleus on 28 March 1986.
Two U.S. Space Shuttle missions—STS-51-L and STS-61-E—had been scheduled to observe Halley's Comet from low Earth orbit. The STS-51-L mission carried the Shuttle-Pointed Tool for Astronomy (Spartan Halley) satellite, also called the Halley's Comet Experiment Deployable (HCED). The mission to capture the ultraviolet spectrum of the comet ended in disaster when the Space Shuttle Challenger exploded in flight, killing all seven astronauts onboard. Scheduled for March 1986, STS-61-E was a Columbia mission carrying the ASTRO-1 platform to study the comet, but the mission was cancelled following the Challenger disaster and ASTRO-1 would not fly until late 1990 on STS-35.
In Japan, the comet was observed by Emperor Hirohito, who was 84. He had already seen it in 1910, when he was 8, making him one of the few people in human history to have seen Halley's Comet on two different returns.
After 1986
On 12 February 1991, at a distance of about 14 au from the Sun, Halley displayed an outburst that lasted for several months. The comet released dust with a total mass of about 10⁸ kg, which spread into an elongated cloud. The outburst likely started in December 1990, and then the comet brightened from about magnitude 25 to magnitude 19. Comets rarely show outburst activity at distances beyond 5 au from the Sun. Different mechanisms have been proposed for the outburst, ranging from interaction with the solar wind to a collision with an undiscovered asteroid. The most likely explanation is a combination of two effects, the polymerisation of hydrogen cyanide and a phase transition of amorphous water ice, which raised the temperature of the nucleus enough for some of the more volatile compounds on its surface to sublime.
Halley was most recently observed in 2003 by three of the Very Large Telescopes at Paranal, Chile, when Halley's magnitude was 28.2. The telescopes observed Halley, at the faintest and farthest any comet had ever been imaged, in order to verify a method for finding very faint trans-Neptunian objects. Astronomers are now able to observe the comet at any point in its orbit.
On 9 December 2023, Halley's Comet reached aphelion, the farthest and slowest point in its orbit about the Sun.
2061
The next perihelion of Halley's Comet is predicted for 28 July 2061, when it will be better positioned for observation than during the 1985–1986 apparition, as it will be on the same side of the Sun as Earth. The closest approach to Earth will come one day after perihelion. It is expected to have an apparent magnitude of −0.3, compared with only +2.1 for the 1986 apparition. On 9 September 2060, Halley will pass close to Jupiter, and on 20 August 2061 it will pass close to Venus.
2134
Halley will come to perihelion on 27 March 2134, and on 7 May 2134 it will make a close approach to Earth. Its apparent magnitude at that apparition is expected to be −2.0.
Apparitions
Halley's calculations enabled the comet's earlier appearances to be found in the historical record. The following table sets out the astronomical designations for every apparition of Halley's Comet from 240 BC, the earliest documented sighting.
In the designations, "1P/" refers to Halley's Comet, the first periodic comet discovered. The number after it represents the year, with negative numbers denoting BC. The letter-number combination indicates the order in which the comet was observed within a given segment of the year, the year being divided into 24 equal parts. The Roman numeral indicates the order in which the comet passed perihelion in a given year, while the lower-case letter indicates the order in which it was observed in that year overall. The perihelion dates farther from the present are approximate, mainly because of uncertainties in the modelling of non-gravitational effects. Perihelion dates of 1531 and earlier are in the Julian calendar, while perihelion dates of 1607 and later are in the Gregorian calendar. The perihelion dates of some of the early apparitions (particularly before AD 837) are uncertain by a couple of days. While Halley's Comet usually peaks at around magnitude 0, there are indications that it grew considerably brighter than that in the past.
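The 24 "equal parts" of the year correspond to the IAU half-month codes: letters A–Y with "I" omitted, where the first half of each month runs from the 1st to the 15th. A minimal sketch of that lettering (the function name and the choice of example are ours, for illustration):

```python
# Half-month discovery codes: the year is split into 24 parts,
# lettered A-Y with "I" omitted to avoid confusion with the digit 1.
HALF_MONTH_LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXY"

def half_month_code(month: int, day: int) -> str:
    """Return the half-month letter for a calendar date."""
    index = (month - 1) * 2 + (1 if day > 15 else 0)
    return HALF_MONTH_LETTERS[index]

# Halley's recovery on 16 October 1982 falls in the "U" half-month,
# consistent with the designation 1P/1982 U1.
print(half_month_code(10, 16))
```

This reproduces the letter part of designations such as 1P/1982 U1; the trailing number is the order of discovery within that half-month.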
See also
Kepler orbit
List of Halley-type comets
Notes
References
Bibliography
External links
Synopsis of the Astronomy of Comets (1706 reprint of Halley's 1705 paper)
Image of Halley's Comet by the Giotto spacecraft
seds.org, links to images and further information about Halley's Comet
Photographs of 1910 approach from the Lick Observatory Records Digital Archive
Ephemeris from JPL
Halley's Comet
Astronomical objects known since antiquity
Comets visited by spacecraft
Periodic comets
Star of Bethlehem
Great comets
Mark Twain
1531 in science
1607 in science
1682 in science
1759 in science
1835 in science
1910 in science
1986 in science
2061 in science | Halley's Comet | [
"Astronomy"
] | 8,321 | [
"Star of Bethlehem",
"Astronomical myths"
] |
46,095 | https://en.wikipedia.org/wiki/Russell%27s%20paradox | In mathematical logic, Russell's paradox (also known as Russell's antinomy) is a set-theoretic paradox published by the British philosopher and mathematician, Bertrand Russell, in 1901. Russell's paradox shows that every set theory that contains an unrestricted comprehension principle leads to contradictions. According to the unrestricted comprehension principle, for any sufficiently well-defined property, there is the set of all and only the objects that have that property. Let R be the set of all sets that are not members of themselves. (This set is sometimes called "the Russell set".) If R is not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols:
Let R = { x | x ∉ x }; then R ∈ R ⟺ R ∉ R.
Russell also showed that a version of the paradox could be derived in the axiomatic system constructed by the German philosopher and mathematician Gottlob Frege, hence undermining Frege's attempt to reduce mathematics to logic and calling into question the logicist programme. Two influential ways of avoiding the paradox were both proposed in 1908: Russell's own type theory and the Zermelo set theory. In particular, Zermelo's axioms restricted the unlimited comprehension principle. With the additional contributions of Abraham Fraenkel, Zermelo set theory developed into the now-standard Zermelo–Fraenkel set theory (commonly known as ZFC when including the axiom of choice). The main difference between Russell's and Zermelo's solution to the paradox is that Zermelo modified the axioms of set theory while maintaining a standard logical language, while Russell modified the logical language itself. The language of ZFC, with the help of Thoralf Skolem, turned out to be that of first-order logic.
The paradox had already been discovered independently in 1899 by the German mathematician Ernst Zermelo. However, Zermelo did not publish the idea, which remained known only to David Hilbert, Edmund Husserl, and other academics at the University of Göttingen. At the end of the 1890s, Georg Cantor – considered the founder of modern set theory – had already realized that his theory would lead to a contradiction, as he told Hilbert and Richard Dedekind by letter.
Informal presentation
Most sets commonly encountered are not members of themselves. Let us call a set "normal" if it is not a member of itself, and "abnormal" if it is a member of itself. Clearly every set must be either normal or abnormal. For example, consider the set of all squares in a plane. This set is not itself a square in the plane, thus it is not a member of itself and is therefore normal. In contrast, the complementary set that contains everything which is not a square in the plane is itself not a square in the plane, and so it is one of its own members and is therefore abnormal.
Now we consider the set of all normal sets, R, and try to determine whether R is normal or abnormal. If R were normal, it would be contained in the set of all normal sets (itself), and therefore be abnormal; on the other hand if R were abnormal, it would not be contained in the set of all normal sets (itself), and therefore be normal. This leads to the conclusion that R is neither normal nor abnormal: Russell's paradox.
Formal presentation
The term "naive set theory" is used in various ways. In one usage, naive set theory is a formal theory, that is formulated in a first-order language with a binary non-logical predicate , and that includes the axiom of extensionality:
∀x ∀y [∀z (z ∈ x ⟺ z ∈ y) → x = y]
and the axiom schema of unrestricted comprehension:
∃y ∀x (x ∈ y ⟺ φ(x))
for any predicate φ with x as a free variable inside φ. Substituting ¬(x ∈ x) for φ(x) gives
∃y ∀x (x ∈ y ⟺ ¬(x ∈ x))
Then by existential instantiation (reusing the symbol y for the witness) and universal instantiation (instantiating x to y) we have
y ∈ y ⟺ ¬(y ∈ y),
a contradiction. Therefore, this naive set theory is inconsistent.
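The final step is purely propositional: no truth value can satisfy y ∈ y ⟺ ¬(y ∈ y). A two-line check (ours, purely illustrative) confirms that p ⟺ ¬p fails for both truth values:

```python
# p <-> (not p) is false whichever truth value "y in y" takes,
# so the instantiated comprehension axiom is unsatisfiable.
for p in (True, False):
    assert (p == (not p)) is False

print("p <-> not p is unsatisfiable")
```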
Philosophical implications
Prior to Russell's paradox (and to other similar paradoxes discovered around the time, such as the Burali-Forti paradox), a common conception of the idea of set was the "extensional concept of set", as recounted by von Neumann and Morgenstern:
In particular, there was no distinction between sets and proper classes as collections of objects. Additionally, the existence of each of the elements of a collection was seen as sufficient for the existence of the set of said elements. However, paradoxes such as Russell's and Burali-Forti's showed the impossibility of this conception of set, by examples of collections of objects that do not form sets, despite all said objects being existent.
Set-theoretic responses
From the principle of explosion of classical logic, any proposition can be proved from a contradiction. Therefore, the presence of contradictions like Russell's paradox in an axiomatic set theory is disastrous: if any formula can be proved true, the conventional meaning of truth and falsity is destroyed. Further, since set theory was seen as the basis for an axiomatic development of all other branches of mathematics, Russell's paradox threatened the foundations of mathematics as a whole. This motivated a great deal of research around the turn of the 20th century to develop a consistent (contradiction-free) set theory.
In 1908, Ernst Zermelo proposed an axiomatization of set theory that avoided the paradoxes of naive set theory by replacing arbitrary set comprehension with weaker existence axioms, such as his axiom of separation (Aussonderung). (Avoiding paradox was not Zermelo's original intention, but instead to document which assumptions he used in proving the well-ordering theorem.) Modifications to this axiomatic theory proposed in the 1920s by Abraham Fraenkel, Thoralf Skolem, and by Zermelo himself resulted in the axiomatic set theory called ZFC. This theory became widely accepted once Zermelo's axiom of choice ceased to be controversial, and ZFC has remained the canonical axiomatic set theory down to the present day.
ZFC does not assume that, for every property, there is a set of all things satisfying that property. Rather, it asserts that given any set X, any subset of X definable using first-order logic exists. The object R defined by Russell's paradox above cannot be constructed as a subset of any set X, and is therefore not a set in ZFC. In some extensions of ZFC, notably in von Neumann–Bernays–Gödel set theory, objects like R are called proper classes.
ZFC is silent about types, although the cumulative hierarchy has a notion of layers that resemble types. Zermelo himself never accepted Skolem's formulation of ZFC using the language of first-order logic. As José Ferreirós notes, Zermelo insisted instead that "propositional functions (conditions or predicates) used for separating off subsets, as well as the replacement functions, can be 'entirely arbitrary' [ganz beliebig]"; the modern interpretation given to this statement is that Zermelo wanted to include higher-order quantification in order to avoid Skolem's paradox. Around 1930, Zermelo also introduced (apparently independently of von Neumann) the axiom of foundation, thus—as Ferreirós observes—"by forbidding 'circular' and 'ungrounded' sets, it [ZFC] incorporated one of the crucial motivations of TT [type theory]—the principle of the types of arguments". This second-order ZFC preferred by Zermelo, including the axiom of foundation, allowed a rich cumulative hierarchy. Ferreirós writes that "Zermelo's 'layers' are essentially the same as the types in the contemporary versions of simple TT [type theory] offered by Gödel and Tarski. One can describe the cumulative hierarchy into which Zermelo developed his models as the universe of a cumulative TT in which transfinite types are allowed. (Once we have adopted an impredicative standpoint, abandoning the idea that classes are constructed, it is not unnatural to accept transfinite types.) Thus, simple TT and ZFC could now be regarded as systems that 'talk' essentially about the same intended objects. The main difference is that TT relies on a strong higher-order logic, while Zermelo employed second-order logic, and ZFC can also be given a first-order formulation. The first-order 'description' of the cumulative hierarchy is much weaker, as is shown by the existence of countable models (Skolem's paradox), but it enjoys some important advantages."
In ZFC, given a set A, it is possible to define a set B that consists of exactly the sets in A that are not members of themselves. B cannot be in A by the same reasoning in Russell's Paradox. This variation of Russell's paradox shows that no set contains everything.
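This variation can be mimicked with Python frozensets, which are well-founded and so never contain themselves. The toy model below (an illustration, not a formalisation) shows that for a sample set A, the set B of members of A that are not members of themselves is never itself a member of A:

```python
# A toy model of the separation-based argument: for any set A,
# B = {x in A : x not in x} cannot be a member of A.
empty = frozenset()
one = frozenset({empty})
A = frozenset({empty, one, frozenset({empty, one})})

# Python frozensets are well-founded, so "x in x" is always False
# and B simply collects every member of A.
B = frozenset(x for x in A if x not in x)

assert B == A       # B has exactly A's members here...
assert B not in A   # ...yet B is not a member of A, as the argument predicts
print("B is not a member of A")
```

The same two assertions hold for any frozenset A one constructs, which is the well-founded shadow of the ZFC result.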
Through the work of Zermelo and others, especially John von Neumann, the structure of what some see as the "natural" objects described by ZFC eventually became clear: they are the elements of the von Neumann universe, V, built up from the empty set by transfinitely iterating the power set operation. It is thus now possible again to reason about sets in a non-axiomatic fashion without running afoul of Russell's paradox, namely by reasoning about the elements of V. Whether it is appropriate to think of sets in this way is a point of contention among the rival points of view on the philosophy of mathematics.
Other solutions to Russell's paradox, with an underlying strategy closer to that of type theory, include Quine's New Foundations and Scott–Potter set theory. Yet another approach is to define a multiple membership relation with an appropriately modified comprehension scheme, as in double extension set theory.
History
Russell discovered the paradox in May or June 1901. By his own account in his 1919 Introduction to Mathematical Philosophy, he "attempted to discover some flaw in Cantor's proof that there is no greatest cardinal". In a 1902 letter to Gottlob Frege, he announced the discovery of the paradox, which could be derived in Frege's 1879 Begriffsschrift, and framed the problem in terms of both logic and set theory, and in particular in terms of Frege's definition of function:
Russell would go on to cover it at length in his 1903 The Principles of Mathematics, where he repeated his first encounter with the paradox:
Russell wrote to Frege about the paradox just as Frege was preparing the second volume of his Grundgesetze der Arithmetik. Frege responded to Russell very quickly; his reply, dated 22 June 1902, appears with van Heijenoort's commentary in van Heijenoort 1967:126–127. Frege then wrote an appendix admitting to the paradox, and proposed a solution that Russell would endorse in his Principles of Mathematics, but that was later considered by some to be unsatisfactory. For his part, Russell already had his work at the printers, so he added an appendix on the doctrine of types.
Ernst Zermelo in his (1908) A new proof of the possibility of a well-ordering (published at the same time he published "the first axiomatic set theory") laid claim to prior discovery of the antinomy in Cantor's naive set theory. He states: "And yet, even the elementary form that Russell9 gave to the set-theoretic antinomies could have persuaded them [J. König, Jourdain, F. Bernstein] that the solution of these difficulties is not to be sought in the surrender of well-ordering but only in a suitable restriction of the notion of set". Footnote 9 is where he stakes his claim:
Frege sent a copy of his Grundgesetze der Arithmetik to Hilbert; as noted above, Frege's last volume mentioned the paradox that Russell had communicated to Frege. After receiving Frege's last volume, on 7 November 1903, Hilbert wrote a letter to Frege in which he said, referring to Russell's paradox, "I believe Dr. Zermelo discovered it three or four years ago". A written account of Zermelo's actual argument was discovered in the Nachlass of Edmund Husserl.
In 1923, Ludwig Wittgenstein proposed to "dispose" of Russell's paradox as follows:
The reason why a function cannot be its own argument is that the sign for a function already contains the prototype of its argument, and it cannot contain itself. For let us suppose that the function F(fx) could be its own argument: in that case there would be a proposition F(F(fx)), in which the outer function F and the inner function F must have different meanings, since the inner one has the form φ(fx) and the outer one has the form ψ(φ(fx)). Only the letter 'F' is common to the two functions, but the letter by itself signifies nothing. This immediately becomes clear if instead of F(Fu) we write (∃φ):F(φu).φu = Fu. That disposes of Russell's paradox. (Tractatus Logico-Philosophicus, 3.333)
Russell and Alfred North Whitehead wrote their three-volume Principia Mathematica hoping to achieve what Frege had been unable to do. They sought to banish the paradoxes of naive set theory by employing a theory of types they devised for this purpose. While they succeeded in grounding arithmetic in a fashion, it is not at all evident that they did so by purely logical means. While Principia Mathematica avoided the known paradoxes and allows the derivation of a great deal of mathematics, its system gave rise to new problems.
In any event, Kurt Gödel in 1930–31 proved that while the logic of much of Principia Mathematica, now known as first-order logic, is complete, Peano arithmetic is necessarily incomplete if it is consistent. This is very widely—though not universally—regarded as having shown the logicist program of Frege to be impossible to complete.
In 2001, A Centenary International Conference celebrating the first hundred years of Russell's paradox was held in Munich and its proceedings have been published.
Applied versions
There are some versions of this paradox that are closer to real-life situations and may be easier to understand for non-logicians. For example, the barber paradox supposes a barber who shaves all men who do not shave themselves and only men who do not shave themselves. When one thinks about whether the barber should shave himself or not, a similar paradox begins to emerge.
An easy refutation of the "layman's versions" such as the barber paradox seems to be that no such barber exists, or that the barber is not a man, and so can exist without paradox. The whole point of Russell's paradox is that the answer "such a set does not exist" means the definition of the notion of set within a given theory is unsatisfactory. Note the difference between the statements "such a set does not exist" and "it is an empty set". It is like the difference between saying "There is no bucket" and saying "The bucket is empty".
A notable exception to the above may be the Grelling–Nelson paradox, in which words and meaning are the elements of the scenario rather than people and hair-cutting. Though it is easy to refute the barber's paradox by saying that such a barber does not (and cannot) exist, it is impossible to say something similar about a meaningfully defined word.
One way that the paradox has been dramatised is as follows: Suppose that every public library has to compile a catalogue of all its books. Since the catalogue is itself one of the library's books, some librarians include it in the catalogue for completeness, while others leave it out, on the grounds that its being one of the library's books is self-evident. Now imagine that all these catalogues are sent to the national library. Some of them include themselves in their listings, others do not. The national librarian compiles two master catalogues—one of all the catalogues that list themselves, and one of all those that do not.
The question is: should these master catalogues list themselves? The 'catalogue of all catalogues that list themselves' is no problem. If the librarian does not include it in its own listing, it remains a true catalogue of those catalogues that do include themselves. If he does include it, it remains a true catalogue of those that list themselves. However, just as the librarian cannot go wrong with the first master catalogue, he is doomed to fail with the second. When it comes to the 'catalogue of all catalogues that do not list themselves', the librarian cannot include it in its own listing, because then it would include itself, and so belong in the other catalogue, that of catalogues that do include themselves. However, if the librarian leaves it out, the catalogue is incomplete. Either way, it can never be a true master catalogue of catalogues that do not list themselves.
Applications and related topics
Russell-like paradoxes
As illustrated above for the barber paradox, Russell's paradox is not hard to extend. Take:
A transitive verb ⟨V⟩ that can be applied to its substantive form.
Form the sentence:
The ⟨V⟩er that ⟨V⟩s all (and only those) who do not ⟨V⟩ themselves,
Sometimes the "all" is replaced by "all ⟨V⟩ers".
An example would be "paint":
The painter that paints all (and only those) that do not paint themselves.
or "elect"
The elector (representative), that elects all that do not elect themselves.
In the Season 8 episode of The Big Bang Theory, "The Skywalker Intrusion", Sheldon Cooper analyzes the song "Play That Funky Music", concluding that the lyrics present a musical example of Russell's Paradox.
Paradoxes that fall in this scheme include:
The barber with "shave".
The original Russell's paradox with "contain": The container (Set) that contains all (containers) that do not contain themselves.
The Grelling–Nelson paradox with "describe": The describer (word) that describes all words that do not describe themselves.
Richard's paradox with "denote": The denoter (number) that denotes all denoters (numbers) that do not denote themselves. (In this paradox, all descriptions of numbers get an assigned number. The term "that denotes all denoters (numbers) that do not denote themselves" is here called Richardian.)
"I am lying.", namely the liar paradox and Epimenides paradox, whose origins are ancient
Russell–Myhill paradox
Related paradoxes
The Burali-Forti paradox, about the order type of all well-orderings
The Kleene–Rosser paradox, showing that the original lambda calculus is inconsistent, by means of a self-negating statement
Curry's paradox (named after Haskell Curry), which does not require negation
The smallest uninteresting integer paradox
Girard's paradox in type theory
See also
Basic Law V
"On Denoting"
Quine's paradox
Self-reference
List of self–referential paradoxes
Notes
References
Sources
External links
Bertrand Russell
Eponymous paradoxes
Paradoxes of naive set theory
1901 in science
Self-referential paradoxes | Russell's paradox | [
"Mathematics"
] | 4,036 | [
"Basic concepts in infinite set theory",
"Basic concepts in set theory",
"Paradoxes of naive set theory"
] |
46,096 | https://en.wikipedia.org/wiki/Simpson%27s%20paradox | Simpson's paradox is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. This result is often encountered in social-science and medical-science statistics, and is particularly problematic when frequency data are unduly given causal interpretations. The paradox can be resolved when confounding variables and causal relations are appropriately addressed in the statistical modeling (e.g., through cluster analysis).
Simpson's paradox has been used to illustrate the kind of misleading results that the misuse of statistics can generate.
Edward H. Simpson first described this phenomenon in a technical paper in 1951, but the statisticians Karl Pearson (in 1899) and Udny Yule (in 1903) had mentioned similar effects earlier. The name Simpson's paradox was introduced by Colin R. Blyth in 1972. It is also referred to as Simpson's reversal, the Yule–Simpson effect, the amalgamation paradox, or the reversal paradox.
Mathematician Jordan Ellenberg argues that Simpson's paradox is misnamed as "there's no contradiction involved, just two different ways to think about the same data" and suggests that its lesson "isn't really to tell us which viewpoint to take but to insist that we keep both the parts and the whole in mind at once."
Examples
UC Berkeley gender bias
One of the best-known examples of Simpson's paradox comes from a study of gender bias among graduate school admissions to University of California, Berkeley. The admission figures for the fall of 1973 showed that men applying were more likely than women to be admitted, and the difference was so large that it was unlikely to be due to chance.
However, when taking into account the information about departments being applied to, the different rejection percentages reveal the different difficulty of getting into the department, and at the same time it showed that women tended to apply to more competitive departments with lower rates of admission, even among qualified applicants (such as in the English department), whereas men tended to apply to less competitive departments with higher rates of admission (such as in the engineering department). The pooled and corrected data showed a "small but statistically significant bias in favor of women".
The data from the six largest departments are listed below:
Across all 85 departments, the data showed 4 to be significantly biased against women and 6 to be significantly biased against men (not all of which appear in the 'six largest departments' table above). Notably, the conclusion was based not on the numbers of biased departments, but on gender admission rates pooled across all departments while weighting by each department's rejection rate across all of its applicants.
Kidney stone treatment
Another example comes from a real-life medical study comparing the success rates of two treatments for kidney stones. The table below shows the success rates (the term success rate here actually means the success proportion) and numbers of treatments for treatments involving both small and large kidney stones, where Treatment A includes open surgical procedures and Treatment B includes closed surgical procedures. The numbers in parentheses indicate the number of success cases over the total size of the group.
The paradoxical conclusion is that treatment A is more effective when used on small stones, and also when used on large stones, yet treatment B appears to be more effective when considering both sizes at the same time. In this example, the "lurking" variable (or confounding variable) causing the paradox is the size of the stones, which researchers had not known to be important until its effects were included.
Which treatment is considered better is determined by which success ratio (successes/total) is larger. The reversal of the inequality between the two ratios when considering the combined data, which creates Simpson's paradox, happens because two effects occur together:
The sizes of the groups, which are combined when the lurking variable is ignored, are very different. Doctors tend to give cases with large stones the better treatment A, and the cases with small stones the inferior treatment B. Therefore, the totals are dominated by groups 3 and 2, and not by the two much smaller groups 1 and 4.
The lurking variable, stone size, has a large effect on the ratios; i.e., the success rate is more strongly influenced by the severity of the case than by the choice of treatment. Therefore, the group of patients with large stones using treatment A (group 3) does worse than the group with small stones, even if the latter used the inferior treatment B (group 2).
Based on these effects, the paradoxical result is seen to arise because the effect of the size of the stones overwhelms the benefits of the better treatment (A). In short, the less effective treatment B appeared to be more effective because it was applied more frequently to the small stones cases, which were easier to treat.
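The reversal can be verified directly from the counts. Below is a minimal Python sketch using the (successes, total) figures as they are usually reported for this study; the helper names `rate` and `pooled` are illustrative:

```python
from fractions import Fraction

# (successes, total) for each treatment/stone-size group,
# as commonly reported for the kidney stone study:
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(successes, total):
    """Exact success proportion as a rational number."""
    return Fraction(successes, total)

# Treatment A wins within each stone-size stratum...
for size in ("small", "large"):
    assert rate(*data[("A", size)]) > rate(*data[("B", size)])

def pooled(treatment):
    """Success proportion after combining both strata."""
    s = sum(data[(treatment, size)][0] for size in ("small", "large"))
    n = sum(data[(treatment, size)][1] for size in ("small", "large"))
    return Fraction(s, n)

# ...but pooling the strata reverses the comparison: 273/350 < 289/350.
assert pooled("A") < pooled("B")
```

Using `Fraction` keeps the comparisons exact, so the reversal is not an artifact of floating-point rounding.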
Jaynes argues that the correct conclusion is that though treatment A remains noticeably better than treatment B, the kidney stone size is more important.
Batting averages
A common example of Simpson's paradox involves the batting averages of players in professional baseball. It is possible for one player to have a higher batting average than another player each year for a number of years, but to have a lower batting average across all of those years. This phenomenon can occur when there are large differences in the number of at bats between the years. Mathematician Ken Ross demonstrated this using the batting average of two baseball players, Derek Jeter and David Justice, during the years 1995 and 1996:
In both 1995 and 1996, Justice had a higher batting average (in bold type) than Jeter did. However, when the two baseball seasons are combined, Jeter shows a higher batting average than Justice. According to Ross, this phenomenon would be observed about once per year among the possible pairs of players.
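The arithmetic behind Ross's example can be checked in a few lines. The sketch below uses the hits and at-bats figures quoted for the two players; `avg` and `combined` are illustrative helper names:

```python
# (hits, at-bats) for Jeter and Justice, 1995 and 1996:
records = {
    "Jeter":   {1995: (12, 48),   1996: (183, 582)},
    "Justice": {1995: (104, 411), 1996: (45, 140)},
}

def avg(hits, at_bats):
    """Batting average: hits divided by at-bats."""
    return hits / at_bats

# Justice leads in each individual season...
for year in (1995, 1996):
    assert avg(*records["Justice"][year]) > avg(*records["Jeter"][year])

def combined(player):
    """Batting average over both seasons combined."""
    h = sum(r[0] for r in records[player].values())
    ab = sum(r[1] for r in records[player].values())
    return h / ab

# ...but Jeter leads when the seasons are combined (.310 vs .270).
assert combined("Jeter") > combined("Justice")
```

The large disparity in at-bats (Jeter's 48 vs 582, Justice's 411 vs 140) is exactly the unequal-group-size condition that drives the reversal.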
Vector interpretation
Simpson's paradox can also be illustrated using a 2-dimensional vector space. A success rate of p/q (i.e., successes/attempts) can be represented by a vector A = (q, p), with a slope of p/q. A steeper vector then represents a greater success rate. If two rates p1/q1 and p2/q2 are combined, as in the examples given above, the result can be represented by the sum of the vectors (q1, p1) and (q2, p2), which according to the parallelogram rule is the vector (q1 + q2, p1 + p2), with slope (p1 + p2)/(q1 + q2).
Simpson's paradox says that even if a vector L1 (in orange in the figure) has a smaller slope than another vector B1 (in blue), and L2 has a smaller slope than B2, the sum of the two vectors L1 + L2 can potentially still have a larger slope than the sum of the two vectors B1 + B2, as shown in the example. For this to occur one of the orange vectors must have a greater slope than one of the blue vectors (here L2 and B1), and these will generally be longer than the alternatively subscripted vectors – thereby dominating the overall comparison.
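This geometric condition can be checked numerically. Below is a small Python sketch representing each rate as a vector (attempts, successes); the specific vectors are hypothetical values chosen so that each orange vector is shallower than its blue counterpart, yet the orange sum is steeper:

```python
from fractions import Fraction

def slope(v):
    """Slope p/q of a rate vector (q, p) = (attempts, successes)."""
    q, p = v
    return Fraction(p, q)

def vadd(u, v):
    """Componentwise (parallelogram-rule) sum of two rate vectors."""
    return (u[0] + v[0], u[1] + v[1])

# Hypothetical vectors exhibiting the reversal:
L1, L2 = (100, 87), (10, 2)    # "orange" pair, slopes 0.87 and 0.2
B1, B2 = (10, 9), (100, 30)    # "blue" pair, slopes 0.9 and 0.3

# Each orange vector is shallower than the matching blue vector...
assert slope(L1) < slope(B1) and slope(L2) < slope(B2)
# ...yet the summed orange vector is steeper than the summed blue one:
# slope 89/110 vs 39/110.
assert slope(vadd(L1, L2)) > slope(vadd(B1, B2))
```

Note that in this example it is L1 that is steeper than B2, and L1 is much longer than B1; this is the "dominating" long vector the text describes.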
Correlation between variables
Simpson's reversal can also arise in correlations, in which two variables appear to have (say) a positive correlation towards one another, when in fact they have a negative correlation, the reversal having been brought about by a "lurking" confounder. Berman et al. give an example from economics, where a dataset suggests overall demand is positively correlated with price (that is, higher prices lead to more demand), in contradiction of expectation. Analysis reveals time to be the confounding variable: plotting both price and demand against time reveals the expected negative correlation over various periods, which then reverses to become positive if the influence of time is ignored by simply plotting demand against price.
Psychology
Psychological interest in Simpson's paradox seeks to explain why people deem sign reversal to be impossible at first. The question is where people get this strong intuition from, and how it is encoded in the mind.
Simpson's paradox demonstrates that this intuition cannot be derived from either classical logic or probability calculus alone, and thus led philosophers to speculate that it is supported by an innate causal logic that guides people in reasoning about actions and their consequences. Savage's sure-thing principle is an example of what such logic may entail. A qualified version of Savage's sure-thing principle can indeed be derived from Pearl's do-calculus and reads: "An action A that increases the probability of an event B in each subpopulation Ci of C must also increase the probability of B in the population as a whole, provided that the action does not change the distribution of the subpopulations." This suggests that knowledge about actions and consequences is stored in a form resembling Causal Bayesian Networks.
Probability
A paper by Pavlides and Perlman presents a proof, due to Hadjicostas, that in a random 2 × 2 × 2 table with uniform distribution, Simpson's paradox will occur with a probability of exactly 1/60. A study by Kock suggests that the probability that Simpson's paradox would occur at random in path models (i.e., models generated by path analysis) with two predictors and one criterion variable is approximately 12.8 percent; slightly higher than 1 occurrence per 8 path models.
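The Hadjicostas result lends itself to a quick simulation. Below is a hedged Monte Carlo sketch, under the assumption that "uniform distribution" means a Dirichlet(1,…,1) table and that a reversal in either direction counts as the paradox:

```python
import random

def reversal(cells):
    """True if a 2x2x2 table shows a Simpson reversal in either direction.

    cells = (a1, b1, c1, d1, a2, b2, c2, d2), where (a, b) are the
    success/failure weights for treatment 1 and (c, d) for treatment 2,
    in subgroups 1 and 2 respectively.
    """
    a1, b1, c1, d1, a2, b2, c2, d2 = cells
    r = lambda s, f: s / (s + f)                        # success rate
    sub1 = (r(a1, b1), r(c1, d1))                       # rates, subgroup 1
    sub2 = (r(a2, b2), r(c2, d2))                       # rates, subgroup 2
    agg = (r(a1 + a2, b1 + b2), r(c1 + c2, d1 + d2))    # aggregated rates
    return ((sub1[0] < sub1[1] and sub2[0] < sub2[1] and agg[0] > agg[1]) or
            (sub1[0] > sub1[1] and sub2[0] > sub2[1] and agg[0] < agg[1]))

random.seed(1)
trials = 200_000
# A uniform (Dirichlet(1,...,1)) table is a normalised vector of i.i.d.
# Exponential(1) draws; the rates above are scale-invariant, so the
# normalisation step can be skipped.
hits = sum(reversal([random.expovariate(1.0) for _ in range(8)])
           for _ in range(trials))
print(hits / trials)  # a small fraction; Hadjicostas's exact value is 1/60
```

As a sanity check, feeding in the kidney stone counts (81, 6, 234, 36, 192, 71, 55, 25) makes `reversal` return `True`.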
Simpson's second paradox
A second, less well-known paradox was also discussed in Simpson's 1951 paper. It can occur when the "sensible interpretation" is not necessarily found in the separated data, as in the kidney stone example, but can instead reside in the combined data. Whether the partitioned or combined form of the data should be used hinges on the process giving rise to the data, meaning the correct interpretation cannot always be determined simply by observing the tables.
Judea Pearl has shown that, in order for the partitioned data to represent the correct causal relationships between any two variables, X and Y, the partitioning variables Z must satisfy a graphical condition called the "back-door criterion":
They must block all spurious paths between X and Y
No variable in Z can be affected by X
This criterion provides an algorithmic solution to Simpson's second paradox, and explains why the correct interpretation cannot be determined by data alone; two different graphs, both compatible with the data, may dictate two different back-door criteria.
When the back-door criterion is satisfied by a set Z of covariates, the adjustment formula (see Confounding) gives the correct causal effect of X on Y. If no such set exists, Pearl's do-calculus can be invoked to discover other ways of estimating the causal effect. The completeness of do-calculus can be viewed as offering a complete resolution of Simpson's paradox.
Criticism
One criticism is that the paradox is not really a paradox at all, but rather a failure to properly account for confounding variables or to consider causal relationships between variables.
Another criticism of the apparent Simpson's paradox is that it may be a result of the specific way that data are stratified or grouped. The phenomenon may disappear or even reverse if the data is stratified differently or if different confounding variables are considered. Simpson's example actually highlighted a phenomenon called noncollapsibility, which occurs when subgroups with high proportions do not make simple averages when combined. This suggests that the paradox may not be a universal phenomenon, but rather a specific instance of a more general statistical issue.
Critics of the apparent Simpson's paradox also argue that the focus on the paradox may distract from more important statistical issues, such as the need for careful consideration of confounding variables and causal relationships when interpreting data.
Despite these criticisms, the apparent Simpson's paradox remains a popular and intriguing topic in statistics and data analysis. It continues to be studied and debated by researchers and practitioners in a wide range of fields, and it serves as a valuable reminder of the importance of careful statistical analysis and the potential pitfalls of simplistic interpretations of data.
See also
Spurious correlation
Omitted-variable bias
References
Bibliography
Leila Schneps and Coralie Colmez, Math on trial. How numbers get used and abused in the courtroom, Basic Books, 2013. (Sixth chapter: "Math error number 6: Simpson's paradox. The Berkeley sex bias case: discrimination detection").
External links
Simpson's Paradox at the Stanford Encyclopedia of Philosophy, by Jan Sprenger and Naftali Weinberger.
How statistics can be misleading – Mark Liddell – TED-Ed video and lesson.
Pearl, Judea, "Understanding Simpson's Paradox" (PDF)
Simpson's Paradox, a short article by Alexander Bogomolny on the vector interpretation of Simpson's paradox
The Wall Street Journal column "The Numbers Guy" for December 2, 2009 dealt with recent instances of Simpson's paradox in the news. Notably a Simpson's paradox in the comparison of unemployment rates of the 2009 recession with the 1983 recession.
At the Plate, a Statistical Puzzler: Understanding Simpson's Paradox by Arthur Smith, August 20, 2010
Simpson's Paradox, a video by Henry Reich of MinutePhysics
Probability theory paradoxes
Statistical paradoxes
Causal inference
1951 introductions | Simpson's paradox | [
"Mathematics"
] | 2,624 | [
"Probability theory paradoxes",
"Mathematical problems",
"Statistical paradoxes",
"Mathematical paradoxes"
] |
46,112 | https://en.wikipedia.org/wiki/Violence | Violence is often defined as the use of physical force or power by humans to cause harm and degradation to other living beings, such as humiliation, pain, injury, disablement, damage to property and ultimately death, as well as destruction to a society's living environment. The World Health Organization (WHO) defines violence as "the intentional use of physical force or power, threatened or actual, against oneself, another person, or against a group or community, which either results in or has a high likelihood of resulting in injury, death, psychological harm, maldevelopment, or deprivation." There is growing recognition among researchers and practitioners of the need to include violence that does not necessarily result in injury or death.
Many forms of violence are preventable. There is a strong relationship between levels of violence and modifiable factors within a country, such as concentrated (regional) poverty, income and gender inequality, the harmful use of alcohol, and the absence of safe, stable, and nurturing relationships between children and parents. Strategies addressing these underlying causes can be relatively effective in preventing violence, although mental and physical health, individual responses, and personality have always been decisive factors in shaping such behavior.
Types
The World Health Organization (WHO) divides violence into three broad categories:
self-directed violence
interpersonal violence
collective violence
This initial categorization differentiates between violence that a person inflicts upon themself, violence inflicted by another individual or by a small group of individuals, and violence inflicted by larger groups such as states, organized political groups, militia groups and terrorist organizations.
Alternatively, violence can primarily be classified as either instrumental or reactive / hostile.
Self-directed
Self-directed violence is subdivided into suicidal behaviour and self-abuse. The former includes suicidal thoughts, attempted suicides (also called parasuicide or deliberate self-injury in some countries) and suicide itself. Self-abuse, in contrast, includes acts such as self-mutilation.
Collective
Collective violence is the instrumental use of violence by people who identify themselves as members of a group – whether this group is transitory or has a more permanent identity – against another group or set of individuals in order to achieve political, economic or social objectives.
Unlike the other two broad categories, the subcategories of collective violence suggest possible motives for violence committed by larger groups of individuals or by states. Collective violence that is committed to advance a particular social agenda includes, for example, crimes of hate committed by organized groups, terrorist acts and mob violence. Political violence includes war and related violent conflicts, state violence and similar acts carried out by armed groups. There may be multiple determinants of violence against civilians in such situations. Economic violence includes attacks motivated by economic gain—such as attacks carried out with the purpose of disrupting economic activity, denying access to essential services, or creating economic division and fragmentation. Clearly, acts committed by domestic and subnational groups can have multiple motives. Slow violence is a long-duration form of violence which is often invisible (at least to those not impacted by it), such as environmental degradation, pollution and climate change.
Warfare
War is a state of prolonged violent large-scale conflict involving two or more groups of people, usually under the auspices of government. It is the most extreme form of collective violence.
War is fought as a means of resolving territorial and other conflicts, as war of aggression to conquer territory or loot resources, in national self-defence or liberation, or to suppress attempts of part of the nation to secede from it. There are also ideological, religious and revolutionary wars.
Since the Industrial Revolution the lethality of modern warfare has grown. World War I casualties were over 40 million and World War II casualties were over 70 million.
Interpersonal
Interpersonal violence is divided into two subcategories. Family and intimate partner violence is violence largely between family members and intimate partners, usually, though not exclusively, taking place in the home. Community violence is violence between individuals who are unrelated, and who may or may not know each other, generally taking place outside the home. The former group includes forms of violence such as child abuse and child corporal punishment, intimate partner violence and abuse of the elderly. The latter includes youth violence, random acts of violence, rape or sexual assault by strangers, and violence in institutional settings such as schools, workplaces, prisons and nursing homes. When interpersonal violence occurs in families, its psychological consequences can affect parents, children, and their relationship in the short- and long-terms.
Violence against children
Violence against children includes all forms of violence against people under 18 years old, whether perpetrated by parents or other caregivers, peers, romantic partners, or strangers.
Exposure to any form of trauma, particularly in childhood, can increase the risk of mental illness and suicide; smoking, alcohol and substance abuse; chronic diseases like heart disease, diabetes and cancer; and social problems such as poverty, crime and violence.
Globally, it is estimated that up to 1 billion children aged 2–17 years have experienced physical, sexual, or emotional violence or neglect in the past year.
Most violence against children involves at least one of six main types of interpersonal violence that tend to occur at different stages in a child’s development.
Maltreatment
Maltreatment (including violent punishment) involves physical, sexual and psychological/emotional violence; and neglect of infants, children and adolescents by parents, caregivers and other authority figures, most often in the home but also in settings such as schools and orphanages. It includes all types of physical and/or emotional ill-treatment, sexual abuse, neglect, negligence and commercial or other child exploitation, which results in actual or potential harm to the child's health, survival, development or dignity in the context of a relationship of responsibility, trust, or power. Exposure to intimate partner violence is also sometimes included as a form of child maltreatment.
Child maltreatment is a global problem with serious lifelong consequences. It is complex and difficult to study.
There are no reliable global estimates for the prevalence of child maltreatment. Data for many countries, especially low- and middle-income countries, are lacking. Current estimates vary widely depending on the country and the method of research used. Approximately 20% of women and 5–10% of men report being sexually abused as children, while 25–50% of all children report being physically abused.
Consequences of child maltreatment include impaired lifelong physical and mental health, and social and occupational functioning (e.g. school, job, and relationship difficulties). These can ultimately slow a country's economic and social development. Preventing child maltreatment before it starts is possible and requires a multisectoral approach. Effective prevention programmes support parents and teach positive parenting skills. Ongoing care of children and families can reduce the risk of maltreatment reoccurring and can minimize its consequences.
Bullying
Bullying (including cyber-bullying) is unwanted aggressive behaviour by another child or group of children who are neither siblings nor in a romantic relationship with the victim. It involves repeated physical, psychological or social harm, and often takes place in schools and other settings where children gather, and online.
Youth violence
Following the World Health Organization, youth are defined as people between the ages of 10 and 29 years. Youth violence refers to violence occurring between youths, and includes acts that range from bullying and physical fighting, through more severe sexual and physical assault to homicide.
Worldwide some 250,000 homicides occur among youth 10–29 years of age each year, which is 41% of the total number of homicides globally each year ("Global Burden of Disease", World Health Organization, 2008). For each young person killed, 20–40 more sustain injuries requiring hospital treatment. Youth violence has a serious, often lifelong, impact on a person's psychological and social functioning. Youth violence greatly increases the costs of health, welfare and criminal justice services; reduces productivity; decreases the value of property; and generally undermines the fabric of society.
Prevention programmes shown to be effective or to have promise in reducing youth violence include life skills and social development programmes designed to help children and adolescents manage anger, resolve conflict, and develop the necessary social skills to solve problems; schools-based anti-bullying prevention programmes; and programmes to reduce access to alcohol, illegal drugs and guns. Also, given significant neighbourhood effects on youth violence, interventions involving relocating families to less poor environments have shown promising results. Similarly, urban renewal projects such as business improvement districts have shown a reduction in youth violence.
Different types of youth on youth violence include witnessing or being involved in physical, emotional and sexual abuse (e.g. physical attacks, bullying, rape), and violent acts like gang shootings and robberies. According to researchers in 2018, "More than half of children and adolescents living in cities have experienced some form of community violence." The violence "can also all take place under one roof, or in a given community or neighborhood and can happen at the same time or at different stages of life." Youth violence has immediate and long term adverse impact whether the individual was the recipient of the violence or a witness to it.
Youth violence impacts individuals, their families, and society. Victims can have lifelong injuries which means ongoing doctor and hospital visits, the cost of which quickly add up. Since the victims of youth-on-youth violence may not be able to attend school or work because of their physical and/or mental injuries, it is often up to their family members to take care of them, including paying their daily living expenses and medical bills. Their caretakers may have to give up their jobs or work reduced hours to provide help to the victim of violence. This causes a further burden on society because the victim and maybe even their caretakers have to obtain government assistance to help pay their bills. Recent research has found that psychological trauma during childhood can change a child's brain. "Trauma is known to physically affect the brain and the body which causes anxiety, rage, and the ability to concentrate. They can also have problems remembering, trusting, and forming relationships." Since the brain becomes used to violence it may stay continually in an alert state (similar to being stuck in the fight or flight mode). "Researchers claim that the youth who are exposed to violence may have emotional, social, and cognitive problems. They may have trouble controlling emotions, paying attention in school, withdraw from friends, or show signs of post-traumatic stress disorder".
It is important for youth exposed to violence to understand how their bodies may react so they can take positive steps to counteract any possible short- and long-term negative effects (e.g., poor concentration, feelings of depression, heightened levels of anxiety). By taking immediate steps to mitigate the effects of the trauma they've experienced, negative repercussions can be reduced or eliminated. As an initial step, the youths need to understand why they may be feeling a certain way and to understand how the violence they have experienced may be causing negative feelings and making them behave differently. Pursuing a greater awareness of their feelings, perceptions, and negative emotions is the first step that should be taken as part of recovering from the trauma they have experienced. "Neuroscience research shows that the only way we can change the way we feel is by becoming aware of our inner experience and learning to befriend what is going on inside ourselves".
Some of the ways to combat the adverse effects of exposure to youth violence would be to try various mindfulness and movement activities, deep breathing exercises and other actions that enable youths to release their pent up emotions. Using these techniques will teach body awareness, reduce anxiety and nervousness, and reduce feelings of anger and annoyance.
Youth who have experienced violence benefit from having a close relationship with one or more people. This is important because trauma victims need safe and trustworthy people they can relate to and talk with about their experiences. Some youth do not have adult figures at home or anyone they can count on for guidance and comfort. Schools in neighborhoods where youth violence is prevalent should assign counselors to each student so that they receive regular guidance. In addition to counseling and therapy sessions and programs, it has been recommended that schools offer mentoring programs in which students can interact with adults who can be a positive influence on them. Another approach is to create more neighborhood programs to ensure that each child has a positive and stable place to go when school is not in session. Many children have benefited from formal organizations that aim to mentor and provide a safe environment for youth, especially those living in neighborhoods with higher rates of violence, such as Becoming a Man, CeaseFire Illinois, Chicago Area Project, Little Black Pearl, and Rainbow House. These programs are designed to give youth a safe place to go, stop violence from occurring, and offer counseling and mentoring to help break the cycle of violence. Without a safe place to go after school hours, youth are more likely to get into trouble, receive poor grades, drop out of school, and use drugs and alcohol. Gangs seek out youth who lack positive influences in their lives and need protection, which is why programs that offer a safe environment, rather than leaving youth to the streets, are so important.
Intimate partner violence
Intimate partner violence (or domestic violence) involves physical, sexual and emotional violence by an intimate partner or ex-partner. Although males can also be victims, intimate partner violence disproportionately affects females. It commonly occurs against girls within child marriages and early/forced marriages. Among romantically involved but unmarried adolescents it is sometimes called “dating violence”.
Sexual violence
Sexual violence includes non-consensual completed or attempted sexual contact and acts of a sexual nature not involving contact (such as voyeurism or sexual harassment); acts of sexual trafficking committed against someone who is unable to consent or refuse; and online exploitation.
Emotional or psychological violence
Emotional or psychological violence includes restricting a child’s movements, denigration, ridicule, threats and intimidation, discrimination, rejection and other non-physical forms of hostile treatment.
Intimate partner
Population-level surveys based on reports from victims provide the most accurate estimates of the prevalence of intimate partner violence and sexual violence in non-conflict settings. A study conducted by WHO in 10 mainly developing countries found that, among women aged 15 to 49 years, between 15% (Japan) and 70% (Ethiopia and Peru) of women reported physical and/or sexual violence by an intimate partner. A growing body of research on men and intimate partner violence focuses on men as both perpetrators and victims of violence, as well as on how to involve men and boys in anti-violence work.
Intimate partner and sexual violence cause serious short- and long-term physical, mental, sexual and reproductive health problems for victims and their children, and lead to high social and economic costs. These include fatal and non-fatal injuries, depression, post-traumatic stress disorder, unintended pregnancies, and sexually transmitted infections, including HIV.
Factors associated with the perpetration and experiencing of intimate partner violence are low levels of education, history of violence as a perpetrator, a victim or a witness of parental violence, harmful use of alcohol, attitudes that are accepting of violence as well as marital discord and dissatisfaction. Factors associated only with perpetration of intimate partner violence are having multiple partners, and antisocial personality disorder.
A recent theory named "The Criminal Spin" suggests a mutual flywheel effect between partners that is manifested by an escalation in the violence. A violent spin may occur in other forms of violence as well, but in intimate partner violence the distinctive element is the mutual spin, rooted in the unique situation and characteristics of an intimate relationship.
The primary prevention strategy with the best evidence for effectiveness for intimate partner violence is school-based programming for adolescents to prevent violence within dating relationships. Evidence is emerging for the effectiveness of several other primary prevention strategies—those that: combine microfinance with gender equality training; promote communication and relationship skills within communities; reduce access to, and the harmful use of alcohol; and change cultural gender norms.
Sexual
Sexual violence is any sexual act, attempt to obtain a sexual act, unwanted sexual comments or advances, or acts to traffic, or otherwise directed against a person's sexuality using coercion, by any person regardless of their relationship to the victim, in any setting. It includes rape, defined as the physically forced or otherwise coerced penetration of the vulva or anus with a penis, other body part or object.
Population-level surveys based on reports from victims estimate that between 0.3 and 11.5% of women reported experiencing sexual violence. Sexual violence has serious short- and long-term consequences on physical, mental, sexual and reproductive health for victims and for their children as described in the section on intimate partner violence. If perpetrated during childhood, sexual violence can lead to increased smoking, drug and alcohol misuse, and risky sexual behaviors in later life. It is also associated with perpetration of violence and being a victim of violence.
Many of the risk factors for sexual violence are the same as for domestic violence. Risk factors specific to sexual violence perpetration include beliefs in family honor and sexual purity, ideologies of male sexual entitlement and weak legal sanctions for sexual violence.
Few interventions to prevent sexual violence have been demonstrated to be effective. School-based programmes to prevent child sexual abuse by teaching children to recognize and avoid potentially sexually abusive situations are run in many parts of the world and appear promising, but require further research. To achieve lasting change, it is important to enact legislation and develop policies that protect women; address discrimination against women and promote gender equality; and help to move the culture away from violence.
Elder maltreatment
Elder maltreatment is a single or repeated act, or lack of appropriate action, occurring within any relationship where there is an expectation of trust which causes harm or distress to an older person.
While there is little information regarding the extent of maltreatment in elderly populations, especially in developing countries, it is estimated that 4–6% of elderly people in high-income countries have experienced some form of maltreatment at home. However, older people are often afraid to report cases of maltreatment to family, friends, or to the authorities. Data on the extent of the problem in institutions such as hospitals, nursing homes and other long-term care facilities are scarce. Elder maltreatment can lead to serious physical injuries and long-term psychological consequences. Elder maltreatment is predicted to increase as many countries are experiencing rapidly ageing populations.
Many strategies have been implemented to prevent elder maltreatment and to take action against it and mitigate its consequences including public and professional awareness campaigns, screening (of potential victims and abusers), caregiver support interventions (e.g. stress management, respite care), adult protective services and self-help groups. Their effectiveness has, however, not so far been well-established.
Targeted
Several rare but painful episodes of assassination, attempted assassination, and school shootings at elementary, middle, and high schools, as well as at colleges and universities in the United States, have led to a considerable body of research on ascertainable behaviors of persons who have planned or carried out such attacks. These studies (1995–2002) investigated what the authors called "targeted violence", described the "path to violence" of those who planned or carried out attacks, and laid out suggestions for law enforcement and educators. A major point from these studies is that targeted violence does not just "come out of the blue".
Everyday
As an anthropological concept, "everyday violence" may refer to the incorporation of different forms of violence (mainly political violence) into daily practices. Latin America and the Caribbean, the region with the highest murder rate in the world, experienced more than 2.5 million murders between 2000 and 2017.
Prevalence
Injuries and violence are a significant cause of death and burden of disease in all countries; however, they are not evenly distributed across or within countries. Violence-related injuries kill 1.25 million people every year, as of 2024. This is relatively similar to 2014 (1.3 million people or 2.5% of global mortality), 2013 (1.28 million people) and 1990 (1.13 million people). For people aged 15–44 years, violence is the fourth leading cause of death worldwide, as of 2014. Between 1990 and 2013, age-standardised death rates fell for self-harm and interpersonal violence. Of the deaths in 2013, roughly 842,000 were attributed to suicide, 405,000 to interpersonal violence, and 31,000 to collective violence and legal intervention. For each single death due to violence, there are dozens of hospitalizations, hundreds of emergency department visits, and thousands of doctors' appointments. Furthermore, violence often has lifelong consequences for physical and mental health and social functioning and can slow economic and social development; this is particularly the case when the violence occurs in childhood.
In 2013, of the estimated 405,000 deaths due to interpersonal violence globally, assault by firearm was the cause in 180,000 deaths, assault by sharp object was the cause in 114,000 deaths, and the remaining 110,000 deaths resulted from other causes.
Philosophical perspectives
Some philosophers have argued that any interpretation of reality is intrinsically violent. Slavoj Žižek in his book Violence stated that "something violent is the very symbolization of a thing." An ontological perspective considers the harm inflicted by the very interpretation of the world as a form of violence that is distinct from physical violence in that it is possible to avoid physical violence whereas some ontological violence is intrinsic to all knowledge.
Both Foucault and Arendt considered the relationship between power and violence but concluded that while related they are distinct.
In feminist philosophy, epistemic violence is the act of causing harm through an inability, rooted in ignorance, to understand the conversation of others. Some philosophers argue that this harm falls disproportionately on marginalized groups.
Brad Evans states that violence "represents a violation in the very conditions constituting what it means to be human as such", "is always an attack upon a person's dignity, their sense of selfhood, and their future", and "is both an ontological crime ... and a form of political ruination".
Factors and models of understanding
Violence cannot be attributed solely to either protective factors or risk factors; both factor groups are equally important in the prevention, intervention, and treatment of violence as a whole. The CDC outlines several risk and protective factors for youth violence at the individual, family, social and community levels.
Individual risk factors include poor behavioral control, high emotional stress, low IQ, and antisocial beliefs or attitudes. Family risk factors include authoritarian childrearing attitudes, inconsistent disciplinary practices, low emotional attachment to parents or caregivers, and low parental income and involvement. Social risk factors include social rejection, poor academic performance and commitment to school, and gang involvement or association with delinquent peers. Community risk factors include poverty, low community participation, and diminished economic opportunities.
On the other hand, individual protective factors include an intolerance towards deviance, higher IQ and GPA, elevated popularity and social skills, as well as religious beliefs. Family protective factors include a connectedness and ability to discuss issues with family members or adults, parent/family use of constructive coping strategies, and consistent parental presence during at least one of the following: when awakening, when arriving home from school, at dinner time, or when going to bed. Social protective factors include quality school relationships, close relationships with non-deviant peers, involvement in prosocial activities, and exposure to school climates that are: well supervised, use clear behavior rules and disciplinary approaches, and engage parents with teachers.
With many conceptual factors that occur at varying levels in the lives of those impacted, the exact causes of violence are complex. To represent this complexity, the ecological, or social ecological model is often used. The following four-level version of the ecological model is often used in the study of violence:
The first level identifies biological and personal factors that influence how individuals behave and increase their likelihood of becoming a victim or perpetrator of violence: demographic characteristics (age, education, income), genetics, brain lesions, personality disorders, substance abuse, and a history of experiencing, witnessing, or engaging in violent behaviour.
The second level focuses on close relationships, such as those with family and friends. In youth violence, for example, having friends who engage in or encourage violence can increase a young person's risk of being a victim or perpetrator of violence. For intimate partner violence, a consistent marker at this level of the model is marital conflict or discord in the relationship. In elder abuse, an important factor is stress due to the nature of the past relationship between the abused person and the caregiver.
The third level explores the community context—i.e., schools, workplaces, and neighbourhoods. Risk at this level may be affected by factors such as the existence of a local drug trade, the absence of social networks, and concentrated poverty. All these factors have been shown to be important in several types of violence.
Finally, the fourth level looks at the broad societal factors that help to create a climate in which violence is encouraged or inhibited: the responsiveness of the criminal justice system, social and cultural norms regarding gender roles or parent-child relationships, income inequality, the strength of the social welfare system, the social acceptability of violence, the availability of weapons, the exposure to violence in mass media, and political instability.
Child-rearing
While studies showing associations between physical punishment of children and later aggression cannot prove that physical punishment causes an increase in aggression, a number of longitudinal studies suggest that the experience of physical punishment has a direct causal effect on later aggressive behaviors. Cross-cultural studies have shown that greater prevalence of corporal punishment of children tends to predict higher levels of violence in societies. For instance, a 2005 analysis of 186 pre-industrial societies found that corporal punishment was more prevalent in societies which also had higher rates of homicide, assault, and war. In the United States, domestic corporal punishment has been linked to later violent acts against family members and spouses. The American family violence researcher Murray A. Straus believes that disciplinary spanking forms "the most prevalent and important form of violence in American families", whose effects contribute to several major societal problems, including later domestic violence and crime.
Psychology
The causes of violent behavior in people are often a topic of research in psychology. Neurobiologist Jan Volavka emphasizes that, for those purposes, "violent behavior is defined as overt and intentional physically aggressive behavior against another person."
Scientists do not agree on whether violence is inherent in human nature. Among prehistoric humans, there is archaeological evidence for both contentions of violence and peacefulness as primary characteristics.
Since violence is a matter of perception as well as a measurable phenomenon, psychologists have found variability in whether people perceive certain physical acts as "violent". For example, in a state where execution is a legalized punishment we do not typically perceive the executioner as "violent", though we may talk, in a more metaphorical way, of the state acting violently. Likewise, understandings of violence are linked to a perceived aggressor-victim relationship: hence psychologists have shown that people may not recognise defensive use of force as violent, even in cases where the amount of force used is significantly greater than in the original aggression.
The concept of violence normalization, known as socially sanctioned or structural violence, is a topic of increasing interest to researchers trying to understand violent behavior. It has been discussed at length by researchers in sociology, medical anthropology, psychology, psychiatry, philosophy, and bioarchaeology.
Evolutionary psychology offers several explanations for human violence in various contexts, such as sexual jealousy in humans, child abuse, and homicide. Goetz (2010) argues that humans are similar to most mammal species and use violence in specific situations. He writes that "Buss and Shackelford (1997a) proposed seven adaptive problems our ancestors recurrently faced that might have been solved by aggression: co-opting the resources of others, defending against attack, inflicting costs on same-sex rivals, negotiating status and hierarchies, deterring rivals from future aggression, deterring mate from infidelity, and reducing resources expended on genetically unrelated children."
Goetz writes that most homicides seem to start from relatively trivial disputes between unrelated men who then escalate to violence and death. He argues that such conflicts occur when there is a status dispute between men of relatively similar status. If there is a great initial status difference, then the lower status individual usually offers no challenge, and if challenged, the higher status individual usually ignores the lower status individual. At the same time, an environment of great inequalities between people may cause those at the bottom to use more violence in attempts to gain status.
Media
Research into the media and violence examines whether a link exists between consuming media violence and subsequent aggressive and violent behaviour. Although some scholars had claimed that media violence may increase aggression, this view has come increasingly into doubt in the scholarly community. It was rejected by the US Supreme Court in the Brown v EMA case, as well as in a review of video game violence by the Australian Government (2010), which concluded that evidence for harmful effects was inconclusive at best and that the rhetoric of some scholars was not matched by good data.
Mental disorders
Prevention
The threat and enforcement of physical punishment has been a tried and tested method of preventing some violence since civilisation began. It is used in various degrees in most countries.
Public awareness campaigns
Cities and counties throughout the United States organize "Violence Prevention Months", in which the mayor, by proclamation, or the county, by resolution, encourages the private, community and public sectors to engage in activities that raise awareness, through art, music, lectures and events, that violence is not acceptable. For example, Karen Earle Lile, Violence Prevention Month coordinator in Contra Costa County, California, created a Wall of Life: children drew pictures that were put up on the walls of banks and public spaces, displaying a child's view of violence they had witnessed and how it affected them, in an effort to draw attention to how violence affects the community, not just the people involved.
Interpersonal violence
A review of scientific literature by the World Health Organization on the effectiveness of strategies to prevent interpersonal violence identified the seven strategies below as being supported by either strong or emerging evidence for effectiveness. These strategies target risk factors at all four levels of the ecological model.
Child–caregiver relationships
Among the most effective such programmes to prevent child maltreatment and reduce childhood aggression are the Nurse Family Partnership home-visiting programme and the Triple P (Parenting Program). There is also emerging evidence that these programmes reduce convictions and violent acts in adolescence and early adulthood, and probably help decrease intimate partner violence and self-directed violence in later life.
Life skills in youth
Evidence shows that the life skills acquired in social development programmes can reduce involvement in violence, improve social skills, boost educational achievement and improve job prospects. Life skills refer to social, emotional, and behavioural competencies which help children and adolescents effectively deal with the challenges of everyday life.
Gender equality
Evaluation studies are beginning to support community interventions that aim to prevent violence against women by promoting gender equality. For instance, evidence suggests that programmes that combine microfinance with gender equity training can reduce intimate partner violence. School-based programmes such as the Safe Dates programme in the United States of America and the Youth Relationship Project in Canada have been found to be effective in reducing dating violence.
Cultural norms
Rules or expectations of behaviour – norms – within a cultural or social group can encourage violence. Interventions that challenge cultural and social norms supportive of violence can prevent acts of violence and have been widely used, but the evidence base for their effectiveness is currently weak. The effectiveness of interventions addressing dating violence and sexual abuse among teenagers and young adults by challenging social and cultural norms related to gender is supported by some evidence.
Support programmes
Interventions to identify victims of interpersonal violence and provide effective care and support are critical for protecting health and breaking cycles of violence from one generation to the next. Examples for which evidence of effectiveness is emerging include: screening tools to identify victims of intimate partner violence and refer them to appropriate services; psychosocial interventions—such as trauma-focused cognitive behavioural therapy—to reduce mental health problems associated with violence, including post-traumatic stress disorder; and protection orders, which prohibit a perpetrator from contacting the victim, to reduce repeat victimization among victims of intimate partner violence.
Collective violence
Not surprisingly, scientific evidence about the effectiveness of interventions to prevent collective violence is lacking. However, policies that facilitate reductions in poverty, that make decision-making more accountable, that reduce inequalities between groups, as well as policies that reduce access to biological, chemical, nuclear and other weapons have been recommended. When planning responses to violent conflicts, recommended approaches include assessing at an early stage who is most vulnerable and what their needs are, co-ordination of activities between various players and working towards global, national and local capabilities so as to deliver effective health services during the various stages of an emergency.
Criminal justice
One of the main functions of law is to regulate violence. Sociologist Max Weber stated that the state claims a monopoly on the legitimate use of physical force within the confines of a specific territory. Law enforcement is the main means of regulating nonmilitary violence in society. Governments regulate the use of violence through legal systems governing individuals and political authorities, including the police and military. Civil societies authorize some amount of violence, exercised through the police power, to maintain the status quo and enforce laws.
However, German political theorist Hannah Arendt noted: "Violence can be justifiable, but it never will be legitimate ... Its justification loses in plausibility the farther its intended end recedes into the future. No one questions the use of violence in self-defence, because the danger is not only clear but also present, and the end justifying the means is immediate". Arendt made a clear distinction between violence and power. Most political theorists regarded violence as an extreme manifestation of power whereas Arendt regarded the two concepts as opposites.
In the 20th century, in acts of democide, governments may have killed more than 260 million of their own people through police brutality, execution, massacre, slave labour camps, and sometimes intentional famine.
Violent acts that are not carried out by the military or police and that are not in self-defense are usually classified as crimes, although not all crimes are violent crimes. The Federal Bureau of Investigation (FBI) classifies violence resulting in homicide into criminal homicide and justifiable homicide (e.g. self-defense).
The criminal justice approach sees its main task as enforcing laws that proscribe violence and ensuring that "justice is done". The notions of individual blame, responsibility, guilt, and culpability are central to criminal justice's approach to violence and one of the criminal justice system's main tasks is to "do justice", i.e. to ensure that offenders are properly identified, that the degree of their guilt is as accurately ascertained as possible, and that they are punished appropriately. To prevent and respond to violence, the criminal justice approach relies primarily on deterrence, incarceration and the punishment and rehabilitation of perpetrators.
The criminal justice approach, beyond justice and punishment, has traditionally emphasized indicated interventions, aimed at those who have already been involved in violence, either as victims or as perpetrators. One of the main reasons offenders are arrested, prosecuted, and convicted is to prevent further crimes—through deterrence (threatening potential offenders with criminal sanctions if they commit crimes), incapacitation (physically preventing offenders from committing further crimes by locking them up) and through rehabilitation (using time spent under state supervision to develop skills or change one's psychological make-up to reduce the likelihood of future offences).
In recent decades in many countries in the world, the criminal justice system has taken an increasing interest in preventing violence before it occurs. For instance, much of community and problem-oriented policing aims to reduce crime and violence by altering the conditions that foster it—and not to increase the number of arrests. Indeed, some police leaders have gone so far as to say the police should primarily be a crime prevention agency. Juvenile justice systems—an important component of criminal justice systems—are largely based on the belief in rehabilitation and prevention. In the US, the criminal justice system has, for instance, funded school- and community-based initiatives to reduce children's access to guns and teach conflict resolution. Despite this, force is used routinely against juveniles by police. In 1974, the US Department of Justice assumed primary responsibility for delinquency prevention programmes and created the Office of Juvenile Justice and Delinquency Prevention, which has supported the "Blueprints for violence prevention" programme at the University of Colorado Boulder.
Public health
The public health approach is a science-driven, population-based, interdisciplinary, intersectoral approach based on the ecological model which emphasizes primary prevention. Rather than focusing on individuals, the public health approach aims to provide the maximum benefit for the largest number of people, and to extend better care and safety to entire populations. The public health approach is interdisciplinary, drawing upon knowledge from many disciplines including medicine, epidemiology, sociology, psychology, criminology, education and economics. Because all forms of violence are multi-faceted problems, the public health approach emphasizes a multi-sectoral response. Experience has repeatedly shown that cooperative efforts from such diverse sectors as health, education, social welfare, and criminal justice are often necessary to solve what are usually assumed to be purely "criminal" or "medical" problems. The public health approach considers that violence, rather than being the result of any single factor, is the outcome of multiple risk factors and causes, interacting at four levels of a nested hierarchy (individual, close relationship/family, community and wider society) of the social ecological model.
From a public health perspective, prevention strategies can be classified into three types:
Primary prevention – approaches that aim to prevent violence before it occurs.
Secondary prevention – approaches that focus on the more immediate responses to violence, such as pre-hospital care, emergency services or treatment for sexually transmitted infections following a rape.
Tertiary prevention – approaches that focus on long-term care in the wake of violence, such as rehabilitation and reintegration, and attempt to lessen trauma or reduce long-term disability associated with violence.
A public health approach emphasizes the primary prevention of violence, i.e. stopping it from occurring in the first place. Until recently, this approach has been relatively neglected in the field, with the majority of resources directed towards secondary or tertiary prevention. Perhaps the most critical element of a public health approach to prevention is the ability to identify underlying causes rather than focusing upon more visible "symptoms". This allows for the development and testing of effective approaches to address the underlying causes and so improve health.
The public health approach is an evidence-based and systematic process involving the following four steps:
Defining the problem conceptually and numerically, using statistics that accurately describe the nature and scale of violence, the characteristics of those most affected, the geographical distribution of incidents, and the consequences of exposure to such violence.
Investigating why the problem occurs by determining its causes and correlates, the factors that increase or decrease the risk of its occurrence (risk and protective factors) and the factors that might be modifiable through intervention.
Exploring ways to prevent the problem by using the above information and designing, monitoring and rigorously assessing the effectiveness of programmes through outcome evaluations.
Disseminating information on the effectiveness of programmes and increasing the scale of proven effective programmes. Approaches to prevent violence, whether targeted at individuals or entire communities, must be properly evaluated for their effectiveness and the results shared. This step also includes adapting programmes to local contexts and subjecting them to rigorous re-evaluation to ensure their effectiveness in the new setting.
In many countries, violence prevention is still a new or emerging field in public health. The public health community has started only recently to realize the contributions it can make to reducing violence and mitigating its consequences. In 1949, Gordon called for injury prevention efforts to be based on the understanding of causes, in a similar way to prevention efforts for communicable and other diseases. In 1962, Gomez, referring to the WHO definition of health, stated that it is obvious that violence does not contribute to "extending life" or to a "complete state of well-being". He defined violence as an issue that public health experts needed to address and stated that it should not be the primary domain of lawyers, military personnel, or politicians.
However, it is only in the last 30 years that public health has begun to address violence, and only in the last fifteen years has it done so at the global level. This is a much shorter period of time than public health has spent tackling other health problems of comparable magnitude and with similarly severe lifelong consequences.
The global public health response to interpersonal violence began in earnest in the mid-1990s. In 1996, the World Health Assembly adopted Resolution WHA49.25 which declared violence "a leading worldwide public health problem" and requested that the World Health Organization (WHO) initiate public health activities to (1) document and characterize the burden of violence, (2) assess the effectiveness of programmes, with particular attention to women and children and community-based initiatives, and (3) promote activities to tackle the problem at the international and national levels. The World Health Organization's initial response to this resolution was to create the Department of Violence and Injury Prevention and Disability and to publish the World report on violence and health (2002).
The case for the public health sector addressing interpersonal violence rests on four main arguments. First, the significant amount of time health care professionals dedicate to caring for victims and perpetrators of violence has made them familiar with the problem and has led many, particularly in emergency departments, to mobilize to address it. The information, resources, and infrastructures the health care sector has at its disposal are an important asset for research and prevention work. Second, the magnitude of the problem and its potentially severe lifelong consequences and high costs to individuals and wider society call for population-level interventions typical of the public health approach. Third, the criminal justice approach, the other main approach to addressing violence, has traditionally been more geared towards violence that occurs between male youths and adults in the street and other public places—which makes up the bulk of homicides in most countries—than towards violence occurring in private settings such as child maltreatment, intimate partner violence and elder abuse—which makes up the largest share of non-fatal violence. Fourth, evidence is beginning to accumulate that a science-based public health approach is effective at preventing interpersonal violence.
Human rights
The human rights approach is based on the obligations of states to respect, protect and fulfill human rights and therefore to prevent, eradicate and punish violence. It recognizes violence as a violation of many human rights: the rights to life, liberty, autonomy and security of the person; the rights to equality and non-discrimination; the rights to be free from torture and cruel, inhuman and degrading treatment or punishment; the right to privacy; and the right to the highest attainable standard of health. These human rights are enshrined in international and regional treaties and national constitutions and laws, which stipulate the obligations of states, and include mechanisms to hold states accountable. The Convention on the Elimination of All Forms of Discrimination Against Women, for example, requires that countries party to the Convention take all appropriate steps to end violence against women. The Convention on the Rights of the Child in its Article 19 states that States Parties shall take all appropriate legislative, administrative, social and educational measures to protect the child from all forms of physical or mental violence, injury or abuse, neglect or negligent treatment, maltreatment or exploitation, including sexual abuse, while in the care of parent(s), legal guardian(s) or any other person who has the care of the child.
Geographical context
Violence, as defined in the dictionary of human geography, "appears whenever power is in jeopardy" and "in and of itself stands emptied of strength and purpose: it is part of a larger matrix of socio-political power struggles". Violence can be divided into three broad categories—direct violence, structural violence and cultural violence. Thus defined and delineated, it is of note, as Hyndman says, that "geography came late to theorizing violence" in comparison to other social sciences. Social and human geography, rooted in the humanist, Marxist, and feminist subfields that emerged following the early positivist approaches and subsequent behavioral turn, have long been concerned with social and spatial justice. Along with critical geographers and political geographers, it is these groupings of geographers that most often interact with violence. Keeping this idea of social/spatial justice via geography in mind, it is worthwhile to look at geographical approaches to violence in the context of politics.
Derek Gregory and Alan Pred assembled the influential edited collection Violent Geographies: Fear, Terror, and Political Violence, which demonstrates how place, space, and landscape are foremost factors in the real and imagined practices of organized violence both historically and in the present. Evidently, political violence often gives the state a part to play. When "modern states not only claim a monopoly of the legitimate means of violence; they also routinely use the threat of violence to enforce the rule of law", the law not only becomes a form of violence but is violence. Philosopher Giorgio Agamben's concepts of state of exception and homo sacer are useful to consider within a geography of violence. The state, in the grip of a perceived, potential crisis (whether legitimate or not), takes preventative legal measures, such as a suspension of rights (it is in this climate, as Agamben demonstrates, that the formation of the Social Democratic and Nazi governments' lager or concentration camp can occur). However, when this "in limbo" reality is designed to be in place "until further notice…the state of exception thus ceases to be referred to as an external and provisional state of factual danger and comes to be confused with juridical rule itself". For Agamben, the physical space of the camp "is a piece of land placed outside the normal juridical order, but it is nevertheless not simply an external space". At the scale of the body, in the state of exception, a person is so removed from their rights by "juridical procedures and deployments of power" that "no act committed against them could appear any longer as a crime"; in other words, people become only homo sacer. Guantanamo Bay could also be said to represent the physicality of the state of exception in space, and those held there can just as easily be seen as homo sacer.
In the 1970s, genocides in Cambodia under the Khmer Rouge and Pol Pot resulted in the deaths of over two million Cambodians (25% of the Cambodian population), forming one of the many contemporary examples of state-sponsored violence. About fourteen thousand of these murders occurred at Choeung Ek, which is the best-known of the extermination camps referred to as the Killing Fields. The killings were arbitrary; for example, a person could be killed for wearing glasses, since that was seen as associating them with intellectuals and therefore as making them part of the enemy. People were murdered with impunity because it was deemed no crime; Cambodians were made homo sacer in a condition of bare life. The Killing Fields—manifestations of Agamben's concept of camps beyond the normal rule of law—featured the state of exception. As part of Pol Pot's "ideological intent…to create a purely agrarian society or cooperative", he "dismantled the country's existing economic infrastructure and depopulated every urban area". Forced movement, such as that applied by Pol Pot, is a clear display of structural violence. When "symbols of Cambodian society were equally disrupted, social institutions of every kind…were purged or torn down", cultural violence (defined as occurring when "any aspect of culture such as language, religion, ideology, art, or cosmology is used to legitimize direct or structural violence") was added to the structural violence of forced movement and to the direct violence, such as murder, at the Killing Fields. Vietnam eventually intervened and the genocide officially ended. However, ten million landmines left by opposing guerrillas in the 1970s continue to create a violent landscape in Cambodia.
Human geography, though coming late to the theorizing table, has tackled violence through many lenses, including anarchist geography, feminist geography, Marxist geography, political geography, and critical geography. However, Adriana Cavarero notes that, "as violence spreads and assumes unheard-of forms, it becomes difficult to name in contemporary language". Cavarero proposes that, in facing such a truth, it is prudent to reconsider violence as "horrorism"; that is, "as though ideally all the…victims, instead of their killers, ought to determine the name". With geography often adding the forgotten spatial aspect to theories of social science, rather than creating them solely within the discipline, the self-reflexive contemporary geography of today may have an extremely important place in this current (re)imagining of violence, exemplified by Cavarero.
Epidemiology
As of 2010, all forms of violence resulted in about 1.34 million deaths per year, up from about 1 million in 1990. Suicide accounted for about 883,000 of these deaths, interpersonal violence for 456,000, and collective violence for 18,000. Deaths due to collective violence had decreased from 64,000 in 1990.
By way of comparison, the roughly 1.5 million deaths a year due to violence exceed the number of deaths due to tuberculosis (1.34 million), road traffic injuries (1.21 million), and malaria (830,000), but fall slightly below the number of people who die from HIV/AIDS (1.77 million).
For every death due to violence, there are numerous nonfatal injuries. In 2008, over 16 million cases of non-fatal violence-related injuries were severe enough to require medical attention. Beyond deaths and injuries, forms of violence such as child maltreatment, intimate partner violence, and elder maltreatment have been found to be highly prevalent.
Self-directed violence
In the last 45 years, suicide rates have increased by 60% worldwide. Suicide is among the three leading causes of death among those aged 15–44 years in some countries, and the second leading cause of death in the 10–24 years age group. These figures do not include suicide attempts which are up to 20 times more frequent than suicide. Suicide was the 16th leading cause of death worldwide in 2004 and is projected to increase to the 12th in 2030. Although suicide rates have traditionally been highest among the male elderly, rates among young people have been increasing to such an extent that they are now the group at highest risk in a third of countries, in both developed and developing countries.
Interpersonal violence
Rates and patterns of violent death vary by country and region. In recent years, homicide rates have been highest in developing countries in Sub-Saharan Africa and in Latin America and the Caribbean, and lowest in East Asia, the western Pacific, and some countries in northern Africa. Studies show a strong, inverse relationship between homicide rates and both economic development and economic equality. Poorer countries, especially those with large gaps between the rich and the poor, tend to have higher rates of homicide than wealthier countries. Homicide rates differ markedly by age and sex. Gender differences are least marked for children. For the 15 to 29 age group, male rates were nearly six times female rates; for the remaining age groups, male rates were two to four times those for females.
Studies in a number of countries show that, for every homicide among young people age 10 to 24, 20 to 40 other young people receive hospital treatment for a violent injury.
Forms of violence such as child maltreatment and intimate partner violence are highly prevalent. Approximately 20% of women and 5–10% of men report being sexually abused as children, while 25–50% of all children report being physically abused. A WHO multi-country study found that between 15 and 71% of women reported experiencing physical and/or sexual violence by an intimate partner at some point in their lives.
Collective violence
Wars grab headlines, but the individual risk of dying violently in an armed conflict is today relatively low, much lower than the risk of violent death in many countries that are not suffering from an armed conflict. For example, between 1976 and 2008, African Americans were victims of 329,825 homicides. Although there is a widespread perception that war is the most dangerous form of armed violence in the world, the average person living in a conflict-affected country had a risk of dying violently in the conflict of about 2.0 per 100,000 population between 2004 and 2007. This can be compared to the average world homicide rate of 7.6 per 100,000 people. This illustration highlights the value of accounting for all forms of armed violence rather than focusing exclusively on conflict-related violence. Certainly, there are huge variations in the risk of dying from armed conflict at the national and subnational level, and the risk of dying violently in a conflict in specific countries remains extremely high. In Iraq, for example, the direct conflict death rate for 2004–07 was 65 per 100,000 people per year and, in Somalia, 24 per 100,000 people. This rate reached peaks of 91 per 100,000 in Iraq in 2006 and 74 per 100,000 in Somalia in 2007.
History
Scientific evidence for warfare has come from settled, sedentary communities. Some studies argue that humans have a predisposition for violence (chimpanzees, also great apes, have been known to kill members of competing groups for resources such as food). A comparison across mammal species found that humans had a Paleolithic adult homicide rate of about 2%. This would be lower than for some other animals, but still high. However, that study took into account the infanticide rates of some other animals, such as meerkats, but not of humans, for whom estimates of children killed by infanticide in the Mesolithic and Neolithic eras range from 15 to 50 percent. Other evidence suggests that organized, large-scale, militaristic, or regular human-on-human violence was absent for the vast majority of the human timeline, and is first documented to have started only relatively recently in the Holocene, an epoch that began about 11,700 years ago, probably with the advent of higher population densities due to sedentism. Social anthropologist Douglas P. Fry writes that scholars are divided on the origins of this possible increase in violence, in other words, of war-like behavior.
Jared Diamond in his books Guns, Germs and Steel and The Third Chimpanzee posits that the rise of large-scale warfare is the result of advances in technology and city-states. For instance, the rise of agriculture provided a significant increase in the number of individuals that a region could sustain over hunter-gatherer societies, allowing for development of specialized classes such as soldiers, or weapons manufacturers.
In academia, the idea of a peaceful pre-history and of non-violent tribal societies gained popularity with the post-colonial perspective. The trend, starting in archaeology and spreading to anthropology, reached its height in the latter half of the 20th century. However, some newer research in archaeology and bioarchaeology may provide evidence that violence within and among groups is not a recent phenomenon. According to the book The Bioarchaeology of Violence, violence is a behavior that is found throughout human history.
Lawrence H. Keeley at the University of Illinois writes in War Before Civilization that 87% of tribal societies were at war more than once per year, and that 65% of them were fighting continuously. He writes that the attrition rate of numerous close-quarter clashes, which characterize endemic warfare, produces casualty rates of up to 60%, compared to 1% of the combatants as is typical in modern warfare. "Primitive Warfare" of these small groups or tribes was driven by the basic need for sustenance and violent competition.
Fry explores Keeley's argument in depth and counters that such sources erroneously focus on the ethnography of hunters and gatherers in the present, whose culture and values have been infiltrated externally by modern civilization, rather than the actual archaeological record spanning some two million years of human existence. Fry determines that all present ethnographically studied tribal societies, "by the very fact of having been described and published by anthropologists, have been irrevocably impacted by history and modern colonial nation states" and that "many have been affected by state societies for at least 5000 years."
The relatively peaceful period since World War II is known as the Long Peace.
The Better Angels of Our Nature
Steven Pinker's 2011 book, The Better Angels of Our Nature, argued that modern society is less violent than in periods of the past, whether on the short scale of decades or long scale of centuries or millennia. He argues for a paleolithic homicide rate of 15%.
Steven Pinker argues that by every possible measure, every type of violence has drastically decreased since ancient and medieval times. A few centuries ago, for example, genocide was a standard practice in all kinds of warfare and was so common that historians did not even bother to mention it. Cannibalism and slavery have been greatly reduced in the last thousand years, and capital punishment is now banned in many countries. According to Pinker, rape, murder, warfare and animal cruelty have all seen drastic declines in the 20th century. Pinker's analyses have also been criticized, concerning the statistical question of how to measure violence and whether it is in fact declining.
Pinker's observation of the decline in interpersonal violence echoes the work of Norbert Elias, who attributes the decline to a "civilizing process", in which the state's monopolization of violence, the maintenance of socioeconomic interdependencies or "figurations", and the maintenance of behavioural codes in culture all contribute to the development of individual sensibilities, which increase the repugnance of individuals towards violent acts. According to a 2010 study, non-lethal violence, such as assaults or bullying appear to be declining as well.
Some scholars disagree with the argument that all violence is decreasing, arguing that not all types of violent behaviour are lower now than in the past. They suggest that research typically focuses on lethal violence, often looking at homicide rates or deaths due to warfare, while ignoring less obvious forms of violence.
Society and culture
Beyond deaths and injuries, highly prevalent forms of violence (such as child maltreatment and intimate partner violence) have serious lifelong non-injury health consequences. Victims may engage in high-risk behaviours such as alcohol and substance misuse and smoking, which in turn can contribute to cardiovascular disorders, cancers, depression, diabetes and HIV/AIDS, resulting in premature death. The balances of prevention, mitigation, mediation and exacerbation are complex, and vary with the underpinnings of violence.
Economic effects
In countries with high levels of violence, economic growth can be slowed down, personal and collective security eroded, and social development impeded. Families edging out of poverty and investing in schooling their sons and daughters can be ruined through the violent death or severe disability of the main breadwinner. Communities can be caught in poverty traps where pervasive violence and deprivation form a vicious circle that stifles economic growth. For societies, meeting the direct costs of health, criminal justice, and social welfare responses to violence diverts many billions of dollars from more constructive societal spending. The much larger indirect costs of violence due to lost productivity and lost investment in education work together to slow economic development, increase socioeconomic inequality, and erode human and social capital.
Additionally, communities with high levels of violence do not provide the stability and predictability vital for a prospering business economy. Individuals are less likely to invest money and effort towards growth in such unstable and violent conditions. One possible piece of evidence is the study by Baten and Gust, which used regicide as a proxy measure for interpersonal violence and depicted the influence of high interpersonal violence on economic development and levels of investment. The results of the research demonstrate a correlation between human capital and interpersonal violence.
In 2016, the Institute for Economics and Peace released the Economic Value of Peace report, which estimates the economic impact of violence and conflict on the global economy; the total economic impact of violence on the world economy in 2015 was estimated to be $13.6 trillion in purchasing power parity terms.
Religion and politics
Religious and political ideologies have been the cause of interpersonal violence throughout history. Ideologues often falsely accuse others of violence, such as the ancient blood libel against Jews, the medieval accusations of casting witchcraft spells against women, and modern accusations of satanic ritual abuse against day care center owners and others.
Both supporters and opponents of the 21st-century War on Terrorism regard it largely as an ideological and religious war. In 2007, US politician John Edwards said the War on Terror was nothing more than "a slogan" and "a bumper sticker". In 1992, Leon Hadar, a former research fellow with the US Cato Institute, considered that it was not "in America's interest to launch a crusade for democracy, neither is it in her interest to be perceived as the guarantor of the status quo and the major obstacle to reform".
Vittorio Bufacchi describes two different modern concepts of violence, one the "minimalist conception" of violence as an intentional act of excessive or destructive force, the other the "comprehensive conception" which includes violations of rights, including a long list of human needs.
Anti-capitalists say that capitalism is violent, that private property and profit survive only because police violence defends them, and that capitalist economies need war to expand. In this view, capitalism results in a form of structural violence that stems from inequality, environmental damage, and the exploitation of women and people of colour.
Frantz Fanon critiqued the violence of colonialism and wrote about the counter violence of the "colonized victims."
Throughout history, most religions and individuals like Mahatma Gandhi have preached that humans are capable of eliminating individual violence and organizing societies through purely nonviolent means. Gandhi himself once wrote: "A society organized and run on the basis of complete non-violence would be the purest anarchy." Modern political ideologies which espouse similar views include pacifist varieties of voluntarism, mutualism, anarchism and libertarianism.
Luther Seminary Old Testament scholar Terence E. Fretheim has written about violence in the Old Testament.
See also
Aestheticization of violence
Aggression
Ahimsa
Alternatives to Violence Project
Communal violence
Corporal punishment
De-escalation
Domestic violence
Fight-or-flight response
Harm principle
Hunting
Legislative violence
Non Violent Resistance (psychological intervention)
Nonviolence
Nonviolent Communication
Nonviolent resistance
Nonviolent revolution
Pacifism
Parasitism
Predation
Religious violence
Resentment
Sectarian violence
Turning the other cheek
Violence begets violence
War
Notes
References
Sources
Barzilai, Gad (2003). Communities and Law: Politics and Cultures of Legal Identities. Ann Arbor: University of Michigan Press.
Benjamin, Walter. Critique of Violence.
Flannery, D.J., Vazsonyi, A.T. & Waldman, I.D. (Eds.) (2007). The Cambridge Handbook of Violent Behavior and Aggression. Cambridge University Press.
Malešević, Siniša (2010). The Sociology of War and Violence. Cambridge University Press.
Nazaretyan, A.P. (2007). Violence and Non-Violence at Different Stages of World History: A View from the Hypothesis of Techno-Humanitarian Balance. In: History & Mathematics. Moscow: KomKniga/URSS. pp. 127–48.
External links
Violence prevention at World Health Organization
Violence prevention at Centers for Disease Control and Prevention
Violence prevention at American Psychological Association
World Report on Violence Against Children at Secretary-General of the United Nations
Hidden in Plain Sight: A statistical analysis of violence against children at UNICEF
Heat and Violence
Social conflict
Concepts in ethics
Crimes
Dispute resolution
Human behavior
Harassment and bullying
Eratosthenes of Cyrene (276 BC – 194 BC) was an Ancient Greek polymath: a mathematician, geographer, poet, astronomer, and music theorist. He was a man of learning, becoming the chief librarian at the Library of Alexandria. His work is comparable to what is now known as the study of geography, and he introduced some of the terminology still used today.
He is best known for being the first person known to calculate the Earth's circumference, which he did by using the extensive survey results he could access in his role at the Library. His calculation was remarkably accurate (his error margin turned out to be less than 1%). He was also the first person to calculate Earth's axial tilt, which similarly proved to have remarkable accuracy. He created the first global projection of the world, incorporating parallels and meridians based on the available geographic knowledge of his era.
Eratosthenes was the founder of scientific chronology; he used Egyptian and Persian records to estimate the dates of the main events of the Trojan War, dating the sack of Troy to 1183 BC. In number theory, he introduced the sieve of Eratosthenes, an efficient method of identifying prime numbers and composite numbers.
He was a figure of influence in many fields who yearned to understand the complexities of the entire world. His devotees nicknamed him Pentathlos, after the Olympians who were well-rounded competitors, for he had proven himself to be knowledgeable in every area of learning. Yet, according to an entry in the Suda (a 10th-century encyclopedia), some critics scorned him, calling him Number 2 because he always came in second in all his endeavours.
Life
The son of Aglaos, Eratosthenes was born in 276 BC in Cyrene. Now part of modern-day Libya, Cyrene had been founded by Greeks centuries earlier and became the capital of Pentapolis (North Africa), a country of five cities: Cyrene, Arsinoe, Berenice, Ptolemias, and Apollonia. Alexander the Great conquered Cyrene in 332 BC, and following his death in 323 BC, its rule was given to one of his generals, Ptolemy I Soter, the founder of the Ptolemaic Kingdom. Under Ptolemaic rule the economy prospered, based largely on the export of horses and silphium, a plant used for rich seasoning and medicine. Cyrene became a place of cultivation, where knowledge blossomed. Like any young Greek at the time, Eratosthenes would have studied in the local gymnasium, where he would have learned physical skills and social discourse as well as reading, writing, arithmetic, poetry, and music.
Eratosthenes went to Athens to further his studies. There he was taught Stoicism by its founder, Zeno of Citium, in philosophical lectures on living a virtuous life. He then studied under Aristo of Chios, who led a more cynical school of philosophy. He also studied under Arcesilaus of Pitane, the head of the Platonic Academy. His interest in Plato led him to write his first work at a scholarly level, Platonikos, inquiring into the mathematical foundation of Plato's philosophies. Eratosthenes was a man of many perspectives and investigated the art of poetry under Callimachus. He wrote poems: one in hexameters called Hermes, illustrating the god's life history; and another in elegiacs, called Erigone, describing the suicide of the Athenian maiden Erigone (daughter of Icarius). He wrote Chronographies, a text that scientifically depicted dates of importance, beginning with the Trojan War. This work was highly esteemed for its accuracy. George Syncellus was later able to preserve from Chronographies a list of 38 kings of the Egyptian Thebes. Eratosthenes also wrote Olympic Victors, a chronology of the winners of the Olympic Games. It is not known when he wrote his works, but they highlighted his abilities.
These works and his great poetic abilities led the king Ptolemy III Euergetes to seek to place him as a librarian at the Library of Alexandria in the year 245 BC. Eratosthenes, then thirty years old, accepted Ptolemy's invitation and traveled to Alexandria, where he lived for the rest of his life. Within about five years he became Chief Librarian, a position that the poet Apollonius Rhodius had previously held. As head of the library Eratosthenes tutored the children of Ptolemy, including Ptolemy IV Philopator who became the fourth Ptolemaic pharaoh. He expanded the library's holdings: in Alexandria all books had to be surrendered for duplication. It was said that these were copied so accurately that it was impossible to tell if the library had returned the original or the copy.
He sought to maintain the reputation of the Library of Alexandria against competition from the Library of Pergamum. Eratosthenes created a whole section devoted to the examination of Homer, and acquired original works of great tragic dramas of Aeschylus, Sophocles and Euripides.
Eratosthenes made several important contributions to mathematics and science, and was a friend of Archimedes. Around 255 BC, he invented the armillary sphere. In On the Circular Motions of the Celestial Bodies, Cleomedes credited him with having calculated the Earth's circumference around 240 BC, with high accuracy.
Eratosthenes believed there was both good and bad in every nation and criticized Aristotle for arguing that humanity was divided into Greeks and barbarians, as well as for arguing that the Greeks should keep themselves racially pure. As he aged, he contracted ophthalmia, becoming blind around 195 BC. Losing the ability to read and to observe nature plagued and depressed him, leading him to voluntarily starve himself to death. He died in 194 BC at the age of 82 in Alexandria.
Scholarly career
Measurement of Earth's circumference
The measurement of Earth's circumference is the most famous among the results obtained by Eratosthenes, who estimated that the meridian has a length of 252,000 stadia, with an error relative to the true value of between −2.4% and +0.8%, depending on the value assumed for the stadion. Eratosthenes described his arc measurement technique in a book entitled On the Measurement of the Earth, which has not been preserved. However, a simplified version of the method has been preserved, as described by Cleomedes.
The simplified method works by considering two cities along the same meridian and measuring both the distance between them and the difference in angles of the shadows cast by the sun on a vertical rod (a gnomon) in each city at noon on the summer solstice. The two cities used were Alexandria and Syene (modern Aswan), and the distance between the cities was measured by professional bematists. A geometric calculation reveals that the circumference of the Earth is the distance between the two cities divided by the difference in shadow angles expressed as a fraction of one turn.
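The geometry above reduces to a single proportion. A minimal sketch in Python, using the figures traditionally reported for this measurement (a shadow-angle difference of 7.2°, i.e. 1/50 of a circle, and 5,000 stadia between the cities; these specific numbers are illustrative and are not stated in this article):

```python
def circumference_from_shadow(angle_deg: float, distance_stadia: float) -> float:
    """Circumference = distance between the cities divided by the
    shadow-angle difference expressed as a fraction of one full turn."""
    fraction_of_turn = angle_deg / 360.0
    return distance_stadia / fraction_of_turn

# 7.2 degrees is 1/50 of a turn, so the circumference is 50 times
# the Alexandria-Syene distance: 5,000 stadia * 50 = 250,000 stadia.
print(circumference_from_shadow(7.2, 5_000))  # 250000.0
```

The same proportion works for any pair of same-meridian observations; only the measured angle and distance change.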
Geography
Eratosthenes continued building on his knowledge about the Earth. Using his discoveries about its size and shape, he began to sketch it. In the Library of Alexandria he had access to various travel books, which contained items of information and representations of the world that needed to be pieced together in some organized format. In his three-volume work Geography (Geographika), he described and mapped his entire known world, even dividing the Earth into five climate zones: two freezing zones around the poles, two temperate zones, and a zone encompassing the equator and the tropics. This book is the first recorded instance of many terms still in use today, including the name of the discipline geography. He placed grids of overlapping lines over the surface of the Earth. He used parallels and meridians to link together every place in the world. It was now possible to estimate one's distance from remote locations with this network over the surface of the Earth. In the Geography the names of over 400 cities and their locations were shown, which had never been achieved before. However, his Geography has been lost to history, although fragments of the work can be pieced together from other great historians like Pliny, Polybius, Strabo, and Marcianus. While this work is the earliest in which certain ideas, words, and concepts can be traced in the historical record, earlier contributions may have been lost to history.
The first book was something of an introduction and gave a review of his predecessors, recognizing their contributions that he compiled in the library. In this book Eratosthenes denounced Homer as not providing any insight into what he now described as geography. His disapproval of Homer's topography angered many who believed the world depicted in the Odyssey to be legitimate. He also commented on the ideas of the nature and origin of the Earth: he thought of Earth as an immovable globe while its surface was changing. He hypothesized that at one time the Mediterranean had been a vast lake that covered the countries that surrounded it and that it only became connected to the ocean to the west when a passage opened up sometime in its history.
The second book contains his calculation of the circumference of the Earth. This is where, according to Pliny, "The world was grasped." Here Eratosthenes described his famous story of the well in Syene, wherein at noon each summer solstice, the Sun's rays shone straight down into the city-center well. This book would now be considered a text on mathematical geography.
His third book of the Geography contained political geography. He cited countries and used parallel lines to divide the map into sections, to give accurate descriptions of the realms. This was a breakthrough and can be considered the beginning of geography. For this, Eratosthenes was named the "Father of Modern Geography."
According to Strabo, Eratosthenes argued against the Greek-Barbarian dichotomy. He says Alexander ignored his advisers by his regard for all people with law and government. Strabo says that Eratosthenes was wrong to claim that Alexander had disregarded the counsel of his advisers. Strabo argues it was Alexander's interpretation of their "real intent" in recognizing that "in some people there prevail the law-abiding and the political instinct, and the qualities associated with education and powers of speech."
Achievements
Eratosthenes was described by the Suda Lexicon as a Πένταθλος (Pentathlos), which can be translated as "All-Rounder", for he was skilled in a variety of things; he was a true polymath. His opponents nicknamed him "Number 2" because he was great at many things and tried to get his hands on every bit of information but never achieved the highest rank in anything; Strabo describes Eratosthenes as a mathematician among geographers and a geographer among mathematicians.
Eusebius of Caesarea in his Preparatio Evangelica includes a brief chapter of three sentences on celestial distances (Book XV, Chapter 53). He states simply that Eratosthenes found the distance to the Sun to be "" (literally "of stadia myriads 400 and 80,000") and the distance to the Moon to be 780,000 stadia. The expression for the distance to the Sun has been translated either as 4,080,000 stadia (1903 translation by E. H. Gifford), or as 804,000,000 stadia (edition of Edouard des Places, dated 1974–1991). The meaning depends on whether Eusebius meant 400 myriad plus 80,000 or "400 and 80,000" myriad. With a stade of , 804,000,000 stadia is , approximately the distance from the Earth to the Sun.
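As a rough check of the larger reading, the figure can be converted to modern units. The sketch below assumes a stade of 185 m, one common modern estimate; the stade length is an assumption for illustration, not a value given in this passage:

```python
STADE_M = 185  # assumed metres per stade; ancient stade lengths varied

def stadia_to_km(stadia: float) -> float:
    """Convert a distance in stadia to kilometres under the assumed stade."""
    return stadia * STADE_M / 1000

# Eusebius' larger reading for the Sun's distance:
print(stadia_to_km(804_000_000))  # 148740000.0 km, close to one astronomical unit (~149.6 million km)
# His figure for the Moon's distance:
print(stadia_to_km(780_000))      # 144300.0 km
```

Under this assumed stade, only the 804,000,000-stadia reading lands near the true Earth-Sun distance, which is why that interpretation of Eusebius' phrasing is the more striking one.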
Eratosthenes also calculated the Sun's diameter. According to Macrobius, Eratosthenes made the diameter of the Sun to be about 27 times that of the Earth. The actual figure is approximately 109 times.
During his time at the Library of Alexandria, Eratosthenes devised a calendar using his predictions about the ecliptic of the Earth. He calculated that there are 365 days in a year and that every fourth year there would be 366 days.
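The intercalation rule just described, every fourth year gaining a 366th day, can be sketched as follows (the function name is illustrative; the century corrections of the much later Gregorian calendar are deliberately omitted):

```python
def year_length(year: int) -> int:
    """365 days ordinarily; 366 in every fourth year, the simple rule
    described above, without Gregorian century exceptions."""
    return 366 if year % 4 == 0 else 365

print([year_length(y) for y in range(1, 9)])  # [365, 365, 365, 366, 365, 365, 365, 366]
```

Over any four-year cycle this rule yields 1,461 days, an average year of 365.25 days.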
He was also very proud of his solution for doubling the cube. His motivation was that he wanted to produce catapults. Eratosthenes constructed a mechanical line-drawing device, called the mesolabio, to solve the problem. He dedicated his solution to King Ptolemy, presenting a model in bronze together with a letter and an epigram. Archimedes, a friend of Eratosthenes, also applied mathematics to instruments of war. Archimedes dedicated his book The Method to Eratosthenes, knowing his love for learning and mathematics.
Number theory
Eratosthenes proposed a simple algorithm for finding prime numbers. This algorithm is known in mathematics as the Sieve of Eratosthenes.
In mathematics, the sieve of Eratosthenes (Greek: κόσκινον Ἐρατοσθένους), one of a number of prime number sieves, is a simple, ancient algorithm for finding all prime numbers up to any given limit. It does so by iteratively marking as composite, i.e., not prime, the multiples of each prime, starting with the multiples of 2. The multiples of a given prime are generated starting from that prime, as a sequence of numbers with the same difference, equal to that prime, between consecutive numbers. This is the sieve's key distinction from using trial division to sequentially test each candidate number for divisibility by each prime.
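The procedure as described translates directly into a short program; a minimal Python sketch:

```python
def sieve_of_eratosthenes(limit: int) -> list[int]:
    """Return all primes up to and including `limit`."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Mark the multiples of p, starting at p*p: smaller multiples
            # were already marked as composites of smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Note that the inner loop generates each prime's multiples by repeated addition of that prime, exactly the "same difference between consecutive numbers" described above, rather than testing each candidate by trial division.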
Works
Eratosthenes was one of the most pre-eminent scholarly figures of his time, and produced works covering a vast area of knowledge before and during his time at the Library. He wrote on many topics: geography, mathematics, philosophy, chronology, literary criticism, grammar, poetry, and even old comedies. There are no documents left of his work after the destruction of the Library of Alexandria.
Titles
Platonikos (lost, quoted by Theon of Smyrna)
Hermes
Erigone
Chronographies
Olympic Victors
Περὶ τῆς ἀναμετρήσεως τῆς γῆς (On the Measurement of the Earth) (lost, summarized by Cleomedes)
Γεωγραφικά (Geographika) (lost, criticized by Strabo)
Arsinoe (a memoir of queen Arsinoe; lost; quoted by Athenaeus in the Deipnosophistae)
Ariston (concerning Aristo of Chios' addiction to luxury; lost; quoted by Athenaeus in the Deipnosophistae)
The Catasterismi (Katasterismoi), a lost collection of Hellenistic myths about the constellations
See also
Aristarchus of Samos, a Greek mathematician who calculated the distance from the Earth to the Sun.
Eratosthenes (crater) on the Moon.
Eratosthenian period in the lunar geologic timescale.
Eratosthenes Seamount in the eastern Mediterranean Sea.
Eratosthenes Point in Antarctica.
Hipparchus, a Greek mathematician who measured the radii of the Sun and the Moon as well as their distances from the Earth.
Posidonius, a Greek astronomer and mathematician who calculated the circumference of the Earth.
Notes
References
Further reading
Aujac, G. (2001). Eratosthène de Cyrène, le pionnier de la géographie. Paris: Édition du CTHS. 224 p.
Fuentes González, P. P., "Ératosthène de Cyrène", in R. Goulet (ed.), Dictionnaire des Philosophes Antiques, vol. III, Paris, Centre National de la Recherche Scientifique, 2000, pp. 188–236.
Geus K. (2002). Eratosthenes von Kyrene. Studien zur hellenistischen Kultur- und Wissenschaftgeschichte. München: Verlag C.H. Beck. (Münchener Beiträge zur Papyrusforschung und antiken Rechtsgeschichte. Bd. 92) X, 412 S.
Honigmann, E. (1929). Die sieben Klimata und die πολεις επισημοι. Eine Untersuchung zur Geschichte der Geographie und Astrologie in Altertum und Mittelalter. Heidelberg: Carl Winter's Universitätsbuchhandlung. 247 S.
Marcotte, D. (1998). "La climatologie d'Ératosthène à Poséidonios: genèse d'une science humaine". G. Argoud, J.Y. Guillaumin (eds.). Sciences exactes et sciences appliquées à Alexandrie (IIIe siècle av J.C. – Ier ap J.C.). Saint Etienne: Publications de l'Université de Saint Etienne: 263–277.
McPhail, Cameron (2011). Reconstructing Eratosthenes' Map of the World: a Study in Source Analysis. A Thesis Submitted for the Degree of Master of Arts at the University of Otago. Dunedin, New Zealand.
Rosokoki, A. (1995), Die Erigone des Eratosthenes. Eine kommentierte Ausgabe der Fragmente, Heidelberg: C. Winter-Verlag
Shcheglov, D.A. (2004/2006). "Ptolemy's System of Seven Climata and Eratosthenes' Geography". Geographia Antiqua 13: 21–37.
Thalamas, A. (1921). La géographe d'Ératosthène. Versailles.
External links
English translation of the primary source for Eratosthenes and the size of the Earth at Roger Pearse.
Bernhardy, Gottfried: Eratosthenica Berlin, 1822 (PDF) (Latin/Greek), Reprinted Osnabruck 1968 (German)
Eratosthenes' sieve in Javascript
About Eratosthenes' methods, including a Java applet
How the Greeks estimated the distances to the Moon and Sun
Measuring the Earth with Eratosthenes' method
List of ancient Greek mathematicians and contemporaries of Eratosthenes
New Advent Encyclopedia article on the Library of Alexandria
Eratosthenes' sieve in classic BASIC all-web based interactive programming environment
International pedagogical project: La main à la pâte (French).
Open source Physics Computer Model about Eratosthenes estimation of radius and circumference of Earth
Eratosthenes, video
Eratosthenes, Katasterismoi (or Astrothesiae), original text
270s BC births
190s BC deaths
276 BC births
3rd-century BC Egyptian people
3rd-century BC Greek people
3rd-century BC poets
3rd-century BC mathematicians
Ancient Greek astronomers
Ancient Greek geographers
Ancient Greek inventors
Ancient Greek music theorists
Ancient Greek geometers
Ancient Greek poets
Cyrenean Greeks
Deaths by starvation
Geodesists
Giftedness
Librarians of Alexandria
Number theorists
3rd-century BC geographers
3rd-century BC astronomers
Greek librarians
Ancient librarians | Eratosthenes | [
"Mathematics"
] | 4,046 | [
"Number theorists",
"Number theory"
] |
46,126 | https://en.wikipedia.org/wiki/Mustard%20gas | Mustard gas or sulfur mustard are names commonly used for the organosulfur chemical compound bis(2-chloroethyl) sulfide, which has the chemical structure S(CH2CH2Cl)2, as well as for closely related species. In the wider sense, compounds carrying 2-haloethyl substituents (X = Cl or Br) on sulfur or nitrogen are known as sulfur mustards or nitrogen mustards, respectively. Such compounds are potent alkylating agents, making mustard gas acutely and severely toxic. Mustard gas is a carcinogen. There is no preventative agent against mustard gas; protection depends entirely on shielding the skin and airways, and no antidote exists for mustard poisoning.
Also known as mustard agents, this family of compounds comprises infamous cytotoxins and blister agents with a long history of use as chemical weapons. The name mustard gas is technically incorrect; the substances, when dispersed, are often not gases but a fine mist of liquid droplets that can be readily absorbed through the skin and by inhalation. The skin can be affected by contact with either the liquid or vapor. The rate of penetration into skin is proportional to dose, temperature and humidity.
Sulfur mustards are viscous liquids at room temperature and have an odor resembling mustard plants, garlic, or horseradish, hence the name. When pure, they are colorless, but when used in impure forms, such as in warfare, they are usually yellow-brown. Mustard gases form blisters on exposed skin and in the lungs, often resulting in prolonged illness ending in death.
History as chemical weapons
Sulfur mustard is a type of chemical warfare agent. As a chemical weapon, mustard gas was first used in World War I, and has been used in several armed conflicts since then, including the Iran–Iraq War, resulting in more than 100,000 casualties. Sulfur-based and nitrogen-based mustard agents are regulated under Schedule 1 of the 1993 Chemical Weapons Convention, as substances with few uses other than in chemical warfare. Mustard agents can be deployed by means of artillery shells, aerial bombs, rockets, or by spraying from aircraft.
Adverse health effects
Mustard gases have powerful blistering effects on victims. They are also carcinogenic and mutagenic alkylating agents. Their high lipophilicity accelerates their absorption into the body. Because mustard agents often do not elicit immediate symptoms, contaminated areas may appear normal. Within 24 hours of exposure, victims experience intense itching and skin irritation. If this irritation goes untreated, blisters filled with pus can form wherever the agent contacted the skin. As chemical burns, these are severely debilitating.
If the victim's eyes were exposed, then they become sore, starting with conjunctivitis (also known as pink eye), after which the eyelids swell, resulting in temporary blindness. Extreme ocular exposure to mustard gas vapors may result in corneal ulceration, anterior chamber scarring, and neovascularization. In these severe and infrequent cases, corneal transplantation has been used as a treatment. Miosis, when the pupil constricts more than usual, may also occur, which may be the result of the cholinomimetic activity of mustard. If inhaled in high concentrations, mustard agents cause bleeding and blistering within the respiratory system, damaging mucous membranes and causing pulmonary edema. Depending on the level of contamination, mustard agent burns can vary between first and second degree burns. They can also be as severe, disfiguring, and dangerous as third degree burns. Some 80% of sulfur mustard in contact with the skin evaporates, while 10% stays in the skin and 10% is absorbed and circulated in the blood.
The carcinogenic and mutagenic effects of exposure to mustard gas increase the risk of developing cancer later in life. In a study of patients 25 years after wartime exposure to chemical weaponry, c-DNA microarray profiling indicated that 122 genes were significantly mutated in the lungs and airways of mustard gas victims. Those genes all correspond to functions commonly affected by mustard gas exposure, including apoptosis, inflammation, and stress responses. The long-term ocular complications include burning, tearing, itching, photophobia, presbyopia, pain, and foreign-body sensations.
Medical management
In a rinse-wipe-rinse sequence, skin is decontaminated of mustard gas by washing with liquid soap and water, or an absorbent powder. The eyes should be thoroughly rinsed using saline or clean water. A topical analgesic is used to relieve skin pain during decontamination.
The blistering effects of mustard gas can be neutralized by decontamination solutions such as "DS2" (2% NaOH, 70% diethylenetriamine, 28% 2-methoxyethanol). For skin lesions, topical treatments, such as calamine lotion, steroids, and oral antihistamines are used to relieve itching. Larger blisters are irrigated repeatedly with saline or soapy water, then treated with an antibiotic and petroleum gauze.
Mustard agent burns do not heal quickly and (as with other types of burns) present a risk of sepsis caused by pathogens such as Staphylococcus aureus and Pseudomonas aeruginosa. The mechanisms behind mustard gas's effect on endothelial cells are still being studied, but recent studies have shown that high levels of exposure can induce high rates of both necrosis and apoptosis. In vitro tests have shown that at low concentrations of mustard gas, where apoptosis is the predominant result of exposure, pretreatment with 50 mM N-acetyl-L-cysteine (NAC) was able to decrease the rate of apoptosis. NAC protects actin filaments from reorganization by mustard gas, demonstrating that actin filaments play a large role in the severe burns observed in victims.
A British nurse treating soldiers with mustard agent burns during World War I commented:
Mechanism of cellular toxicity
Sulfur mustards readily eliminate chloride ions by intramolecular nucleophilic substitution to form cyclic sulfonium ions. These very reactive intermediates tend to permanently alkylate nucleotides in DNA strands, which can prevent cellular division, leading to programmed cell death. Alternatively, if cell death is not immediate, the damaged DNA can lead to the development of cancer. Oxidative stress is another mechanism implicated in mustard gas toxicity.
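The two-step activation described above can be written schematically (a sketch of the generally accepted mechanism, not a balanced synthesis; "Nu:" stands for any DNA nucleophile, classically the N7 of guanine):

```latex
% Step 1: intramolecular nucleophilic substitution expels chloride,
% giving a strained, highly electrophilic cyclic sulfonium (episulfonium) ion:
\mathrm{(ClCH_2CH_2)_2S \longrightarrow \big[ClCH_2CH_2{-}S(CH_2CH_2)\big]^{+} + Cl^{-}}

% Step 2: a DNA nucleophile opens the three-membered ring,
% forming the covalent adduct; the remaining chloroethyl arm can
% repeat the cycle, producing DNA cross-links:
\mathrm{\big[ClCH_2CH_2{-}S(CH_2CH_2)\big]^{+} + Nu{:}(DNA) \longrightarrow ClCH_2CH_2{-}S{-}CH_2CH_2{-}Nu^{+}(DNA)}
```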
In the wider sense, compounds with the structural element BC2H4X, where X is any leaving group and B is a Lewis base, are known as mustards. Such compounds can form cyclic "onium" ions (sulfonium, ammonium, etc.) that are good alkylating agents. Other such compounds are bis(2-haloethyl)ethers (oxygen mustards), the (2-haloethyl)amines (nitrogen mustards), and sesquimustard, which has two α-chloroethyl thioether groups (ClC2H4S−) connected by an ethylene bridge (−C2H4−). These compounds have a similar ability to alkylate DNA, but their physical properties vary.
Formulations
In its history, various types and mixtures of mustard gas have been employed. These include:
H – Also known as HS ("Hun Stuff") or Levinstein mustard. This is named after the inventor of the "quick but dirty" Levinstein Process for manufacture, reacting dry ethylene with disulfur dichloride under controlled conditions. Undistilled mustard gas contains 20–30% impurities, which means it does not store as well as HD. Also, as it decomposes, it increases in vapor pressure, making the munition it is contained in likely to split, especially along a seam, releasing the agent to the atmosphere.
HD – Codenamed Pyro by the British, and Distilled Mustard by the US. Distilled mustard of 95% or higher purity. The term "mustard gas" usually refers to this variety of mustard.
HT – Codenamed Runcol by the British, and Mustard T- mixture by the US. A mixture of 60% HD mustard and 40% O-mustard, a related vesicant with lower freezing point, lower volatility and similar vesicant characteristics.
HL – A blend of distilled mustard (HD) and lewisite (L), originally intended for use in winter conditions due to its lower freezing point compared to the pure substances. The lewisite component of HL was used as a form of antifreeze.
HQ – A blend of distilled mustard (HD) and sesquimustard (Q) (Gates and Moore 1946).
Yellow Cross – any of several blends containing sulfur mustard and sometimes arsine agents, along with solvents and other additives.
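The Levinstein process mentioned for agent H above can be summarized by an overall equation (schematic only; the crude product also retains sulfur and polysulfide by-products, consistent with the 20–30% impurity figure):

```latex
% Dry ethylene reacting with disulfur dichloride (Levinstein process):
\mathrm{2\,CH_2{=}CH_2 + S_2Cl_2 \longrightarrow (ClCH_2CH_2)_2S + S}
```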
Commonly stockpiled mustard agents
History
Development
Mustard gases were possibly developed as early as 1822 by César-Mansuète Despretz (1798–1863). Despretz described the reaction of sulfur dichloride and ethylene but never made mention of any irritating properties of the reaction product. In 1854, another French chemist, Alfred Riche (1829–1908), repeated this procedure, also without describing any adverse physiological properties. In 1860, the British scientist Frederick Guthrie synthesized and characterized the mustard agent compound and noted its irritating properties, especially in tasting. Also in 1860, chemist Albert Niemann, known as a pioneer in cocaine chemistry, repeated the reaction, and recorded blister-forming properties. In 1886, Viktor Meyer published a paper describing a synthesis that produced good yields. He combined 2-chloroethanol with aqueous potassium sulfide, and then treated the resulting thiodiglycol with phosphorus trichloride. The purity of this compound was much higher and consequently the adverse health effects upon exposure were much more severe. These symptoms presented themselves in his assistant, and in order to rule out the possibility that his assistant was suffering from a mental illness (psychosomatic symptoms), Meyer had this compound tested on laboratory rabbits, most of which died. In 1913, the English chemist Hans Thacher Clarke (known for the Eschweiler-Clarke reaction) replaced the phosphorus trichloride with hydrochloric acid in Meyer's formulation while working with Emil Fischer in Berlin. Clarke was hospitalized for two months for burns after one of his flasks broke. According to Meyer, Fischer's report on this accident to the German Chemical Society sent the German Empire on the road to chemical weapons.
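The thiodiglycol routes described above can be sketched as overall equations (schematic; the PCl3 step is shown in its idealized textbook stoichiometry):

```latex
% Thiodiglycol from 2-chloroethanol and aqueous potassium sulfide:
\mathrm{2\,ClCH_2CH_2OH + K_2S \longrightarrow S(CH_2CH_2OH)_2 + 2\,KCl}

% Meyer (1886): chlorination with phosphorus trichloride:
\mathrm{3\,S(CH_2CH_2OH)_2 + 2\,PCl_3 \longrightarrow 3\,S(CH_2CH_2Cl)_2 + 2\,P(OH)_3}

% Clarke (1913): hydrochloric acid in place of PCl3:
\mathrm{S(CH_2CH_2OH)_2 + 2\,HCl \longrightarrow S(CH_2CH_2Cl)_2 + 2\,H_2O}
```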
Mustard gas can have the effect of turning a patient's skin different colors, including shades of red, orange, pink, and in unusual cases, blue. The German Empire during World War I relied on the Meyer-Clarke method because 2-chloroethanol was readily available from the German dye industry of that time.
Use
Mustard gas was first used in World War I by the German army against British and Canadian soldiers near Ypres, Belgium, on July 12, 1917, and later also against the French Second Army. Yperite is "a name used by the French, because the compound was first used at Ypres." The Allies did not use mustard gas until November 1917 at Cambrai, France, after the armies had captured a stockpile of German mustard shells. It took the British more than a year to develop their own mustard agent weapon, with production of the chemicals centred on Avonmouth Docks (the only option available to the British was the Despretz–Niemann–Guthrie process). This was used first in September 1918 during the breaking of the Hindenburg Line.
Mustard gas was originally assigned the name LOST, after the scientists Wilhelm Lommel and Wilhelm Steinkopf, who developed a method of large-scale production for the Imperial German Army in 1916.
Mustard gas was dispersed as an aerosol in a mixture with other chemicals, giving it a yellow-brown color. Mustard agent has also been dispersed in such munitions as aerial bombs, land mines, mortar rounds, artillery shells, and rockets. Exposure to mustard agent was lethal in about 1% of cases. Its effectiveness was as an incapacitating agent. The early countermeasures against mustard agent were relatively ineffective, since a soldier wearing a gas mask was not protected against absorbing it through his skin and being blistered. A common countermeasure was using a urine-soaked mask or facecloth to prevent or reduce injury, a readily available remedy attested by soldiers in documentaries (e.g. They Shall Not Grow Old in 2018) and others (such as forward aid nurses) interviewed between 1947 and 1981 by the British Broadcasting Corporation for various World War One history programs; however, the effectiveness of this measure is unclear.
Mustard gas can remain in the ground for weeks, and it continues to cause ill effects. If mustard agent contaminates clothing and equipment in cold conditions, other people sharing an enclosed space with the wearer can be poisoned as the contaminated items warm up and release enough vapor to become an airborne toxic hazard. An example of this was depicted in a British and Canadian documentary about life in the trenches, particularly once the "souterrains" (subways and berthing areas underground) were completed in Belgium and France. Towards the end of World War I, mustard agent was used in high concentrations as an area-denial weapon that forced troops to abandon heavily contaminated areas.
Since World War I, mustard gas has been used in several wars and other conflicts, usually against people who cannot retaliate in kind:
United Kingdom against the Red Army in 1919
Alleged British use in Mesopotamia in 1920
Spain against the Rifian resistance in Morocco during the Rif War of 1921–27 (see also: Spanish use of chemical weapons in the Rif War)
Italy in Libya in 1930
The Soviet Union in Xinjiang, Republic of China, during the Soviet Invasion of Xinjiang against the 36th Division (National Revolutionary Army) in 1934, and also in the Xinjiang War (1937) in 1936–37
Italy against Abyssinia (now Ethiopia) in 1935–1936
The Japanese Empire against China in 1937–1945
The US military conducted experiments with chemical weapons such as lewisite and mustard gas on Japanese American, Puerto Rican and African American servicemen during World War II to see how non-white troops would react to being exposed. Veteran Rollin Edwards described the experience: "It felt like you were on fire, guys started screaming and hollering and trying to break out. And then some of the guys fainted. And finally they opened the door and let us out, and the guys were just, they were in bad shape," adding, "It took all the skin off your hands. Your hands just rotted."
After World War II, stockpiled mustard gas was dumped by South African military personnel under the command of William Bleloch off Port Elizabeth, resulting in several cases of burns among local trawler crews.
The United States Government tested the agent's effectiveness on US Navy recruits in a laboratory setting at the Great Lakes Naval Base on June 3, 1945.
The 2 December 1943 air raid on Bari destroyed an Allied stockpile of mustard gas on the SS John Harvey, killing 83 and hospitalizing 628.
Egypt against North Yemen in 1963–1967
Iraq against Kurds in the town of Halabja during the Halabja chemical attack in 1988
Iraq against Iranians in 1983–1988
Possibly in Sudan against insurgents in the civil war, in 1995 and 1997.
In the Iraq War, abandoned stockpiles of mustard gas shells were destroyed in the open air, and were used against Coalition forces in roadside bombs.
By ISIS forces against Kurdish forces in Iraq in August 2015.
By ISIS against another rebel group in the town of Mare' in 2015.
According to Syrian state media, by ISIS against the Syrian Army during the battle in Deir ez-Zor in 2016.
The use of toxic gases or other chemicals, including mustard gas, during warfare is known as chemical warfare, and this kind of warfare was prohibited by the Geneva Protocol of 1925, and also by the later Chemical Weapons Convention of 1993. The latter agreement also prohibits the development, production, stockpiling, and sale of such weapons.
In September 2015, a US official stated that the militant group ISIS was manufacturing and using mustard gas in Syria and Iraq, which was allegedly confirmed by the group's head of chemical weapons development, Sleiman Daoud al-Afari, who has since been captured.
Development of the first chemotherapy drug
As early as 1919 it was known that mustard agent was a suppressor of hematopoiesis. In addition, autopsies performed on 75 soldiers who had died of mustard agent during World War I were done by researchers from the University of Pennsylvania who reported decreased counts of white blood cells. This led the American Office of Scientific Research and Development (OSRD) to finance the biology and chemistry departments at Yale University to conduct research on the use of chemical warfare during World War II.
As a part of this effort, the group investigated nitrogen mustard as a therapy for Hodgkin's lymphoma and other types of lymphoma and leukemia, and this compound was tried out on its first human patient in December 1942. The results of this study were not published until 1946, when they were declassified. In a parallel track, after the air raid on Bari in December 1943, the doctors of the U.S. Army noted that white blood cell counts were reduced in their patients. Some years after World War II was over, the incident in Bari and the work of the Yale University group with nitrogen mustard converged, and this prompted a search for other similar chemical compounds. Due to its use in previous studies, the nitrogen mustard called "HN2" became the first cancer chemotherapy drug, chlormethine (also known as mechlorethamine, mustine), to be used. Chlormethine and other mustard-type molecules are still used as chemotherapy agents to this day, although they have largely been replaced by safer drugs such as cisplatin and carboplatin.
Disposal
In the United States, storage and incineration of mustard gas and other chemical weapons were carried out by the U.S. Army Chemical Materials Agency. Disposal projects at the two remaining American chemical weapons sites were carried out near Richmond, Kentucky, and Pueblo, Colorado.
New detection techniques are being developed in order to detect the presence of mustard gas and its metabolites. The technology is portable and detects small quantities of the hazardous waste and its oxidized products, which are notorious for harming unsuspecting civilians. The immunochromatographic assay would eliminate the need for expensive, time-consuming lab tests and enable easy-to-read tests to protect civilians from sulfur-mustard dumping sites.
In 1946, 10,000 drums of mustard gas (2,800 tonnes) stored at the production facility of Stormont Chemicals in Cornwall, Ontario, Canada, were loaded onto 187 boxcars for the journey to be buried at sea on board a barge south of Sable Island, southeast of Halifax. The dump location is 42 degrees, 50 minutes north by 60 degrees, 12 minutes west.
A large British stockpile of old mustard agent that had been made and stored since World War I at M. S. Factory, Valley near Rhydymwyn in Flintshire, Wales, was destroyed in 1958.
Most of the mustard gas found in Germany after World War II was dumped into the Baltic Sea. Between 1966 and 2002, fishermen have found about 700 chemical weapons in the region of Bornholm, most of which contain mustard gas. One of the more frequently dumped weapons was "Sprühbüchse 37" (SprüBü37, Spray Can 37, 1937 being the year of its fielding with the German Army). These weapons contain mustard gas mixed with a thickener, which gives it a tar-like viscosity. When the content of the SprüBü37 comes in contact with water, only the mustard gas in the outer layers of the lumps of viscous mustard hydrolyzes, leaving behind amber-colored residues that still contain most of the active mustard gas. On mechanically breaking these lumps (e.g., with the drag board of a fishing net or by the human hand) the enclosed mustard gas is still as active as it had been at the time the weapon was dumped. These lumps, when washed ashore, can be mistaken for amber, which can lead to severe health problems. Artillery shells containing mustard gas and other toxic ammunition from World War I (as well as conventional explosives) can still be found in France and Belgium. These were formerly disposed of by explosion undersea, but since the current environmental regulations prohibit this, the French government is building an automated factory to dispose of the accumulation of chemical shells.
In 1972, the U.S. Congress banned the practice of disposing of chemical weapons into the ocean by the United States. 29,000 tons of nerve and mustard agents had already been dumped into the ocean off the United States by the U.S. Army. According to a report created in 1998 by William Brankowitz, a deputy project manager in the U.S. Army Chemical Materials Agency, the army created at least 26 chemical weapons dumping sites in the ocean offshore from at least 11 states on both the East Coast and the West Coast (in Operation CHASE, Operation Geranium, etc.). In addition, due to poor recordkeeping, about one-half of the sites have only their rough locations known.
In June 1997, India declared its stock of chemical weapons, including mustard gas. By the end of 2006, India had destroyed more than 75 percent of its chemical weapons/material stockpile and was granted an extension for destroying the remaining stocks by April 2009, within which time frame it was expected to achieve 100 percent destruction. India informed the United Nations in May 2009 that it had destroyed its stockpile of chemical weapons in compliance with the international Chemical Weapons Convention. With this, India became the third country to do so, after South Korea and Albania. This was cross-checked by inspectors of the United Nations.
Producing or stockpiling mustard gas is prohibited by the Chemical Weapons Convention. When the convention entered force in 1997, the parties declared worldwide stockpiles of 17,440 tonnes of mustard gas. As of December 2015, 86% of these stockpiles had been destroyed.
A significant portion of the United States' mustard agent stockpile was stored at the Edgewood Area of Aberdeen Proving Ground in Maryland. Approximately 1,621 tons of mustard agents were stored in one-ton containers on the base under heavy guard. A chemical neutralization plant was built on the proving ground and neutralized the last of this stockpile in February 2005. This stockpile had priority because of the potential for quick reduction of risk to the community. The nearest schools were fitted with overpressurization machinery to protect the students and faculty in the event of a catastrophic explosion and fire at the site. These projects, as well as planning, equipment, and training assistance, were provided to the surrounding community as a part of the Chemical Stockpile Emergency Preparedness Program (CSEPP), a joint program of the Army and the Federal Emergency Management Agency (FEMA). Unexploded shells containing mustard gases and other chemical agents are still present in several test ranges in proximity to schools in the Edgewood area, but the smaller amounts of poison gas present considerably lower risks. These remnants are being detected and excavated systematically for disposal. The U.S. Army Chemical Materials Agency oversaw disposal of several other chemical weapons stockpiles located across the United States in compliance with international chemical weapons treaties. These include the complete incineration of the chemical weapons stockpiled in Alabama, Arkansas, Indiana, and Oregon. Earlier, this agency had also completed destruction of the chemical weapons stockpile on Johnston Atoll, south of Hawaii in the Pacific Ocean. The largest mustard agent stockpile, at approximately 6,200 short tons, was stored at the Deseret Chemical Depot in northern Utah. The incineration of this stockpile began in 2006.
In May 2011, the last of the mustard agents in the stockpile were incinerated at the Deseret Chemical Depot, and the last artillery shells containing mustard gas were incinerated in January 2012.
In 2008, many empty aerial bombs that had contained mustard gas were found in an excavation at the Marrangaroo Army Base just west of Sydney, Australia. In 2009, a mining survey near Chinchilla, Queensland, uncovered 144 howitzer shells (105 mm), some containing "Mustard H", that had been buried by the U.S. Army during World War II.
In 2014, a collection of 200 bombs was found near the Flemish villages of Passendale and Moorslede. The majority of the bombs were filled with mustard agents. The bombs were left over from the German army and were meant to be used in the Battle of Passchendaele in World War I. It was the largest collection of chemical weapons ever found in Belgium.
A large quantity of chemical weapons, including mustard gas, was found in a neighborhood of Washington, D.C. The cleanup was completed in 2021.
Post-war accidental exposure
In 2002, an archaeologist at the Presidio Trust archaeology lab in San Francisco was exposed to mustard gas, which had been dug up at the Presidio of San Francisco, a former military base.
In 2010, a clamming boat pulled up some old artillery shells of World War I from the Atlantic Ocean south of Long Island, New York. Multiple fishermen suffered from blistering and respiratory irritation severe enough to require hospitalization.
WWII-era tests on men
From 1943 to 1944, mustard agent experiments were performed on Australian service volunteers in tropical Queensland, Australia, by Royal Australian Engineers, British Army and American experimenters, resulting in some severe injuries. One test site, the Brook Islands National Park, was chosen to simulate Pacific islands held by the Imperial Japanese Army. These experiments were the subject of the documentary film Keen as Mustard.
The United States tested sulfur mustards and other chemical agents, including nitrogen mustards and lewisite, on up to 60,000 servicemen during and after WWII. The experiments were classified as secret and, as with Agent Orange, claims for medical care and compensation were routinely denied, even after the WWII-era tests were declassified in 1993. The Department of Veterans Affairs stated that it would contact 4,000 surviving test subjects but failed to do so, eventually contacting only 600. Skin cancer, severe eczema, leukemia, and chronic breathing problems plagued the test subjects, some of whom were as young as 19 at the time of the tests, until their deaths, but even those who had previously filed claims with the VA went without compensation.
African American servicemen were tested alongside white men in separate trials to determine whether their skin color would afford them a degree of immunity to the agents, and Nisei servicemen, some of whom had joined after their release from Japanese American internment camps, were tested to determine the susceptibility of Japanese military personnel to these agents. These tests also included Puerto Rican subjects.
Detection in biological fluids
Concentrations of thiodiglycol in urine have been used to confirm a diagnosis of chemical poisoning in hospitalized victims. The presence in urine of 1,1'-sulfonylbismethylthioethane (SBMTE), a conjugation product with glutathione, is considered a more specific marker, since this metabolite is not found in specimens from unexposed persons. In one case, intact mustard gas was detected in postmortem fluids and tissues of a man who died one week post-exposure.
See also
Bis(chloromethyl) ether
Chlorine gas
Keen as Mustard
Phosgene oxime
Poison gas in World War I
Rawalpindi experiments
Selenium mustard
References
Notes
Further reading
Cook, Tim. "‘Against God-Inspired Conscience’: The Perception of Gas Warfare as a Weapon of Mass Destruction, 1915–1939." War & Society 18.1 (2000): 47-69.
Dorsey, M. Girard. Holding Their Breath: How the Allies Confronted the Threat of Chemical Warfare in World War II (Cornell UP, 2023) online.
Duchovic, Ronald J., and Joel A. Vilensky. "Mustard gas: its pre-World War I history." Journal of chemical education 84.6 (2007): 944. online
Feister, Alan J. Medical defense against mustard gas: toxic mechanisms and pharmacological implications (1991). online
Fitzgerald, Gerard J. "Chemical warfare and medical response during World War I." American journal of public health 98.4 (2008): 611-625. online
Geraci, Matthew J. "Mustard gas: imminent danger or eminent threat?." Annals of Pharmacotherapy 42.2 (2008): 237-246. online
Ghabili, Kamyar, et al. "Mustard gas toxicity: the acute and chronic pathological effects." Journal of applied toxicology 30.7 (2010): 627-643. online
Jones, Edgar. "Terror weapons: The British experience of gas and its treatment in the First World War." War in History 21.3 (2014): 355-375. online
Padley, Anthony Paul. "Gas: the greatest terror of the Great War." Anaesthesia and intensive care 44.1_suppl (2016): 24-30. online
Rall, David P., and Constance M. Pechura, eds. Veterans at risk: The health effects of mustard gas and lewisite (1993). online
Schummer, Joachim. "Ethics of chemical weapons research: Poison gas in World War One." Ethics of Chemistry: From Poison Gas to Climate Engineering (2021) pp. 55-83. online
Smith, Susan I. Toxic Exposures: Mustard Gas and the Health Consequences of World War II in the United States (Rutgers University Press, 2017) online book review
Wattana, Monica, and Tareg Bey. "Mustard gas or sulfur mustard: an old chemical agent as a new terrorist threat." Prehospital and disaster medicine 24.1 (2009): 19-29. online
External links
Mustard gas (Sulphur Mustard) (IARC Summary & Evaluation, Supplement7, 1987). Inchem.org (1998-02-09). Retrieved on 2011-05-29.
Textbook of Military Medicine – Intensive overview of mustard gas Includes many references to scientific literature
Detailed information on physical effects and suggested treatments
Photographs taken in 1996 of people with mustard gas burns.
UMDNJ-Rutgers University CounterACT Research Center of Excellence A research center studying mustard gas, includes searchable reference library with many early references on mustard gas.
surgical treatment of mustard gas burns
UK Ministry of Defence Report on disposal of weapons at sea and incidents arising
Thioethers
Organochlorides
Blister agents
World War I chemical weapons
IARC Group 1 carcinogens
Chloroethyl compounds
Sulfur mustards
Dermatoxins | Mustard gas | [
"Chemistry"
] | 6,422 | [
"Blister agents",
"World War I chemical weapons",
"Chemical weapons"
] |
46,127 | https://en.wikipedia.org/wiki/Robert%20Tarjan | Robert Endre Tarjan (born April 30, 1948) is an American computer scientist and mathematician. He is the discoverer of several graph theory algorithms, including his strongly connected components algorithm, and co-inventor of both splay trees and Fibonacci heaps. Tarjan is currently the James S. McDonnell Distinguished University Professor of Computer Science at Princeton University.
Personal life and education
He was born in Pomona, California. His father, George Tarjan (1912–1991), raised in Hungary, was a child psychiatrist, specializing in mental retardation, and ran a state hospital. Robert Tarjan's younger brother James became a chess grandmaster. As a child, Robert Tarjan read a lot of science fiction, and wanted to be an astronomer. He became interested in mathematics after reading Martin Gardner's mathematical games column in Scientific American. He became seriously interested in math in the eighth grade, thanks to a "very stimulating" teacher.
While he was in high school, Tarjan got a job working with IBM punch-card collators. He first worked with real computers while studying astronomy at the Summer Science Program in 1964.
Tarjan obtained a Bachelor's degree in mathematics from the California Institute of Technology in 1969. At Stanford University, he received his master's degree in computer science in 1971 and a Ph.D. in computer science (with a minor in mathematics) in 1972. At Stanford, he was supervised by Robert Floyd and Donald Knuth, both highly prominent computer scientists, and his Ph.D. dissertation was An Efficient Planarity Algorithm. Tarjan selected computer science as his area of interest because he believed that computer science was a way of doing mathematics that could have a practical impact.
Tarjan now lives in Princeton, NJ, and Silicon Valley. He is married to Nayla Rizk.
He has three daughters: Alice Tarjan, Sophie Zawacki, and Maxine Tarjan.
Computer science career
Tarjan has been teaching at Princeton University since 1985. He has also held academic positions at Cornell University (1972–73), University of California, Berkeley (1973–1975), Stanford University (1974–1980), and New York University (1981–1985). He has also been a fellow of the NEC Research Institute (1989–1997). In April 2013 he joined Microsoft Research Silicon Valley in addition to the position at Princeton. In October 2014 he rejoined Intertrust Technologies as chief scientist.
Tarjan has worked at AT&T Bell Labs (1980–1989), Intertrust Technologies (1997–2001, 2014–present), Compaq (2002) and Hewlett Packard (2006–2013).
Algorithms and data structures
Tarjan is known for his pioneering work on graph theory algorithms and data structures. Some of his well-known algorithms include Tarjan's off-line least common ancestors algorithm, Tarjan's strongly connected components algorithm, and Tarjan's bridge-finding algorithm, and he was one of five co-authors of the median of medians linear-time selection algorithm. The Hopcroft–Tarjan planarity testing algorithm was the first linear-time algorithm for planarity testing.
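Tarjan's strongly connected components algorithm in particular lends itself to a short sketch. The following Python version is a standard textbook rendering, not Tarjan's original presentation, and the adjacency-list representation and example graph are assumptions for illustration. It runs a single depth-first search, assigning each vertex a discovery index and a "lowlink" value, and pops a completed component off an auxiliary stack whenever a vertex's lowlink equals its own index:

```python
def tarjan_scc(graph):
    """Tarjan's strongly connected components algorithm.

    graph: dict mapping each vertex to a list of successor vertices.
    Returns a list of SCCs (each a list of vertices) in reverse
    topological order of the condensation, in O(V + E) time.
    """
    index_counter = [0]
    index = {}       # discovery index of each visited vertex
    lowlink = {}     # smallest index reachable from the vertex's DFS subtree
    stack = []       # vertices of the current partial components
    on_stack = set()
    sccs = []

    def strongconnect(v):
        index[v] = lowlink[v] = index_counter[0]
        index_counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:  # v is the root of a component
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            sccs.append(component)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs
```

For example, `tarjan_scc({'a': ['b'], 'b': ['c'], 'c': ['a'], 'd': ['c']})` reports the cycle {a, b, c} as one component and {d} as another.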
Tarjan has also developed important data structures such as the Fibonacci heap (a heap data structure consisting of a forest of trees), and the splay tree (a self-adjusting binary search tree; co-invented by Tarjan and Daniel Sleator). Another significant contribution was the analysis of the disjoint-set data structure; he was the first to prove the optimal runtime involving the inverse Ackermann function.
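The disjoint-set structure whose runtime Tarjan analyzed can be sketched as follows — a textbook union-by-rank, path-compression implementation in Python; the class name and method signatures are illustrative, not taken from Tarjan's papers:

```python
class DisjointSet:
    """Disjoint-set forest with union by rank and path compression.

    Tarjan proved that any sequence of m operations on n elements runs
    in O(m * alpha(n)) amortized time, where alpha is the extremely
    slowly growing inverse Ackermann function.
    """

    def __init__(self, elements):
        self.parent = {e: e for e in elements}
        self.rank = {e: 0 for e in elements}

    def find(self, x):
        # Locate the root, then compress the path so every traversed
        # node points directly at the root.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```

For any practically attainable n, alpha(n) is at most 4, so the amortized cost per operation is effectively constant — the content of the optimality result mentioned above.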
Awards
Tarjan received the Turing Award jointly with John Hopcroft in 1986. The citation for the award states that it was:
Tarjan was also elected an ACM Fellow in 1994. The citation for this award states:
Some of the other awards for Tarjan include:
Nevanlinna Prize in Information Science (1983) – first recipient
Member of the American Academy of Arts and Sciences, elected 1985
National Academy of Sciences Award for Initiatives in Research (1984)
Member of the National Academy of Sciences, elected 1987
Member of the National Academy of Engineering, elected 1988
Member of the American Philosophical Society, elected 1990
Paris Kanellakis Award in Theory and Practice, ACM (1999)
Caltech Distinguished Alumni Award, California Institute of Technology (2010)
Selected publications
Tarjan's papers have been collectively cited over 94,000 times. Among the most cited are:
1972: Depth-first search and linear graph algorithms, R Tarjan, SIAM Journal on Computing 1 (2), 146-160
1987: Fibonacci heaps and their uses in improved network optimization algorithms, ML Fredman, RE Tarjan, Journal of the ACM (JACM) 34 (3), 596-615
1983: Data structures and network algorithms, RE Tarjan, Society for industrial and Applied Mathematics
1988: A new approach to the maximum-flow problem, AV Goldberg, RE Tarjan, Journal of the ACM (JACM) 35 (4), 921-940
Patents
Tarjan holds at least 18 U.S. patents. These include:
J. Bentley, D. Sleator, and R. E. Tarjan, U. S. Patent 4,796,003, Data Compaction, 1989
N. Mishra, R. Schreiber, and R. E. Tarjan, U. S. Patent 7,818,272, Method for discovery of clusters of objects in an arbitrary undirected graph using a difference between a fraction of internal connections and maximum fraction of connections by an outside object, 2010
B. Pinkas, S. Haber, R. E. Tarjan, and T. Sander, U. S. Patent 8,220,036, Establishing a secure channel with a human user, 2012
Notes
References
OCLC entries for Robert E Tarjan
External links
List of Robert Tarjan's patents on IPEXL's Patent Directory
Robert Tarjan's home page at Princeton.
1948 births
Living people
Members of the United States National Academy of Sciences
American computer scientists
American theoretical computer scientists
Turing Award laureates
Nevanlinna Prize laureates
Scientists at Bell Labs
California Institute of Technology alumni
Stanford University School of Engineering alumni
Princeton University faculty
People from Pomona, California
20th-century American Jews
21st-century American Jews
Fellows of the Society for Industrial and Applied Mathematics
Summer Science Program
Members of the United States National Academy of Engineering
Graph theorists
Members of the American Philosophical Society
1994 fellows of the Association for Computing Machinery | Robert Tarjan | [
"Mathematics"
] | 1,340 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
46,143 | https://en.wikipedia.org/wiki/Planner%20%28programming%20language%29 | Planner (often seen in publications as "PLANNER" although it is not an acronym) is a programming language designed by Carl Hewitt at MIT, and first published in 1969. First, subsets such as Micro-Planner and Pico-Planner were implemented, and then essentially the whole language was implemented as Popler by Julian Davies at the University of Edinburgh in the POP-2 programming language. Derivations such as QA4, Conniver, QLISP and Ether (see scientific community metaphor) were important tools in artificial intelligence research in the 1970s, which influenced commercial developments such as Knowledge Engineering Environment (KEE) and Automated Reasoning Tool (ART).
Procedural approach versus logical approach
The two major paradigms for constructing semantic software systems were procedural and logical. The procedural paradigm was epitomized by Lisp, which featured recursive procedures that operated on list structures.
The logical paradigm was epitomized by uniform proof procedure resolution-based derivation (proof) finders. According to the logical paradigm it was “cheating” to incorporate procedural knowledge.
Procedural embedding of knowledge
Planner was invented for the purposes of the procedural embedding of knowledge and was a rejection of the resolution uniform proof procedure paradigm, which
Converted everything to clausal form. Converting all information to clausal form is problematic because it hides the underlying structure of the information.
Then used resolution to attempt to obtain a proof by contradiction by adding the clausal form of the negation of the theorem to be proved. Using only resolution as the rule of inference is problematic because it hides the underlying structure of proofs. Also, using proof by contradiction is problematic because the axiomatizations of all practical domains of knowledge are inconsistent in practice.
Planner was a kind of hybrid between the procedural and logical paradigms because it combined programmability with logical reasoning. Planner featured a procedural interpretation of logical sentences, where an implication of the form (P implies Q) can be procedurally interpreted in the following ways using pattern-directed invocation:
Forward chaining (antecedently): when P is asserted, assert Q
Backward chaining (consequently): when Q is a goal, establish P as a subgoal
In this respect, the development of Planner was influenced by natural deductive logical systems (especially the one by Frederic Fitch [1952]).
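The backward-chaining reading can be made concrete with a toy interpreter in Python. This is a deliberately minimal sketch: the rules, the goals, and the use of string equality in place of Planner's full pattern matching and unification are all simplifications for illustration:

```python
# Minimal backward-chaining sketch: each rule says "to prove the
# conclusion, prove all of the premises" -- the consequent reading of
# an implication (P implies Q). Facts are rules with no premises.
# The rule base below is invented for illustration.
rules = [
    ("mortal(socrates)", ["human(socrates)"]),  # instantiated implication
    ("human(socrates)", []),                    # a ground fact
]

def prove(goal):
    """Try each rule whose conclusion matches the goal
    (pattern-directed invocation, degenerated here to string
    equality); succeed if some rule's premises can all be proved."""
    for conclusion, premises in rules:
        if conclusion == goal and all(prove(p) for p in premises):
            return True
    return False
```

Forward chaining is the dual reading: when a new fact matching P is asserted, the implication fires and Q is asserted into the database.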
Micro-planner implementation
A subset called Micro-Planner was implemented by Gerry Sussman, Eugene Charniak and Terry Winograd and was used in Winograd's natural-language understanding program SHRDLU, Eugene Charniak's story understanding work, Thorne McCarty's work on legal reasoning, and some other projects. This generated a great deal of excitement in the field of AI. It also generated controversy because it proposed an alternative to the logic approach that had been one of the mainstay paradigms for AI.
At SRI International, Jeff Rulifson, Jan Derksen, and Richard Waldinger developed QA4 which built on the constructs in Planner and introduced a context mechanism to provide modularity for expressions in the database. Earl Sacerdoti and Rene Reboh developed QLISP, an extension of QA4 embedded in INTERLISP, providing Planner-like reasoning embedded in a procedural language and developed in its rich programming environment. QLISP was used by Richard Waldinger and Karl Levitt for program verification, by Earl Sacerdoti for planning and execution monitoring, by Jean-Claude Latombe for computer-aided design, by Nachum Dershowitz for program synthesis, by Richard Fikes for deductive retrieval, and by Steven Coles for an early expert system that guided use of an econometric model.
Computers were expensive. They had only a single slow processor and their memories were very small by comparison with today. So Planner adopted some efficiency expedients including the following:
Backtracking was adopted to economize on the use of time and storage by working on and storing only one possibility at a time in exploring alternatives.
A unique name assumption was adopted to save space and time by assuming that different names referred to different objects. For example, names like Peking (previous PRC capital name) and Beijing (current PRC capital transliteration) were assumed to refer to different objects.
A closed-world assumption could be implemented by conditionally testing whether an attempt to prove a goal exhaustively failed. Later this capability was given the misleading name "negation as failure" because for a goal G it was possible to say: "if attempting to achieve G exhaustively fails, then assert (not G)".
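The closed-world test behind this convention can be sketched in the same spirit — a minimal Python illustration in which the fact base is invented and "exhaustive search" is simply membership in a finite set:

```python
# Negation as failure under the closed-world assumption: a goal's
# negation is "established" when an exhaustive attempt to prove the
# goal fails. The fact base is invented for illustration.
facts = {"capital(beijing)"}

def prove(goal):
    # Exhaustive search of a finite fact base.
    return goal in facts

def not_(goal):
    """Succeeds iff attempting to prove the goal exhaustively fails."""
    return not prove(goal)
```

Note that `not_("capital(london)")` succeeds merely because nothing in the database proves the goal — no assertion of its falsity exists — which is exactly why the name "negation" is misleading.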
The genesis of Prolog
Gerry Sussman, Eugene Charniak, Seymour Papert and Terry Winograd visited the University of Edinburgh in 1971, spreading the news about Micro-Planner and SHRDLU and casting doubt on the resolution uniform proof procedure approach that had been the mainstay of the Edinburgh Logicists. At the University of Edinburgh, Bruce Anderson implemented a subset of Micro-Planner called PICO-PLANNER, and Julian Davies (1973) implemented essentially all of Planner.
According to Donald MacKenzie, Pat Hayes recalled the impact of a visit from Papert to Edinburgh, which had become the "heart of artificial intelligence's Logicland," according to Papert's MIT colleague, Carl Hewitt. Papert eloquently voiced his critique of the resolution approach dominant at Edinburgh "…and at least one person upped sticks and left because of Papert."
The above developments generated tension among the Logicists at Edinburgh. These tensions were exacerbated when the UK Science Research Council commissioned Sir James Lighthill to write a report on the AI research situation in the UK. The resulting report [Lighthill 1973; McCarthy 1973] was highly critical although SHRDLU was favorably mentioned.
Pat Hayes visited Stanford where he learned about Planner. When he returned to Edinburgh, he tried to influence his friend Bob Kowalski to take Planner into account in their joint work on automated theorem proving. "Resolution theorem-proving was demoted from a hot topic to a relic of the misguided past. Bob Kowalski doggedly stuck to his faith in the potential of resolution theorem proving. He carefully studied Planner." Kowalski [1988] states "I can recall trying to convince Hewitt that Planner was similar to SL-resolution." But Planner was invented for the purposes of the procedural embedding of knowledge and was a rejection of the resolution uniform proof procedure paradigm. Colmerauer and Roussel recalled their reaction to learning about Planner in the following way:
"While attending an IJCAI convention in September ‘71 with Jean Trudel, we met Robert Kowalski again and heard a lecture by Terry Winograd on natural language processing. The fact that he did not use a unified formalism left us puzzled. It was at this time that we learned of the existence of Carl Hewitt’s programming language, Planner. The lack of formalization of this language, our ignorance of Lisp and, above all, the fact that we were absolutely devoted to logic meant that this work had little influence on our later research."
In the fall of 1972, Philippe Roussel implemented a language called Prolog (an abbreviation for PROgrammation en LOGique – French for "programming in logic"). Prolog programs are generically of the form H :- B1, ..., Bn ("to establish goal H, establish subgoals B1 through Bn"), which is a special case of the backward chaining in Planner.
Prolog duplicated the following aspects of Micro-Planner:
Pattern directed invocation of procedures from goals (i.e. backward chaining)
An indexed data base of pattern-directed procedures and ground sentences.
Giving up on the completeness paradigm that had characterized previous work on theorem proving and replacing it with the programming language procedural embedding of knowledge paradigm.
Prolog also duplicated the following capabilities of Micro-Planner which were pragmatically useful for the computers of the era because they saved space and time:
Backtracking control structure
Unique Name Assumption by which different names are assumed to refer to distinct entities, e.g., Peking and Beijing are assumed to be different.
Reification of Failure. The way that Planner established that something was provable was to successfully attempt it as a goal, and the way that it established that something was unprovable was to attempt it as a goal and explicitly fail. Of course, the other possibility is that the attempt to prove the goal runs forever and never returns any value. Planner also had a construct (not p) which succeeded if p failed, which gave rise to the "negation as failure" terminology in Planner.
Use of the Unique Name Assumption and Negation as Failure became more questionable when attention turned to Open Systems.
The following capabilities of Micro-Planner were omitted from Prolog:
Pattern-directed invocation of procedural plans from assertions (i.e., forward chaining)
Logical negation, e.g., (not P).
Prolog did not include negation in part because it raises implementation issues. If negation were included in a Prolog program, the program could still be unable to prove a negated conclusion (one obtainable, for example, by modus tollens) even though it follows by the rules of mathematical logic. This is an illustration of the fact that Prolog (like Planner) is intended to be a programming language and so does not (by itself) prove many of the logical consequences that follow from a declarative reading of its programs.
The work on Prolog was valuable in that it was much simpler than Planner. However, as the need arose for greater expressive power in the language, Prolog began to include many of the capabilities of Planner that were left out of the original version of Prolog.
References
Bibliography
Bruce Anderson. Documentation for LIB PICO-PLANNER School of Artificial Intelligence, Edinburgh University. 1972
Bruce Baumgart. Micro-Planner Alternate Reference Manual Stanford AI Lab Operating Note No. 67, April 1972.
Carl Hewitt. "The Challenge of Open Systems" Byte Magazine. April 1985
Carl Hewitt and Jeff Inman. "DAI Betwixt and Between: From ‘Intelligent Agents’ to Open Systems Science" IEEE Transactions on Systems, Man, and Cybernetics. Nov/Dec 1991.
Carl Hewitt and Gul Agha. "Guarded Horn clause languages: are they deductive and Logical?" International Conference on Fifth Generation Computer Systems, Ohmsha 1988. Tokyo. Also in Artificial Intelligence at MIT, Vol. 2. MIT Press 1991.
William Kornfeld and Carl Hewitt. The Scientific Community Metaphor MIT AI Memo 641. January 1981.
Bill Kornfeld and Carl Hewitt. "The Scientific Community Metaphor" IEEE Transactions on Systems, Man, and Cybernetics. January 1981.
Bill Kornfeld. "The Use of Parallelism to Implement a Heuristic Search" IJCAI 1981.
Bill Kornfeld. "Parallelism in Problem Solving" MIT EECS Doctoral Dissertation. August 1981.
Bill Kornfeld. "Combinatorially Implosive Algorithms" CACM. 1982
Robert Kowalski. "The Limitations of Logic" Proceedings of the 1986 ACM fourteenth annual conference on Computer science.
Robert Kowalski. "The Early Years of Logic Programming" CACM January 1988.
Gerry Sussman and Terry Winograd. Micro-planner Reference Manual AI Memo No, 203, MIT Project MAC, July 1970.
Terry Winograd. Procedures as a Representation for Data in a Computer Program for Understanding Natural Language MIT AI TR-235. January 1971.
Gerry Sussman, Terry Winograd and Eugene Charniak. Micro-Planner Reference Manual (Update) AI Memo 203A, MIT AI Lab, December 1971.
Carl Hewitt. Description and Theoretical Analysis (Using Schemata) of Planner, A Language for Proving Theorems and Manipulating Models in a Robot AI Memo No. 251, MIT Project MAC, April 1972.
Eugene Charniak. Toward a Model of Children's Story Comprehension MIT AI TR-266. December 1972.
Julian Davies. Popler 1.6 Reference Manual University of Edinburgh, TPU Report No. 1, May 1973.
Jeff Rulifson, Jan Derksen, and Richard Waldinger. "QA4, A Procedural Calculus for Intuitive Reasoning" SRI AI Center Technical Note 73, November 1973.
Scott Fahlman. "A Planning System for Robot Construction Tasks" MIT AI TR-283. June 1973
James Lighthill. "Artificial Intelligence: A General Survey Artificial Intelligence: a paper symposium." UK Science Research Council. 1973.
John McCarthy. "Review of ‘Artificial Intelligence: A General Survey Artificial Intelligence: a paper symposium." UK Science Research Council. 1973.
Robert Kowalski "Predicate Logic as Programming Language" Memo 70, Department of Artificial Intelligence, Edinburgh University. 1973
Pat Hayes. Computation and Deduction Mathematical Foundations of Computer Science: Proceedings of Symposium and Summer School, Štrbské Pleso, High Tatras, Czechoslovakia, September 3–8, 1973.
Carl Hewitt, Peter Bishop and Richard Steiger. "A Universal Modular Actor Formalism for Artificial Intelligence" IJCAI 1973.
L. Thorne McCarty. "Reflections on TAXMAN: An Experiment on Artificial Intelligence and Legal Reasoning" Harvard Law Review. Vol. 90, No. 5, March 1977
Drew McDermott and Gerry Sussman. The Conniver Reference Manual MIT AI Memo 259A. January 1974.
Earl Sacerdoti, et al., "QLISP A Language for the Interactive Development of Complex Systems" AFIPS. 1976
History of artificial intelligence
Automated planning and scheduling
Logic programming languages
Robot programming languages
Theorem proving software systems
Programming languages created in 1969 | Planner (programming language) | [
"Mathematics"
] | 2,736 | [
"Theorem proving software systems",
"Automated theorem proving",
"Mathematical software"
] |
46,149 | https://en.wikipedia.org/wiki/GLONASS | GLONASS is a Russian satellite navigation system operating as part of a radionavigation-satellite service. It provides an alternative to the Global Positioning System (GPS) and is the second navigational system in operation with global coverage and of comparable precision.
Satellite navigation devices supporting both GPS and GLONASS have more satellites available, meaning positions can be fixed more quickly and accurately, especially in built-up areas where buildings may obscure the view to some satellites. Owing to its higher orbital inclination, GLONASS supplementation of GPS systems also improves positioning in high latitudes (near the poles).
Development of GLONASS began in the Soviet Union in 1976. Beginning on 12 October 1982, numerous rocket launches added satellites to the system until the completion of the constellation in 1995. In 2001, after a decline in capacity during the late 1990s, the restoration of the system was made a government priority, and funding increased substantially. GLONASS is the most expensive program of the Roscosmos, consuming a third of its budget in 2010.
By 2010, GLONASS had achieved full coverage of Russia's territory. In October 2011, the full orbital constellation of 24 satellites was restored, enabling full global coverage. The GLONASS satellites' designs have undergone several upgrades, with the latest version, GLONASS-K2, launched in 2023.
System description
GLONASS is a global navigation satellite system, providing real-time position and velocity determination for military and civilian users. The satellites are located in middle circular orbit at an altitude of about 19,100 km, with a 64.8° inclination and an orbital period of 11 hours and 16 minutes (every 17 revolutions, done in 8 sidereal days, a satellite passes over the same location). GLONASS's orbit makes it especially suited for usage in high latitudes (north or south), where getting a GPS signal can be problematic.
The constellation operates in three orbital planes, with eight evenly spaced satellites on each. A fully operational constellation with global coverage consists of 24 satellites, while 18 satellites are necessary for covering the territory of Russia. To get a position fix the receiver must be in the range of at least four satellites.
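The stated orbital period can be cross-checked with Kepler's third law. In the quick sketch below, the ~19,100 km altitude and the rounded Earth constants are assumptions, so the result is approximate:

```python
import math

# Rough check of the GLONASS orbital period via Kepler's third law.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m
ALTITUDE = 19_100_000.0     # assumed nominal GLONASS altitude, m

a = R_EARTH + ALTITUDE                            # semi-major axis of a circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
period_min = period_s / 60
# ~674 min, i.e. about 11 h 14 min -- within a couple of minutes of the
# stated 11 h 16 min, the gap coming from the rounded input values.
```

The same semi-major axis also reproduces the "17 revolutions in 8 sidereal days" repeat pattern quoted above, since 8 sidereal days divided by 17 is roughly 11.25 hours.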
Signal
FDMA
GLONASS satellites transmit two types of signals: open standard-precision signal L1OF/L2OF, and obfuscated high-precision signal L1SF/L2SF.
The signals use similar DSSS encoding and binary phase-shift keying (BPSK) modulation as in GPS signals. All GLONASS satellites transmit the same code as their standard-precision signal; however each transmits on a different frequency using a 15-channel frequency-division multiple access (FDMA) technique spanning either side from 1602.0 MHz, known as the L1 band. The center frequency is 1602 MHz + n × 0.5625 MHz, where n is a satellite's frequency channel number (n=−6,...,0,...,6, previously n=0,...,13). Signals are transmitted in a 38° cone, using right-hand circular polarization, at an EIRP between 25 and 27 dBW (316 to 500 watts). Note that the 24-satellite constellation is accommodated with only 15 channels by using identical frequency channels to support antipodal (opposite side of planet in orbit) satellite pairs, as these satellites are never both in view of an Earth-based user at the same time.
The L2 band signals use the same FDMA as the L1 band signals, but transmit straddling 1246 MHz with the center frequency 1246 MHz + n × 0.4375 MHz, where n spans the same range as for L1. In the original GLONASS design, only obfuscated high-precision signal was broadcast in the L2 band, but starting with GLONASS-M, an additional civil reference signal L2OF is broadcast with an identical standard-precision code to the L1OF signal.
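The two FDMA center-frequency formulas translate directly into code — a small sketch using the formulas exactly as given in the text; the function names are illustrative:

```python
# GLONASS FDMA carrier center frequencies for a given frequency
# channel number n, per the formulas in the text.
def l1_frequency_mhz(n):
    """L1 band: 1602 MHz + n * 0.5625 MHz."""
    return 1602.0 + n * 0.5625

def l2_frequency_mhz(n):
    """L2 band: 1246 MHz + n * 0.4375 MHz."""
    return 1246.0 + n * 0.4375
```

For example, channel n = 0 sits exactly at 1602.0 MHz on L1 and 1246.0 MHz on L2; antipodal satellite pairs reuse the same n, which is how the full constellation fits into the limited channel set.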
The open standard-precision signal is generated with modulo-2 addition (XOR) of a 511 kbit/s pseudo-random ranging code, a 50 bit/s navigation message, and an auxiliary 100 Hz meander sequence (Manchester code), all generated using a single time/frequency oscillator. The pseudo-random code is generated with a 9-stage shift register operating with a period of 1 millisecond.
The navigational message is modulated at 50 bits per second. The superframe of the open signal is 7500 bits long and consists of 5 frames of 30 seconds, taking 150 seconds (2.5 minutes) to transmit the continuous message. Each frame is 1500 bits long and consists of 15 strings of 100 bits (2 seconds for each string), with 85 bits (1.7 seconds) for data and check-sum bits, and 15 bits (0.3 seconds) for time mark. Strings 1-4 provide immediate data for the transmitting satellite, and are repeated every frame; the data include ephemeris, clock and frequency offsets, and satellite status. Strings 5-15 provide non-immediate data (i.e. almanac) for each satellite in the constellation, with frames I-IV each describing five satellites, and frame V describing remaining four satellites.
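The message timing above is internally consistent, as a quick arithmetic check shows (all constants are those quoted in the text):

```python
# Timing of the GLONASS open-signal navigation message, derived from
# the bit counts and the 50 bit/s data rate given in the text.
BIT_RATE = 50            # bits per second
STRING_BITS = 100        # bits per string
FRAME_STRINGS = 15       # strings per frame
SUPERFRAME_FRAMES = 5    # frames per superframe

frame_bits = STRING_BITS * FRAME_STRINGS          # 1500 bits
superframe_bits = frame_bits * SUPERFRAME_FRAMES  # 7500 bits
string_seconds = STRING_BITS / BIT_RATE           # 2 s per string
frame_seconds = frame_bits / BIT_RATE             # 30 s per frame
superframe_seconds = superframe_bits / BIT_RATE   # 150 s = 2.5 min
```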
The ephemerides are updated every 30 minutes using data from the Ground Control segment; they use Earth Centred Earth Fixed (ECEF) Cartesian coordinates in position and velocity, and include lunisolar acceleration parameters. The almanac uses modified orbital elements (Keplerian elements) and is updated daily.
The more accurate high-precision signal is available for authorized users, such as the Russian military, yet unlike the United States P(Y) code, which is modulated by an encrypting W code, the GLONASS restricted-use codes are broadcast in the clear using only security through obscurity. The details of the high-precision signal have not been disclosed. The modulation (and therefore the tracking strategy) of the data bits on the L2SF code has recently changed from unmodulated to 250 bit/s burst at random intervals. The L1SF code is modulated by the navigation data at 50 bit/s without a Manchester meander code.
The high-precision signal is broadcast in phase quadrature with the standard-precision signal, effectively sharing the same carrier wave, but with a ten-times-higher bandwidth than the open signal. The message format of the high-precision signal remains unpublished, although attempts at reverse-engineering indicate that the superframe is composed of 72 frames, each containing 5 strings of 100 bits and taking 10 seconds to transmit, with total length of 36 000 bits or 720 seconds (12 minutes) for the whole navigational message. The additional data are seemingly allocated to critical Lunisolar acceleration parameters and clock correction terms.
Accuracy
At peak efficiency, the standard-precision signal offers horizontal positioning accuracy within 5–10 metres, vertical positioning within 15 metres, a velocity vector measuring within 10 cm/s, and timing within 200 nanoseconds, all based on measurements from four first-generation satellites simultaneously; newer satellites such as GLONASS-M improve on this.
GLONASS uses a coordinate datum named "PZ-90" (Earth Parameters 1990 – Parametry Zemli 1990), in which the precise location of the North Pole is given as an average of its position from 1990 to 1995. This is in contrast to the GPS's coordinate datum, WGS 84, which uses the location of the North Pole in 1984. As of 17 September 2007, the PZ-90 datum has been updated to version PZ-90.02, which differs from WGS 84 by less than 40 cm in any given direction. Since 31 December 2013, version PZ-90.11 is being broadcast, which is aligned to the International Terrestrial Reference System and Frame 2008 at epoch 2011.0 at the centimetre level, but ideally a conversion to ITRF2008 should be done.
CDMA
Since 2008, new CDMA signals are being researched for use with GLONASS.
The interface control documents for GLONASS CDMA signals were published in August 2016.
According to GLONASS developers, there will be three open and two restricted CDMA signals. The open signal L3OC is centered at 1202.025 MHz and uses BPSK(10) modulation for both data and pilot channels; the ranging code transmits at 10.23 million chips per second, modulated onto the carrier frequency using QPSK with in-phase data and quadrature pilot. The data is error-coded with 5-bit Barker code and the pilot with 10-bit Neuman-Hoffman code.
Open L1OC and restricted L1SC signals are centered at 1600.995 MHz, and open L2OC and restricted L2SC signals are centered at 1248.06 MHz, overlapping with GLONASS FDMA signals. Open signals L1OC and L2OC use time-division multiplexing to transmit pilot and data signals, with BPSK(1) modulation for data and BOC(1,1) modulation for pilot; wide-band restricted signals L1SC and L2SC use BOC (5, 2.5) modulation for both data and pilot, transmitted in quadrature phase to the open signals; this places peak signal strength away from the center frequency of narrow-band open signals.
Binary phase-shift keying (BPSK) is used by standard GPS and GLONASS signals. Binary offset carrier (BOC) is the modulation used by Galileo, modernized GPS, and BeiDou-2.
The navigational message of CDMA signals is transmitted as a sequence of text strings. The message has variable size: each pseudo-frame usually includes six strings and contains ephemerides for the current satellite (string types 10, 11, and 12 in a sequence) and part of the almanac for three satellites (three strings of type 20). To transmit the full almanac for all current 24 satellites, a superframe of 8 pseudo-frames is required. In the future, the superframe will be expanded to 10 pseudo-frames of data to cover the full 30 satellites.
The message can also contain Earth's rotation parameters, ionosphere models, long-term orbit parameters for GLONASS satellites, and COSPAS-SARSAT messages. The system time marker is transmitted with each string; UTC leap second correction is achieved by shortening or lengthening (zero-padding) the final string of the day by one second, with abnormal strings being discarded by the receiver.
The strings have a version tag to facilitate forward compatibility: future upgrades to the message format will not break older equipment, which will continue to work by ignoring new data (as long as the constellation still transmits old string types), but up-to-date equipment will be able to use additional information from newer satellites.
The navigational message of the L3OC signal is transmitted at 100 bit/s, with each string of symbols taking 3 seconds (300 bits). A pseudo-frame of 6 strings takes 18 seconds (1800 bits) to transmit. A superframe of 8 pseudo-frames is 14,400 bits long and takes 144 seconds (2 minutes 24 seconds) to transmit the full almanac.
The navigational message of the L1OC signal is transmitted at 100 bit/s. The string is 250 bits long and takes 2.5 seconds to transmit. A pseudo-frame is 1500 bits (15 seconds) long, and a superframe is 12,000 bits or 120 seconds (2 minutes).
The L2OC signal does not transmit a navigational message, only the pseudo-range codes.
The Glonass-K1 test satellite launched in 2011 introduced the L3OC signal. Glonass-M satellites produced since 2014 (s/n 755+) will also transmit the L3OC signal for testing purposes.
Enhanced Glonass-K1 and Glonass-K2 satellites, to be launched from 2023, will feature a full suite of modernized CDMA signals in the existing L1 and L2 bands, which includes L1SC, L1OC, L2SC, and L2OC, as well as the L3OC signal. Glonass-K2 series should gradually replace existing satellites starting from 2023, when Glonass-M launches will cease.
Glonass-KM satellites will be launched by 2025. Additional open signals are being studied for these satellites, based on frequencies and formats used by existing GPS, Galileo, and Beidou/COMPASS signals:
open signal L1OCM using BOC(1,1) modulation centered at 1575.42 MHz, similar to modernized GPS signal L1C, Galileo signal E1, and Beidou/COMPASS signal B1C;
open signal L5OCM using BPSK(10) modulation centered at 1176.45 MHz, similar to the GPS "Safety of Life" (L5), Galileo signal E5a, and Beidou/COMPASS signal B2a;
open signal L3OCM using BPSK(10) modulation centered at 1207.14 MHz, similar to Galileo signal E5b and Beidou/COMPASS signal B2b.
Such an arrangement will allow easier and cheaper implementation of multi-standard GNSS receivers.
With the introduction of CDMA signals, the constellation will be expanded to 30 active satellites by 2025; this may require eventual deprecation of FDMA signals. The new satellites will be deployed into three additional planes, bringing the total to six planes from the current three. Deployment will be aided by the System for Differential Correction and Monitoring (SDCM), a GNSS augmentation system based on a network of ground-based control stations and the communication satellites Luch 5A and Luch 5B.
Six additional Glonass-V satellites, using Tundra orbits in three orbital planes, will be launched starting in 2025; this regional high-orbit segment will offer increased regional availability and a 25% improvement in precision over the Eastern Hemisphere, similar to the Japanese QZSS system and Beidou-1. The new satellites will form two ground traces with an inclination of 64.8°, eccentricity of 0.072, period of 23.9 hours, and ascending node longitudes of 60° and 120°. Glonass-V vehicles are based on the Glonass-K platform and will broadcast the new CDMA signals only. Previously, Molniya, geosynchronous, or inclined orbits were also under consideration for the regional segment.
Satellites
The main contractor of the GLONASS program is Joint Stock Company Information Satellite Systems Reshetnev (ISS Reshetnev, formerly called NPO-PM). The company, located in Zheleznogorsk, is the designer of all GLONASS satellites, in cooperation with the Institute for Space Device Engineering (РНИИ КП) and the Russian Institute of Radio Navigation and Time. Serial production of the satellites is accomplished by the company Production Corporation Polyot in Omsk.
Over the three decades of development, the satellite designs have gone through numerous improvements, and can be divided into three generations: the original GLONASS (since 1982), GLONASS-M (since 2003) and GLONASS-K (since 2011). Each GLONASS satellite has a GRAU designation 11F654, and each of them also has the military "Cosmos-NNNN" designation.
First generation
The true first generation of GLONASS (also called Uragan) satellites were all three-axis stabilized vehicles, generally weighing and were equipped with a modest propulsion system to permit relocation within the constellation. Over time they were upgraded to Block IIa, IIb, and IIv vehicles, with each block containing evolutionary improvements.
Six Block IIa satellites were launched in 1985–1986 with improved time and frequency standards over the prototypes, and increased frequency stability. These spacecraft also demonstrated a 16-month average operational lifetime. Block IIb spacecraft, with a two-year design lifetime, appeared in 1987; a total of 12 were launched, but half were lost in launch vehicle accidents. The six spacecraft that made it to orbit worked well, operating for an average of nearly 22 months.
Block IIv was the most prolific of the first generation. Used exclusively from 1988 to 2000, and still included in launches through 2005, a total of 56 satellites were launched. The design life was three years, but numerous spacecraft exceeded it, with one late model lasting 68 months, nearly double its design life.
Block II satellites were typically launched three at a time from the Baikonur Cosmodrome using Proton-K Blok-DM2 or Proton-K Briz-M boosters. The only exception was when, on two launches, an Etalon geodetic reflector satellite was substituted for a GLONASS satellite.
Second generation
The second generation of satellites, known as Glonass-M, were developed beginning in 1990 and first launched in 2003. These satellites possess a substantially increased lifetime of seven years and weigh slightly more at . They are approximately in diameter and high, with a solar array span of for an electrical power generation capability of 1600 watts at launch. The aft payload structure houses 12 primary antennas for L-band transmissions. Laser corner-cube reflectors are also carried to aid in precise orbit determination and geodetic research. On-board cesium clocks provide the local clock source. A total of 52 Glonass-M satellites have been produced and launched.
A total of 41 second generation satellites were launched through the end of 2013. As with the previous generation, the second generation spacecraft were launched three at a time using Proton-K Blok-DM2 or Proton-K Briz-M boosters. Some were launched alone with Soyuz-2-1b/Fregat.
In July 2015, ISS Reshetnev announced that it had completed the last GLONASS-M (No. 61) spacecraft and it was putting it in storage waiting for launch, along with eight previously built satellites.
On 22 September 2017, the GLONASS-M No. 52 satellite went into operation, and the orbital grouping again increased to 24 space vehicles.
Third generation
GLONASS-K is a substantial improvement over the previous generation: it is the first unpressurised GLONASS satellite, with a much reduced mass of versus the of GLONASS-M. It has an operational lifetime of 10 years, compared to the 7-year lifetime of the second-generation GLONASS-M. It will transmit more navigation signals to improve the system's accuracy, including new CDMA signals in the L3 and L5 bands, which will use modulation similar to modernized GPS, Galileo, and BeiDou. The Glonass-K series consists of 26 satellites with satellite indices 65–98 and is widely used by the Russian military.
The new satellite's advanced equipment, made solely from Russian components, will allow the doubling of GLONASS' accuracy. As with the previous satellites, these are 3-axis stabilized, nadir pointing with dual solar arrays. The first GLONASS-K satellite was successfully launched on 26 February 2011.
Due to their weight reduction, GLONASS-K spacecraft can be launched in pairs from the Plesetsk Cosmodrome launch site using the substantially lower-cost Soyuz-2.1b boosters, or six at a time from the Baikonur Cosmodrome using Proton-K Briz-M launch vehicles.
Ground control
The ground control segment of GLONASS is almost entirely located within former Soviet Union territory, except for several stations in Brazil and one in Nicaragua.
The GLONASS ground segment consists of:
a system control centre;
five Telemetry, Tracking and Command centers;
two Laser Ranging Stations; and
ten Monitoring and Measuring Stations.
Receivers
Companies producing GNSS receivers making use of GLONASS:
Furuno
JAVAD GNSS, Inc
Septentrio
Topcon
C-Nav
Magellan Navigation
Novatel
ComNav technology Ltd.
Leica Geosystems
Hemisphere GNSS
Trimble Inc
u-blox
NPO Progress describes a receiver called GALS-A1, which combines GPS and GLONASS reception.
SkyWave Mobile Communications manufactures an Inmarsat-based satellite communications terminal that uses both GLONASS and GPS.
Some of the latest receivers in the Garmin eTrex line also support GLONASS (along with GPS). Garmin also produces a standalone Bluetooth receiver, the GLO for Aviation, which combines GPS, WAAS and GLONASS.
Various smartphones from 2011 onwards have integrated GLONASS capability in addition to their pre-existing GPS receivers, with the intention of reducing signal acquisition periods by allowing the device to pick up more satellites than with a single-network receiver, including devices from:
Xiaomi
Sony Ericsson
ZTE
Huawei
Samsung
Apple (since iPhone 4S, concurrently with GPS)
HTC
LG
Motorola
Nokia
Status
Availability
The GLONASS constellation status is:
The system requires 18 satellites for continuous navigation services covering all of Russia, and 24 satellites to provide services worldwide. The GLONASS system covers 100% of worldwide territory.
On 2 April 2014, the system experienced a technical failure that resulted in practical unavailability of the navigation signal for around 12 hours.
On 14–15 April 2014, nine GLONASS satellites experienced a technical failure due to software problems.
On 19 February 2016, three GLONASS satellites experienced a technical failure: the batteries of GLONASS-738 exploded, the batteries of GLONASS-737 were depleted, and GLONASS-736 experienced a stationkeeping failure due to human error during maneuvering. GLONASS-737 and GLONASS-736 were expected to be operational again after maintenance, and one new satellite (GLONASS-751) to replace GLONASS-738 was expected to complete commissioning in early March 2016. The full capacity of the satellite group was expected to be restored in the middle of March 2016.
After the launching of two new satellites and maintenance of two others, the full capacity of the satellite group was restored.
Accuracy
According to data from the Russian System of Differential Correction and Monitoring, the precision of GLONASS navigation definitions (for p=0.95) for latitude and longitude were with a mean number of navigation space vehicles (NSV) of 7–8 (depending on station). In comparison, the precision of GPS navigation definitions over the same period were with a mean number of NSV of 6–11 (depending on station).
Some modern receivers are able to use both GLONASS and GPS satellites together, providing greatly improved coverage in urban canyons and a very fast time to fix due to over 50 satellites being available. In indoor, urban-canyon or mountainous areas, accuracy can be greatly improved over using GPS alone. When using both navigation systems simultaneously, the precision of GLONASS/GPS navigation definitions were with a mean number of NSV of 14–19 (depending on station).
In May 2009, Anatoly Perminov, then director of Roscosmos, stated that actions were being undertaken to expand GLONASS's constellation and to improve the ground segment in order to increase the navigation definition of GLONASS to an accuracy of by 2011. In particular, the latest satellite design, GLONASS-K, has the ability to double the system's accuracy once introduced. The system's ground segment is also to undergo improvements. As of early 2012, sixteen positioning ground stations were under construction in Russia and in the Antarctic at the Bellingshausen and Novolazarevskaya bases. New stations will be built around the Southern Hemisphere, from Brazil to Indonesia. Together, these improvements are expected to bring GLONASS's accuracy to 0.6 m or better by 2020. The setup of a GLONASS receiving station in the Philippines is also under negotiation.
History
See also
Aviaconversiya – a Russian satellite navigation firm
BeiDou – Chinese counterpart
Era-glonass – GLONASS-based system of emergency response
Galileo – European Union's counterpart
Global Positioning System – American counterpart
List of GLONASS satellites
Multilateration – the mathematical technique used for positioning
NAVIC – Indian counterpart
Tsikada – a Russian satellite navigation system
Notes
References
Standards
Bibliography
GLONASS Interface Control Document, Edition 5.1, 2008 (backup)
GLONASS Interface Control Document, Version 4.0, 1998
External links
Official GLONASS web page
Navigation satellite constellations
Space program of Russia
Space program of the Soviet Union
Soviet inventions
Wireless locating
Earth observation satellites of the Soviet Union
Military equipment introduced in the 1980s
Lua is a lightweight, high-level, multi-paradigm programming language designed mainly for embedded use in applications. Lua is cross-platform software, since the interpreter of compiled bytecode is written in ANSI C, and Lua has a relatively simple C application programming interface (API) to embed it into applications.
Lua originated in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages, but more complicated or domain-specific features were not included; rather, it included mechanisms for extending the language, allowing programmers to implement such features. As Lua was intended to be a general embeddable extension language, the designers of Lua focused on improving its speed, portability, extensibility and ease-of-use in development.
History
Lua was created in 1993 by Roberto Ierusalimschy, Luiz Henrique de Figueiredo and Waldemar Celes, members of the Computer Graphics Technology Group (Tecgraf) at the Pontifical Catholic University of Rio de Janeiro, in Brazil.
From 1977 until 1992, Brazil had a policy of strong trade barriers (called a market reserve) for computer hardware and software, believing that Brazil could and should produce its own hardware and software. In that climate, Tecgraf's clients could not afford, either politically or financially, to buy customized software from abroad; under the market reserve, clients would have to go through a complex bureaucratic process to prove their needs couldn't be met by Brazilian companies. Those reasons led Tecgraf to implement the basic tools it needed from scratch.
Lua's predecessors were the data-description/configuration languages Simple Object Language (SOL) and data-entry language (DEL). They had been independently developed at Tecgraf in 1992–1993 to add some flexibility into two different projects (both were interactive graphical programs for engineering applications at Petrobras company). There was a lack of any flow-control structures in SOL and DEL, and Petrobras felt a growing need to add full programming power to them.
In The Evolution of Lua, the language's authors wrote:
Lua 1.0 was designed in such a way that its object constructors, being then slightly different from the current light and flexible style, incorporated the data-description syntax of SOL (hence the name Lua: Sol meaning "Sun" in Portuguese, and Lua meaning "Moon"). Lua syntax for control structures was mostly borrowed from Modula (if, while, repeat/until), but also had taken influence from CLU (multiple assignments and multiple returns from function calls, as a simpler alternative to reference parameters or explicit pointers), C++ ("neat idea of allowing a local variable to be declared only where we need it"), SNOBOL and AWK (associative arrays). In an article published in Dr. Dobb's Journal, Lua's creators also state that LISP and Scheme with their single, ubiquitous data-structure mechanism (the list) were a major influence on their decision to develop the table as the primary data structure of Lua.
Lua semantics have been increasingly influenced by Scheme over time, especially with the introduction of anonymous functions and full lexical scoping. Several features were added in new Lua versions.
Versions of Lua prior to version 5.0 were released under a license similar to the BSD license. From version 5.0 onwards, Lua has been licensed under the MIT License. Both are permissive free software licences and are almost identical.
Features
Lua is commonly described as a "multi-paradigm" language, providing a small set of general features that can be extended to fit different problem types. Lua does not contain explicit support for inheritance, but allows it to be implemented with metatables. Similarly, Lua allows programmers to implement namespaces, classes and other related features using its single table implementation; first-class functions allow the employment of many techniques from functional programming and full lexical scoping allows fine-grained information hiding to enforce the principle of least privilege.
In general, Lua strives to provide simple, flexible meta-features that can be extended as needed, rather than supply a feature-set specific to one programming paradigm. As a result, the base language is light; the full reference interpreter is only about 247 kB compiled and easily adaptable to a broad range of applications.
As a dynamically typed language intended for use as an extension language or scripting language, Lua is compact enough to fit on a variety of host platforms. It supports only a small number of atomic data structures such as Boolean values, numbers (double-precision floating point and 64-bit integers by default) and strings. Typical data structures such as arrays, sets, lists and records can be represented using Lua's single native data structure, the table, which is essentially a heterogeneous associative array.
Lua implements a small set of advanced features such as first-class functions, garbage collection, closures, proper tail calls, coercion (automatic conversion between string and number values at run time), coroutines (cooperative multitasking) and dynamic module loading.
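Of these features, coercion and coroutines can be shown in a few lines; a minimal sketch:

```lua
-- Coercion: strings holding numerals convert automatically in arithmetic.
assert("10" + 5 == 15)

-- Coroutines: cooperative multitasking via explicit yield/resume.
local co = coroutine.create(function(a)
  local b = coroutine.yield(a + 1) -- pause; pass a+1 back to the resumer
  return b * 2                     -- runs when resumed a second time
end)
local ok, v = coroutine.resume(co, 10) -- starts the coroutine with a = 10
assert(ok and v == 11)                 -- value passed out by yield
ok, v = coroutine.resume(co, 5)        -- yield returns 5, so b = 5
assert(ok and v == 10)                 -- return value of the coroutine
```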
Syntax
The classic "Hello, World!" program can be written as follows, with or without parentheses:
print("Hello, World!")
print "Hello, World!"
The declaration of a variable without a value:
local variable
The declaration of a variable with a value of 10:
local students = 10
A comment in Lua starts with a double-hyphen and runs to the end of the line, similar to Ada, Eiffel, Haskell, SQL and VHDL. Multi-line strings and comments are marked with double square brackets.
-- Single line comment
--[[
Multi-line comment
--]]
The factorial function is implemented in this example:
function factorial(n)
local x = 1
for i = 2, n do
x = x * i
end
return x
end
Control flow
Lua has one type of conditional test: the if then end construct, with optional else and elseif then execution-control clauses.
The generic if then end statement requires all three keywords:
if condition then
--statement body
end
An example of an if statement
if x ~= 10 then
print(x)
end
The else keyword may be added with an accompanying statement block to control execution when the if condition evaluates to false:
if condition then
--statement body
else
--statement body
end
An example of an if else statement
if x == 10 then
print(10)
else
print(x)
end
Execution may also be controlled according to multiple conditions using the elseif then keywords:
if condition then
--statement body
elseif condition then
--statement body
else -- optional
--optional default statement body
end
An example of an if elseif else statement
if x == y then
print("x = y")
elseif x == z then
print("x = z")
else -- optional
print("x does not equal any other variable")
end
Lua has four types of conditional loops: the while loop, the repeat loop (similar to a do while loop), the numeric for loop and the generic for loop.
--condition = true
while condition do
--statements
end
repeat
--statements
until condition
for i = first, last, delta do --delta may be negative, allowing the for loop to count down or up
--statements
--example: print(i)
end
This generic for loop would iterate over the table _G using the standard iterator function pairs, until it returns nil:
for key, value in pairs(_G) do
print(key, value)
end
Loops can also be nested (put inside of another loop).
local grid = {
{ 11, 12, 13 },
{ 21, 22, 23 },
{ 31, 32, 33 }
}
for y, row in pairs(grid) do
for x, value in pairs(row) do
print(x, y, value)
end
end
Functions
Lua's treatment of functions as first-class values is shown in the following example, where the print function's behavior is modified:
do
local oldprint = print
-- Store current print function as oldprint
function print(s)
--[[ Redefine print function. The usual print function can still be used
through oldprint. The new one has only one argument.]]
oldprint(s == "foo" and "bar" or s)
end
end
Any future calls to print will now be routed through the new function, and because of Lua's lexical scoping, the old print function will only be accessible by the new, modified print.
Lua also supports closures, as demonstrated below:
function addto(x)
-- Return a new function that adds x to the argument
return function(y)
--[[ When we refer to the variable x, which is outside the current
scope and whose lifetime would be shorter than that of this anonymous
function, Lua creates a closure.]]
return x + y
end
end
fourplus = addto(4)
print(fourplus(3)) -- Prints 7
--This can also be achieved by calling the function in the following way:
print(addto(4)(3))
--[[ This is because we are calling the returned function from 'addto(4)' with the argument '3' directly.
This also helps to reduce overhead and improve performance when called iteratively.]]
A new closure for the variable x is created every time addto is called, so that each new anonymous function returned will always access its own x parameter. The closure is managed by Lua's garbage collector, just like any other object.
Tables
Tables are the most important data structures (and, by design, the only built-in composite data type) in Lua and are the foundation of all user-created types. They are associative arrays with the addition of automatic numeric keys and special syntax.
A table is a set of key and data pairs, where the data is referenced by key; in other words, it is a hashed heterogeneous associative array.
Tables are created using the {} constructor syntax.
a_table = {} -- Creates a new, empty table
Tables are always passed by reference (see Call by sharing).
A key (index) can be any value except nil and NaN, including functions.
a_table = {x = 10} -- Creates a new table, with one entry mapping "x" to the number 10.
print(a_table["x"]) -- Prints the value associated with the string key, in this case 10.
b_table = a_table
b_table["x"] = 20 -- The value in the table has been changed to 20.
print(b_table["x"]) -- Prints 20.
print(a_table["x"]) -- Also prints 20, because a_table and b_table both refer to the same table.
A table is often used as structure (or record) by using strings as keys. Because such use is very common, Lua features a special syntax for accessing such fields.
point = { x = 10, y = 20 } -- Create new table
print(point["x"]) -- Prints 10
print(point.x) -- Has exactly the same meaning as line above. The easier-to-read dot notation is just syntactic sugar.
By using a table to store related functions, it can act as a namespace.
Point = {}
Point.new = function(x, y)
return {x = x, y = y} -- return {["x"] = x, ["y"] = y}
end
Point.set_x = function(point, x)
point.x = x -- point["x"] = x;
end
Tables are automatically assigned a numerical key, enabling them to be used as an array data type. The first automatic index is 1 rather than 0 as it is for many other programming languages (though an explicit index of 0 is allowed).
A numeric key 1 is distinct from a string key "1".
array = { "a", "b", "c", "d" } -- Indices are assigned automatically.
print(array[2]) -- Prints "b". Automatic indexing in Lua starts at 1.
print(#array) -- Prints 4. # is the length operator for tables and strings.
array[0] = "z" -- Zero is a legal index.
print(#array) -- Still prints 4, as Lua arrays are 1-based.
The length of a table t is defined to be any integer index n such that t[n] is not nil and t[n+1] is nil; moreover, if t[1] is nil, n can be zero. For a regular array, with non-nil values from 1 to a given n, its length is exactly that n, the index of its last value. If the array has "holes" (that is, nil values between other non-nil values), then #t can be any of the indices that directly precede a nil value (that is, it may consider any such nil value as the end of the array).
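This border rule can be observed directly; a short sketch:

```lua
local t = {"a", "b", "c"}
assert(#t == 3)            -- regular array: length is the index of its last value

t[5] = "e"                 -- creates a hole at index 4
local n = #t               -- now either 3 or 5: both are valid borders
assert(n == 3 or n == 5)
assert(t[n] ~= nil and t[n + 1] == nil) -- #t always lands on a border
```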
ExampleTable =
{
{1, 2, 3, 4},
{5, 6, 7, 8}
}
print(ExampleTable[1][3]) -- Prints "3"
print(ExampleTable[2][4]) -- Prints "8"
A table can be an array of objects.
function Point(x, y) -- "Point" object constructor
return { x = x, y = y } -- Creates and returns a new object (table)
end
array = { Point(10, 20), Point(30, 40), Point(50, 60) } -- Creates array of points
-- array = { { x = 10, y = 20 }, { x = 30, y = 40 }, { x = 50, y = 60 } };
print(array[2].y) -- Prints 40
Using a hash map to emulate an array is normally slower than using an actual array; however, Lua tables are optimized for use as arrays to help avoid this issue.
Metatables
Extensible semantics is a key feature of Lua, and the metatable concept allows powerful customization of tables. The following example demonstrates an "infinite" table. For any n, fibs[n] will give the n-th Fibonacci number using dynamic programming and memoization.
fibs = { 1, 1 } -- Initial values for fibs[1] and fibs[2].
setmetatable(fibs, {
__index = function(values, n) --[[__index is a function predefined by Lua,
it is called if key "n" does not exist.]]
values[n] = values[n - 1] + values[n - 2] -- Calculate and memoize fibs[n].
return values[n]
end
})
Object-oriented programming
Although Lua does not have a built-in concept of classes, object-oriented programming can be emulated using functions and tables. An object is formed by putting methods and fields in a table. Inheritance (both single and multiple) can be implemented with metatables, delegating nonexistent methods and fields to a parent object.
There is no such concept as "class" with these techniques; rather, prototypes are used, similar to Self or JavaScript. New objects are created either with a factory method (that constructs new objects from scratch) or by cloning an existing object.
Creating a basic vector object:
local Vector = {}
local VectorMeta = { __index = Vector}
function Vector.new(x, y, z) -- The constructor
return setmetatable({x = x, y = y, z = z}, VectorMeta)
end
function Vector.magnitude(self) -- Another method
return math.sqrt(self.x^2 + self.y^2 + self.z^2)
end
local vec = Vector.new(0, 1, 0) -- Create a vector
print(vec.magnitude(vec)) -- Call a method (output: 1)
print(vec.x) -- Access a member variable (output: 0)
Here, the metatable's __index field tells Lua to look for an element in the Vector table if it is not present in the vec table. vec.magnitude(vec), which is equivalent to vec:magnitude(), first looks in the vec table for the magnitude element. The vec table does not have a magnitude element, but its metatable delegates to the Vector table for the magnitude element when it is not found in the vec table.
Lua provides some syntactic sugar to facilitate object orientation. To declare member functions inside a prototype table, one can use function table:func(args), which is equivalent to function table.func(self, args). Calling class methods also makes use of the colon: object:func(args) is equivalent to object.func(object, args).
That in mind, here is a corresponding class with syntactic sugar:
local Vector = {}
Vector.__index = Vector
function Vector:new(x, y, z) -- The constructor
-- Since the function definition uses a colon,
-- its first argument is "self" which refers
-- to "Vector"
return setmetatable({x = x, y = y, z = z}, self)
end
function Vector:magnitude() -- Another method
-- Reference the implicit object using self
return math.sqrt(self.x^2 + self.y^2 + self.z^2)
end
local vec = Vector:new(0, 1, 0) -- Create a vector
print(vec:magnitude()) -- Call a method (output: 1)
print(vec.x) -- Access a member variable (output: 0)
Inheritance
Lua supports using metatables to give Lua class inheritance. In this example, we allow vectors to have their values multiplied by a constant in a derived class.
local Vector = {}
Vector.__index = Vector
function Vector:new(x, y, z) -- The constructor
-- Here, self refers to whatever class's "new"
-- method we call. In a derived class, self will
-- be the derived class; in the Vector class, self
-- will be Vector
return setmetatable({x = x, y = y, z = z}, self)
end
function Vector:magnitude() -- Another method
-- Reference the implicit object using self
return math.sqrt(self.x^2 + self.y^2 + self.z^2)
end
-- Example of class inheritance
local VectorMult = {}
VectorMult.__index = VectorMult
setmetatable(VectorMult, Vector) -- Make VectorMult a child of Vector
function VectorMult:multiply(value)
self.x = self.x * value
self.y = self.y * value
self.z = self.z * value
return self
end
local vec = VectorMult:new(0, 1, 0) -- Create a vector
print(vec:magnitude()) -- Call a method (output: 1)
print(vec.y) -- Access a member variable (output: 1)
vec:multiply(2) -- Multiply all components of vector by 2
print(vec.y) -- Access member again (output: 2)
Lua also supports multiple inheritance; __index can either be a function or a table. Operator overloading can also be done: Lua metatables can have elements such as __add, __sub, and so on.
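As an illustration of operator overloading (a sketch separate from the Vector example above), an __add metamethod lets + work on tables:

```lua
local Vec2 = {}
Vec2.__index = Vec2
Vec2.__add = function(a, b)            -- invoked for the + operator
  return setmetatable({x = a.x + b.x, y = a.y + b.y}, Vec2)
end
function Vec2.new(x, y)
  return setmetatable({x = x, y = y}, Vec2)
end

local v = Vec2.new(1, 2) + Vec2.new(3, 4)
assert(v.x == 4 and v.y == 6)          -- component-wise sum
```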
Implementation
Lua programs are not interpreted directly from the textual Lua file, but are compiled into bytecode, which is then run on the Lua virtual machine (VM). The compiling process is typically invisible to the user and is performed during run-time, especially when a just-in-time compilation (JIT) compiler is used, but it can be done offline to increase loading performance or reduce the memory footprint of the host environment by leaving out the compiler. Lua bytecode can also be produced and executed from within Lua, using the dump function from the string library and the load/loadstring/loadfile functions. Lua version 5.3.4 is implemented in approximately 24,000 lines of C code.
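The round trip through bytecode can be done from Lua itself; a minimal sketch using string.dump and load (Lua 5.2+; in 5.1, loadstring plays the same role):

```lua
local f = assert(load("return 2 + 3")) -- compile a source chunk to a function
local bytecode = string.dump(f)        -- serialize its bytecode to a string
local g = assert(load(bytecode))       -- reload the precompiled chunk
assert(g() == 5)                       -- behaves like the original function
```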
Like most CPUs, and unlike most virtual machines (which are stack-based), the Lua VM is register-based, and therefore more closely resembles actual hardware designs. The register architecture both avoids excessive copying of values and reduces the total number of instructions per function. The virtual machine of Lua 5 is one of the first register-based pure VMs to see wide use. Parrot and Android's Dalvik are two other well-known register-based VMs. PC Scheme's VM was also register-based.
This example is the bytecode listing of the factorial function defined above (as shown by the luac 5.1 compiler):
function <factorial.lua:1,7> (9 instructions, 36 bytes at 0x8063c60)
1 param, 6 slots, 0 upvalues, 6 locals, 2 constants, 0 functions
1 [2] LOADK 1 -1 ; 1
2 [3] LOADK 2 -2 ; 2
3 [3] MOVE 3 0
4 [3] LOADK 4 -1 ; 1
5 [3] FORPREP 2 1 ; to 7
6 [4] MUL 1 1 5
7 [3] FORLOOP 2 -2 ; to 6
8 [6] RETURN 1 2
9 [7] RETURN 0 1
C API
Lua is intended to be embedded into other applications, and provides a C API for this purpose. The API is divided into two parts: the Lua core and the Lua auxiliary library. The Lua API's design eliminates the need for manual reference counting (management) in C code, unlike Python's API. The API, like the language, is minimalist. Advanced functions are provided by the auxiliary library, which consists largely of preprocessor macros which assist with complex table operations.
The Lua C API is stack based. Lua provides functions to push and pop most simple C data types (integers, floats, etc.) to and from the stack, and functions to manipulate tables through the stack. The Lua stack is somewhat different from a traditional stack; the stack can be indexed directly, for example. Negative indices indicate offsets from the top of the stack. For example, −1 is the top (most recently pushed value), while positive indices indicate offsets from the bottom (oldest value). Marshalling data between C and Lua functions is also done using the stack. To call a Lua function, arguments are pushed onto the stack, and then the lua_call is used to call the actual function. When writing a C function to be directly called from Lua, the arguments are read from the stack.
Here is an example of calling a Lua function from C:
#include <stdio.h>
#include <lua.h> // Lua main library (lua_*)
#include <lauxlib.h> // Lua auxiliary library (luaL_*)
int main(void)
{
    // create a Lua state
    lua_State *L = luaL_newstate();

    // load and execute a string
    if (luaL_dostring(L, "function foo (x,y) return x+y end")) {
        lua_close(L);
        return -1;
    }

    // push value of global "foo" (the function defined above)
    // to the stack, followed by integers 5 and 3
    lua_getglobal(L, "foo");
    lua_pushinteger(L, 5);
    lua_pushinteger(L, 3);

    lua_call(L, 2, 1); // call a function with two arguments and one return value
    printf("Result: %d\n", (int)lua_tointeger(L, -1)); // print integer value of item at stack top
    lua_pop(L, 1); // return stack to original state
    lua_close(L); // close Lua state
    return 0;
}
Running this example gives:
$ cc -o example example.c -llua
$ ./example
Result: 8
The C API also provides some special tables, located at various "pseudo-indices" in the Lua stack. Prior to Lua 5.2, the globals table (the main namespace, accessible as _G from within Lua) was located at LUA_GLOBALSINDEX. There is also a registry located at LUA_REGISTRYINDEX where C programs can store Lua values for later retrieval.
Modules
Besides the standard library (core) modules, it is possible to write extensions using the Lua API. Extension modules are shared objects which can be used to extend the functions of the interpreter by providing native facilities to Lua scripts. Lua scripts may load extension modules using require, just like modules written in Lua itself, or with package.loadlib. When a C library is loaded via require, Lua looks for a function named luaopen_ followed by the module name and calls it; it acts like any C function callable from Lua and generally returns a table filled with methods. A growing set of modules termed rocks is available through a package management system named LuaRocks, in the spirit of CPAN, RubyGems and Python eggs. Prewritten Lua bindings exist for most popular programming languages, including other scripting languages. For C++, there are a number of template-based approaches and some automatic binding generators.
Applications
In video game development, Lua is widely used as a scripting language, mainly due to its perceived ease of embedding, fast execution, and short learning curve. Notable games which use Lua include Roblox, Garry's Mod, World of Warcraft, Payday 2, Phantasy Star Online 2, Dota 2, Crysis, and many others. Some games that do not natively support Lua programming or scripting have this function added by mods, as ComputerCraft does for Minecraft. Lua is also used in non-video game software, such as Adobe Lightroom, Moho, iClone, Aerospike, and some system software in FreeBSD and NetBSD, and as a template scripting language on MediaWiki via the Scribunto extension.
In 2003, a poll conducted by GameDev.net showed Lua was the most popular scripting language for game programming. On 12 January 2012, Lua was announced as a winner of the Front Line Award 2011 from the magazine Game Developer in the category Programming Tools.
Many non-game applications also use Lua for extensibility, such as LuaTeX, an implementation of the TeX type-setting language, Redis, a key-value database, ScyllaDB, a wide-column store, Neovim, a text editor, Nginx, a web server, Wireshark, a network packet analyzer and Pure Data, a visual audio programming language (through the pdlua extension).
Through the Scribunto extension, Lua is available as a server-side scripting language in the MediaWiki software that runs Wikipedia and other wikis. Among its uses is allowing the integration of data from Wikidata into articles.
Derived languages
Languages that compile to Lua
MoonScript is a dynamic, whitespace-sensitive scripting language inspired by CoffeeScript, which is compiled into Lua. Instead of using do and end (or { and }) to delimit sections of code, it uses line breaks and indentation. A notable use of MoonScript is the video game distribution website Itch.io.
Haxe supports compiling to some Lua targets, including Lua 5.1-5.3 and LuaJIT 2.0 and 2.1.
Fennel, a Lisp dialect that targets Lua.
Urn, a Lisp dialect built on Lua.
Amulet, an ML-like functional programming language whose compiler emits Lua files.
Dialects
LuaJIT, a just-in-time compiler for Lua 5.1.
Luau developed by Roblox Corporation, a derivative of Lua 5.1 with gradual typing, additional features and a focus on performance.
Ravi, a JIT-enabled Lua 5.3 language with optional static typing. JIT is guided by type information.
Shine, a fork of LuaJIT with many extensions, including a module system and a macro system.
Glua, a modified version embedded into the game Garry's Mod as its scripting language.
Teal, a statically typed Lua dialect written in Lua.
In addition, the Lua users community provides some power patches on top of the reference C implementation.
See also
Comparison of programming languages
Notes
References
Further reading
(The 1st ed. is available online.)
Chapters 6 and 7 are dedicated to Lua, while others look at software in Brazil more broadly.
Interview with Roberto Ierusalimschy.
How the embeddability of Lua impacted its design.
Lua papers and theses
External links
Lua Users community
Lua Forum
LuaDist
Lua Rocks - Package manager
Projects in Lua
Articles with example C code
Brazilian inventions
Cross-platform free software
Cross-platform software
Dynamic programming languages
Dynamically typed programming languages
Embedded systems
Free and open source interpreters
Free computer libraries
Free software programmed in C
Object-oriented programming languages
Pontifical Catholic University of Rio de Janeiro
Programming languages
Programming languages created in 1993
Prototype-based programming languages
Register-based virtual machines
Scripting languages
Software using the MIT license
National Institutes of Health
The National Institutes of Health (NIH) is the primary agency of the United States government responsible for biomedical and public health research. It was founded in 1887 and is now part of the United States Department of Health and Human Services. Many NIH facilities are located in Bethesda, Maryland, and other nearby suburbs of the Washington metropolitan area, with other primary facilities in the Research Triangle Park in North Carolina and smaller satellite facilities located around the United States. The NIH conducts its own scientific research through the NIH Intramural Research Program (IRP) and provides major biomedical research funding to non-NIH research facilities through its Extramural Research Program.
The IRP had 1,200 principal investigators and more than 4,000 postdoctoral fellows in basic, translational, and clinical research, making it the largest biomedical research institution in the world, while, as of 2003, the extramural arm provided 28% of biomedical research funding spent annually in the U.S., or about US$26.4 billion.
The NIH comprises 27 separate institutes and centers of different biomedical disciplines and is responsible for many scientific accomplishments, including the discovery of fluoride to prevent tooth decay, the use of lithium to manage bipolar disorder, and the creation of vaccines against hepatitis, Haemophilus influenzae (HIB), and human papillomavirus (HPV).
In 2019, the NIH was ranked number two in the world, behind Harvard University, for biomedical sciences in the Nature Index, which measured the largest contributors to papers published in a subset of leading journals from 2015 to 2018.
History
Origins
In 1887, a laboratory for the study of bacteria, the Hygienic Laboratory, was established within the Marine Hospital Service, which at the time was expanding its functions beyond the system of Marine Hospitals into quarantine and research programs. It was initially located at the New York Marine Hospital on Staten Island. In 1891, it moved to the top floor of the Butler Building in Washington, D.C. In 1904, it moved again to a new campus at the Old Naval Observatory, which grew to include five major buildings.
In 1901, the Division of Scientific Research was formed, which included the Hygienic Laboratory as well as other research offices of the Marine Hospital Service. In 1912, the Marine Hospital Service became the Public Health Service (PHS). In 1922, PHS established a Special Cancer Investigations laboratory at Harvard Medical School. This marked the beginning of a partnership with universities.
In 1930, the Hygienic Laboratory was re-designated as the National Institute of Health by the Ransdell Act, and was given $750,000 to construct two NIH buildings at the Old Naval Observatory campus. In 1937, the NIH absorbed the rest of the Division of Scientific Research, of which it was formerly part.
In 1938, the NIH moved to its current campus in Bethesda, Maryland. Over the next few decades, Congress would markedly increase funding of the NIH, and various institutes and centers within the NIH were created for specific research programs. In 1944, the Public Health Service Act was approved, and the National Cancer Institute became a division of the NIH. In 1948, the name changed from National Institute of Health to National Institutes of Health.
Later history
In the 1960s, virologist and cancer researcher Chester M. Southam injected HeLa cancer cells into patients at the Jewish Chronic Disease Hospital. When three doctors resigned after refusing to inject patients without their consent, the experiment gained considerable media attention. The NIH was a major source of funding for Southam's research and had required all research involving human subjects to obtain their consent prior to any experimentation. Upon investigating all of their grantee institutions, the NIH discovered that the majority of them did not protect the rights of human subjects. From then on, the NIH has required all grantee institutions to approve any research proposals involving human experimentation with review boards.
In 1967, the Division of Regional Medical Programs was created to administer grants for research for heart disease, cancer, and strokes. That same year, the NIH director lobbied the White House for increased federal funding in order to increase research and the speed with which health benefits could be brought to the people. An advisory committee was formed to oversee the further development of the NIH and its research programs. By 1971 cancer research was in full force and President Nixon signed the National Cancer Act, initiating a National Cancer Program, President's Cancer Panel, National Cancer Advisory Board, and 15 new research, training, and demonstration centers.
Funding for the NIH has often been a source of contention in Congress, serving as a proxy for the political currents of the time. In 1992, the NIH encompassed nearly 1 percent of the federal government's operating budget and controlled more than 50 percent of all funding for health research, and 85 percent of all funding for health studies in universities. While government funding for research in other disciplines has been increasing at a rate similar to inflation since the 1970s, research funding for the NIH nearly tripled through the 1990s and early 2000s, but has remained relatively stagnant since then.
By the 1990s, the NIH committee focus had shifted to DNA research and launched the Human Genome Project.
Leadership
The NIH Office of the Director is the central office responsible for setting policy for the NIH, and for planning, managing, and coordinating the programs and activities of all NIH components. The NIH Director plays an active role in shaping the agency's activities and outlook. The Director is responsible for providing leadership to the Institutes and Centers by identifying needs and opportunities, especially in efforts involving multiple Institutes. Within the Director's Office is the Division of Program Coordination, Planning and Strategic Initiatives with 12 divisions including:
Office of AIDS Research
Office of Research on Women's Health
Office of Disease Prevention
Sexual and Gender Minority Research Office
Tribal Health Research Office
Office of Program Evaluation and Performance
The Agency Intramural Research Integrity Officer "is directly responsible for overseeing the resolution of all research misconduct allegations involving intramural research, and for promoting research integrity within the NIH Office of Intramural Research (OIR)." There is a Division of Extramural Activities, which has its own Director. The Office of Ethics has its own Director, as does the Office of Global Research.
Locations and campuses
Intramural research is primarily conducted at the main campus in Bethesda, Maryland, and Rockville, Maryland, and the surrounding communities.
The Bayview Campus in Baltimore, Maryland houses the research programs of the National Institute on Aging, National Institute on Drug Abuse, and National Human Genome Research Institute with nearly 1,000 scientists and support staff. The Frederick National Laboratory in Frederick, MD and the nearby Riverside Research Park, houses many components of the National Cancer Institute, including the Center for Cancer Research, Office of Scientific Operations, Management Operations Support Branch, the division of Cancer Epidemiology and Genetics and the division of Cancer Treatment and Diagnosis.
The National Institute of Environmental Health Sciences is located in the Research Triangle region of North Carolina.
Other ICs have satellite locations in addition to operations at the main campus. The National Institute of Allergy and Infectious Diseases maintains its Rocky Mountain Labs in Hamilton, Montana, with an emphasis on BSL3 and BSL4 laboratory work. NIDDK operates the Phoenix Epidemiology and Clinical Research Branch in Phoenix, Arizona.
Research
As of 2017, 153 scientists receiving financial support from the NIH have been awarded a Nobel Prize and 195 have been awarded a Lasker Award.
Intramural and extramural research
The NIH devotes 10% of its funding to research within its own facilities (intramural research) and gives more than 80% of its funding in research grants to extramural (outside) researchers. Of this extramural funding, a certain percentage (2.8% in 2014) must be granted to small businesses under the SBIR/STTR program. In recent years, the extramural funding has consisted of about 50,000 grants to more than 325,000 researchers at more than 3,000 institutions, a rate of granting that has remained reasonably steady at roughly 47,000 grants to 2,700 organizations. NIH spending (not including temporary funding from the American Recovery and Reinvestment Act of 2009) has covered clinical research, genetics-related research, prevention research, cancer, and biotechnology.
Public Access Policy
In 2008 a Congressional mandate called for investigators funded by the NIH to submit an electronic version of their final manuscripts to the National Library of Medicine's research repository, PubMed Central (PMC), no later than 12 months after the official date of publication. The NIH Public Access Policy was the first public access mandate for a U.S. public funding agency.
Economic return
In 2000, the Joint Economic Committee of Congress reported that NIH research, funded at $16 billion a year at the time, had been estimated by some econometric studies to yield a rate of return of 25 to 40 percent per year by reducing the economic cost of illness in the US. It found that of the 21 drugs with the highest therapeutic impact on society introduced between 1965 and 1992, public funding was "instrumental" for 15. As of 2011, NIH-supported research had helped to discover 153 new FDA-approved drugs, vaccines, and new indications for drugs in the 40 years prior. One study found NIH funding aided either directly or indirectly in developing the drugs or drug targets for all of the 210 FDA-approved drugs from 2010 to 2016. In 2015, Pierre Azoulay et al. estimated that $10 million invested in research generated two to three new patents.
Notable discoveries and developments
Since its inception, the NIH intramural research program has been a source of many pivotal scientific and medical discoveries. Some of these include:
1908: George W. McCoy's discovery that rodents were a reservoir of bubonic plague.
1911: George W. McCoy, Charles W. Chapin, William B. Wherry, and B. H. Lamb described the previously unknown tularemia.
1924: Roscoe R. Spencer and Ralph R. Parker developed a vaccine against Rocky Mountain spotted fever.
1930: Sanford M. Rosenthal developed a treatment for mercury poisoning used widely before the development of dimercaptoethanol.
1943: Wilton R. Earle pioneered the cell culture process and published a paper describing the production of malignancy in vitro, Katherine K. Sanford developed the first clone from an isolated cancer cell, and Virginia J. Evans devised a medium that supported growth of cells in vitro.
1940s–1950s: Bernard Horecker and colleagues described the pentose phosphate pathway.
1950s: Julius Axelrod discovered a new class of enzymes, cytochrome P450 monooxygenases, a fundamental of drug metabolism.
1950: Earl Stadtman discovered phosphotransacetylose, elucidating the role of acetyl CoA in fatty acid metabolism.
1960s: Discovered the first human slow virus disease, kuru, which is a degenerative, fatal infection of the central nervous system. This discovery of a new mechanism for infectious diseases revolutionized thinking in microbiology and neurology.
1960s: Defined the mechanisms that regulate noradrenaline, one of the most important neurotransmitters in the brain.
1960s: Developed the first licensed rubella vaccine and the first test for rubella antibodies for large scale testing.
1960s: Developed an effective combination drug regimen for Hodgkin's lymphoma.
1960s: Discovery that tooth decay is caused by bacteria.
1970s: Developed the assay for human chorionic gonadotropin that evolved into the home pregnancy tests.
1970s: Described the hormonal cycle involved in menstruation.
1980s: Determined the complete structure of the IgE receptor that is involved in allergic reactions.
1990s: Hari Reddi's identification and purification of bone morphogenetic proteins
1990s: First trial of gene therapy in humans.
NIH Toolbox
In September 2006, the NIH Blueprint for Neuroscience Research started a contract for the NIH Toolbox for the Assessment of Neurological and Behavioral Function to develop a set of state-of-the-art measurement tools to enhance collection of data in large cohort studies. Scientists from more than 100 institutions nationwide contributed. In September 2012, the NIH Toolbox was rolled out to the research community. NIH Toolbox assessments are based, where possible, on Item Response Theory and adapted for testing by computer.
Database of Genotypes and Phenotypes
NIH sponsors the Database of Genotypes and Phenotypes (dbGaP), a repository of information produced by studies investigating the interaction of genotype and phenotype. The information includes phenotypes, molecular assay data, analyses, and documents. Summary-level data are available to the general public, whereas individual-level data are accessible only to researchers. According to the City Journal, the NIH denies access to attributes such as intelligence, education, and health on the grounds that studying their genetic basis would be stigmatizing.
Coronavirus vaccine
The NIH partnered with Moderna in 2020 during the COVID-19 pandemic to develop a vaccine. The final phase of testing began on July 27 with up to 30,000 volunteers assigned to one of two groups—one receiving the mRNA-1273 vaccine and the other receiving salt water injections—and continued until there had been approximately 100 cases of COVID-19 among the participants. In 2021, the NIH contributed $4,395,399 towards the Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) program.
Collaboration with Wuhan Institute of Virology
Following the outbreak of the COVID-19 pandemic, the NIH-funded EcoHealth Alliance has been the subject of controversy and increased scrutiny due to its ties to the Wuhan Institute of Virology (WIV)—which has been at the center of speculation since early 2020 that SARS-CoV-2 may have escaped in a lab incident. Under political pressure, the NIH withdrew funding to EcoHealth Alliance in July 2020.
NIH Interagency Pain Research Coordinating Committee
On February 13, 2012, the National Institutes of Health (NIH) announced a new group of individuals assigned to research pain. The committee is composed of researchers from different organizations and focuses on coordinating "pain research activities across the federal government with the goals of stimulating pain research collaboration… and providing an important avenue for public involvement" ("Members of new", 2012). With such a committee, research is conducted not by each individual organization or person but by a collaborating group, increasing the information available; the hope is that this will expand pain management options, including techniques for those with arthritis. In 2020, Beth Darnall, an American scientist and pain psychologist, was appointed as a scientific member of the group.
Funding
Budget and politics
To allocate funds, the NIH must first obtain its budget from Congress. This process begins with institute and center (IC) leaders collaborating with scientists to determine the most important and promising research areas within their fields. IC leaders discuss research areas with NIH management who then develops a budget request for continuing projects, new research proposals, and new initiatives from the Director. The NIH submits its budget request to the Department of Health and Human Services (HHS), and the HHS considers this request as a portion of its budget. Many adjustments and appeals occur between the NIH and HHS before the agency submits NIH's budget request to the Office of Management and Budget (OMB). OMB determines what amounts and research areas are approved for incorporation into the President's final budget. The President then sends the NIH's budget request to Congress in February for the next fiscal year's allocations. The House and Senate Appropriations Subcommittees deliberate and by fall, Congress usually appropriates funding. This process takes approximately 18 months before the NIH can allocate any actual funds.
When a government shutdown occurs, the NIH continues to treat people who are already enrolled in clinical trials, but does not start any new clinical trials and does not admit new patients who are not already enrolled in a clinical trial, except for the most critically ill, as determined by the NIH Director.
Historical funding
Over the last century, the responsibility to allocate funding has shifted from the OD and Advisory Committee to the individual ICs, and Congress has increasingly set aside funding for particular causes. In the 1970s, Congress began to earmark funds specifically for cancer research, and in the 1980s a significant amount was allocated for AIDS/HIV research.
During the 1980s, President Reagan repeatedly tried to cut funding for research, only for Congress to partly restore it. The political contention over NIH funding slowed the nation's response to the AIDS epidemic: although AIDS was reported in newspaper articles from 1981, no funding was provided for research on the disease. In 1984, National Cancer Institute scientists reported findings that "variants of a human cancer virus called HTLV-III are the primary cause of acquired immunodeficiency syndrome (AIDS)," a new epidemic then gripping the nation.
From 1993 to 2001, the NIH budget doubled. For a time afterward, funding essentially remained flat, and for seven years following the financial crisis, the NIH budget struggled to keep up with inflation.
In 1999, Congress increased the NIH's budget by $2.3 billion to $17.2 billion in 2000. In 2009, Congress again increased the NIH budget, to $31 billion in 2010. In 2017 and 2018, Congress passed laws with bipartisan support that substantially increased appropriations for the NIH, which reached $37.3 billion annually in FY2018.
Extramural research
Researchers at universities or other institutions outside of the NIH can apply for research project grants (RPGs) from the NIH. There are numerous funding mechanisms for different project types (e.g., basic research, clinical research, etc.) and career stages (e.g., early career, postdoc fellowships, etc.). The NIH regularly issues "requests for applications" (RFAs), e.g., on specific programmatic priorities or timely medical problems (such as Zika virus research in early 2016). In addition, researchers can apply for "investigator-initiated grants" whose subject is determined by the scientist.
The total number of applicants has increased substantially, from about 60,000 investigators who applied during the period from 1999 to 2003 to slightly fewer than 90,000 who applied during the period from 2011 to 2015. Due to this, the "cumulative investigator rate", that is, the likelihood that unique investigators are funded over a 5-year window, has declined from 43% to 31%.
R01 grants are the most common funding mechanism and include investigator-initiated projects. The roughly 27,000 to 29,000 R01 applications had a funding success of 17-19% during 2012 through 2014. Similarly, the 13,000 to 14,000 R21 applications had a funding success of 13-14% during the same period. In FY 2016, the total number of grant applications received by the NIH was 54,220, with approximately 19% being awarded funding. Institutes have varying funding rates. The National Cancer Institute awarded funding to 12% of applicants, while the National Institute of General Medical Sciences awarded funding to 30% of applicants.
Funding criteria
The NIH employs five broad decision criteria in its funding policy. First, ensure the highest quality of scientific research by employing an arduous peer review process. Second, seize opportunities that have the greatest potential to yield new knowledge and that will lead to better prevention and treatment of disease. Third, maintain a diverse research portfolio in order to capitalize on major discoveries in a variety of fields such as cell biology, genetics, physics, engineering, and computer science. Fourth, address public health needs according to the disease burden (e.g., prevalence and mortality). And fifth, construct and support the scientific infrastructure (e.g., well-equipped laboratories and safe research facilities) necessary to conduct research.
Advisory committee members advise the institute on policy and procedures affecting the external research programs and provide a second level of review for all grant and cooperative agreement applications considered by the Institute for funding.
Gender and sex bias
In 2014, it was announced that the NIH is directing scientists to perform their experiments with both female and male animals, or cells derived from females as well as males if they are studying cell cultures, and that the NIH would take the balance of each study design into consideration when awarding grants. The announcement also stated that this rule would probably not apply when studying sex-specific diseases (for example, ovarian or testicular cancer).
Stakeholders
General public
One of the goals of the NIH is to "expand the base in medical and associated sciences in order to ensure a continued high return on the public investment in research." The NIH is funded by taxpayer dollars, making taxpayers the primary beneficiaries of advances in research. Thus, the general public is a key stakeholder in the decisions resulting from NIH funding policy. However, some members of the general public do not feel their interests are being represented, and individuals have formed patient advocacy groups to represent their own interests.
Extramural researchers and scientists
Important stakeholders of the NIH funding policy include researchers and scientists. Extramural researchers differ from intramural researchers in that they are not employed by the NIH but may apply for funding. Throughout the history of the NIH, the amount of funding received has increased, but the proportion to each IC remains relatively constant. The individual ICs then decide who will receive the grant money and how much will be allotted.
Policy changes on who receives funding significantly affect researchers. For example, the NIH has recently attempted to approve more research grant applications from first-time NIH R01 applicants and young scientists. To encourage the participation of young scientists, the application process has been shortened and made easier. In addition, first-time applicants are being offered more funding for their research grants than those who have received grants in the past.
Commercial partnerships
In 2011 and 2012, the Department of Health and Human Services Office of Inspector General published a series of audit reports revealing that throughout the fiscal years 2000–2010, institutes under the aegis of the NIH did not comply with the time and amount requirements specified in appropriations statutes, in awarding federal contracts to commercial partners, committing the federal government to tens of millions of dollars of expenditure ahead of appropriation of funds from Congress.
Institutes and centers
The NIH is composed of 27 separate institutes and centers that conduct and coordinate biomedical research. These are:
National Cancer Institute (NCI)
National Eye Institute (NEI)
National Heart, Lung, and Blood Institute (NHLBI)
National Human Genome Research Institute (NHGRI)
National Institute on Aging (NIA)
National Institute on Alcohol Abuse and Alcoholism (NIAAA)
National Institute of Allergy and Infectious Diseases (NIAID)
National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS)
National Institute of Biomedical Imaging and Bioengineering (NIBIB)
National Institute of Child Health and Human Development (NICHD)
National Institute on Deafness and Other Communication Disorders (NIDCD)
National Institute of Dental and Craniofacial Research (NIDCR)
National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)
National Institute on Drug Abuse (NIDA)
National Institute of Environmental Health Sciences (NIEHS)
National Institute of General Medical Sciences (NIGMS)
National Institute of Mental Health (NIMH)
National Institute on Minority Health and Health Disparities (NIMHD)
National Institute of Neurological Disorders and Stroke (NINDS)
National Institute of Nursing Research (NINR)
National Library of Medicine (NLM)
Center for Information Technology (CIT)
Center for Scientific Review (CSR)
Fogarty International Center (FIC)
National Center for Advancing Translational Sciences (NCATS)
National Center for Complementary and Integrative Health (NCCIH)
NIH Clinical Center (NIH CC)
In addition, the National Center for Research Resources operated from April 13, 1962, to December 23, 2011.
ARPA-H
The Advanced Research Projects Agency for Health (ARPA-H) is an entity formerly within the Office of the United States Secretary of Health and Human Services, which was created by Congress in the Consolidated Appropriations Act, 2022. Modeled after DARPA, HSARPA, IARPA, and ARPA-E, it is intended to pursue unconventional research projects through methods not typically used by federal agencies or private sector companies. Secretary Xavier Becerra delegated ARPA-H to the NIH on May 24, 2022. It received $1 billion in appropriations in 2022 and $1.5 billion in 2023, and it requested $2.5 billion for 2024.
Consensus Development Program
The Consensus Development Program is an initiative focused on gathering expert opinions to establish standards and guidelines in various fields, especially in health and medicine. Developed as a collaborative effort by organizations such as the NIH, the program assembles panels of specialists who assess available evidence on critical topics and form recommendations to guide clinical practice and policy. This method helps ensure that healthcare decisions are informed by the latest scientific research and expert consensus.
List of previous directors
See also
List of institutes and centers of the National Institutes of Health
United States Public Health Service
Foundation for the National Institutes of Health
National Institutes of Health Stroke Scale
Heads of International Research Organizations
NIH Toolbox
National Institute of Food and Agriculture
Biomedical Engineering and Instrumentation Program (BEIP)
References
External links
National Institutes of Health in the Federal Register
Regional Medical Programs Collection of information on NIH's Regional Medical Programs, from the National Library of Medicine
Medical research institutes in Maryland
International research institutes
Life sciences industry
Nursing research
Cancer organizations based in the United States
Buildings and structures in Bethesda, Maryland
Hospitals in Maryland
Science and technology in Maryland
Government agencies established in 1887
Hospitals established in 1887
1887 establishments in Maryland
Tourist attractions in Montgomery County, Maryland | National Institutes of Health | [
"Biology"
] | 5,401 | [
"Life sciences industry"
] |
46,177 | https://en.wikipedia.org/wiki/Epidemic%20typhus | Epidemic typhus, also known as louse-borne typhus, is a form of typhus so named because the disease often causes epidemics following wars and natural disasters where civil life is disrupted. Epidemic typhus is spread to people through contact with infected body lice, in contrast to endemic typhus which is usually transmitted by fleas.
Though typhus has been responsible for millions of deaths throughout history, it is still considered a rare disease that occurs mainly in populations living in unhygienic, extremely overcrowded conditions. It is rarest in industrialized countries, occurring primarily in the colder, mountainous regions of central and east Africa, as well as Central and South America. The causative organism is Rickettsia prowazekii, transmitted by the human body louse (Pediculus humanus corporis). Untreated typhus cases have a fatality rate of approximately 40%.
Epidemic typhus should not be confused with murine typhus, which is more endemic to the United States, particularly Southern California and Texas. This form of typhus has similar symptoms but is caused by Rickettsia typhi, is less deadly, and has different vectors for transmission.
Signs and symptoms
Symptoms of this disease typically begin within 2 weeks of contact with the causative organism. Signs/Symptoms may include:
Fever
Chills
Headache
Confusion
Cough
Rapid breathing
Body/muscle aches
Rash
Nausea
Vomiting
After 5–6 days, a macular skin eruption develops: first on the upper trunk and spreading to the rest of the body (rarely to the face, palms, or soles of the feet, however).
Brill–Zinsser disease, first described by Nathan Brill in 1913 at Mount Sinai Hospital in New York City, is a mild form of epidemic typhus that recurs in someone after a long period of latency (similar to the relationship between chickenpox and shingles). This recurrence often arises in times of relative immunosuppression, which is often in the context of a person suffering malnutrition or other illnesses. In combination with poor sanitation and hygiene in times of social chaos and upheaval, which enable a greater density of lice, this reactivation is why typhus generates epidemics in such conditions.
Complications
Complications are as follows:
Myocarditis
Endocarditis
Mycotic aneurysm
Pneumonia
Pancreatitis
Kidney or bladder infections
Acute renal failure
Meningitis
Encephalitis
Myelitis
Septic shock
Transmission
Feeding on a human who carries the bacterium infects the louse. R. prowazekii grows in the louse's gut and is excreted in its feces. The louse transmits the disease by biting an uninfected human, who scratches the louse bite (which itches) and rubs the feces into the wound. The incubation period is one to two weeks. R. prowazekii can remain viable and virulent in dried louse feces for many days. Typhus will eventually kill the louse, though the bacterium remains viable for many weeks in the dead louse.
Epidemic typhus has historically occurred during times of war and deprivation. For example, typhus killed millions of prisoners in German Nazi concentration camps during World War II. The unhygienic conditions in camps such as Auschwitz, Theresienstadt, and Bergen-Belsen allowed diseases such as typhus to flourish. Situations in the twenty-first century with potential for a typhus epidemic would include refugee camps during a major famine or natural disaster. In the periods between outbreaks, when human-to-human transmission occurs less often, the flying squirrel serves as a zoonotic reservoir for the Rickettsia prowazekii bacterium.
In 1916, Henrique da Rocha Lima proved that the bacterium Rickettsia prowazekii was the agent responsible for typhus. He named it after his colleague Stanislaus von Prowazek, who had become infected alongside him while investigating an outbreak and subsequently died, and after H. T. Ricketts, an American pathologist who had also died of typhus while studying it. Once these crucial facts were recognized, Rudolf Weigl in 1930 was able to fashion a practical and effective vaccine production method: he ground up the insides of infected lice that had been drinking blood. The vaccine was, however, very dangerous to produce, and carried a high likelihood of infection for those working on it.
A safer mass-production-ready method using egg yolks was developed by Herald R. Cox in 1938. This vaccine was widely available and used extensively by 1943.
Diagnosis
Diagnosis is by serological testing (IFA or ELISA) or by PCR; these typically become positive about 10 days after infection.
Treatment
The infection is treated with antibiotics. Intravenous fluids and oxygen may be needed to stabilize the patient. There is a significant disparity between the untreated and treated mortality rates: 10–60% untreated versus close to 0% when treated with antibiotics within 8 days of initial infection. Tetracycline, chloramphenicol, and doxycycline are commonly used.
Some of the simplest methods of prevention and treatment focus on preventing infestation of body lice. Completely changing the clothing, washing the infested clothing in hot water, and in some cases also treating recently used bedsheets all help to prevent typhus by removing potentially infected lice. Clothes left unworn and unwashed for 7 days also result in the death of both lice and their eggs, as they have no access to a human host. Another form of lice prevention requires dusting infested clothing with a powder consisting of 10% DDT, 1% malathion, or 1% permethrin, which kill lice and their eggs.
Other preventive measures for individuals are to avoid unhygienic, extremely overcrowded areas where the causative organisms can jump from person to person. In addition, they are warned to keep a distance from larger rodents that carry lice, such as rats, squirrels, or opossums.
History
History of outbreaks
Before 19th century
During the second year of the Peloponnesian War (430 BC), the city-state of Athens in ancient Greece had an epidemic, known as the Plague of Athens, which killed, among others, Pericles and his two elder sons. The plague returned twice more, in 429 BC and in the winter of 427/6 BC. Epidemic typhus is proposed as a strong candidate for the cause of this disease outbreak, supported by both medical and scholarly opinions.
The first description of typhus was probably given in 1083 at La Cava abbey near Salerno, Italy. In 1546, Girolamo Fracastoro, an Italian physician, described typhus in his famous treatise on contagion, De Contagione et Contagiosis Morbis.
Typhus was carried to mainland Europe by soldiers who had been fighting on Cyprus. The first reliable description of the disease appears during the siege of the Emirate of Granada by the Catholic Monarchs in 1489 during the Granada War. These accounts include descriptions of fever and red spots over the arms, back, and chest, progressing to delirium, gangrenous sores, and the stench of rotting flesh. During the siege, the Catholics lost 3,000 men to enemy action, but an additional 17,000 died of typhus.
Typhus was also common in prisons (and in crowded conditions where lice spread easily), where it was known as Gaol fever or Jail fever. Gaol fever often occurs when prisoners are frequently huddled together in dark, filthy rooms. Imprisonment until the next term of court was often equivalent to a death sentence. Typhus was so infectious that prisoners brought before the court sometimes infected the court itself. Following the Black Assize of Oxford 1577, over 510 died from epidemic typhus, including Speaker Robert Bell, Lord Chief Baron of the Exchequer. The outbreak that followed, between 1577 and 1579, killed about 10% of the English population.
During the Lent assize held at Taunton (1730), typhus caused the death of the Lord Chief Baron of the Exchequer, the High Sheriff of Somerset, the sergeant, and hundreds of other persons. During a time when there were 241 capital offences, more prisoners died from 'gaol fever' than were put to death by all the public executioners in the realm. In 1759 an English authority estimated that each year a quarter of the prisoners had died from gaol fever. In London, typhus frequently broke out among the ill-kept prisoners of Newgate Gaol and moved into the general city population.
19th century
Epidemics occurred in the British Isles and throughout Europe, for instance, during the English Civil War, the Thirty Years' War, and the Napoleonic Wars. Many historians believe that the typhus outbreak among Napoleon's troops is the real reason why he stalled his military campaign into Russia, rather than starvation or the cold. A major epidemic occurred in Ireland between 1816 and 1819, and again in the late 1830s. Another major typhus epidemic occurred during the Great Irish Famine between 1846 and 1849. The Irish typhus spread to England, where it was sometimes called "Irish fever" and was noted for its virulence. It killed people of all social classes since lice were endemic and inescapable, but it hit particularly hard in the lower or "unwashed" social strata. It was carried to North America by the many Irish refugees who fled the famine. In Canada, the 1847 North American typhus epidemic killed more than 20,000 people, mainly Irish immigrants in fever sheds and other forms of quarantine, who had contracted the disease aboard coffin ships. As many as 900,000 deaths have been attributed to the typhus fever during the Crimean War in 1853–1856, and 270,000 to the 1866 Finnish typhus epidemic.
In the United States, a typhus epidemic struck Philadelphia in 1837. The son of Franklin Pierce died in 1843 during a typhus epidemic in Concord, New Hampshire. Several epidemics occurred in Baltimore, Memphis, and Washington, D.C. between 1865 and 1873. Typhus fever was also a significant killer during the American Civil War, although typhoid fever was the more prevalent cause of US Civil War "camp fever." Typhoid is a completely different disease from typhus. On both sides, more men typically died of disease than of wounds.
Rudolph Carl Virchow, a physician, anthropologist, and historian attempted to control an outbreak of typhus in Upper Silesia and wrote a 190-page report about it. He concluded that the solution to the outbreak did not lie in individual treatment or by providing small changes in housing, food or clothing, but rather in widespread structural changes to directly address the issue of poverty. Virchow's experience in Upper Silesia led to his observation that "Medicine is a social science". His report led to changes in German public health policy.
20th century
Typhus was endemic in Poland and several neighboring countries prior to World War I (1914–1918). During and shortly after the war, epidemic typhus caused up to three million deaths in Russia, and several million citizens also died in Poland and Romania. From 1914, many troops, prisoners, and even doctors were infected, and at least 150,000 died from typhus in Serbia, 50,000 of whom were prisoners. Delousing stations were established for troops on the Western Front, but the disease ravaged the armies of the Eastern Front. Fatalities were generally between 10 and 40 percent of those infected, and the disease was a major cause of death for those nursing the sick. During World War I and the Russian Civil War between the White and Red armies, the typhus epidemic caused 2–3 million deaths out of 20–30 million cases in Russia between 1918 and 1922.
Typhus caused hundreds of thousands of deaths during World War II. It struck the German Army during Operation Barbarossa, the invasion of Russia, in 1941. In 1942 and 1943 typhus hit French North Africa, Egypt and Iran particularly hard. Typhus epidemics killed inmates in the Nazi concentration camps and death camps such as Auschwitz, Dachau, Theresienstadt, and Bergen-Belsen. Footage shot at Bergen-Belsen concentration camp shows the mass graves for typhus victims. Anne Frank, at age 15, and her sister Margot both died of typhus in the camps. Even larger epidemics in the post-war chaos of Europe were averted only by the widespread use of the newly discovered DDT to kill lice on the millions of refugees and displaced persons.
Following the development of a vaccine during World War II, Western Europe and North America have been able to prevent epidemics. Epidemics since then have usually occurred in Eastern Europe, the Middle East, and parts of Africa, particularly Ethiopia, where Naval Medical Research Unit Five worked with the government on research attempting to eradicate the disease.
In one of its first major outbreaks since World War II, epidemic typhus reemerged in 1995 in a jail in N'Gozi, Burundi. This outbreak followed the start of the Burundian Civil War in 1993, which caused the displacement of 760,000 people. Refugee camps were crowded and unsanitary, and often far from towns and medical services.
21st century
A 2005 study found seroprevalence of R. prowazekii antibodies in homeless populations in two shelters in Marseille, France. The study noted the "hallmarks of epidemic typhus and relapsing fever".
History of vaccines
Major developments for typhus vaccines started during World War I, as typhus caused high mortality and threatened the health and readiness of soldiers on the battlefield. Vaccines for typhus, like other vaccines of the time, were classified as either live or killed vaccines. Live vaccines were typically an injection of live agent, while killed vaccines were cultures of an agent chemically inactivated prior to use.
French researchers attempted to create a live vaccine for classical, louse-borne typhus, but these attempts proved unsuccessful. They turned instead to murine typhus, which at the time was viewed as a less severe alternative to classical typhus. Four versions of a live vaccine cultivated from murine typhus were tested on a large scale in 1934.
While the French were making advancements with live vaccines, other European countries were working to develop killed vaccines. During World War II, there were three kinds of potentially useful killed vaccines. All three killed vaccines relied on the cultivation of Rickettsia prowazekii, the organism responsible for typhus. The first attempt at a killed vaccine was developed by Germany, using the Rickettsia prowazekii found in louse feces. The vaccine was tested extensively in Poland between the two world wars and used by the Germans for their troops during their attacks on the Soviet Union.
A second method of growing Rickettsia prowazekii was discovered using the yolk sac of chick embryos. Germans tried several times to use this technique of growing Rickettsia prowazekii but no effort was pushed very far.
The last technique was an extended development of the previously known method of growing murine typhus in rodents. It was discovered that rabbits could be infected, by a similar process, and contract classical typhus instead of murine typhus. Again, while proven to produce suitable Rickettsia prowazekii for vaccine development, this method was not used to produce wartime vaccines.
During WWII, the two major vaccines available were the killed vaccine grown in lice and the live vaccine from France. Neither was used much during the war. The killed, louse-grown vaccine was difficult to manufacture in large enough quantities, and the French vaccine was not believed to be safe enough for use.
The Germans worked to develop their own live vaccine from the urine of typhus victims. While developing a live vaccine, Germany used live Rickettsia prowazekii to test multiple possible vaccines' capabilities. They gave live Rickettsia prowazekii to concentration camp prisoners, using them as a control group for the vaccine tests.
The effectiveness of DDT as a means of killing lice, the main vector of typhus, was demonstrated in Naples during the winter of 1943–44.
Society and culture
Biological weapon
Typhus was one of more than a dozen agents that the United States researched as potential biological weapons before President Richard Nixon suspended all non-defensive aspects of the U.S. biological weapons program in 1969.
Poverty and displacement
The CDC lists the following areas as active foci of human epidemic typhus: the Andean regions of South America and some parts of Africa; in the United States, it recognizes only an active enzootic cycle involving flying squirrels. Though epidemic typhus is commonly thought to be restricted to areas of the developing world, serological examination of homeless persons in Houston found evidence of exposure to the bacterial pathogens that cause epidemic typhus and murine typhus. A study involving 930 homeless people in Marseille, France, found high rates of seroprevalence to R. prowazekii and a high prevalence of louse-borne infections in the homeless.
Typhus has been increasingly discovered in homeless populations in developed nations. Typhus among homeless populations is especially prevalent as these populations tend to migrate across states and countries, spreading the risk of infection with their movement. The same risk applies to refugees, who travel across country lines, often living in close proximity and unable to maintain necessary hygienic standards to avoid being at risk for catching lice possibly infected with typhus.
Because the typhus-infected lice live in clothing, the prevalence of typhus is also affected by weather, humidity, poverty and lack of hygiene. Lice, and therefore typhus, are more prevalent during colder months, especially winter and early spring. In these seasons, people tend to wear multiple layers of clothing, giving lice more places to go unnoticed by their hosts. This is particularly a problem for poverty-stricken populations as they often do not have multiple sets of clothing, preventing them from practicing good hygiene habits that could prevent louse infestation.
Due to fear of an outbreak of epidemic typhus, the US Government put a typhus quarantine in place in 1917 across the entirety of the US-Mexican border. Sanitation plants were constructed that required immigrants to be thoroughly inspected and bathed before crossing the border. Those who routinely crossed back and forth across the border for work were required to go through the sanitation process weekly, updating their quarantine card with the date of the next week's sanitation. These sanitation border stations remained active over the next two decades, even after the typhus threat had disappeared. This fear of typhus and the resulting quarantine and sanitation protocols dramatically hardened the border between the US and Mexico, fostering scientific and popular prejudices against Mexicans. This ultimately intensified racial tensions and fueled efforts to ban immigrants to the US from the Southern Hemisphere because the immigrants were associated with the disease.
Literature
(1847) In Jane Eyre by Charlotte Brontë, an outbreak of typhus occurs in Jane's school Lowood, highlighting the unsanitary conditions the girls live in.
(1862) In Fathers and Sons by Ivan Turgenev, Evgeny Bazarov dissects a local peasant and dies after contracting typhus.
(1886) In the short story "Excellent People" by Anton Chekhov, typhus kills a Russian provincial.
(1886) In The Strange Adventures of Captain Dangerous by George Augustus Henry Sala: "We Convicts were all had to the Grate, for the Knight and Alderman would not venture further in, for fear of the Gaol Fever;"
(1890) In How the Other Half Lives by Jacob Riis, the effects of typhus fever and smallpox on "Jewtown" are described.
(1935) Hans Zinsser's Rats, Lice and History, although a touch outdated on the science, contains many useful cross-references to classical and historical impact of typhus.
(1940) in The Don Flows Home to the Sea by Mikhail Sholokhov, numerous characters contract typhus during the Russian Civil War.
(1946) In Viktor Frankl's Man's Search for Meaning, Frankl, a Nazi concentration camp prisoner and trained psychiatrist, treats fellow prisoners for delirium due to typhus, while being occasionally affected with the disease himself.
(1955) In Vladimir Nabokov's Lolita, Humbert Humbert's childhood sweetheart, Annabel Leigh, dies of typhus.
(1956) In Doctor Zhivago by Boris Pasternak, the main character contracts epidemic typhus in the winter following the Russian Revolution, while living in Moscow.
(1964) In Nacht (novel) by Edgar Hilsenrath, characters imprisoned in a ghetto in Transnistria during World War II are portrayed infected with and dying of epidemic typhus.
(1978) In Patrick O'Brian's novel Desolation Island, an outbreak of "gaol-fever" strikes the crew while sailing aboard the Leopard.
(1980–1991) In Maus by Art Spiegelman, Vladek Spiegelman contracts typhus during his imprisonment at the Dachau concentration camp.
(1982) There is a typhus epidemic in Chile graphically described in The House of the Spirits by Isabel Allende
(1996) In Andrea Barrett's novella Ship Fever, the characters struggle with a typhus outbreak at the Canadian Grosse Isle Quarantine Station during 1847.
(2001) Lynn and Gilbert Morris' novel Where Two Seas Met portrays an outbreak of typhus on the island of Bequia in the Grenadines, in 1869.
(2004) In Neal Stephenson's The System Of The World, a fictionalized Sir Isaac Newton dies of "gaol fever" before being resurrected by Daniel Waterhouse.
See also
Globalization and disease
List of epidemics
Weil-Felix test
References
Bacterium-related cutaneous conditions
Zoonoses
Insect-borne diseases
Biological agents
Rodent-carried diseases
Typhus | Epidemic typhus | [
"Biology",
"Environmental_science"
] | 4,566 | [
"Biological agents",
"Toxicology",
"Biological warfare"
] |
46,178 | https://en.wikipedia.org/wiki/SQUID | A SQUID (superconducting quantum interference device) is a very sensitive magnetometer used to measure extremely weak magnetic fields, based on superconducting loops containing Josephson junctions.
SQUIDs are sensitive enough to measure fields as low as 5×10⁻¹⁸ T with a few days of averaged measurements. Their noise levels are as low as 3 fT/√Hz. For comparison, a typical refrigerator magnet produces 0.01 tesla (10⁻² T), and some processes in animals produce very small magnetic fields between 10⁻⁹ T and 10⁻⁶ T. SERF atomic magnetometers, invented in the early 2000s, are potentially more sensitive and do not require cryogenic refrigeration, but are orders of magnitude larger in size (~1 cm³) and must be operated in a near-zero magnetic field.
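As a back-of-the-envelope check on these figures (a sketch, not from the source): white noise averages down as 1/√t, so a 3 fT/√Hz sensor needs t = (noise/target)² seconds of averaging to resolve 5×10⁻¹⁸ T.

```python
# Averaging time needed for a SQUID with 3 fT/sqrt(Hz) white noise
# to resolve a 5e-18 T field; resolution improves as 1/sqrt(t).
noise_density = 3e-15   # T / sqrt(Hz)
target_field = 5e-18    # T
t_seconds = (noise_density / target_field) ** 2
t_days = t_seconds / 86400
print(f"{t_seconds:.0f} s = {t_days:.1f} days")  # prints "360000 s = 4.2 days"
```

The result, about four days, is consistent with the "few days of averaged measurements" quoted above.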
History and design
There are two main types of SQUID: direct current (DC) and radio frequency (RF). RF SQUIDs can work with only one Josephson junction (superconducting tunnel junction), which might make them cheaper to produce, but are less sensitive.
DC SQUID
The DC SQUID was invented in 1964 by Robert Jaklevic, John J. Lambe, James Mercereau, and Arnold Silver of Ford Research Labs after Brian Josephson postulated the Josephson effect in 1962, and the first Josephson junction was made by John Rowell and Philip Anderson at Bell Labs in 1963. It has two Josephson junctions in parallel in a superconducting loop. It is based on the DC Josephson effect. In the absence of any external magnetic field, the input current I splits into the two branches equally. If a small external magnetic field is applied to the superconducting loop, a screening current, I_s, begins to circulate the loop; it generates a magnetic field canceling the applied external flux, and creates an additional Josephson phase which is proportional to this external magnetic flux. The induced current is in the same direction as I/2 in one of the branches of the superconducting loop, and is opposite to I/2 in the other branch; the total current becomes I/2 + I_s in one branch and I/2 − I_s in the other. As soon as the current in either branch exceeds the critical current, I_c, of the Josephson junction, a voltage appears across the junction.
Now suppose the external flux is further increased until it exceeds Φ₀/2, half the magnetic flux quantum. Since the flux enclosed by the superconducting loop must be an integer number of flux quanta, instead of screening the flux the SQUID now energetically prefers to increase it to Φ₀. The screening current now flows in the opposite direction, making up the difference between the admitted flux Φ₀ and the external flux of just over Φ₀/2. The current decreases as the external field is increased, is zero when the flux is exactly Φ₀, and again reverses direction as the external field is further increased. Thus, the screening current changes direction periodically, every time the flux increases by an additional half-integer multiple of Φ₀, reaching maximum amplitude at every half-integer multiple of Φ₀ and zero at every integer multiple.
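For a symmetric DC SQUID this periodic behavior is often summarized by the standard modulation of the maximum supercurrent, I_max(Φ) = 2·I_c·|cos(π·Φ/Φ₀)|, which has zeros at half-integer flux quanta. A short sketch (the critical current value is an illustrative assumption):

```python
import math

PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb
Ic = 1e-6               # assumed junction critical current, A

def i_max(phi):
    """Maximum supercurrent of a symmetric DC SQUID at applied flux phi."""
    return 2 * Ic * abs(math.cos(math.pi * phi / PHI0))

# Full supercurrent at integer flux quanta, (nearly) zero at half-integers:
print(f"{i_max(0.0)*1e6:.2f} uA, {i_max(PHI0/2)*1e6:.2f} uA")
```

Evaluating at Φ = 0 gives the full 2·I_c, while at Φ = Φ₀/2 the supercurrent vanishes, matching the sign reversals of the screening current described in the text.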
If the input current is more than 2·I_c, then the SQUID always operates in the resistive mode. The voltage in this case is thus a function of the applied magnetic field, with period equal to Φ₀. Since the current–voltage characteristic of the DC SQUID is hysteretic, a shunt resistance R is connected across the junction to eliminate the hysteresis (in the case of copper-oxide-based high-temperature superconductors, the junction's own intrinsic resistance is usually sufficient). The screening current is the applied flux divided by the self-inductance of the ring. Thus ΔV can be estimated as a function of ΔΦ (flux-to-voltage converter) as follows:
ΔV = R·ΔI, with ΔI = ΔΦ/L, so ΔV = (R/L)·ΔΦ, where L is the self-inductance of the superconducting ring.
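This flux-to-voltage transfer can be evaluated numerically; the shunt resistance and loop inductance below are illustrative assumptions, not device values from the source.

```python
PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb

# Flux-to-voltage transfer of a resistively shunted DC SQUID: dV = (R/L)*dPhi
R = 2.0        # assumed shunt resistance, ohms
L = 200e-12    # assumed loop self-inductance, henries
dPhi = 0.01 * PHI0   # a hundredth of a flux quantum

dV = (R / L) * dPhi
print(f"dV = {dV*1e6:.2f} uV")  # prints "dV = 0.21 uV"
```

Even a hundredth of a flux quantum produces a voltage change of a fraction of a microvolt, which is straightforward to read out with room-temperature electronics.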
The discussion in this section assumed perfect flux quantization in the loop. However, this is only true for big loops with a large self-inductance. According to the relations given above, this also implies small current and voltage variations. In practice the self-inductance of the loop is not so large. The general case can be evaluated by introducing a parameter λ = I_c·L/Φ₀, where I_c is the critical current of the SQUID. Usually λ is of order one.
RF SQUID
The RF SQUID was invented in 1967 by Robert Jaklevic, John J. Lambe, Arnold Silver, and James Edward Zimmerman at Ford. It is based on the AC Josephson effect and uses only one Josephson junction. It is less sensitive than the DC SQUID but is cheaper and easier to manufacture in smaller quantities. Most fundamental measurements in biomagnetism, even of extremely small signals, have been made using RF SQUIDs.
The RF SQUID is inductively coupled to a resonant tank circuit. Depending on the external magnetic field, as the SQUID operates in the resistive mode, the effective inductance of the tank circuit changes, thus changing the resonant frequency of the tank circuit. These frequency measurements can be easily taken, and thus the losses, which appear as the voltage across the load resistor in the circuit, are a periodic function of the applied magnetic flux with a period of Φ₀. For a precise mathematical description refer to the original paper by Erné et al.
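The readout principle can be sketched numerically: the tank resonates at f = 1/(2π√(L·C)), so a small flux-dependent change in the effective inductance shifts the resonance by an easily measurable amount. The component values below are illustrative assumptions.

```python
import math

# Resonant frequency of an LC tank, f = 1/(2*pi*sqrt(L*C)); a 1% change in
# the SQUID-loaded effective inductance shifts the resonance measurably.
C = 100e-12  # assumed tank capacitance, F
for L_eff in (1.00e-6, 0.99e-6):  # assumed flux-dependent inductance, H
    f = 1.0 / (2 * math.pi * math.sqrt(L_eff * C))
    print(f"L = {L_eff*1e6:.2f} uH -> f = {f/1e6:.3f} MHz")
```

Here a 1% inductance change moves a ~16 MHz resonance by roughly 80 kHz, which is trivial to track with RF electronics; this is why the frequency measurements "can be easily taken."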
Materials used
The traditional superconducting materials for SQUIDs are pure niobium or a lead alloy with 10% gold or indium, as pure lead is unstable when its temperature is repeatedly changed. To maintain superconductivity, the entire device needs to operate within a few degrees of absolute zero, cooled with liquid helium.
High-temperature SQUID sensors were developed in the late 1980s. They are made of high-temperature superconductors, particularly YBCO, and are cooled by liquid nitrogen which is cheaper and more easily handled than liquid helium. They are less sensitive than conventional low temperature SQUIDs but good enough for many applications.
In 2006, a proof of concept was shown for CNT-SQUID sensors built with an aluminium loop and a single-walled carbon nanotube Josephson junction. The sensors are a few hundred nanometres in size and operate at 1 K or below. Such sensors make it possible to count spins.
In 2022, a SQUID was constructed on magic-angle twisted bilayer graphene (MATBG).
Uses
The extreme sensitivity of SQUIDs makes them ideal for studies in biology. Magnetoencephalography (MEG), for example, uses measurements from an array of SQUIDs to make inferences about neural activity inside brains. Because SQUIDs can operate at acquisition rates much higher than the highest temporal frequency of interest in the signals emitted by the brain (kHz), MEG achieves good temporal resolution. Another area where SQUIDs are used is magnetogastrography, which is concerned with recording the weak magnetic fields of the stomach. A novel application of SQUIDs is the magnetic marker monitoring method, which is used to trace the path of orally applied drugs. In the clinical environment SQUIDs are used in cardiology for magnetic field imaging (MFI), which detects the magnetic field of the heart for diagnosis and risk stratification.
Probably the most common commercial use of SQUIDs is in magnetic property measurement systems (MPMS). These are turn-key systems, made by several manufacturers, that measure the magnetic properties of a material sample, typically at temperatures between 300 mK and 400 K. With the decreasing size of SQUID sensors over the last decade, such a sensor can be mounted on the tip of an AFM probe. Such a device allows simultaneous measurement of the roughness of a sample's surface and the local magnetic flux.
SQUIDs are also used as detectors to perform magnetic resonance imaging (MRI). While high-field MRI uses precession fields of one to several teslas, SQUID-detected MRI uses measurement fields that lie in the microtesla range. In a conventional MRI system, the signal scales as the square of the measurement frequency (and hence precession field): one power of frequency comes from the thermal polarization of the spins at ambient temperature, while the second power of field comes from the fact that the induced voltage in the pickup coil is proportional to the frequency of the precessing magnetization. In the case of untuned SQUID detection of prepolarized spins, however, the NMR signal strength is independent of precession field, allowing MRI signal detection in extremely weak fields, on the order of Earth's magnetic field. SQUID-detected MRI has advantages over high-field MRI systems, such as the low cost required to build such a system, and its compactness. The principle has been demonstrated by imaging human extremities, and its future application may include tumor screening.
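The quadratic field scaling above can be made concrete with a quick comparison of a clinical 1.5 T scanner against a 100 µT ultra-low-field measurement (illustrative field values):

```python
# Conventional inductive NMR signal scales as the square of the precession
# field: one factor from thermal spin polarization, one from Faraday
# induction in the pickup coil.
B_high = 1.5     # tesla, typical clinical MRI field (illustrative)
B_low = 100e-6   # tesla, ultra-low-field regime (illustrative)

ratio = (B_high / B_low) ** 2
print(f"inductive signal ratio ~ {ratio:.2e}")  # prints "inductive signal ratio ~ 2.25e+08"
```

The eight-orders-of-magnitude penalty is why inductive pickup fails at microtesla fields, and why untuned SQUID detection of prepolarized spins (whose signal is independent of the precession field) makes ultra-low-field MRI feasible.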
Another application is the scanning SQUID microscope, which uses a SQUID immersed in liquid helium as the probe. The use of SQUIDs in oil prospecting, mineral exploration, earthquake prediction and geothermal energy surveying is becoming more widespread as superconductor technology develops; they are also used as precision movement sensors in a variety of scientific applications, such as the detection of gravitational waves.
A SQUID is the sensor in each of the four gyroscopes employed on Gravity Probe B in order to test the limits of the theory of general relativity.
A modified RF SQUID was used to observe the dynamical Casimir effect for the first time.
SQUIDs constructed from super-cooled niobium wire loops are used as the basis for D-Wave Systems' 2000Q quantum computer.
Transition-edge sensors
One of the largest uses of SQUIDs is to read out superconducting transition-edge sensors. Hundreds of thousands of multiplexed SQUIDs coupled to transition-edge sensors are presently being deployed to study the cosmic microwave background, for X-ray astronomy, to search for dark matter made up of weakly interacting massive particles, and for spectroscopy at synchrotron light sources.
Cold dark matter
Advanced SQUIDs called near-quantum-limited SQUID amplifiers form the basis of the Axion Dark Matter Experiment (ADMX) at the University of Washington. Axions are a prime candidate for cold dark matter.
Proposed uses
SQUIDs have a potential military application in anti-submarine warfare, as a magnetic anomaly detector (MAD) fitted to maritime patrol aircraft.
SQUIDs are used in superparamagnetic relaxometry (SPMR), a technology that utilizes the high magnetic field sensitivity of SQUID sensors and the superparamagnetic properties of magnetite nanoparticles. These nanoparticles have no net magnetic moment until exposed to an external field, in which they become ferromagnetic. After removal of the magnetizing field, the nanoparticles decay from a ferromagnetic to a paramagnetic state, with a time constant that depends upon the particle size and whether they are bound to an external surface. Measurement of the decaying magnetic field by SQUID sensors is used to detect and localize the nanoparticles. Applications for SPMR may include cancer detection.
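The detection step relies on characterizing the measured decay; below is a minimal sketch, assuming a hypothetical single-exponential decay model with illustrative amplitude and time-constant values (not from the article), of recovering the relaxation time constant from sampled SQUID data:

```python
import math

# Hypothetical single-exponential model of the decaying field measured by a
# SQUID after the magnetizing pulse is switched off:
#   B(t) = B0 * exp(-t / tau)
# Bound and unbound nanoparticles differ in tau, which is how SPMR can
# distinguish them.

def decay(b0, tau, t):
    """Field at time t for initial amplitude b0 and time constant tau."""
    return b0 * math.exp(-t / tau)

def fit_tau(times, fields):
    """Estimate tau by a least-squares line fit to log(B) versus t.

    For B(t) = B0 * exp(-t/tau), log B is linear in t with slope -1/tau.
    """
    logs = [math.log(b) for b in fields]
    n = len(times)
    mean_t = sum(times) / n
    mean_l = sum(logs) / n
    slope = (sum((t - mean_t) * (l - mean_l) for t, l in zip(times, logs))
             / sum((t - mean_t) ** 2 for t in times))
    return -1.0 / slope

# Simulate noiseless samples with tau = 1.2 s and recover the constant.
ts = [0.1 * i for i in range(1, 50)]
bs = [decay(1e-12, 1.2, t) for t in ts]  # amplitude in tesla (illustrative)
print(f"fitted tau = {fit_tau(ts, bs):.3f} s")
```

In practice the measured signal is noisy and may be a sum of exponentials from particle populations with different sizes and binding states, so real SPMR analysis fits a richer model; the sketch only shows the basic time-constant extraction.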
See also
Aharonov–Bohm effect
Electromagnetism
Geophysics
Macroscopic quantum phenomena
Notes
References
American inventions
Measuring instruments
Superconductivity
Josephson effect
Magnetometers