https://en.wikipedia.org/wiki/Hard%20disk%20drive
Hard disk drive
A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an electro-mechanical data storage device that stores and retrieves digital data using magnetic storage with one or more rigid, rapidly rotating platters coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored and retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data when powered off. Modern HDDs are typically in the form of a small rectangular box. Hard disk drives were introduced by IBM in 1956, and were the dominant secondary storage device for general-purpose computers beginning in the early 1960s. HDDs maintained this position into the modern era of servers and personal computers, though personal computing devices produced in large volume, like mobile phones and tablets, rely on flash memory storage devices. More than 224 companies have produced HDDs historically, though after extensive industry consolidation, most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced (exabytes per year) for servers. Though production is growing slowly (by exabytes shipped), sales revenues and unit shipments are declining, because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, somewhat better reliability, and much lower latency and access times. The revenues for SSDs, most of which use NAND flash memory, slightly exceeded those for HDDs in 2018, and flash storage products had more than twice the revenue of hard disk drives. Though SSDs have four to nine times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, high capacity and durability are important; the cost per bit of SSDs is falling, and the price premium over HDDs has narrowed. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1,000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes, where 1 gigabyte = 1,000 megabytes = 1,000,000 kilobytes (1 million) = 1,000,000,000 bytes (1 billion). Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. There can be confusion regarding storage capacity, since capacities are stated in decimal gigabytes (powers of 1,000) by HDD manufacturers, whereas the most commonly used operating systems report capacities in powers of 1,024, which results in a smaller number than advertised. Performance is specified as the time required to move the heads to a track or cylinder (average access time), the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally, the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as SATA (Serial ATA), USB, SAS (Serial Attached SCSI), or PATA (Parallel ATA) cables. History The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. 
It was approximately the size of two large refrigerators and stored five million six-bit characters (3.75 megabytes) on a stack of 52 disks (100 surfaces used). The 350 had a single arm with two read/write heads, one facing up and the other down, that moved both horizontally between a pair of adjacent platters and vertically from one pair of platters to a second set. Variants of the IBM 350 were the IBM 355, IBM 7300 and IBM 1405. In 1961, IBM announced, and in 1962 shipped, the IBM 1301 disk storage unit, which superseded the IBM 350 and similar drives. The 1301 consisted of one (for Model 1) or two (for model 2) modules, each containing 25 platters, each platter about thick and in diameter. While the earlier IBM disk drives used only two read/write heads per arm, the 1301 used an array of 48 heads (comb), each array moving horizontally as a single unit, one head per surface used. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches (about 6 μm) above the platter surface. Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three large refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes per module. Access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could buy additional packs and interchange them as needed, much like reels of magnetic tape. Later models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives. In 1963, IBM introduced the 1302, with twice the track capacity and twice as many tracks per cylinder as the 1301. The 1302 had one (for Model 1) or two (for Model 2) modules, each containing a separate comb for the first 250 tracks and the last 250 tracks. Some high-performance HDDs were manufactured with one head per track, e.g., Burroughs B-475 in 1964, IBM 2305 in 1970, so that no time was lost physically moving the heads to a track and the only latency was the time for the desired block of data to rotate into position under the head. Known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named "Winchester". Its primary distinguishing feature was that the disk heads were not withdrawn completely from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was later powered on. This greatly reduced the cost of the head actuator mechanism but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable media concept and returned to non-removable platters. In 1974, IBM introduced the swinging arm actuator, made feasible because the Winchester recording heads function well when skewed to the recorded tracks. 
The simple design of the IBM GV (Gulliver) drive, invented at IBM's UK Hursley Labs, became IBM's most licensed electro-mechanical invention of all time, the actuator and filtration system being adopted in the 1980s eventually for all HDDs, and still universal nearly 40 years and 10 billion arms later. Like the first removable pack drive, the first "Winchester" drives used platters in diameter. In 1978, IBM introduced a swing arm drive, the IBM 0680 (Piccolo), with eight-inch platters, exploring the possibility that smaller platters might offer advantages. Other eight-inch drives followed, then drives, sized to replace the contemporary floppy disk drives. The latter were primarily intended for the then fledgling personal computer (PC) market. Over time, as recording densities were greatly increased, further reductions in disk diameter to 3.5" and 2.5" were found to be optimum. Powerful rare earth magnet materials became affordable during this period and were complementary to the swing arm actuator design to make possible the compact form factors of modern HDDs. As the 1980s began, HDDs were a rare and very expensive additional feature in PCs, but by the late 1980s, their cost had been reduced to the point where they were standard on all but the cheapest computers. Most HDDs in the early 1980s were sold to PC end users as an external, add-on subsystem. The subsystem was not sold under the drive manufacturer's name but under the subsystem manufacturer's name such as Corvus Systems and Tallgrass Technologies, or under the PC system manufacturer's name such as the Apple ProFile. The IBM PC/XT in 1983 included an internal 10 MB HDD, and soon thereafter, internal HDDs proliferated on personal computers. External HDDs remained popular for much longer on the Apple Macintosh. Many Macintosh computers made between 1986 and 1998 featured a SCSI port on the back, making external expansion simple. Older compact Macintosh computers did not have user-accessible hard drive bays (indeed, the Macintosh 128K, Macintosh 512K, and Macintosh Plus did not feature a hard drive bay at all), so on those models, external SCSI disks were the only reasonable option for expanding upon any internal storage. HDD improvements have been driven by increasing areal density, listed in the table above. Applications expanded through the 2000s, from the mainframe computers of the late 1950s to most mass storage applications including computers and consumer applications such as storage of entertainment content. In the 2000s and 2010s, NAND began supplanting HDDs in applications requiring portability or high performance. NAND performance is improving faster than HDDs, and applications for HDDs are eroding. In 2018, the largest hard drive had a capacity of 15 TB, while the largest capacity SSD had a capacity of 100 TB. , HDDs were forecast to reach 100 TB capacities around 2025, but , the expected pace of improvement was pared back to 50 TB by 2026. Smaller form factors, 1.8-inches and below, were discontinued around 2010. The cost of solid-state storage (NAND), represented by Moore's law, is improving faster than HDDs. NAND has a higher price elasticity of demand than HDDs, and this drives market growth. During the late 2000s and 2010s, the product life cycle of HDDs entered a mature phase, and slowing sales may indicate the onset of the declining phase. The 2011 Thailand floods damaged the manufacturing plants and impacted hard disk drive cost adversely between 2011 and 2013. 
In 2019, Western Digital closed its last Malaysian HDD factory due to decreasing demand, to focus on SSD production. All three remaining HDD manufacturers have had decreasing demand for their HDDs since 2014. Technology Magnetic recording A modern HDD records data by magnetizing a thin film of ferromagnetic material on both sides of a disk. Sequential changes in the direction of magnetization represent binary data bits. The data is read from the disk by detecting the transitions in magnetization. User data is encoded using an encoding scheme, such as run-length limited encoding, which determines how the data is represented by the magnetic transitions. A typical HDD design consists of a spindle that holds flat circular disks, called platters, which hold the recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or ceramic. They are coated with a shallow layer of magnetic material typically 10–20 nm in depth, with an outer layer of carbon for protection. For reference, a standard piece of copy paper is thick. The platters in contemporary HDDs are spun at speeds varying from in energy-efficient portable devices, to 15,000 rpm for high-performance servers. The first HDDs spun at 1,200 rpm and, for many years, 3,600 rpm was the norm. The platters in most consumer-grade HDDs spin at 5,400 or 7,200 rpm. Information is written to and read from a platter as it rotates past devices called read-and-write heads that are positioned to operate very close to the magnetic surface, with their flying height often in the range of tens of nanometers. The read-and-write head is used to detect and modify the magnetization of the material passing immediately under it. In modern drives, there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter as it spins. The arm is moved using a voice coil actuator or, in some older designs, a stepper motor. Early hard disk drives wrote data at some constant bits per second, resulting in all tracks having the same amount of data per track, but modern drives (since the 1990s) use zone bit recording, increasing the write speed from inner to outer zone and thereby storing more data per track in the outer zones. In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects — thermally induced magnetic instability which is commonly known as the "superparamagnetic limit". To counter this, the platters are coated with two parallel magnetic layers, separated by a three-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects to allow greater recording densities is perpendicular recording (PMR), first shipped in 2005 and used in certain HDDs. Perpendicular recording may be accompanied by changes in the manufacturing of the read/write heads to increase the strength of the magnetic field created by the heads. In 2004, a higher-density recording medium was introduced, consisting of coupled soft and hard magnetic layers. So-called exchange spring media magnetic storage technology, also known as exchange coupled composite media, allows good writability due to the write-assist nature of the soft layer. 
However, the thermal stability is determined only by the hardest layer and not influenced by the soft layer. Flux control MAMR (flux-control microwave-assisted magnetic recording, FC-MAMR) allows a hard drive to have increased recording capacity without the need for new hard disk drive platter materials. MAMR hard drives have a microwave-generating spin torque generator (STO) on the read/write heads which allows physically smaller bits to be recorded to the platters, increasing areal density. Normally hard drive recording heads have a pole called a main pole that is used for writing to the platters, and adjacent to this pole is an air gap and a shield. The write coil of the head surrounds the pole. The STO device is placed in the air gap between the pole and the shield to increase the strength of the magnetic field created by the pole; FC-MAMR technically doesn't use microwaves but uses technology employed in MAMR. The STO has a Field Generation Layer (FGL) and a Spin Injection Layer (SIL), and the FGL produces a magnetic field using spin-polarised electrons originating in the SIL, which is a form of spin torque energy. Components A typical HDD has two electric motors: a spindle motor that spins the disks and an actuator (motor) that positions the read/write head assembly across the spinning disks. The disk motor has an external rotor attached to the disks; the stator windings are fixed in place. Opposite the actuator at the end of the head support arm is the read-write head; thin printed-circuit cables connect the read-write heads to amplifier electronics mounted at the pivot of the actuator. The head support arm is very light, but also stiff; in modern drives, acceleration at the head reaches 550 g. The actuator is a permanent magnet and moving coil motor that swings the heads to the desired position. A metal plate supports a squat neodymium–iron–boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives have only one magnet). The voice coil itself is shaped rather like an arrowhead and is made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the center of the actuator bearing) then interact with the magnetic field of the fixed magnet. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore, the surface of the magnet is half north pole and half south pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head. The HDD's electronics controls the movement of the actuator and the rotation of the disk and transfers data to or from a disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. 
These are either complete concentric circles (in the case of dedicated servo technology) or segments interspersed with real data (in the case of embedded servo, otherwise known as sector servo technology). The servo feedback optimizes the signal-to-noise ratio of the GMR sensors by adjusting the voice coil motor to rotate the arm. A more modern servo system also employs milli or micro actuators to more accurately position the read/write heads. The spinning of the disks uses fluid-bearing spindle motors. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media that have failed. Error rates and handling Modern drives make extensive use of error correction codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity. For example, a typical 1 TB hard disk with 512-byte sectors provides additional capacity of about 93 GB for the ECC data. In the newest drives, low-density parity-check codes (LDPC) were supplanting Reed–Solomon; LDPC codes enable performance close to the Shannon limit and thus provide the highest storage density available. Typical hard disk drives attempt to "remap" the data in a physical sector that is failing to a spare physical sector provided by the drive's "spare sector pool" (also called "reserve pool"), while relying on the ECC to recover stored data while the number of errors in a bad sector is still low enough. The S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) feature counts the total number of errors in the entire HDD fixed by ECC (although not on all hard drives as the related S.M.A.R.T. attributes "Hardware ECC Recovered" and "Soft ECC Correction" are not consistently supported), and the total number of performed sector remappings, as the occurrence of many such errors may predict an HDD failure. The "No-ID Format", developed by IBM in the mid-1990s, contains information about which sectors are bad and where remapped sectors have been located. Only a tiny fraction of the detected errors end up as not correctable. Examples of specified uncorrected bit read error rates include: 2013 specifications for enterprise SAS disk drives state the error rate to be one uncorrected bit read error in every 10¹⁶ bits read, while 2018 specifications for consumer SATA hard drives state the error rate to be one uncorrected bit read error in every 10¹⁴ bits. Within a given manufacturer's model, the uncorrected bit error rate is typically the same regardless of the capacity of the drive. The worst type of errors are silent data corruptions, which are errors undetected by the disk firmware or the host operating system; some of these errors may be caused by hard disk drive malfunctions while others originate elsewhere in the connection between the drive and the host. Development The rate of areal density advancement was similar to Moore's law (doubling every two years) through 2010: 60% per year during 1988–1996, 100% during 1996–2003 and 30% during 2003–2010. Speaking in 1997, Gordon Moore called the increase "flabbergasting", while observing later that growth cannot continue forever. 
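To give a sense of scale for the uncorrected bit read error rates quoted above, the following minimal sketch estimates the chance of hitting at least one uncorrectable read error when reading a hypothetical 10 TB drive end to end. It assumes errors are independent and well modeled as a Poisson process, which real drives only approximate; the drive size and rates are illustrative figures taken from the specifications mentioned above.

```python
import math

def expected_uncorrected_errors(drive_bytes: int, bits_per_error: float) -> float:
    """Expected number of uncorrected bit read errors when reading the whole drive once."""
    bits_read = drive_bytes * 8
    return bits_read / bits_per_error

def probability_of_at_least_one_error(drive_bytes: int, bits_per_error: float) -> float:
    """Probability of at least one uncorrected error, modeling errors as a Poisson process."""
    lam = expected_uncorrected_errors(drive_bytes, bits_per_error)
    return 1 - math.exp(-lam)

TEN_TB = 10 * 10**12  # 10 decimal terabytes, as marketed (illustrative drive size)

# Consumer SATA class: one uncorrected error per 10**14 bits read
print(probability_of_at_least_one_error(TEN_TB, 1e14))   # ~0.55
# Enterprise SAS class: one uncorrected error per 10**16 bits read
print(probability_of_at_least_one_error(TEN_TB, 1e16))   # ~0.008
```

Under these simplifying assumptions, a full read of a large consumer drive has a roughly even chance of encountering one uncorrectable sector, which is one reason RAID and checksumming schemes remain common.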
Price improvement decelerated to −12% per year during 2010–2017, as the growth of areal density slowed. The rate of advancement for areal density slowed to 10% per year during 2010–2016, and there was difficulty in migrating from perpendicular recording to newer technologies. As bit cell size decreases, more data can be put onto a single drive platter. In 2013, a production desktop 3 TB HDD (with four platters) would have had an areal density of about 500 Gbit/in2 which would have amounted to a bit cell comprising about 18 magnetic grains (11 by 1.6 grains). Since the mid-2000s, areal density progress has been challenged by a superparamagnetic trilemma involving grain size, grain magnetic strength and ability of the head to write. In order to maintain acceptable signal-to-noise, smaller grains are required; smaller grains may self-reverse (electrothermal instability) unless their magnetic strength is increased, but known write head materials are unable to generate a strong enough magnetic field sufficient to write the medium in the increasingly smaller space taken by grains. Magnetic storage technologies are being developed to address this trilemma, and compete with flash memory–based solid-state drives (SSDs). In 2013, Seagate introduced shingled magnetic recording (SMR), intended as something of a "stopgap" technology between PMR and Seagate's intended successor heat-assisted magnetic recording (HAMR). SMR utilizes overlapping tracks for increased data density, at the cost of design complexity and lower data access speeds (particularly write speeds and random access 4k speeds). By contrast, HGST (now part of Western Digital) focused on developing ways to seal helium-filled drives instead of the usual filtered air. Since turbulence and friction are reduced, higher areal densities can be achieved due to using a smaller track width, and the energy dissipated due to friction is lower as well, resulting in a lower power draw. Furthermore, more platters can be fit into the same enclosure space, although helium gas is notoriously difficult to prevent escaping. Thus, helium drives are completely sealed and do not have a breather port, unlike their air-filled counterparts. Other recording technologies are either under research or have been commercially implemented to increase areal density, including Seagate's heat-assisted magnetic recording (HAMR). HAMR requires a different architecture with redesigned media and read/write heads, new lasers, and new near-field optical transducers. HAMR is expected to ship commercially in late 2024, after technical issues delayed its introduction by more than a decade, from earlier projections as early as 2009. HAMR's planned successor, bit-patterned recording (BPR), has been removed from the roadmaps of Western Digital and Seagate. Western Digital's microwave-assisted magnetic recording (MAMR), also referred to as energy-assisted magnetic recording (EAMR), was sampled in 2020, with the first EAMR drive, the Ultrastar HC550, shipping in late 2020. Two-dimensional magnetic recording (TDMR) and "current perpendicular to plane" giant magnetoresistance (CPP/GMR) heads have appeared in research papers. Some drives have adopted dual independent actuator arms to increase read/write speeds and compete with SSDs. A 3D-actuated vacuum drive (3DHD) concept and 3D magnetic recording have been proposed. Depending upon assumptions on feasibility and timing of these technologies, Seagate forecasts that areal density will grow 20% per year during 2020–2034. 
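The 2013 figures quoted above (about 500 Gbit/in² and roughly 18 grains per bit cell) can be turned into approximate physical dimensions with a short calculation. This is only a rough sketch: it treats the bit cell as a uniform area and ignores track-spacing and servo overheads, so the resulting grain pitch is indicative rather than exact.

```python
NM_PER_INCH = 25.4e6          # nanometres per inch
AREAL_DENSITY = 500e9         # bits per square inch (2013 desktop example above)
GRAINS_PER_BIT = 18           # approximate grains per bit cell quoted above

bit_cell_area_nm2 = NM_PER_INCH**2 / AREAL_DENSITY
grain_area_nm2 = bit_cell_area_nm2 / GRAINS_PER_BIT
grain_pitch_nm = grain_area_nm2 ** 0.5

print(f"bit cell ≈ {bit_cell_area_nm2:.0f} nm², grain ≈ {grain_area_nm2:.0f} nm², pitch ≈ {grain_pitch_nm:.1f} nm")
# bit cell ≈ 1290 nm², grain ≈ 72 nm², pitch ≈ 8.5 nm
```

Grain pitches of well under 10 nm illustrate why the trilemma described above becomes acute: grains much smaller than this are thermally unstable with conventional media and write heads.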
Capacity The highest-capacity HDDs shipping commercially are 32 TB. The capacity of a hard disk drive, as reported by an operating system to the end user, is smaller than the amount stated by the manufacturer for several reasons, e.g. the operating system using some space, use of some space for data redundancy, space use for file system structures. Confusion of decimal prefixes and binary prefixes can also lead to errors. Calculation Modern hard disk drives appear to their host controller as a contiguous set of logical blocks, and the gross drive capacity is calculated by multiplying the number of blocks by the block size. This information is available from the manufacturer's product specification, and from the drive itself through use of operating system functions that invoke low-level drive commands. Older IBM and compatible drives, e.g. IBM 3390 using the CKD record format, have variable length records; such drive capacity calculations must take into account the characteristics of the records. Some newer DASD simulate CKD, and the same capacity formulae apply. The gross capacity of older sector-oriented HDDs is calculated as the product of the number of cylinders per recording zone, the number of bytes per sector (most commonly 512), and the count of zones of the drive. Some modern SATA drives also report cylinder-head-sector (CHS) capacities, but these are not physical parameters because the reported values are constrained by historic operating system interfaces. The C/H/S scheme has been replaced by logical block addressing (LBA), a simple linear addressing scheme that locates blocks by an integer index, which starts at LBA 0 for the first block and increments thereafter. When using the C/H/S method to describe modern large drives, the number of heads is often set to 64, although a typical modern hard disk drive has between one and four platters. In modern HDDs, spare capacity for defect management is not included in the published capacity; however, in many early HDDs, a certain number of sectors were reserved as spares, thereby reducing the capacity available to the operating system. Furthermore, many HDDs store their firmware in a reserved service zone, which is typically not accessible by the user, and is not included in the capacity calculation. For RAID subsystems, data integrity and fault-tolerance requirements also reduce the realized capacity. For example, a RAID 1 array has about half the total capacity as a result of data mirroring, while a RAID 5 array with drives loses of capacity (which equals to the capacity of a single drive) due to storing parity information. RAID subsystems are multiple drives that appear to be one drive or more drives to the user, but provide fault tolerance. Most RAID vendors use checksums to improve data integrity at the block level. Some vendors design systems using HDDs with sectors of 520 bytes to contain 512 bytes of user data and eight checksum bytes, or by using separate 512-byte sectors for the checksum data. Some systems may use hidden partitions for system recovery, reducing the capacity available to the end user without knowledge of special disk partitioning utilities like diskpart in Windows. Formatting Data is stored on a hard drive in a series of logical blocks. Each block is delimited by markers identifying its start and end, error detecting and correcting information, and space between blocks to allow for minor timing variations. These blocks often contained 512 bytes of usable data, but other sizes have been used. 
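As a concrete illustration of the gross-capacity calculation and of the decimal/binary prefix confusion mentioned above, the sketch below multiplies a logical block count by the block size and expresses the result both ways. The block count used is typical of drives marketed as 1 TB, but it is an illustrative assumption rather than a value for any specific product.

```python
def gross_capacity(block_count: int, block_size: int = 512) -> int:
    """Gross drive capacity in bytes: number of logical blocks times block size."""
    return block_count * block_size

# Logical block (LBA) count typical of drives marketed as "1 TB" (illustrative assumption)
blocks = 1_953_525_168
capacity = gross_capacity(blocks)

print(f"{capacity:,} bytes")                                   # 1,000,204,886,016 bytes
print(capacity / 10**12, "TB (decimal, as marketed)")          # ~1.0002 TB
print(capacity / 2**30, "GiB (binary, as many operating systems report)")  # ~931.5 GiB
```

The same byte count therefore appears as "1 TB" in the manufacturer's specification and as roughly 931 "GB" in operating systems that report using binary prefixes.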
As drive density increased, an initiative known as Advanced Format extended the block size to 4096 bytes of usable data, with a resulting significant reduction in the amount of disk space used for block headers, error-checking data, and spacing. The process of initializing these logical blocks on the physical disk platters is called low-level formatting, which is usually performed at the factory and is not normally changed in the field. High-level formatting writes data structures used by the operating system to organize data files on the disk. This includes writing partition and file system structures into selected logical blocks. For example, some of the disk space will be used to hold a directory of disk file names and a list of logical blocks associated with a particular file. Examples of partition mapping scheme include Master boot record (MBR) and GUID Partition Table (GPT). Examples of data structures stored on disk to retrieve files include the File Allocation Table (FAT) in the DOS file system and inodes in many UNIX file systems, as well as other operating system data structures (also known as metadata). As a consequence, not all the space on an HDD is available for user files, but this system overhead is usually small compared with user data. Units In the early days of computing, the total capacity of HDDs was specified in seven to nine decimal digits frequently truncated with the idiom millions. By the 1970s, the total capacity of HDDs was given by manufacturers using SI decimal prefixes such as megabytes (1 MB = 1,000,000 bytes), gigabytes (1 GB = 1,000,000,000 bytes) and terabytes (1 TB = 1,000,000,000,000 bytes). However, capacities of memory are usually quoted using a binary interpretation of the prefixes, i.e. using powers of 1024 instead of 1000. Software reports hard disk drive or memory capacity in different forms using either decimal or binary prefixes. The Microsoft Windows family of operating systems uses the binary convention when reporting storage capacity, so an HDD offered by its manufacturer as a 1 TB drive is reported by these operating systems as a 931 GB HDD. Mac OS X 10.6 ("Snow Leopard") uses decimal convention when reporting HDD capacity. The default behavior of the command-line utility on Linux is to report the HDD capacity as a number of 1024-byte units. The difference between the decimal and binary prefix interpretation caused some consumer confusion and led to class action suits against HDD manufacturers. The plaintiffs argued that the use of decimal prefixes effectively misled consumers, while the defendants denied any wrongdoing or liability, asserting that their marketing and advertising complied in all respects with the law and that no class member sustained any damages or injuries. In 2020, a California court ruled that use of the decimal prefixes with a decimal meaning was not misleading. Form factors IBM's first hard disk drive, the IBM 350, used a stack of fifty 24-inch platters, stored 3.75 MB of data (approximately the size of one modern digital picture), and was of a size comparable to two large refrigerators. In 1962, IBM introduced its model 1311 disk, which used six 14-inch (nominal size) platters in a removable pack and was roughly the size of a washing machine. This became a standard platter size for many years, used also by other manufacturers. 
The IBM 2314 used platters of the same size in an eleven-high pack and introduced the "drive in a drawer" layout, sometimes called the "pizza oven", although the "drawer" was not the complete drive. Into the 1970s, HDDs were offered in standalone cabinets of varying dimensions containing from one to four HDDs. Beginning in the late 1960s, drives were offered that fit entirely into a chassis that would mount in a 19-inch rack. Digital's RK05 and RL01 were early examples using single 14-inch platters in removable packs, the entire drive fitting in a 10.5-inch-high rack space (six rack units). In the mid-to-late 1980s, the similarly sized Fujitsu Eagle, which used (coincidentally) 10.5-inch platters, was a popular product. With increasing sales of microcomputers having built-in floppy-disk drives (FDDs), HDDs that would fit to the FDD mountings became desirable. Starting with the Shugart Associates SA1000, HDD form factors initially followed those of 8-inch, 5¼-inch, and 3½-inch floppy disk drives. Although referred to by these nominal sizes, the actual sizes for those three drives respectively are 9.5", 5.75" and 4" wide. Because there were no smaller floppy disk drives, smaller HDD form factors such as 2½-inch drives (actually 2.75" wide) developed from product offerings or industry standards. , 2½-inch and 3½-inch hard disks are the most popular sizes. By 2009, all manufacturers had discontinued the development of new products for the 1.3-inch, 1-inch and 0.85-inch form factors due to falling prices of flash memory, which has no moving parts. While nominal sizes are in inches, actual dimensions are specified in millimeters. Performance characteristics The factors that limit the time to access the data on an HDD are mostly related to the mechanical nature of the rotating disks and moving heads, including: Seek time is a measure of how long it takes the head assembly to travel to the track of the disk that contains data. Rotational latency is incurred because the desired disk sector may not be directly under the head when data transfer is requested. Average rotational latency is shown in the table, based on the statistical relation that the average latency is one-half the rotational period. The bit rate or data transfer rate (once the head is in the right position) creates delay which is a function of the number of blocks transferred; typically relatively small, but can be quite long with the transfer of large contiguous files. Delay may also occur if the drive disks are stopped to save energy. Defragmentation is a procedure used to minimize delay in retrieving data by moving related items to physically proximate areas on the disk. Some computer operating systems perform defragmentation automatically. Although automatic defragmentation is intended to reduce access delays, performance will be temporarily reduced while the procedure is in progress. Time to access data can be improved by increasing rotational speed (thus reducing latency) or by reducing the time spent seeking. Increasing areal density increases throughput by increasing data rate and by increasing the amount of data under a set of heads, thereby potentially reducing seek activity for a given amount of data. The time to access data has not kept up with throughput increases, which themselves have not kept up with growth in bit density and storage capacity. Latency Data transfer rate , a typical 7,200-rpm desktop HDD has a sustained "disk-to-buffer" data transfer rate up to . 
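The seek-plus-rotational-latency relationship described above can be made concrete with a small calculation. This is only a sketch: it uses the usual approximation that average rotational latency is half of one revolution, and the average seek time is an illustrative assumption, not a figure for any particular drive.

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: half of one revolution, in milliseconds."""
    return 0.5 * 60_000 / rpm

for rpm in (5_400, 7_200, 15_000):
    print(rpm, "rpm ->", round(avg_rotational_latency_ms(rpm), 2), "ms")
# 5400 rpm -> 5.56 ms, 7200 rpm -> 4.17 ms, 15000 rpm -> 2.0 ms

# Rough average access time = average seek time + average rotational latency
avg_seek_ms = 9.0  # illustrative desktop-class seek time (assumption)
print("access ≈", round(avg_seek_ms + avg_rotational_latency_ms(7_200), 1), "ms")
```

The sustained disk-to-buffer transfer rate mentioned just above is governed by a different factor: how many bits pass under the head during each revolution.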
This rate depends on the track location; the rate is higher for data on the outer tracks (where there are more data sectors per rotation) and lower toward the inner tracks (where there are fewer data sectors per rotation); and is generally somewhat higher for 10,000-rpm drives. A current, widely used standard for the "buffer-to-computer" interface is SATA, which can send about 300 megabyte/s (10-bit encoding) from the buffer to the computer, and thus is still comfortably ahead of today's disk-to-buffer transfer rates. Data transfer rate (read/write) can be measured by writing a large file to disk using special file-generator tools, then reading back the file. Transfer rate can be influenced by file system fragmentation and the layout of the files. HDD data transfer rate depends upon the rotational speed of the platters and the data recording density. Because heat and vibration limit rotational speed, advancing density becomes the main method to improve sequential transfer rates. Higher speeds require a more powerful spindle motor, which creates more heat. While areal density advances by increasing both the number of tracks across the disk and the number of sectors per track, only the latter increases the data transfer rate for a given rpm. Since data transfer rate performance tracks only one of the two components of areal density, its performance improves at a lower rate. Other considerations Other performance considerations include quality-adjusted price, power consumption, audible noise, and both operating and non-operating shock resistance. Access and interfaces Current hard drives connect to a computer over one of several bus types, including parallel ATA, Serial ATA, SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Some drives, especially external portable drives, use IEEE 1394, or USB. All of these interfaces are digital; electronics on the drive process the analog signals from the read/write heads. Current drives present a consistent interface to the rest of the computer, independent of the data encoding scheme used internally, and independent of the physical number of disks and heads within the drive. Typically, a DSP in the electronics inside the drive takes the raw analog voltages from the read head and uses PRML and Reed–Solomon error correction to decode the data, then sends that data out the standard interface. That DSP also watches the error rate detected by error detection and correction, and performs bad sector remapping, data collection for Self-Monitoring, Analysis, and Reporting Technology, and other internal tasks. Modern interfaces connect the drive to the host interface with a single data/control cable. Each drive also has an additional power cable, usually direct to the power supply unit. Older interfaces had separate cables for data signals and for drive control signals. Small Computer System Interface (SCSI), originally named SASI for Shugart Associates System Interface, was standard on servers, workstations, Commodore Amiga, Atari ST and Apple Macintosh computers through the mid-1990s, by which time most models had been transitioned to newer interfaces. The length limit of the data cable allows for external SCSI devices. The SCSI command set is still used in the more modern SAS interface. Integrated Drive Electronics (IDE), later standardized under the name AT Attachment (ATA, with the alias PATA (Parallel ATA) retroactively added upon introduction of SATA) moved the HDD controller from the interface card to the disk drive. 
This helped to standardize the host/controller interface, reduce the programming complexity in the host device driver, and reduced system cost and complexity. The 40-pin IDE/ATA connection transfers 16 bits of data at a time on the data cable. The data cable was originally 40-conductor, but later higher speed requirements led to an "ultra DMA" (UDMA) mode using an 80-conductor cable with additional wires to reduce crosstalk at high speed. EIDE was an unofficial update (by Western Digital) to the original IDE standard, with the key improvement being the use of direct memory access (DMA) to transfer data between the disk and the computer without the involvement of the CPU, an improvement later adopted by the official ATA standards. By directly transferring data between memory and disk, DMA eliminates the need for the CPU to copy byte per byte, therefore allowing it to process other tasks while the data transfer occurs. Fibre Channel (FC) is a successor to parallel SCSI interface on enterprise market. It is a serial protocol. In disk drives usually the Fibre Channel Arbitrated Loop (FC-AL) connection topology is used. FC has much broader usage than mere disk interfaces, and it is the cornerstone of storage area networks (SANs). Recently other protocols for this field, like iSCSI and ATA over Ethernet have been developed as well. Confusingly, drives usually use copper twisted-pair cables for Fibre Channel, not fiber optics. The latter are traditionally reserved for larger devices, such as servers or disk array controllers. Serial Attached SCSI (SAS). The SAS is a new generation serial communication protocol for devices designed to allow for much higher speed data transfers and is compatible with SATA. SAS uses a mechanically compatible data and power connector to standard 3.5-inch SATA1/SATA2 HDDs, and many server-oriented SAS RAID controllers are also capable of addressing SATA HDDs. SAS uses serial communication instead of the parallel method found in traditional SCSI devices but still uses SCSI commands. Serial ATA (SATA). The SATA data cable has one data pair for differential transmission of data to the device, and one pair for differential receiving from the device, just like EIA-422. That requires that data be transmitted serially. A similar differential signaling system is used in RS485, LocalTalk, USB, FireWire, and differential SCSI. SATA I to III are designed to be compatible with, and use, a subset of SAS commands, and compatible interfaces. Therefore, a SATA hard drive can be connected to and controlled by a SAS hard drive controller (with some minor exceptions such as drives/controllers with limited compatibility). However, they cannot be connected the other way round—a SATA controller cannot be connected to a SAS drive. Integrity and failure Due to the extremely close spacing between the heads and the disk surface, HDDs are vulnerable to being damaged by a head crash – a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic failure, a sudden power failure, physical shock, contamination of the drive's internal enclosure, wear and tear, corrosion, or poorly manufactured platters and heads. The HDD's spindle system relies on air density inside the disk enclosure to support the heads at their proper flying height while the disk rotates. HDDs require a certain range of air densities to operate properly. 
The connection to the external environment and density occurs through a small hole in the enclosure (about 0.5 mm in breadth), usually with a filter on the inside (the breather filter). If the air density is too low, then there is not enough lift for the flying head, so the head gets too close to the disk, and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are needed for reliable high-altitude operation, above about . Modern disks include temperature sensors and adjust their operation to the operating environment. Breather holes can be seen on all disk drives – they usually have a sticker next to them, warning the user not to cover the holes. The air inside the operating drive is constantly moving too, being swept in motion by friction with the spinning platters. This air passes through an internal recirculation filter to remove any leftover contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure, and any particles or outgassing generated internally in normal operation. Very high humidity present for extended periods of time can corrode the heads and platters. An exception to this are hermetically sealed, helium-filled HDDs that largely eliminate environmental issues that can arise due to humidity or atmospheric pressure changes. Such HDDs were introduced by HGST in their first successful high-volume implementation in 2013. For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (that does not remove the magnetic surface of the disk) still results in the head temporarily overheating, due to friction with the disk surface and can render the data unreadable for a short period until the head temperature stabilizes (so-called "thermal asperity", a problem which can partially be dealt with by proper electronic filtering of the read signal). When the logic board of a hard disk fails, the drive can often be restored to functioning order and the data recovered by replacing the circuit board with one of an identical hard disk. In the case of read-write head faults, they can be replaced using specialized tools in a dust-free environment. If the disk platters are undamaged, they can be transferred into an identical enclosure and the data can be copied or cloned onto a new drive. In the event of disk-platter failures, disassembly and imaging of the disk platters may be required. For logical damage to file systems, a variety of tools, including fsck on UNIX-like systems and CHKDSK on Windows, can be used for data recovery. Recovery from logical damage can require file carving. A common expectation is that hard disk drives designed and marketed for server use will fail less frequently than consumer-grade drives usually used in desktop computers. However, two independent studies by Carnegie Mellon University and Google found that the "grade" of a drive does not relate to the drive's failure rate. A 2011 summary of research, into SSD and magnetic disk failure patterns by Tom's Hardware summarized research findings as follows: Mean time between failures (MTBF) does not indicate reliability; the annualized failure rate is higher and usually more relevant. HDDs do not tend to fail during early use, and temperature has only a minor effect; instead, failure rates steadily increase with age. S.M.A.R.T. warns of mechanical issues but not other issues affecting reliability, and is therefore not a reliable indicator of condition. 
Failure rates of drives sold as "enterprise" and "consumer" are "very much similar", although these drive types are customized for their different operating environments. In drive arrays, one drive's failure significantly increases the short-term risk of a second drive failing. Backblaze, a storage provider, reported an annualized failure rate of two percent per year for a storage farm with 110,000 off-the-shelf HDDs, with the reliability varying widely between models and manufacturers. Backblaze subsequently reported that the failure rate for HDDs and SSDs of equivalent age was similar. To minimize cost and overcome failures of individual HDDs, storage systems providers rely on redundant HDD arrays. HDDs that fail are replaced on an ongoing basis. Market segments Consumer segment Desktop HDDs Desktop HDDs typically have one to five internal platters, rotate at 5,400 to 10,000 rpm, and have a media transfer rate of or higher (1 GB = 10⁹ bytes). Earlier (1980–1990s) drives tend to be slower in rotation speed. The highest-capacity desktop HDDs stored 16 TB, with plans to release 18 TB drives later in 2019. 18 TB HDDs were released in 2020. The typical speed of a hard drive in an average desktop computer is 7,200 rpm, whereas low-cost desktop computers may use 5,900 rpm or 5,400 rpm drives. For some time in the 2000s and early 2010s some desktop users and data centers also used 10,000 rpm drives such as the Western Digital Raptor, but such drives have become much rarer and are not commonly used now, having been replaced by NAND flash-based SSDs. Mobile (laptop) HDDs Smaller than their desktop and enterprise counterparts, they tend to be slower and have lower capacity, because they typically have one internal platter and are 2.5-inch or 1.8-inch in physical size rather than the 3.5-inch form factor more common for desktops. Mobile HDDs spin at 4,200 rpm, 5,200 rpm, 5,400 rpm, or 7,200 rpm, with 5,400 rpm being the most common; 7,200 rpm drives tend to be more expensive and have smaller capacities, while 4,200 rpm models usually have very high storage capacities. Because of smaller platter(s), mobile HDDs generally have lower capacity than their desktop counterparts. Consumer electronics HDDs These drives typically spin at 5,400 rpm and include: Video hard drives, sometimes called "surveillance hard drives", are embedded into digital video recorders and provide a guaranteed streaming capacity, even in the face of read and write errors. Drives embedded into automotive vehicles; they are typically built to resist larger amounts of shock and operate over a larger temperature range. External and portable HDDs Current external hard disk drives typically connect via USB-C; earlier models use USB-B (sometimes using a pair of ports for better bandwidth) or (rarely) an eSATA connection. Variants using the USB 2.0 interface generally have slower data transfer rates when compared to internally mounted hard drives connected through SATA. Plug and play drive functionality offers system compatibility and features large storage options and portable design. Available capacities for external hard disk drives ranged from 500 GB to 10 TB. External hard disk drives are usually available as assembled integrated products, but may also be assembled by combining an external enclosure (with USB or other interface) with a separately purchased drive. They are available in 2.5-inch and 3.5-inch sizes; 2.5-inch variants are typically called portable external drives, while 3.5-inch variants are referred to as desktop external drives. 
"Portable" drives are packaged in smaller and lighter enclosures than the "desktop" drives; additionally, "portable" drives use power provided by the USB connection, while "desktop" drives require external power bricks. Features such as encryption, Wi-Fi connectivity, biometric security or multiple interfaces (for example, FireWire) are available at a higher cost. There are pre-assembled external hard disk drives that, when taken out from their enclosures, cannot be used internally in a laptop or desktop computer due to embedded USB interface on their printed circuit boards, and lack of SATA (or Parallel ATA) interfaces. Enterprise and business segment Server and workstation HDDs Typically used with multiple-user computers running enterprise software. Examples are: transaction processing databases, internet infrastructure (email, webserver, e-commerce), scientific computing software, and nearline storage management software. Enterprise drives commonly operate continuously ("24/7") in demanding environments while delivering the highest possible performance without sacrificing reliability. Maximum capacity is not the primary goal, and as a result the drives are often offered in capacities that are relatively low in relation to their cost. The fastest enterprise HDDs spin at 10,000 or 15,000 rpm, and can achieve sequential media transfer speeds above and a sustained transfer rate up to . Drives running at 10,000 or 15,000 rpm use smaller platters to mitigate increased power requirements (as they have less air drag) and therefore generally have lower capacity than the highest capacity desktop drives. Enterprise HDDs are commonly connected through Serial Attached SCSI (SAS) or Fibre Channel (FC). Some support multiple ports, so they can be connected to a redundant host bus adapter. Enterprise HDDs can have sector sizes larger than 512 bytes (often 520, 524, 528 or 536 bytes). The additional per-sector space can be used by hardware RAID controllers or applications for storing Data Integrity Field (DIF) or Data Integrity Extensions (DIX) data, resulting in higher reliability and prevention of silent data corruption. Surveillance hard drives; Video recording HDDs used in network video recorders. Economy Price evolution HDD price per byte decreased at the rate of 40% per year during 1988–1996, 51% per year during 1996–2003 and 34% per year during 2003–2010. The price decrease slowed down to 13% per year during 2011–2014, as areal density increase slowed and the 2011 Thailand floods damaged manufacturing facilities and have held at 11% per year during 2010–2017. The Federal Reserve Board has published a quality-adjusted price index for large-scale enterprise storage systems including three or more enterprise HDDs and associated controllers, racks and cables. Prices for these large-scale storage systems decreased at the rate of 30% per year during 2004–2009 and 22% per year during 2009–2014. Manufacturers and sales More than 200 companies have manufactured HDDs over time, but consolidations have concentrated production to just three manufacturers today: Western Digital, Seagate, and Toshiba. Production is mainly in the Pacific rim. HDD unit shipments peaked at 651 million units in 2010 and have been declining since then to 166 million units in 2022. Seagate at 43% of units had the largest market share. 
Competition from SSDs HDDs are being superseded by solid-state drives (SSDs) in markets where the higher speed (up to 7 gigabytes per second for M.2 (NGFF) NVMe drives and 2.5 gigabytes per second for PCIe expansion card drives), ruggedness, and lower power of SSDs are more important than price, since the bit cost of SSDs is four to nine times higher than HDDs. HDDs are reported to have a failure rate of 2–9% per year, while SSDs have fewer failures: 1–3% per year. However, SSDs have more uncorrectable data errors than HDDs. SSDs are available in larger capacities (up to 100 TB) than the largest HDD, as well as higher storage densities (100 TB and 30 TB SSDs are housed in 2.5-inch HDD cases with the same height as a 3.5-inch HDD), although such large SSDs are very expensive. A laboratory demonstration of a 1.33 Tb 3D NAND chip with 96 layers (NAND commonly used in solid-state drives (SSDs)) had an areal density of 5.5 Tbit/in², while the maximum areal density for HDDs is 1.5 Tbit/in². The areal density of flash memory is doubling every two years, similar to Moore's law (40% per year) and faster than the 10–20% per year for HDDs. The maximum capacity was 16 terabytes for an HDD, and 100 terabytes for an SSD. HDDs were used in 70% of the desktop and notebook computers produced in 2016, and SSDs were used in 30%. The usage share of HDDs is declining and could drop below 50% in 2018–2019 according to one forecast, because SSDs are replacing smaller-capacity (less than one terabyte) HDDs in desktop and notebook computers and MP3 players. The market for silicon-based flash memory (NAND) chips, used in SSDs and other applications, is growing faster than for HDDs. Worldwide NAND revenue grew 16% per year from $22 billion to $57 billion during 2011–2017, while production grew 45% per year from 19 exabytes to 175 exabytes.
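The growth rates in the last sentence can be sanity-checked as compound annual growth rates over 2011–2017, treated here as six compounding years; this is only a rough cross-check of the figures quoted above.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over the given number of years."""
    return (end / start) ** (1 / years) - 1

print(f"NAND revenue: {cagr(22, 57, 6):.0%} per year")            # ~17%, close to the quoted 16%
print(f"NAND exabytes shipped: {cagr(19, 175, 6):.0%} per year")  # ~45%, matching the quoted rate
```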
https://en.wikipedia.org/wiki/Hebrew%20calendar
Hebrew calendar
The Hebrew calendar (), also called the Jewish calendar, is a lunisolar calendar used today for Jewish religious observance and as an official calendar of Israel. It determines the dates of Jewish holidays and other rituals, such as yahrzeits and the schedule of public Torah readings. In Israel, it is used for religious purposes, provides a time frame for agriculture, and is an official calendar for civil holidays alongside the Gregorian calendar. Like other lunisolar calendars, the Hebrew calendar consists of months of 29 or 30 days which begin and end at approximately the time of the new moon. As 12 such months comprise a total of just 354 days, an extra lunar month is added every 2 or 3 years so that the long-term average year length closely approximates the actual length of the solar year. Originally, the beginning of each month was determined based on physical observation of a new moon, while the decision of whether to add the leap month was based on observation of natural agriculture-related events in ancient Israel. Between the years 70 and 1178, these empirical criteria were gradually replaced with a set of mathematical rules. Month length now follows a fixed schedule which is adjusted based on the molad interval (a mathematical approximation of the mean time between new moons) and several other rules, while leap months are now added in 7 out of every 19 years according to the Metonic cycle. Nowadays, Hebrew years are generally counted according to the system of (Latin: "in the year of the world"; , "from the creation of the world", abbreviated AM). This system attempts to calculate the number of years since the creation of the world according to the Genesis creation narrative and subsequent Biblical stories. The current Hebrew year, AM , began at sunset on and will end at sunset on . Components Days Based on the classic rabbinic interpretation of ("There was evening and there was morning, one day"), a day in the rabbinic Hebrew calendar runs from sunset (the start of "the evening") to the next sunset. Similarly, Yom Kippur, Passover, and Shabbat are described in the Bible as lasting "from evening to evening". The days are therefore figured locally. Halachically, the exact time when days begin or end is uncertain: this time could be either sundown (shekiah) or else nightfall (tzait ha'kochavim, "when the stars appear"). The time between sundown and nightfall (bein hashmashot) is of uncertain status. Thus (for example) observance of Shabbat begins before sundown on Friday and ends after nightfall on Saturday, to be sure that Shabbat is not violated no matter when the transition between days occurs. Instead of the International Date Line convention, there are varying opinions as to where the day changes. (See International date line in Judaism.) Hours Judaism uses multiple systems for dividing hours. In one system, the 24-hour day is divided into fixed hours equal to of a day, while each hour is divided into 1080 halakim (parts, singular: helek). A part is seconds ( minute). The ultimate ancestor of the helek was a Babylonian time period called a barleycorn, equal to of a Babylonian time degree (1° of celestial rotation). These measures are not generally used for everyday purposes; their best-known use is for calculating and announcing the molad. In another system, the daytime period is divided into 12 relative hours (sha'ah z'manit, also sometimes called "halachic hours"). 
A relative hour is defined as 1/12 of the time from sunrise to sunset, or dawn to dusk, as per the two opinions in this regard. Therefore, an hour can be less than 60 minutes in winter, and more than 60 minutes in summer; similarly, the 6th hour ends at solar noon, which generally differs from 12:00. Relative hours are used for the calculation of prayer times (zmanim); for example, the Shema must be recited in the first three relative hours of the day (a short worked sketch of this calculation appears below). Neither system is commonly used in ordinary life; rather, the local civil clock is used. This is even the case for ritual times (e.g. "The latest time to recite Shema today is 9:38 AM"). Weeks The Hebrew week (, ) is a cycle of seven days, mirroring the seven-day period of the Book of Genesis in which the world is created. The names for the days of the week are simply the day number within the week. The week begins with Day 1 (Sunday) and ends with Shabbat (Saturday). (More precisely, since days begin in the evening, weeks begin and end on Saturday evening. Day 1 lasts from Saturday evening to Sunday evening, while Shabbat lasts from Friday evening to Saturday evening.) Since some calculations use division, a remainder of 0 signifies Saturday. In Hebrew, these names may be abbreviated using the numerical value of the Hebrew letters, for example (Day 1, or Yom Rishon ()): The names of the days of the week are modeled on the seven days mentioned in the Genesis creation account. For example, Genesis 1:8 "... And there was evening and there was morning, a second day" corresponds to Yom Sheni meaning "second day". (However, for days 1, 6, and 7 the modern name differs slightly from the version in Genesis.) The seventh day, Shabbat, as its Hebrew name indicates, is a day of rest in Judaism. In Talmudic Hebrew, the word Shabbat () can also mean "week", so that in ritual liturgy a phrase like "Yom Reviʻi beShabbat" means "the fourth day in the week". Days of week of holidays Jewish holidays can only fall on the weekdays shown in the following table: The period from 1 Adar (or Adar II, in leap years) to 29 Marcheshvan contains all of the festivals specified in the Bible (Purim, Passover, Shavuot, Rosh Hashanah, Yom Kippur, Sukkot, and Shemini Atzeret). The lengths of months in this period are fixed, meaning that the day of week of Passover dictates the day of week of the other Biblical holidays. However, the lengths of the months of Marcheshvan and Kislev can each vary by a day (due to the Rosh Hashanah postponement rules which are used to adjust the year length). As a result, the holidays falling after Marcheshvan (starting with Chanukah) can fall on multiple days for a given row of the table. A common mnemonic is "", meaning: "Rosh HaShana cannot be on Sunday, Wednesday or Friday, and Passover cannot be on Monday, Wednesday or Friday", where each day's numerical equivalent in gematria is used, such that א' = 1 = Sunday, and so forth. From this rule, every other date can be calculated by adding weeks and days until that date's possible day of the week can be derived. Months The Hebrew calendar is a lunisolar calendar, meaning that months are based on lunar months, but years are based on solar years. The calendar year features twelve lunar months of 29 or 30 days, with an additional lunar month ("leap month") added periodically to synchronize the twelve lunar cycles with the longer solar year. These extra months are added in seven years (3, 6, 8, 11, 14, 17, and 19) out of a 19-year cycle, known as the Metonic cycle (See Leap months, below).
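The relative-hour rule described under Hours above amounts to a simple proportional calculation. The following is a minimal sketch rather than a normative zmanim implementation: the sunrise and sunset times and the function name relative_hour are illustrative assumptions, and it follows the sunrise-to-sunset opinion rather than dawn-to-dusk.

```python
# Minimal sketch of the relative-hour (sha'ah zmanit) calculation described
# under "Hours" above. Sunrise and sunset are illustrative assumptions; real
# zmanim calculations take them from astronomical data for a given location.
from datetime import datetime, timedelta

def relative_hour(sunrise: datetime, sunset: datetime) -> timedelta:
    """One relative hour = 1/12 of the daytime period (sunrise to sunset)."""
    return (sunset - sunrise) / 12

# Example: a winter day with 10 hours of daylight (06:40 to 16:40).
sunrise = datetime(2024, 1, 15, 6, 40)
sunset = datetime(2024, 1, 15, 16, 40)
hour = relative_hour(sunrise, sunset)      # 50 minutes on this day
latest_shema = sunrise + 3 * hour          # end of the third relative hour
print(hour, latest_shema.time())           # 0:50:00 09:10:00
```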
The beginning of each Jewish lunar month is based on the appearance of the new moon. Although originally the new lunar crescent had to be observed and certified by witnesses (as is still done in Karaite Judaism and Islam), nowadays Jewish months have generally fixed lengths which approximate the period between new moons. For these reasons, a given month does not always begin on the same day as its astronomical conjunction. The mean period of the lunar month (precisely, the synodic month) is very close to 29.5 days. Accordingly, the basic Hebrew calendar year is one of twelve lunar months alternating between 29 and 30 days: Thus, the year normally contains twelve months with a total of 354 days. In such a year, the month of Marcheshvan has 29 days and Kislev has 30 days. However, due to the Rosh Hashanah postponement rules, in some years Kislev may lose a day to have 29 days, or Marcheshvan may acquire an additional day to have 30 days. Normally the 12th month is named Adar. During leap years, the 12th and 13th months are named Adar I and Adar II (Hebrew: Adar Aleph and Adar Bet—"first Adar" and "second adar"). Sources disagree as to which of these months is the "real" Adar, and which is the added leap month. Justification for leap months The Bible does not directly mention the addition of leap months (also known as "embolismic" or "intercalary" months). The insertion of the leap month is based on the requirement that Passover occur at the same time of year as the spring barley harvest (aviv). (Since 12 lunar months make up less than a solar year, the date of Passover would gradually move throughout the solar year if leap months were not occasionally added.) According to the rabbinic calculation, this requirement means that Passover (or at least most of Passover) should fall after the March equinox. Similarly, the holidays of Shavuot and Sukkot are presumed by the Torah to fall in specific agricultural seasons. Maimonides, discussing the calendrical rules in his Mishneh Torah (1178), notes: By how much does the solar year exceed the lunar year? By approximately 11 days. Therefore, whenever this excess accumulates to about 30 days, or a little more or less, one month is added and the particular year is made to consist of 13 months, and this is the so-called embolismic (intercalated) year. For the year could not consist of twelve months plus so-and-so many days, since it is said: "throughout the months of the year", which implies that we should count the year by months and not by days. Years New year The Hebrew calendar year conventionally begins on Rosh Hashanah, the first day of Tishrei. However, the Jewish calendar also defines several additional new years, used for different purposes. The use of multiple starting dates for a year is comparable to different starting dates for civil "calendar years", "tax or fiscal years", "academic years", and so on. The Mishnah (c. 200 CE) identifies four new-year dates: The 1st of Nisan is the new year for kings and festivals. The 1st of Elul is the new year for the cattle tithe, Rabbi Eliezer and Rabbi Shimon say on the first of Tishrei. The 1st of Tishri is the new year for years, of the Shmita and Jubilee years, for planting and for vegetables. The 1st of Shevat is the new year for trees—so the school of Shammai, but the school of Hillel say: On the 15th thereof. Two of these dates are especially prominent: 1 Nisan is the ecclesiastical new year, i.e. the date from which months and festivals are counted. 
Thus Passover (which begins on 15 Nisan) is described in the Torah as falling "in the first month", while Rosh Hashana (which begins on 1 Tishrei) is described as falling "in the seventh month". 1 Tishrei is the civil new year, and the date on which the year number advances. This date is known as Rosh Hashanah (lit. "head of the year"). Tishrei marks the end of one agricultural year and the beginning of another, and thus 1 Tishrei is considered the new year for most agriculture-related commandments, including Shmita, Yovel, Maaser Rishon, Maaser Sheni, and Maaser Ani. For the dates of the Jewish New Year see Jewish and Israeli holidays 2000–2050. Anno Mundi The Jewish year number is generally given by Anno Mundi (from Latin "in the year of the world", often abbreviated AM or A.M.). In this calendar era, the year number equals the number of years that have passed since the creation of the world, according to an interpretation of Biblical accounts of the creation and subsequent history. From the eleventh century, anno mundi dating became the dominant method of counting years throughout most of the world's Jewish communities, replacing earlier systems such as the Seleucid era. As with Anno Domini (A.D. or AD), the words or abbreviation for Anno Mundi (A.M. or AM) for the era should properly precede the date rather than follow it. The reference conjunction of the Sun and the Moon (Molad 1) is considered to be at 5 hours and 204 halakim, or 11:11:20 p.m., on the evening of Sunday, 6 October 3761 BCE. According to rabbinic reckoning, this moment was not Creation, but about one year "before" Creation, with the new moon of its first month (Tishrei) called molad tohu (the mean new moon of chaos or nothing). It is about one year before the traditional Jewish date of Creation on 25 Elul AM 1, based upon the Seder Olam Rabbah. Thus, adding 3760 (before Rosh Hashanah) or 3761 (after it) to a Julian calendar year number starting from 1 CE will yield the Hebrew year. For earlier years there may be a discrepancy; see Missing years (Jewish calendar). In Hebrew there are two common ways of writing the year number: with the thousands, called ("major era"), and without the thousands, called ("minor era"). Thus, the current year is written as () using the "major era" and () using the "minor era". Cycles of years Since the Jewish calendar has been fixed, leap months have been added according to the Metonic cycle of 19 years, of which 12 are common (non-leap) years of 12 months, and 7 are leap years of 13 months. This 19-year cycle is known in Hebrew as the Machzor Katan ("small cycle"). Because the Julian year is 365.25 days long, every 28 years the weekday pattern repeats. This is called the sun cycle, or the Machzor Gadol ("great cycle") in Hebrew. The beginning of this cycle is arbitrary. Its main use is for determining the time of Birkat Hachama. Because every 50 years is a Jubilee year, there is a jubilee (yovel) cycle. Because every seven years is a sabbatical year, there is a seven-year release cycle. The placement of these cycles is debated. Historically, there is enough evidence to fix the sabbatical years in the Second Temple Period. But it may not match with the sabbatical cycle derived from the biblical period; and there is no consensus on whether the Jubilee year is the fiftieth year or the latter half of the forty-ninth year. Every 247 years, or 13 cycles of 19 years, forms a period known as an iggul, or the Iggul of Rabbi Nahshon.
This period is notable in that the precise details of the calendar almost always (but not always) repeat over this period. This occurs because the molad interval (the average length of a Hebrew month) is 29.530594 days, which over 247 years results in a total of 90215.965 days. This is almost exactly 90216 days – a whole number and multiple of 7 (equalling the days of the week). So over 247 years, not only does the 19-year leap year cycle repeat itself, but the days of the week (and thus the days of Rosh Hashanah and the year length) typically repeat themselves. Calculations Leap year calculations To determine whether a Jewish year is a leap year, one must find its position in the 19-year Metonic cycle. This position is calculated by dividing the Jewish year number by 19 and finding the remainder. (Since there is no year 0, a remainder of 0 indicates that the year is year 19 of the cycle.) For example, the Jewish year divided by 19 results in a remainder of , indicating that it is year of the Metonic cycle. The Jewish year used is the anno mundi year, in which the year of creation according to the Rabbinical Chronology (3761 BCE) is taken as year 1. Years 3, 6, 8, 11, 14, 17, and 19 of the Metonic cycle are leap years. The Hebrew mnemonic GUCHADZaT refers to these years, while another memory aid refers to musical notation. Whether a year is a leap year can also be determined by a simple calculation (which also gives the fraction of a month by which the calendar is behind the seasons, useful for agricultural purposes). To determine whether year n of the calendar is a leap year, find the remainder on dividing [(7 × n) + 1] by 19. If the remainder is 6 or less it is a leap year; if it is 7 or more it is not (a short sketch of this test appears below). This works because, as there are seven leap years in nineteen years, the difference between the solar and lunar years increases by 7/19 of a month per year. When the difference goes above 18/19 of a month this signifies a leap year, and the difference is reduced by one month. The Hebrew calendar assumes that a month is uniformly of the length of an average synodic month, taken as exactly 29 days, 12 hours, and 793 parts (about 29.530594 days, which is less than half a second from the modern scientific estimate); it also assumes that a tropical year is exactly 12 7/19 times that, i.e., about 365.2468 days. Thus it overestimates the length of the tropical year (365.2422 days) by 0.0046 days (about 7 minutes) per year, or about one day in 216 years. This error is less than that of the Julian year (365.2500 days), which is 0.0078 days/year (or one day in 128 years), but much more than that of the Gregorian year (365.2425 days/year), which is 0.0003 days/year (or one day in 3333 years). Rosh Hashanah postponement rules Besides the adding of leap months, the year length is sometimes adjusted by adding one day to the month of Marcheshvan, or removing one day from the month of Kislev. Because each calendar year begins with Rosh Hashanah, adjusting the year length is equivalent to moving the day of the next Rosh Hashanah. Several rules are used to determine when this is performed. To calculate the day on which Rosh Hashanah of a given year will fall, the expected molad (moment of lunar conjunction or new moon) of Tishrei in that year is calculated. The molad is calculated by multiplying the number of months that will have elapsed since some (preceding) molad (whose weekday is known) by the mean length of a (synodic) lunar month, which is 29 days, 12 hours, and 793 parts (there are 1080 "parts" in an hour, so that one part is equal to 3⅓ seconds).
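The leap-year test quoted above can be written out directly. This is a minimal sketch of the two equivalent checks (the position in the 19-year cycle and the (7 × n) + 1 remainder test); the function names are illustrative, not part of any standard library.

```python
# Sketch of the leap-year rules described above: years 3, 6, 8, 11, 14, 17
# and 19 of the 19-year Metonic cycle are leap years, which is equivalent to
# testing whether (7*year + 1) mod 19 is 6 or less.
LEAP_POSITIONS = {3, 6, 8, 11, 14, 17, 19}

def metonic_position(year: int) -> int:
    """Position of a Hebrew (anno mundi) year in the 19-year cycle."""
    r = year % 19
    return 19 if r == 0 else r   # there is no year 0, so remainder 0 means year 19

def is_leap(year: int) -> bool:
    return metonic_position(year) in LEAP_POSITIONS

def is_leap_by_remainder(year: int) -> bool:
    """Equivalent remainder test: leap if (7*year + 1) mod 19 is 6 or less."""
    return (7 * year + 1) % 19 <= 6

# The two tests agree; for instance AM 5784 is a leap year, AM 5785 is not.
assert all(is_leap(y) == is_leap_by_remainder(y) for y in range(1, 1000))
print(is_leap(5784), is_leap(5785))   # True False
```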
The very first molad, the molad tohu, fell on Sunday evening at 11:11:20 pm in the local time of Jerusalem, 6 October 3761 BCE (Proleptic Julian calendar) 20:50:23.1 UTC, or in Jewish terms Day 2, 5 hours, and 204 parts. The exact time of a molad in terms of days after midnight between 29 and 30 December 1899 (the form used by many spreadsheets for date and time) is -2067022+(23+34/3/60)/24+(29.5+793/1080/24)*N where N is the number of lunar months since the beginning. ( for the beginning of the 305th Machzor Katan on 1 October 2016.) Adding 0.25 to this converts it to the Jewish system in which the day begins at 6 pm. In calculating the number of months that will have passed since the known molad that one uses as the starting point, one must remember to include any leap months that falls within the elapsed interval, according to the cycle of leap years. A 19-year cycle of 235 synodic months has 991 weeks 2 days 16 hours 595 parts, a common year of 12 synodic months has 50 weeks 4 days 8 hours 876 parts, while a leap year of 13 synodic months has 54 weeks 5 days 21 hours 589 parts. Four conditions are considered to determine whether the date of Rosh Hashanah must be postponed. These are called the Rosh Hashanah postponement rules, or . The two most important conditions are: If the molad occurs at or later than noon, Rosh Hashanah is postponed a day. This is called (, literally, "old birth", i.e., late new moon). This rule is mentioned in the Talmud, and is used nowadays to prevent the molad falling on the second day of the month. This ensures that the long-term average month length is 29.530594 days (equal to the molad interval), rather than the 29.5 days implied by the standard alternation between 29- and 30-day months. If the molad occurs on a Sunday, Wednesday, or Friday, Rosh Hashanah is postponed a day. If the application of would place Rosh Hashanah on one of these days, then it must be postponed a second day. This is called (), an acronym that means "not [weekday] one, four, or six". This rule is applied for religious reasons, so that Yom Kippur does not fall on a Friday or Sunday, and Hoshana Rabbah does not fall on Shabbat. Since Shabbat restrictions also apply to Yom Kippur, if either day falls immediately before the other, it would not be possible to make necessary preparations for the second day (such as candle lighting). Additionally, the laws of Shabbat override those of Hoshana Rabbah, so that if Hoshana Rabbah were to fall on Shabbat, the Hoshana Rabbah aravah ritual could not be performed. Thus Rosh Hashanah can only fall on Monday, Tuesday, Thursday, and Saturday. The kevi'ah uses the letters ה ,ג ,ב and ז (representing 2, 3, 5, and 7, for Monday, Tuesday, Thursday, and Saturday) to denote the starting day of Rosh Hashana and the year. Another two rules are applied much less frequently and serve to prevent impermissible year lengths. Their names are Hebrew acronyms that refer to the ways they are calculated: If the molad in a common year falls on a Tuesday, on or after 9 hours and 204 parts, Rosh Hashanah is postponed to Thursday. This is (, where the acronym stands for "3 [Tuesday], 9, 204"). If the molad following a leap year falls on a Monday, on or after 15 hours and 589 parts after the Hebrew day began (for calculation purposes, this is taken to be 6 pm Sunday), Rosh Hashanah is postponed to Tuesday. This is (), where the acronym stands for "2 [Monday], 15, 589". 
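The molad arithmetic and the two most important postponement rules described above can be sketched as follows, using only the figures quoted in the text (a month of 29 days, 12 hours, 793 parts, and a molad tohu of Day 2, 5 hours, 204 parts). This is an illustration, not a complete Rosh Hashanah calculation: it omits the month count from the epoch to a given year and the third and fourth deḥiyyot, and the function names are assumptions chosen for this sketch.

```python
# Sketch of the molad arithmetic described above, working entirely in halakim
# ("parts", 1080 per hour). Days are counted in the Hebrew manner, beginning
# at 6 pm, so hour 18 is noon. Only the first two postponement rules
# (molad zaken and lo ADU) are shown.
PARTS_PER_HOUR = 1080
PARTS_PER_DAY = 24 * PARTS_PER_HOUR
MONTH = 29 * PARTS_PER_DAY + 12 * PARTS_PER_HOUR + 793        # 29d 12h 793p
MOLAD_TOHU = 1 * PARTS_PER_DAY + 5 * PARTS_PER_HOUR + 204     # Day 2, 5h 204p

def molad(months_since_tohu: int) -> tuple[int, int, int]:
    """Weekday (1=Sunday .. 7=Saturday), hour (0-23) and parts of a molad."""
    total = MOLAD_TOHU + months_since_tohu * MONTH
    weekday = (total // PARTS_PER_DAY) % 7 + 1
    hour, parts = divmod(total % PARTS_PER_DAY, PARTS_PER_HOUR)
    return weekday, hour, parts

def rosh_hashanah_weekday(weekday: int, hour: int) -> int:
    """Apply the two main dehiyyot to the weekday of the Tishrei molad."""
    if hour >= 18:                    # molad zaken: molad at or after noon
        weekday = weekday % 7 + 1     # postpone by one day
    if weekday in (1, 4, 6):          # lo ADU: never Sunday, Wednesday, Friday
        weekday = weekday % 7 + 1     # postpone (again) by one day
    return weekday

print(molad(0))   # (2, 5, 204): the molad tohu itself
```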
Deficient, regular, and complete years The rules of postponement of Rosh HaShanah mean that a Jewish common year will have 353, 354, or 355 days while a leap year (with the addition of Adar I which always has 30 days) has 383, 384, or 385 days. A year (Hebrew for "deficient" or "incomplete") is 353 or 383 days long. Both Cheshvan and Kislev have 29 days. A year ("regular" or "in-order") is 354 or 384 days long. Cheshvan has 29 days while Kislev has 30 days. A year ("complete" or "perfect", also "abundant") is 355 or 385 days long. Both Cheshvan and Kislev have 30 days. Whether a year is deficient, regular, or complete is determined by the time between two adjacent Rosh Hashanah observances and by whether it is a leap year. A Metonic cycle equates to 235 lunar months in each 19-year cycle. This gives an average of 6,939 days, 16 hours, and 595 parts for each cycle. But due to the Rosh Hashanah postponement rules (preceding section) a cycle of 19 Jewish years can be either 6,939, 6,940, 6,941, or 6,942 days in duration. For any given year in the Metonic cycle, the molad moves forward in the week by 2 days, 16 hours, and 595 parts every 19 years. The greatest common divisor of this and a week is 5 parts, so the Jewish calendar repeats exactly following a number of Metonic cycles equal to the number of parts in a week divided by 5, namely 7×24×216 = 36,288 Metonic cycles, or 689,472 Jewish years. There is a near-repetition every 247 years, except for an excess of 50 minutes and 16⅔ seconds (905 parts). Contrary to popular impression, one's Hebrew birthday does not necessarily fall on the same Gregorian date every 19 years, since the length of the Metonic cycle varies by several days (as does the length of a 19-year Gregorian period, depending whether it contains 4 or 5 leap years). Keviah There are three qualities that distinguish one year from another: whether it is a leap year or a common year; on which of four permissible days of the week the year begins; and whether it is a deficient, regular, or complete year. Mathematically, there are 24 (2×4×3) possible combinations, but only 14 of them are valid. Each of these patterns is known by a kevi'ah (for 'a setting' or 'an established thing'), which is a code consisting of two numbers and a letter. In English, the code consists of the following: The left number is the day of the week of 1 Tishrei, Rosh Hashanah The letter indicates whether that year is deficient (D, "ח", from ), regular (R, "כ", from ), or complete (C, "ש", from ) The right number is the day of the week of 15 Nisan, the first day of Passover or Pesach, within the same Hebrew year (next Julian/Gregorian year) The kevi'ah in Hebrew letters is written right-to-left, so the days of the week are reversed, with the right number for 1 Tishrei and the left for 15 Nisan. The kevi'ah also determines the Torah reading cycle (which parshiyot are read together or separately). The four gates The keviah, and thus the annual calendar, of a numbered Hebrew year can be determined by consulting the table of Four Gates, whose inputs are the year's position in the 19-year cycle and its molad Tishrei. In this table, the years of a 19-year cycle are organized into four groups (called "gates"): common years after a leap year but before a common year; common years between two leap years; common years after a common year but before a leap year; and leap years.
This table numbers the days of the week and hours for the limits of molad Tishrei in the Hebrew manner for calendrical calculations, that is, both begin at , thus is noon Saturday, with the week starting on (Saturday 6pm, i.e. the beginning of Sunday reckoned in the Hebrew manner). The oldest surviving table of Four Gates was written by Muhammad ibn Musa al-Khwarizmi in 824. Incidence Comparing the days of the week of molad Tishrei with the days of the week on which Rosh Hashanah actually falls shows that during 39% of years Rosh Hashanah is not postponed beyond the day of the week of its molad Tishrei, 47% are postponed one day, and 14% are postponed two days. This table also identifies the seven types of common years and seven types of leap years. Most are represented in any 19-year cycle, except one or two may be in neighboring cycles. The most likely type of year is 5R7 in 18.1% of years, whereas the least likely is 5C1 in 3.3% of years. The day of the week of Passover is later than that of Rosh Hashanah by one, two or three days for common years and three, four or five days for leap years in deficient, regular or complete years, respectively. Worked example Given the length of the year, the length of each month is fixed as described above, so the real problem in determining the calendar for a year is determining the number of days in the year. In the modern calendar, this is determined in the following manner. The day of Rosh Hashanah and the length of the year are determined by the time and the day of the week of the Tishrei molad, that is, the moment of the average conjunction. Given the Tishrei molad of a certain year, the length of the year is determined as follows: First, one must determine whether each year is an ordinary or leap year by its position in the 19-year Metonic cycle. Years 3, 6, 8, 11, 14, 17, and 19 are leap years. Secondly, one must determine the number of days between the starting Tishrei molad (TM1) and the Tishrei molad of the next year (TM2). For calendar descriptions in general the day begins at 6 pm, but for the purpose of determining Rosh Hashanah, a molad occurring on or after noon is treated as belonging to the next day (the first deḥiyyah). All months are calculated as 29d, 12h, 44m, 3⅓s long (MonLen). Therefore, in an ordinary year TM2 occurs 12 × MonLen days after TM1. This is usually 354 calendar days after TM1, but if TM1 is on or after 3:11:20 am and before noon, it will be 355 days. Similarly, in a leap year, TM2 occurs 13 × MonLen days after TM1. This is usually 384 days after TM1, but if TM1 is on or after noon and before 2:27:16⅔ pm, TM2 will be only 383 days after TM1. In the same way, from TM2 one calculates TM3. Thus the four natural year lengths are 354, 355, 383, and 384 days. However, because of the holiday rules, Rosh Hashanah cannot fall on a Sunday, Wednesday, or Friday, so if TM2 is one of those days, Rosh Hashanah in year 2 is postponed by adding one day to year 1 (the second deḥiyyah). To compensate, one day is subtracted from year 2. It is to allow for these adjustments that the system allows 385-day years (long leap) and 353-day years (short ordinary) besides the four natural year lengths. But how can year 1 be lengthened if it is already a long ordinary year of 355 days or year 2 be shortened if it is a short leap year of 383 days? That is why the third and fourth deḥiyyahs are needed. If year 1 is already a long ordinary year of 355 days, there will be a problem if TM1 is on a Tuesday, as that means TM2 falls on a Sunday and will have to be postponed, creating a 356-day year.
In this case, Rosh Hashanah in year 1 is postponed from Tuesday (the third deḥiyyah). As it cannot be postponed to Wednesday, it is postponed to Thursday, and year 1 ends up with 354 days. On the other hand, if year 2 is already a short year of 383 days, there will be a problem if TM2 is on a Wednesday, because Rosh Hashanah in year 2 will have to be postponed from Wednesday to Thursday and this will cause year 2 to be only 382 days long. In this case, year 2 is extended by one day by postponing Rosh Hashanah in year 3 from Monday to Tuesday (the fourth deḥiyyah), and year 2 will have 383 days. Holidays For calculated dates of Jewish holidays, see Jewish and Israeli holidays 2000–2050. Accuracy Molad interval A "new moon" (astronomically called a lunar conjunction and, in Hebrew, a molad) is the moment at which the sun and moon have the same ecliptic longitude (i.e. they are aligned horizontally with respect to a north–south line). The period between two new moons is a synodic month. The actual length of a synodic month varies from about 29 days 6 hours and 30 minutes (29.27 days) to about 29 days and 20 hours (29.83 days), a variation range of about 13 hours and 30 minutes. Accordingly, for convenience, the Hebrew calendar uses a long-term average month length, known as the molad interval, which equals the mean synodic month of ancient times. The molad interval is 29 days, 12 hours, and 793 "parts" (1 "part" = 1/18 minute = 3⅓ seconds) (i.e., 29.530594 days), and is the same value determined by the Babylonians in their System B about 300 BCE and adopted by Hipparchus (2nd century BCE) and by Ptolemy in the Almagest (2nd century CE). Its remarkable accuracy (less than one second from the current true value) is thought to have been achieved using records of lunar eclipses from the 8th to 5th centuries BCE. In the Talmudic era, when the mean synodic month was slightly shorter than at present, the molad interval was even more accurate, being "essentially a perfect fit" for the mean synodic month at the time. Currently, the accumulated drift in the moladot since the Talmudic era has reached a total of approximately 97 minutes. This means that the molad of Tishrei lands one day later than it ought to in (97 minutes) ÷ (1440 minutes per day) = nearly 7% of years. Therefore, the seemingly small drift of the moladot is already significant enough to affect the date of Rosh Hashanah, which then cascades to many other dates in the calendar year, and sometimes (due to the Rosh Hashanah postponement rules) also interacts with the dates of the prior or next year. The rate of calendar drift is increasing with time, since the mean synodic month is progressively shortening due to gravitational tidal effects. Measured on a strictly uniform time scale (such as that provided by an atomic clock) the mean synodic month is becoming gradually longer, but since the tides slow Earth's rotation rate even more, the mean synodic month is becoming gradually shorter in terms of mean solar time. Metonic cycle drift A larger source of error is the inaccuracy of the Metonic cycle. Nineteen Jewish years average 6939d 16h 33m 03s, compared to the 6939d 14h 26m 15s of nineteen mean solar years. Thus, the Hebrew calendar drifts by just over 2 hours every 19 years, or approximately one day every 216 years.
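The drift figures quoted above can be checked with a few lines of arithmetic. This is only a back-of-the-envelope sketch; the tropical-year length of 365.2422 days is the usual modern estimate and is taken here as an assumption.

```python
# Back-of-the-envelope check of the Metonic drift quoted above: 235 molad
# intervals (19 Hebrew years) versus 19 mean tropical years of 365.2422 days.
from fractions import Fraction

molad_interval = Fraction(29) + Fraction(12, 24) + Fraction(793, 25920)   # days
nineteen_hebrew_years = 235 * molad_interval            # about 6939.69 days
nineteen_solar_years = 19 * Fraction(3652422, 10000)    # about 6939.60 days

drift = nineteen_hebrew_years - nineteen_solar_years    # days gained per 19 years
print(float(drift) * 24)    # ~2.1 hours per 19-year cycle
print(19 / float(drift))    # ~216 years per full day of drift
```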
Due to accumulation of this discrepancy, the earliest date on which Passover can fall has drifted by roughly eight days since the 4th century, and the 15th of Nisan now falls only on or after 26 March (the date in 2013), five days after the actual equinox on 21 March. In the distant future, this drift is projected to move Passover much further in the year. If the calendar is not amended, then Passover will start to land on or after the summer solstice around AM 16652 (12892 CE). Implications for Jewish ritual When the calendar was fixed in the 4th century, the earliest Passover (in year 16 of the Metonic cycle) began on the first full moon after the March equinox. This is still the case in about 80% of years; but, in about 20% of years, Passover is a month late by this criterion. Presently, this occurs after the "premature" insertion of a leap month in years 8, 11, and 19 of each 19-year cycle, which causes Passover to fall especially far after the March equinox in such years. Calendar drift also impacts the observance of Sukkot, which will shift into Israel's winter rainy season, making dwelling in the sukkah less practical. It also affects the logic of the Shemini Atzeret prayer for rain, which will be more often recited once rains are already underway. Modern scholars have debated at which point the drift could become ritually problematic, and proposed adjustments to the fixed calendar to keep Passover in its proper season. The seriousness of the calendar drift is discounted by many, on the grounds that Passover will remain in the spring season for many millennia, and the Torah is generally not interpreted as having specified tight calendrical limits. However, some writers and researchers have proposed "corrected" calendars (with modifications to the leap year cycle, molad interval, or both) which would compensate for these issues: Irv Bromberg has suggested a 353-year cycle of 4,366 months, which would include 130 leap months, along with use of a progressively shorter molad interval, which would keep an amended fixed arithmetic Hebrew calendar from drifting for more than seven millennia. The 353 years would consist of 18 Metonic cycles, as well as an 11-year period in which the last 8 years of the Metonic cycle are omitted. Other authors have proposed to use cycles of 334 or 687 years. Another suggestion is to delay the leap years gradually so that a whole intercalary month is taken out at the end of Iggul 26, while also changing the synodic month to be the more accurate 29.53058868 days. Thus, the length of the year would be very close to the actual tropical year. The result is the "Hebrew Calendar" in the program CalMaster2000. Religious questions abound about how such a system might be implemented and administered throughout the diverse aspects of the world Jewish community. Usage In Auschwitz While imprisoned in Auschwitz, Jews made every effort to preserve Jewish tradition in the camps, despite the monumental dangers in doing so. Keeping the Hebrew calendar, a tradition of great importance to Jewish practice and ritual, was particularly dangerous, since no tools for telling time, such as watches and calendars, were permitted in the camps. The keeping of a Hebrew calendar was a rarity amongst prisoners and there are only two known surviving calendars that were made in Auschwitz, both of which were made by women. Before this, the making of a Hebrew calendar had generally been assumed to be the job of a man in Jewish society.
In contemporary Israel Early Zionist pioneers were impressed by the fact that the calendar preserved by Jews over many centuries in far-flung diasporas, as a matter of religious ritual, was geared to the climate of their original country: major Jewish holidays such as Sukkot, Passover, and Shavuot correspond to major points of the country's agricultural year such as planting and harvest. Accordingly, in the early 20th century the Hebrew calendar was re-interpreted as an agricultural rather than religious calendar. After the creation of the State of Israel, the Hebrew calendar became one of the official calendars of Israel, along with the Gregorian calendar. Holidays and commemorations not derived from previous Jewish tradition were to be fixed according to the Hebrew calendar date. For example, the Israeli Independence Day falls on 5 Iyar, Jerusalem Reunification Day on 28 Iyar, Yom HaAliyah on 10 Nisan, and the Holocaust Commemoration Day on 27 Nisan. The Hebrew calendar is still widely acknowledged, appearing in public venues such as banks (where it is legal for use on cheques and other documents), and on the mastheads of newspapers. The Jewish New Year (Rosh Hashanah) is a two-day public holiday in Israel. However, since the 1980s an increasing number of secular Israelis celebrate the Gregorian New Year (usually known as "Silvester Night"—) on the night between 31 December and 1 January. Prominent rabbis have on several occasions sharply denounced this practice, but with no noticeable effect on the secularist celebrants. Wall calendars commonly used in Israel are hybrids. Most are organised according to Gregorian rather than Jewish months, but begin in September, when the Jewish New Year usually falls, and provide the Jewish date in small characters. History Early formation Lunisolar calendars similar to the Hebrew calendar, consisting of twelve lunar months plus an occasional 13th intercalary month to synchronize with the solar/agricultural cycle, were used in all ancient Middle Eastern civilizations except Egypt, and likely date to the 3rd millennium BCE. While there is no mention of this 13th month anywhere in the Hebrew Bible, still most Biblical scholars hold that the intercalation process was almost certainly a regularly occurring aspect of the early Hebrew calendar keeping process. Month names Biblical references to the pre-exilic calendar include ten of the twelve months identified by number rather than by name. Prior to the Babylonian captivity, the names of only four months are referred to in the Tanakh: Aviv (first month), Ziv (second month), Ethanim (seventh month), and Bul (eighth month). All of these are believed to be Canaanite names. The last three of these names are only mentioned in connection with the building of the First Temple and Håkan Ulfgard suggests that the use of what are rarely used Canaanite (or in the case of Ethanim perhaps Northwest Semitic) names indicates that "the author is consciously utilizing an archaizing terminology, thus giving the impression of an ancient story...". Alternatively, these names may be attributed to the presence of Phoenician scribes in Solomon's court at the time of the building of the Temple. During the Babylonian captivity, the Jewish people adopted the Babylonian names for the months. The Babylonian calendar descended directly from the Sumerian calendar. 
These Babylonian month-names (such as Nisan, Iyyar, Tammuz, Ab, Elul, Tishri and Adar) are shared with the modern Levantine solar calendar (currently used in the Arabic-speaking countries of the Fertile Crescent) and the modern Assyrian calendar, indicating a common origin. The origin is thought to be the Babylonian calendar. Past methods of dividing years According to some Christian and Karaite sources, the tradition in ancient Israel was that 1 Nisan would not start until the barley is ripe, being the test for the onset of spring. If the barley was not ripe, an intercalary month would be added before Nisan. In the 1st century, Josephus stated that while – Moses...appointed Nisan...as the first month for the festivals...the commencement of the year for everything relating to divine worship, but for selling and buying and other ordinary affairs he preserved the ancient order [i. e. the year beginning with Tishrei]." Edwin Thiele concluded that the ancient northern Kingdom of Israel counted years using the ecclesiastical new year starting on 1 Aviv/Nisan (Nisan-years), while the southern Kingdom of Judah counted years using the civil new year starting on 1 Tishrei (Tishri-years). The practice of the Kingdom of Israel was also that of Babylon, as well as other countries of the region. The practice of Judah is continued in modern Judaism and is celebrated as Rosh Hashana. Past methods of numbering years Before the adoption of the current Anno Mundi year numbering system, other systems were used. In early times, the years were counted from some significant event such as the Exodus. During the period of the monarchy, it was the widespread practice in western Asia to use era year numbers according to the accession year of the monarch of the country involved. This practice was followed by the united kingdom of Israel, kingdom of Judah, kingdom of Israel, Persia, and others. Besides, the author of Kings coordinated dates in the two kingdoms by giving the accession year of a monarch in terms of the year of the monarch of the other kingdom, though some commentators note that these dates do not always synchronise. Other era dating systems have been used at other times. For example, Jewish communities in the Babylonian diaspora counted the years from the first deportation from Israel, that of Jehoiachin in 597 BCE. The era year was then called "year of the captivity of Jehoiachin". During the Hellenistic Maccabean period, Seleucid era counting was used, at least in Land of Israel (under Greek influence at the time). The Books of the Maccabees used Seleucid era dating exclusively, as did Josephus writing in the Roman period. From the 1st-10th centuries, the center of world Judaism was in the Middle East (primarily Iraq and Palestine), and Jews in these regions also used Seleucid era dating, which they called the "Era of Contracts [or Documents]"; this counting is still sometimes used by Yemenite Jews. The Talmud states: Rav Aha bar Jacob then put this question: How do we know that our Era [of Documents] is connected with the Kingdom of Greece at all? Why not say that it is reckoned from the Exodus from Egypt, omitting the first thousand years and giving the years of the next thousand? In that case, the document is really post-dated!Said Rav Nahman: In the Diaspora the Greek Era alone is used.He [Rav Aha] thought that Rav Nahman wanted to dispose of him anyhow, but when he went and studied it thoroughly he found that it is indeed taught [in a Baraita]: In the Diaspora the Greek Era alone is used. 
In the 8th and 9th centuries, as the center of Jewish life moved from Babylonia to Europe, counting using the Seleucid era "became meaningless", and thus was replaced by the anno mundi system. The use of the Seleucid era continued till the 16th century in the East, and was employed even in the 19th century among Yemenite Jews. Occasionally in Talmudic writings, reference was made to other starting points for eras, such as destruction era dating, being the number of years since the 70 CE destruction of the Second Temple. Leap months According to normative Judaism, the months must be determined by a proper court with the necessary authority to sanctify them. Hence the court, not astronomy, has the final decision. When the observational form of the calendar was in use, whether or not a leap month was added depended on three factors: 'aviv [i.e., the ripeness of barley], fruits of trees, and the equinox. On two of these grounds it should be intercalated, but not on one of them alone. It may be noted that in the Bible the name of the first month, Aviv, literally means "spring". Thus, if Adar was over and spring had not yet arrived, an additional month was observed. Determining the new month in the Mishnaic period The Tanakh contains several commandments related to the keeping of the calendar and the lunar cycle, and records changes that have taken place to the Hebrew calendar. Numbers 10:10 stresses the importance in Israelite religious observance of the new month (Hebrew: , Rosh Chodesh, "beginning of the month"): "... in your new moons, ye shall blow with the trumpets over your burnt-offerings..." Similarly in Numbers 28:11. "The beginning of the month" meant the appearance of a new moon, as in Exodus 12:2: "This month is to you". According to the Mishnah and Tosefta, in the Maccabean, Herodian, and Mishnaic periods, new months were determined by the sighting of a new crescent, with two eyewitnesses required to testify to the Sanhedrin to having seen the new lunar crescent at sunset. The practice in the time of Gamaliel II (c. 100 CE) was for witnesses to select the appearance of the moon from a collection of drawings that depicted the crescent in a variety of orientations, only a few of which could be valid in any given month. These observations were compared against calculations. At first the beginning of each Jewish month was signaled to the communities of Israel and beyond by fires lit on mountaintops, but after the Samaritans began to light false fires, messengers were sent. The inability of the messengers to reach communities outside Israel before mid-month High Holy Days (Succot and Passover) led outlying communities to celebrate scriptural festivals for two days rather than one, observing the second feast-day of the Jewish diaspora because of uncertainty of whether the previous month ended after 29 or 30 days. Historicity It has been noted that the procedures described in the Mishnah and Tosefta are all plausible procedures for regulating an empirical lunar calendar. Fire-signals, for example, or smoke-signals, are known from the pre-exilic Lachish ostraca. Furthermore, the Mishnah contains laws that reflect the uncertainties of an empirical calendar. Mishnah Sanhedrin, for example, holds that when one witness says that an event took place on a certain day of the month, and another that the same event took place on the following day, their testimony can be held to agree, since the length of the preceding month was uncertain.
Another Mishnah takes it for granted that it cannot be known in advance whether a year's lease is for twelve or thirteen months. Hence it is a reasonable conclusion that the Mishnaic calendar was actually used in the Mishnaic period. The accuracy of the Mishnah's claim that the Mishnaic calendar was also used in the late Second Temple period is less certain. One scholar has noted that there are no laws from Second Temple period sources that indicate any doubts about the length of a month or of a year. This led him to propose that the priests must have had some form of computed calendar or calendrical rules that allowed them to know in advance whether a month would have 30 or 29 days, and whether a year would have 12 or 13 months. The fixing of the calendar Between 70 and 1178 CE, the observation-based calendar was gradually replaced by a mathematically calculated one. The Talmuds indicate at least the beginnings of a transition from a purely empirical to a computed calendar. Samuel of Nehardea (c. 165–254) stated that he could determine the dates of the holidays by calculation rather than observation. According to a statement attributed to Yose (late 3rd century), Purim could not fall on a Sabbath nor a Monday, lest Yom Kippur fall on a Friday or a Sunday. This indicates that, by the time of the redaction of the Jerusalem Talmud (c. 400 CE), there were a fixed number of days in all months from Adar to Elul, also implying that the extra month was already a second Adar added before the regular Adar. Elsewhere, Shimon ben Pazi is reported to have counseled "those who make the computations" not to set Rosh Hashana or Hoshana Rabbah on Shabbat. This indicates that there was a group who "made computations" and controlled, to some extent, the day of the week on which Rosh Hashana would fall. There is a tradition, first mentioned by Hai Gaon (died 1038 CE), that Hillel II was responsible for the new calculated calendar with a fixed intercalation cycle "in the year 670 of the Seleucid era" (i.e., 358–359 CE). Later writers, such as Nachmanides, explained Hai Gaon's words to mean that the entire computed calendar was due to Hillel II in response to persecution of Jews. Maimonides (12th century) stated that the Mishnaic calendar was used "until the days of Abaye and Rava" (c. 320–350 CE), and that the change came when "the land of Israel was destroyed, and no permanent court was left." Taken together, these two traditions suggest that Hillel II (whom they identify with the mid-4th-century Jewish patriarch Ioulos, attested in a letter of the Emperor Julian, and the Jewish patriarch Ellel, mentioned by Epiphanius) instituted the computed Hebrew calendar because of persecution. H. Graetz linked the introduction of the computed calendar to a sharp repression following a failed Jewish insurrection that occurred during the rule of the Christian emperor Constantius and Gallus. Saul Lieberman argued instead that the introduction of the fixed calendar was due to measures taken by Christian Roman authorities to prevent the Jewish patriarch from sending calendrical messengers. Both the tradition that Hillel II instituted the complete computed calendar, and the theory that the computed calendar was introduced due to repression or persecution, have been questioned. 
Furthermore, two Jewish dates during post-Talmudic times (specifically in 506 and 776) are impossible under the rules of the modern calendar, indicating that some of its arithmetic rules were established in Babylonia during the times of the Geonim (7th to 8th centuries). Most likely, the procedure established in 359 involved a fixed molad interval slightly different from the current one, Rosh Hashana postponement rules similar but not identical to current rules, and leap months were added based on when Passover preceded a fixed cutoff date rather than through a repeated 19-year cycle. The Rosh Hashana rules apparently reached their modern form between 629 and 648, the modern molad interval was likely fixed in 776, while the fixed 19-year cycle also likely dates to the late 8th century. Except for the epoch year number (the fixed reference point at the beginning of year 1, which at that time was one year later than the epoch of the modern calendar), the calendar rules reached their current form by the beginning of the 9th century, as described by the Persian Muslim astronomer Muhammad ibn Musa al-Khwarizmi in 823. Al-Khwarizmi's study of the Jewish calendar describes the 19-year intercalation cycle, the rules for determining on what day of the week the first day of the month Tishrei shall fall, the interval between the Jewish era (creation of Adam) and the Seleucid era, and the rules for determining the mean longitude of the sun and the moon using the Jewish calendar. Not all the rules were in place by 835. In 921, Aaron ben Meïr had a debate with Saadya Gaon about one of the rules of the calendar. This indicates that the rules of the modern calendar were not so clear and set. In 1000, the Muslim chronologist al-Biruni described all of the modern rules of the Hebrew calendar, except that he specified three different epochs used by various Jewish communities being one, two, or three years later than the modern epoch. In 1178, Maimonides included all the rules for the calculated calendar and their scriptural basis, including the modern epochal year, in his work Mishneh Torah. He wrote that he had chosen the epoch from which calculations of all dates should be as "the third day of Nisan in this present year ... which is the year 4938 of the creation of the world" (22 March 1178). Today, these rules are generally used by Jewish communities throughout the world. Other calendars Outside of Rabbinic Judaism, evidence shows a diversity of practice. Karaite calendar Karaites use the lunar month and the solar year, but the Karaite calendar differs from the current Rabbinic calendar in a number of ways. The Karaite calendar is identical to the Rabbinic calendar used before the Sanhedrin changed the Rabbinic calendar from the lunar, observation based, calendar to the current, mathematically based, calendar used in Rabbinic Judaism today. In the lunar Karaite calendar, the beginning of each month, the Rosh Chodesh, can be calculated, but is confirmed by the observation in Israel of the first sightings of the new moon. This may result in an occasional variation of a maximum of one day, depending on the inability to observe the new moon. The day is usually "picked up" in the next month. The addition of the leap month (Adar II) is determined by observing in Israel the ripening of barley at a specific stage (defined by Karaite tradition) (called aviv), rather than using the calculated and fixed calendar of rabbinic Judaism. 
Occasionally this results in Karaites being one month ahead of other Jews using the calculated rabbinic calendar. The "lost" month would be "picked up" in the next cycle when Karaites would observe a leap month while other Jews would not. Furthermore, the seasonal drift of the rabbinic calendar is avoided, resulting in the years affected by the drift starting one month earlier in the Karaite calendar. Also, the four rules of postponement of the rabbinic calendar are not applied, since they are not mentioned in the Tanakh. This can affect the dates observed for all the Jewish holidays in a particular year by one or two days. In the Middle Ages many Karaite Jews outside Israel followed the calculated rabbinic calendar, because it was not possible to retrieve accurate aviv barley data from the land of Israel. However, since the establishment of the State of Israel, and especially since the Six-Day War, the Karaite Jews that have made aliyah can now again use the observational calendar. Samaritan calendar The Samaritan community's calendar also relies on lunar months and solar years. Calculation of the Samaritan calendar has historically been a secret reserved to the priestly family alone, and was based on observations of the new crescent moon. More recently, a 20th-century Samaritan High Priest transferred the calculation to a computer algorithm. The current High Priest confirms the results twice a year, and then distributes calendars to the community. The epoch of the Samaritan calendar is year of the entry of the Children of Israel into the Land of Israel with Joshua. The month of Passover is the first month in the Samaritan calendar, but the year number increments in the sixth month. Like in the Rabbinic calendar, there are seven leap years within each 19-year cycle. However, the Rabbinic and Samaritan calendars' cycles are not synchronized, so Samaritan festivals—notionally the same as the Rabbinic festivals of Torah origin—are frequently one month off from the date according to the Rabbinic calendar. Additionally, as in the Karaite calendar, the Samaritan calendar does not apply the four rules of postponement, since they are not mentioned in the Tanakh. This can affect the dates observed for all the Jewish holidays in a particular year by one or two days. The Qumran calendar Many of the Dead Sea Scrolls have references to a unique calendar, used by the people there, who are often assumed to be Essenes. The year of this calendar used the ideal Mesopotamian calendar of twelve 30-day months, to which were added 4 days at the equinoxes and solstices (cardinal points), making a total of 364 days. With only 364 days, the calendar would be very noticeably different from the actual seasons after a few years, but there is nothing to indicate what was done about this problem. Various scholars have suggested that nothing was done and the calendar was allowed to change with respect to the seasons, or that changes were made irregularly when the seasonal anomaly was too great to be ignored any longer. Other calendars used by ancient Jews Calendrical evidence for the postexilic Persian period is found in papyri from the Jewish colony at Elephantine, in Egypt. These documents show that the Jewish community of Elephantine used the Egyptian and Babylonian calendars. The Sardica paschal table shows that the Jewish community of some eastern city, possibly Antioch, used a calendrical scheme that kept Nisan 14 within the limits of the Julian month of March. 
Some of the dates in the document are clearly corrupt, but they can be emended to make the sixteen years in the table consistent with a regular intercalation scheme. Peter, the bishop of Alexandria (early 4th century CE), mentions that the Jews of his city "hold their Passover according to the course of the moon in the month of Phamenoth, or according to the intercalary month every third year in the month of Pharmuthi", suggesting a fairly consistent intercalation scheme that kept Nisan 14 approximately between Phamenoth 10 (6 March in the 4th century CE) and Pharmuthi 10 (5 April). Jewish funerary inscriptions from Zoar (south of the Dead Sea), dated from the 3rd to the 5th century, indicate that when years were intercalated, the intercalary month was at least sometimes a repeated month of Adar. The inscriptions, however, reveal no clear pattern of regular intercalations, nor do they indicate any consistent rule for determining the start of the lunar month.
Technology
Calendars
null
13802
https://en.wikipedia.org/wiki/Hammer
Hammer
A hammer is a tool, most often a hand tool, consisting of a weighted "head" fixed to a long handle that is swung to deliver an impact to a small area of an object. This can be, for example, to drive nails into wood, to shape metal (as with a forge), or to crush rock. Hammers are used for a wide range of driving, shaping, breaking and non-destructive striking applications. Traditional disciplines include carpentry, blacksmithing, warfare, and percussive musicianship (as with a gong). Hammering is use of a hammer in its strike capacity, as opposed to prying with a secondary claw or grappling with a secondary hook. Carpentry and blacksmithing hammers are generally wielded from a stationary stance against a stationary target as gripped and propelled with one arm, in a lengthy downward planar arc—downward to add kinetic energy to the impact—pivoting mainly around the shoulder and elbow, with a small but brisk wrist rotation shortly before impact; for extreme impact, concurrent motions of the torso and knee can lower the shoulder joint during the swing to further increase the length of the swing arc (but this is tiring). War hammers are often wielded in non-vertical planes of motion, with a far greater share of energy input provided from the legs and hips, which can also include a lunging motion, especially against moving targets. Small mallets can be swung from the wrists in a smaller motion permitting a much higher cadence of repeated strikes. Use of hammers and heavy mallets for demolition must adapt the hammer stroke to the location and orientation of the target, which can necessitate a clubbing or golfing motion with a two-handed grip. The modern hammer head is typically made of steel which has been heat treated for hardness, and the handle (also known as a haft or helve) is typically made of wood or plastic. Ubiquitous in framing, the claw hammer has a "claw" to pull nails out of wood, and is commonly found in an inventory of household tools in North America. Other types of hammers vary in shape, size, and structure, depending on their purposes. Hammers used in many trades include sledgehammers, mallets, and ball-peen hammers. Although most hammers are hand tools, powered hammers, such as steam hammers and trip hammers, are used to deliver forces beyond the capacity of the human arm. There are over 40 different types of hammers that have many different types of uses. For hand hammers, the grip of the shaft is an important consideration. Many forms of hammering by hand are heavy work, and perspiration can lead to slippage from the hand, turning a hammer into a dangerous or destructive uncontrolled projectile. Steel is highly elastic and transmits shock and vibration; steel is also a good conductor of heat, making it unsuitable for contact with bare skin in frigid conditions. Modern hammers with steel shafts are almost invariably clad with a synthetic polymer to improve grip, dampen vibration, and to provide thermal insulation. A suitably contoured handle is also an important aid in providing a secure grip during heavy use. Traditional wooden handles were reasonably good in all regards, but lack strength and durability compared to steel, and there are safety issues with wooden handles if the head becomes loose on the shaft. The high elasticity of the steel head is important in energy transfer, especially when used in conjunction with an equally elastic anvil. 
In terms of human physiology, many uses of the hammer involve coordinated ballistic movements under intense muscular forces which must be planned in advance at the neuromuscular level, as they occur too rapidly for conscious adjustment in flight. For this reason, accurate striking at speed requires more practice than a tapping movement to the same target area. It has been suggested that the cognitive demands for pre-planning, sequencing and accurate timing associated with the related ballistic movements of throwing, clubbing, and hammering precipitated aspects of brain evolution in early hominids. History The use of simple hammers dates to around 3.3 million years ago according to the 2012 find made by Sonia Harmand and Jason Lewis of Stony Brook University, who while excavating a site near Kenya's Lake Turkana discovered a very large deposit of variously shaped stones including those used to strike wood, bone, or other stones to break them apart and shape them. The first hammers were made without handles. Stones attached to sticks with strips of leather or animal sinew were being used as hammers with handles by about 30,000 BCE during the middle of the Paleolithic Stone Age. The addition of a handle gave the user better control and resulted in fewer accidents. The hammer became the primary tool used for building, obtaining food, and protection. The hammer's archaeological record shows that it may be the oldest tool for which definite evidence exists. Construction and materials A traditional hand-held hammer consists of a separate head and a handle, which can be fastened together by means of a special wedge made for the purpose, or by glue, or both. This two-piece design is often used to combine a dense metallic striking head with a non-metallic mechanical-shock-absorbing handle (to reduce user fatigue from repeated strikes). If wood is used for the handle, it is often hickory or ash, which are tough and long-lasting materials that can dissipate shock waves from the hammer head. Rigid fiberglass resin may be used for the handle; this material does not absorb water or decay but does not dissipate shock as well as wood. A loose hammer head is considered hazardous due to the risk of the head becoming detached from the handle while being swung, becoming a dangerous uncontrolled projectile. Wooden handles can often be replaced when worn or damaged; specialized kits are available covering a range of handle sizes and designs, plus special wedges and spacers for secure attachment. Some hammers are one-piece designs made mostly of a single material. A one-piece metallic hammer may optionally have its handle coated or wrapped in a resilient material such as rubber for improved grip and to reduce user fatigue. The hammer head may be surfaced with a variety of materials including brass, bronze, wood, plastic, rubber, or leather. Some hammers have interchangeable striking surfaces, which can be selected as needed or replaced when worn out. Designs and variations A large hammer-like tool is a maul (sometimes called a "beetle"), a wood- or rubber-headed hammer is a mallet, and a hammer-like tool with a cutting blade is usually called a hatchet. The essential part of a hammer is the head, a compact solid mass that is able to deliver a blow to the intended target without itself deforming. The impacting surface of the tool is usually flat or slightly rounded; the opposite end of the impacting mass may have a ball shape, as in the ball-peen hammer. Some upholstery hammers have a magnetized face, to pick up tacks.
In the hatchet, the flat hammer head may be secondary to the cutting edge of the tool. The impact between steel hammer heads and the objects being hit can create sparks, which may ignite flammable or explosive gases. These are a hazard in some industries such as underground coal mining (due to the presence of methane gas), or in other hazardous environments such as petroleum refineries and chemical plants. In these environments, a variety of non-sparking metal tools are used, primarily made of aluminium or beryllium copper. In recent years, the handles have been made of durable plastic or rubber, though wood is still widely used because of its shock-absorbing qualities and repairability. Hand-powered Ball-peen hammer, or mechanic's hammer Boiler scaling hammer Brass hammer, also known as non-sparking hammer or spark-proof hammer and used mainly in flammable areas like oil fields Bricklayer's hammer Carpenter's hammer (used for nailing), such as the framing hammer and the claw hammer, and pinhammers (ball-peen and cross-peen types) Cow hammer – sometimes used for livestock slaughter, a practice now deprecated due to animal welfare objections Cross-peen hammer, having one round face and one wedge-peen face. Dead blow hammer delivers impact with very little recoil, often due to a hollow head filled with sand, lead shot or pellets Demolition hammer Drilling hammer – a short handled sledgehammer originally used for drilling in rock with a chisel. The name usually refers to a hammer with a head and a handle, also called a "single-jack" hammer because it was used by one person drilling, holding the chisel in one hand and the hammer in the other. In modern usage, the term is mostly interchangeable with "engineer's hammer", although it can indicate a version with a slightly shorter handle. Engineer's hammer, a short-handled hammer, was originally an essential components of a railroad engineer's toolkit for working on steam locomotives. Typical weight is 2–4 lbs (0.9–1.8 kg) with a 12–14-inch (30–35 cm) handle. Originally these were often cross-peen hammers, with one round face and one wedge-peen face, but in modern usage the term primarily refers to hammers with two round faces. Gavel, used by judges and presiding authorities to draw attention Geologist's hammer or rock pick Joiner's hammer, or Warrington hammer Knife-edged hammer, its properties developed to aid a hammerer in the act of slicing whilst bludgeoning Lathe hammer (also known as a lath hammer, lathing hammer, or lathing hatchet), a tool used for cutting and nailing wood lath, which has a small hatchet blade on one side (with a small, lateral nick for pulling nails) and a hammer head on the other Lump hammer, or club hammer Mallets, including versions made with hard rubber or rolled sheets of rawhide Railway track keying hammer Magnetic double-head hammer Magnetic tack hammer Rock climbing hammer Rounding hammer, Blacksmith or farrier hammer. Round face generally for moving or drawing metal and flat for "planishing" or smoothing out the surface marks. Shingler's hammer Sledgehammer Soft-faced hammer Spiking hammer Splitting maul Strike Tack hammer Stonemason's hammer Tinner's hammer Upholstery hammer Welder's chipping hammer Mechanically powered Mechanically powered hammers often look quite different from the hand tools, but nevertheless, most of them work on the same principle. 
They include: Hammer drill, that combines a jackhammer-like mechanism with a drill High Frequency Impact Treatment hammer – for after-treatment of weld transitions Jackhammer Steam hammer Trip hammer Nail gun Staple gun Associated tools Anvil Chisel Pipe drift (Blacksmithing – spreading a punched hole to proper size and/or shape) Star drill Punch Woodsplitting maul – can be hit with a sledgehammer for splitting wood. Woodsplitting wedge – hit with a sledgehammer for splitting wood. Physics As a force amplifier A hammer is a simple force amplifier that works by converting mechanical work into kinetic energy and back. In the swing that precedes each blow, the hammer head stores a certain amount of kinetic energy—equal to the length D of the swing times the force f produced by the muscles of the arm and by gravity. When the hammer strikes, the head is stopped by an opposite force coming from the target, equal and opposite to the force applied by the head to the target. If the target is a hard and heavy object, or if it is resting on some sort of anvil, the head can travel only a very short distance d before stopping. Since the stopping force F times that distance must be equal to the head's kinetic energy, it follows that F is much greater than the original driving force f—roughly, by a factor D/d. In this way, great strength is not needed to produce a force strong enough to bend steel, or crack the hardest stone. Effect of the head's mass The amount of energy delivered to the target by the hammer-blow is equivalent to one half the mass of the head times the square of the head's speed at the time of impact . While the energy delivered to the target increases linearly with mass, it increases quadratically with the speed (see the effect of the handle, below). High tech titanium heads are lighter and allow for longer handles, thus increasing velocity and delivering the same energy with less arm fatigue than that of a heavier steel head hammer. A titanium head has about 3% recoil energy and can result in greater efficiency and less fatigue when compared to a steel head with up to 30% recoil. Dead blow hammers use special rubber or steel shot to absorb recoil energy, rather than bouncing the hammer head after impact. Effect of the handle The handle of the hammer helps in several ways. It keeps the user's hands away from the point of impact. It provides a broad area that is better-suited for gripping by the hand. Most importantly, it allows the user to maximize the speed of the head on each blow. The primary constraint on additional handle length is the lack of space to swing the hammer. This is why sledgehammers, largely used in open spaces, can have handles that are much longer than a standard carpenter's hammer. The second most important constraint is more subtle. Even without considering the effects of fatigue, the longer the handle, the harder it is to guide the head of the hammer to its target at full speed. Most designs are a compromise between practicality and energy efficiency. With too long a handle, the hammer is inefficient because it delivers force to the wrong place, off-target. With too short a handle, the hammer is inefficient because it does not deliver enough force, requiring more blows to complete a given task. Modifications have also been made with respect to the effect of the hammer on the user. 
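As a rough numerical illustration of the force amplification described above (the figures below are illustrative assumptions, not measured values), a 1 kg head moving at 10 m/s carries 50 J of kinetic energy; stopping it within 1 mm implies an average force on the order of 50 kN, and a 0.5 m swing arc would correspond to an amplification factor D/d of about 500:

```latex
% Illustrative numbers only: 1 kg head, 10 m/s at impact, stopped over d = 1 mm, swing arc D = 0.5 m.
E = \tfrac{1}{2} m v^{2} = \tfrac{1}{2}(1\,\mathrm{kg})(10\,\mathrm{m/s})^{2} = 50\,\mathrm{J},
\qquad
F \approx \frac{E}{d} = \frac{50\,\mathrm{J}}{0.001\,\mathrm{m}} = 5\times10^{4}\,\mathrm{N},
\\
f \approx \frac{E}{D} = \frac{50\,\mathrm{J}}{0.5\,\mathrm{m}} = 100\,\mathrm{N},
\qquad
\frac{F}{f} \approx \frac{D}{d} = 500.
```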
Handles made of shock-absorbing materials or varying angles attempt to make it easier for the user to continue to wield this age-old device, even as nail guns and other powered drivers encroach on its traditional field of use. As hammers must be used in many circumstances, where the position of the person using them cannot be taken for granted, trade-offs are made for the sake of practicality. In areas where one has plenty of room, a long handle with a heavy head (like a sledgehammer) can deliver the maximum amount of energy to the target. It is not practical to use such a large hammer for all tasks, however, and thus the overall design has been modified repeatedly to achieve the optimum utility in a wide variety of situations. Effect of gravity Gravity exerts a force on the hammer head. If hammering downwards, gravity increases the acceleration during the hammer stroke and increases the energy delivered with each blow. If hammering upwards, gravity reduces the acceleration during the hammer stroke and therefore reduces the energy delivered with each blow. Some hammering methods, such as traditional mechanical pile drivers, rely entirely on gravity for acceleration on the down stroke. Ergonomics and injury risks A hammer may cause significant injury if it strikes the body. Both manual and powered hammers can cause peripheral neuropathy or a variety of other ailments when used improperly. Awkward handles can cause repetitive stress injury (RSI) to hand and arm joints, and uncontrolled shock waves from repeated impacts can injure nerves and the skeleton. Additionally, striking metal objects with a hammer may produce small metallic projectiles which can become lodged in the eye. It is therefore recommended to wear safety glasses. War hammers A war hammer is a late medieval weapon of war intended for close combat action. Symbolism The hammer, being one of the most used tools by man, has been used very much in symbols such as flags and heraldry. In the Middle Ages, it was used often in blacksmith guild logos, as well as in many family symbols. The hammer and pick are used as a symbol of mining. In mythology, the gods Thor (Norse) and Sucellus (Celtic and Gallo-Roman), and the hero Hercules (Greek), all had hammers that appear in their lore and carried different meanings. Thor, the god of thunder and lightning, wields a hammer named Mjölnir. Many artifacts of decorative hammers have been found, leading modern practitioners of this religion to often wear reproductions as a sign of their faith. In American folklore, the hammer of John Henry represents the strength and endurance of a man. A political party in Singapore, Workers' Party of Singapore, based their logo on a hammer to symbolize the party's civic nationalism and social democracy ideology. A variant, well-known symbol with a hammer in it is the hammer and sickle, which was the symbol of the former Soviet Union and is strongly linked to communism and early socialism. The hammer in this symbol represents the industrial working class (and the sickle represents the agricultural working class). The hammer is used in some coats of arms in former socialist countries like East Germany. Similarly, the Hammer and Sword symbolizes Strasserism, a strand of Nazism seeking to appeal to the working class. Another variant of the symbol was used for the North Korean party, Workers' Party of Korea, incorporated with an ink brush on the middle, which symbolizes both Juche and Songun ideologies. 
In Pink Floyd – The Wall, two hammers crossed are used as a symbol for the fascist takeover of the concert during "In the Flesh". This also has the meaning of the hammer beating down any "nails" that stick out. The gavel, a small wooden mallet, is used to symbolize a mandate to preside over a meeting or judicial proceeding, and a graphic image of one is used as a symbol of legislative or judicial decision-making authority. Judah Maccabee was nicknamed "The Hammer", possibly in recognition of his ferocity in battle. The name "Maccabee" may derive from the Aramaic maqqaba. The hammer in the song "If I Had a Hammer" represents a relentless message of justice broadcast across the land. The song became a symbol of the civil rights movement.
Technology
Hand tools
null
13821
https://en.wikipedia.org/wiki/Hadron
Hadron
In particle physics, a hadron (; ) is a composite subatomic particle made of two or more quarks held together by the strong interaction. They are analogous to molecules, which are held together by the electric force. Most of the mass of ordinary matter comes from two hadrons: the proton and the neutron, while most of the mass of the protons and neutrons is in turn due to the binding energy of their constituent quarks, due to the strong force. Hadrons are categorized into two broad families: baryons, made of an odd number of quarks (usually three) and mesons, made of an even number of quarks (usually two: one quark and one antiquark). Protons and neutrons (which make the majority of the mass of an atom) are examples of baryons; pions are an example of a meson. A tetraquark state (an exotic meson), named the Z(4430), was discovered in 2007 by the Belle Collaboration and confirmed as a resonance in 2014 by the LHCb collaboration. Two pentaquark states (exotic baryons), named and , were discovered in 2015 by the LHCb collaboration. There are several other "Exotic" hadron candidates and other colour-singlet quark combinations that may also exist. Almost all "free" hadrons and antihadrons (meaning, in isolation and not bound within an atomic nucleus) are believed to be unstable and eventually decay into other particles. The only known possible exception is free protons, which appear to be stable, or at least, take immense amounts of time to decay (order of 1034+ years). By way of comparison, free neutrons are the longest-lived unstable particle, and decay with a half-life of about 611 seconds, and have a mean lifetime of 879 seconds, see free neutron decay. Hadron physics is studied by colliding hadrons, e.g. protons, with each other or the nuclei of dense, heavy elements, such as lead (Pb) or gold (Au), and detecting the debris in the produced particle showers. A similar process occurs in the natural environment, in the extreme upper-atmosphere, where muons and mesons such as pions are produced by the collisions of cosmic rays with rarefied gas particles in the outer atmosphere. Terminology and etymology The term "hadron" is a new Greek word introduced by L. B. Okun in a plenary talk at the 1962 International Conference on High Energy Physics at CERN. He opened his talk with the definition of a new category term: Properties According to the quark model, the properties of hadrons are primarily determined by their so-called valence quarks. For example, a proton is composed of two up quarks (each with electric charge , for a total of + together) and one down quark (with electric charge ). Adding these together yields the proton charge of +1. Although quarks also carry color charge, hadrons must have zero total color charge because of a phenomenon called color confinement. That is, hadrons must be "colorless" or "white". The simplest ways for this to occur are with a quark of one color and an antiquark of the corresponding anticolor, or three quarks of different colors. Hadrons with the first arrangement are a type of meson, and those with the second arrangement are a type of baryon. Massless virtual gluons compose the overwhelming majority of particles inside hadrons, as well as the major constituents of its mass (with the exception of the heavy charm and bottom quarks; the top quark vanishes before it has time to bind into a hadron). The strength of the strong-force gluons which bind the quarks together has sufficient energy () to have resonances composed of massive () quarks ( ≥ 2). 
One outcome is that short-lived pairs of virtual quarks and antiquarks are continually forming and vanishing again inside a hadron. Because the virtual quarks are not stable wave packets (quanta), but an irregular and transient phenomenon, it is not meaningful to ask which quark is real and which virtual; only the small excess is apparent from the outside in the form of a hadron. Therefore, when a hadron or anti-hadron is stated to consist of (typically) two or three quarks, this technically refers to the constant excess of quarks versus antiquarks. Like all subatomic particles, hadrons are assigned quantum numbers corresponding to the representations of the Poincaré group: (), where is the spin quantum number, the intrinsic parity (or P-parity), the charge conjugation (or C-parity), and is the particle's mass. Note that the mass of a hadron has very little to do with the mass of its valence quarks; rather, due to mass–energy equivalence, most of the mass comes from the large amount of energy associated with the strong interaction. Hadrons may also carry flavor quantum numbers such as isospin (G-parity), and strangeness. All quarks carry an additive, conserved quantum number called a baryon number (), which is for quarks and for antiquarks. This means that baryons (composite particles made of three, five or a larger odd number of quarks) have  = 1 whereas mesons have  = 0. Hadrons have excited states known as resonances. Each ground state hadron may have several excited states; several hundred different resonances have been observed in experiments. Resonances decay extremely quickly (within about 10 seconds) via the strong nuclear force. In other phases of matter the hadrons may disappear. For example, at very high temperature and high pressure, unless there are sufficiently many flavors of quarks, the theory of quantum chromodynamics (QCD) predicts that quarks and gluons will no longer be confined within hadrons, "because the strength of the strong interaction diminishes with energy". This property, which is known as asymptotic freedom, has been experimentally confirmed in the energy range between 1 GeV (gigaelectronvolt) and 1 TeV (teraelectronvolt). All free hadrons except (possibly) the proton and antiproton are unstable. Baryons Baryons are hadrons containing an odd number of valence quarks (at least 3). Most well-known baryons such as the proton and neutron have three valence quarks, but pentaquarks with five quarks—three quarks of different colors, and also one extra quark-antiquark pair—have also been proven to exist. Because baryons have an odd number of quarks, they are also all fermions, i.e., they have half-integer spin. As quarks possess baryon number B = , baryons have baryon number B = 1. Pentaquarks also have B = 1, since the extra quark's and antiquark's baryon numbers cancel. Each type of baryon has a corresponding antiparticle (antibaryon) in which quarks are replaced by their corresponding antiquarks. For example, just as a proton is made of two up quarks and one down quark, its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark. As of August 2015, there are two known pentaquarks, and , both discovered in 2015 by the LHCb collaboration. Mesons Mesons are hadrons containing an even number of valence quarks (at least two). 
Most well-known mesons are composed of a quark-antiquark pair, but possible tetraquarks (four quarks) and hexaquarks (six quarks, comprising either a dibaryon or three quark-antiquark pairs) may have been discovered and are being investigated to confirm their nature. Several other hypothetical types of exotic meson may exist which do not fall within the quark model of classification. These include glueballs and hybrid mesons (mesons bound by excited gluons). Because mesons have an even number of quarks, they are also all bosons, with integer spin, i.e., 0, 1, 2, .... They have baryon number B = 0. Examples of mesons commonly produced in particle physics experiments include pions and kaons. Pions also play a role in holding atomic nuclei together via the residual strong force.
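As a brief worked illustration of how the standard valence-quark assignments combine (using the textbook charges of +2/3 for the up quark and −1/3 for the down quark, and baryon number 1/3 per quark), the proton (uud) and the positive pion (an up quark together with an anti-down quark) give:

```latex
% Proton (uud): electric charge and baryon number
Q_p = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1,
\qquad
B_p = \tfrac{1}{3} + \tfrac{1}{3} + \tfrac{1}{3} = 1.
\\
% Positive pion (u\bar{d}): a meson, so the baryon numbers cancel
Q_{\pi^{+}} = \tfrac{2}{3} + \tfrac{1}{3} = +1,
\qquad
B_{\pi^{+}} = \tfrac{1}{3} - \tfrac{1}{3} = 0.
```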
Physical sciences
Fermions
null
13833
https://en.wikipedia.org/wiki/Hash%20table
Hash table
In computer science, a hash table is a data structure that implements an associative array, also called a dictionary or simply map; an associative array is an abstract data type that maps keys to values. A hash table uses a hash function to compute an index, also called a hash code, into an array of buckets or slots, from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored. A map implemented by a hash table is called a hash map. Most hash table designs employ an imperfect hash function. Hash collisions, where the hash function generates the same index for more than one key, therefore typically must be accommodated in some way. In a well-dimensioned hash table, the average time complexity for each lookup is independent of the number of elements stored in the table. Many hash table designs also allow arbitrary insertions and deletions of key–value pairs, at amortized constant average cost per operation. Hashing is an example of a space-time tradeoff. If memory is infinite, the entire key can be used directly as an index to locate its value with a single memory access. On the other hand, if infinite time is available, values can be stored without regard for their keys, and a binary search or linear search can be used to retrieve the element. In many situations, hash tables turn out to be on average more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches, and sets. History The idea of hashing arose independently in different places. In January 1953, Hans Peter Luhn wrote an internal IBM memorandum that used hashing with chaining. The first example of open addressing was proposed by A. D. Linh, building on Luhn's memorandum. Around the same time, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research implemented hashing for the IBM 701 assembler. Open addressing with linear probing is credited to Amdahl, although Andrey Ershov independently had the same idea. The term "open addressing" was coined by W. Wesley Peterson in his article which discusses the problem of search in large files. The first published work on hashing with chaining is credited to Arnold Dumey, who discussed the idea of using remainder modulo a prime as a hash function. The word "hashing" was first published in an article by Robert Morris. A theoretical analysis of linear probing was submitted originally by Konheim and Weiss. Overview An associative array stores a set of (key, value) pairs and allows insertion, deletion, and lookup (search), with the constraint of unique keys. In the hash table implementation of associative arrays, an array of length is partially filled with elements, where . A key is hashed using a hash function to compute an index location in the hash table, where . At this index, both the key and its associated value are stored. Storing the key alongside the value ensures that lookups can verify the key at the index to retrieve the correct value, even in the presence of collisions. Under reasonable assumptions, hash tables have better time complexity bounds on search, delete, and insert operations in comparison to self-balancing binary search trees. Hash tables are also commonly used to implement sets, by omitting the stored value for each key and merely tracking whether the key is present. 
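A minimal sketch of this idea, assuming Python's built-in hash function, a fixed bucket count, and separate chaining for collisions (an illustration rather than any particular library's implementation):

```python
# Minimal illustrative hash map: fixed bucket count, separate chaining.
# Uses Python's built-in hash(); not intended as a production structure.

class SimpleHashMap:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key):
        # Map the key's hash code onto a bucket index.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))     # otherwise store a new pair

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:                # verify the stored key on lookup
                return v
        raise KeyError(key)

m = SimpleHashMap()
m.put("apple", 3)
print(m.get("apple"))  # 3
```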
Load factor A load factor is a critical statistic of a hash table, and is defined as follows: where is the number of entries occupied in the hash table. is the number of buckets. The performance of the hash table deteriorates in relation to the load factor . The software typically ensures that the load factor remains below a certain constant, . This helps maintain good performance. Therefore, a common approach is to resize or "rehash" the hash table whenever the load factor reaches . Similarly the table may also be resized if the load factor drops below . Load factor for separate chaining With separate chaining hash tables, each slot of the bucket array stores a pointer to a list or array of data. Separate chaining hash tables suffer gradually declining performance as the load factor grows, and no fixed point beyond which resizing is absolutely needed. With separate chaining, the value of that gives best performance is typically between 1 and 3. Load factor for open addressing With open addressing, each slot of the bucket array holds exactly one item. Therefore an open-addressed hash table cannot have a load factor greater than 1. The performance of open addressing becomes very bad when the load factor approaches 1. Therefore a hash table that uses open addressing must be resized or rehashed if the load factor approaches 1. With open addressing, acceptable figures of max load factor should range around 0.6 to 0.75. Hash function A hash function maps the universe of keys to indices or slots within the table, that is, for . The conventional implementations of hash functions are based on the integer universe assumption that all elements of the table stem from the universe , where the bit length of is confined within the word size of a computer architecture. A hash function is said to be perfect for a given set if it is injective on , that is, if each element maps to a different value in . A perfect hash function can be created if all the keys are known ahead of time. Integer universe assumption The schemes of hashing used in integer universe assumption include hashing by division, hashing by multiplication, universal hashing, dynamic perfect hashing, and static perfect hashing. However, hashing by division is the commonly used scheme. Hashing by division The scheme in hashing by division is as follows: where is the hash value of and is the size of the table. Hashing by multiplication The scheme in hashing by multiplication is as follows: Where is a non-integer real-valued constant and is the size of the table. An advantage of the hashing by multiplication is that the is not critical. Although any value produces a hash function, Donald Knuth suggests using the golden ratio. Choosing a hash function Uniform distribution of the hash values is a fundamental requirement of a hash function. A non-uniform distribution increases the number of collisions and the cost of resolving them. Uniformity is sometimes difficult to ensure by design, but may be evaluated empirically using statistical tests, e.g., a Pearson's chi-squared test for discrete uniform distributions. The distribution needs to be uniform only for table sizes that occur in the application. In particular, if one uses dynamic resizing with exact doubling and halving of the table size, then the hash function needs to be uniform only when the size is a power of two. Here the index can be computed as some range of bits of the hash function. On the other hand, some hashing algorithms prefer to have the size be a prime number. 
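Illustrative sketches of the division and multiplication schemes described above, assuming integer keys and a table of size m; the constant follows Knuth's golden-ratio suggestion:

```python
# Illustrative integer hash schemes (not a specific library's implementation).

def hash_by_division(key: int, m: int) -> int:
    # h(k) = k mod m; m is often chosen to be a prime number.
    return key % m

def hash_by_multiplication(key: int, m: int) -> int:
    # h(k) = floor(m * (k*A mod 1)); A is a real constant in (0, 1).
    # Knuth suggests A = (sqrt(5) - 1) / 2, the fractional part of the golden ratio.
    A = 0.6180339887498949
    return int(m * ((key * A) % 1.0))

print(hash_by_division(123456, 701))        # remainder modulo a prime table size
print(hash_by_multiplication(123456, 701))  # index in the range [0, 700]
```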
For open addressing schemes, the hash function should also avoid clustering, the mapping of two or more keys to consecutive slots. Such clustering may cause the lookup cost to skyrocket, even if the load factor is low and collisions are infrequent. The popular multiplicative hash is claimed to have particularly poor clustering behavior. K-independent hashing offers a way to prove a certain hash function does not have bad keysets for a given type of hashtable. A number of K-independence results are known for collision resolution schemes such as linear probing and cuckoo hashing. Since K-independence can prove a hash function works, one can then focus on finding the fastest possible such hash function. Collision resolution A search algorithm that uses hashing consists of two parts. The first part is computing a hash function which transforms the search key into an array index. The ideal case is such that no two search keys hashes to the same array index. However, this is not always the case and is impossible to guarantee for unseen given data. Hence the second part of the algorithm is collision resolution. The two common methods for collision resolution are separate chaining and open addressing. Separate chaining In separate chaining, the process involves building a linked list with key–value pair for each search array index. The collided items are chained together through a single linked list, which can be traversed to access the item with a unique search key. Collision resolution through chaining with linked list is a common method of implementation of hash tables. Let and be the hash table and the node respectively, the operation involves as follows: Chained-Hash-Insert(T, k) insert x at the head of linked list T[h(k)] Chained-Hash-Search(T, k) search for an element with key k in linked list T[h(k)] Chained-Hash-Delete(T, k) delete x from the linked list T[h(k)] If the element is comparable either numerically or lexically, and inserted into the list by maintaining the total order, it results in faster termination of the unsuccessful searches. Other data structures for separate chaining If the keys are ordered, it could be efficient to use "self-organizing" concepts such as using a self-balancing binary search tree, through which the theoretical worst case could be brought down to , although it introduces additional complexities. In dynamic perfect hashing, two-level hash tables are used to reduce the look-up complexity to be a guaranteed in the worst case. In this technique, the buckets of entries are organized as perfect hash tables with slots providing constant worst-case lookup time, and low amortized time for insertion. A study shows array-based separate chaining to be 97% more performant when compared to the standard linked list method under heavy load. Techniques such as using fusion tree for each buckets also result in constant time for all operations with high probability. Caching and locality of reference The linked list of separate chaining implementation may not be cache-conscious due to spatial locality—locality of reference—when the nodes of the linked list are scattered across memory, thus the list traversal during insert and search may entail CPU cache inefficiencies. 
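A sketch of the three chained-hash operations described above, using explicit linked-list nodes and insertion at the head of each chain (illustrative only; names such as ChainedHashTable are assumptions, not a standard API):

```python
# Separate chaining with explicit linked-list nodes, mirroring the
# Chained-Hash-Insert / -Search / -Delete operations described above.

class Node:
    def __init__(self, key, value, next=None):
        self.key, self.value, self.next = key, value, next

class ChainedHashTable:
    def __init__(self, m=16):
        self.table = [None] * m            # T: array of chain heads
        self.m = m

    def _h(self, key):
        return hash(key) % self.m

    def insert(self, key, value):
        i = self._h(key)
        self.table[i] = Node(key, value, self.table[i])   # insert at head of T[h(k)]

    def search(self, key):
        node = self.table[self._h(key)]
        while node is not None:            # walk the chain looking for the key
            if node.key == key:
                return node.value
            node = node.next
        return None

    def delete(self, key):
        i = self._h(key)
        prev, node = None, self.table[i]
        while node is not None:
            if node.key == key:            # unlink the node from its chain
                if prev is None:
                    self.table[i] = node.next
                else:
                    prev.next = node.next
                return
            prev, node = node, node.next
```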
In cache-conscious variants of collision resolution through separate chaining, a dynamic array found to be more cache-friendly is used in the place where a linked list or self-balancing binary search trees is usually deployed, since the contiguous allocation pattern of the array could be exploited by hardware-cache prefetchers—such as translation lookaside buffer—resulting in reduced access time and memory consumption. Open addressing Open addressing is another collision resolution technique in which every entry record is stored in the bucket array itself, and the hash resolution is performed through probing. When a new entry has to be inserted, the buckets are examined, starting with the hashed-to slot and proceeding in some probe sequence, until an unoccupied slot is found. When searching for an entry, the buckets are scanned in the same sequence, until either the target record is found, or an unused array slot is found, which indicates an unsuccessful search. Well-known probe sequences include: Linear probing, in which the interval between probes is fixed (usually 1). Quadratic probing, in which the interval between probes is increased by adding the successive outputs of a quadratic polynomial to the value given by the original hash computation. Double hashing, in which the interval between probes is computed by a secondary hash function. The performance of open addressing may be slower compared to separate chaining since the probe sequence increases when the load factor approaches 1. The probing results in an infinite loop if the load factor reaches 1, in the case of a completely filled table. The average cost of linear probing depends on the hash function's ability to distribute the elements uniformly throughout the table to avoid clustering, since formation of clusters would result in increased search time. Caching and locality of reference Since the slots are located in successive locations, linear probing could lead to better utilization of CPU cache due to locality of references resulting in reduced memory latency. Other collision resolution techniques based on open addressing Coalesced hashing Coalesced hashing is a hybrid of both separate chaining and open addressing in which the buckets or nodes link within the table. The algorithm is ideally suited for fixed memory allocation. The collision in coalesced hashing is resolved by identifying the largest-indexed empty slot on the hash table, then the colliding value is inserted into that slot. The bucket is also linked to the inserted node's slot which contains its colliding hash address. Cuckoo hashing Cuckoo hashing is a form of open addressing collision resolution technique which guarantees worst-case lookup complexity and constant amortized time for insertions. The collision is resolved through maintaining two hash tables, each having its own hashing function, and collided slot gets replaced with the given item, and the preoccupied element of the slot gets displaced into the other hash table. The process continues until every key has its own spot in the empty buckets of the tables; if the procedure enters into infinite loop—which is identified through maintaining a threshold loop counter—both hash tables get rehashed with newer hash functions and the procedure continues. 
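A sketch of the open-addressing probe loop described at the start of this section, using linear probing; deletion is omitted since it normally requires tombstones or re-insertion:

```python
# Open addressing with linear probing (illustrative sketch).

class LinearProbingTable:
    def __init__(self, m=16):
        self.keys = [None] * m
        self.values = [None] * m
        self.m = m

    def _probe(self, key):
        # Yield the probe sequence: h(k), h(k)+1, h(k)+2, ... (mod m).
        start = hash(key) % self.m
        for offset in range(self.m):
            yield (start + offset) % self.m

    def insert(self, key, value):
        for i in self._probe(key):
            if self.keys[i] is None or self.keys[i] == key:
                self.keys[i], self.values[i] = key, value
                return
        raise RuntimeError("table is full; resize before inserting")

    def search(self, key):
        for i in self._probe(key):
            if self.keys[i] is None:       # empty slot: unsuccessful search
                return None
            if self.keys[i] == key:
                return self.values[i]
        return None
```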
Hopscotch hashing Hopscotch hashing is an open addressing based algorithm which combines the elements of cuckoo hashing, linear probing and chaining through the notion of a neighbourhood of buckets—the subsequent buckets around any given occupied bucket, also called a "virtual" bucket. The algorithm is designed to deliver better performance when the load factor of the hash table grows beyond 90%; it also provides high throughput in concurrent settings, thus well suited for implementing resizable concurrent hash table. The neighbourhood characteristic of hopscotch hashing guarantees a property that, the cost of finding the desired item from any given buckets within the neighbourhood is very close to the cost of finding it in the bucket itself; the algorithm attempts to be an item into its neighbourhood—with a possible cost involved in displacing other items. Each bucket within the hash table includes an additional "hop-information"—an H-bit bit array for indicating the relative distance of the item which was originally hashed into the current virtual bucket within H-1 entries. Let and be the key to be inserted and bucket to which the key is hashed into respectively; several cases are involved in the insertion procedure such that the neighbourhood property of the algorithm is vowed: if is empty, the element is inserted, and the leftmost bit of bitmap is set to 1; if not empty, linear probing is used for finding an empty slot in the table, the bitmap of the bucket gets updated followed by the insertion; if the empty slot is not within the range of the neighbourhood, i.e. H-1, subsequent swap and hop-info bit array manipulation of each bucket is performed in accordance with its neighbourhood invariant properties. Robin Hood hashing Robin Hood hashing is an open addressing based collision resolution algorithm; the collisions are resolved through favouring the displacement of the element that is farthest—or longest probe sequence length (PSL)—from its "home location" i.e. the bucket to which the item was hashed into. Although Robin Hood hashing does not change the theoretical search cost, it significantly affects the variance of the distribution of the items on the buckets, i.e. dealing with cluster formation in the hash table. Each node within the hash table that uses Robin Hood hashing should be augmented to store an extra PSL value. Let be the key to be inserted, be the (incremental) PSL length of , be the hash table and be the index, the insertion procedure is as follows: If : the iteration goes into the next bucket without attempting an external probe. If : insert the item into the bucket ; swap with —let it be ; continue the probe from the st bucket to insert ; repeat the procedure until every element is inserted. Dynamic resizing Repeated insertions cause the number of entries in a hash table to grow, which consequently increases the load factor; to maintain the amortized performance of the lookup and insertion operations, a hash table is dynamically resized and the items of the tables are rehashed into the buckets of the new hash table, since the items cannot be copied over as varying table sizes results in different hash value due to modulo operation. If a hash table becomes "too empty" after deleting some elements, resizing may be performed to avoid excessive memory usage. 
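A sketch of such an all-at-once rehash for a separate-chaining bucket array, with growth triggered by a load-factor threshold (the threshold value is an illustrative assumption):

```python
# All-at-once rehashing: allocate a larger bucket array and re-insert
# every stored pair, since bucket indices depend on the table size.

def rehash(buckets, new_size):
    new_buckets = [[] for _ in range(new_size)]
    for bucket in buckets:
        for key, value in bucket:
            new_buckets[hash(key) % new_size].append((key, value))
    return new_buckets

def maybe_grow(buckets, num_entries, max_load=0.75):
    # Double the table when the load factor n/m exceeds the threshold.
    if num_entries / len(buckets) > max_load:
        return rehash(buckets, 2 * len(buckets))
    return buckets
```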
Resizing by moving all entries Generally, a new hash table with a size double that of the original hash table gets allocated privately and every item in the original hash table gets moved to the newly allocated one by computing the hash values of the items followed by the insertion operation. Rehashing is simple, but computationally expensive. Alternatives to all-at-once rehashing Some hash table implementations, notably in real-time systems, cannot pay the price of enlarging the hash table all at once, because it may interrupt time-critical operations. If one cannot avoid dynamic resizing, a solution is to perform the resizing gradually to avoid storage blip—typically at 50% of new table's size—during rehashing and to avoid memory fragmentation that triggers heap compaction due to deallocation of large memory blocks caused by the old hash table. In such case, the rehashing operation is done incrementally through extending prior memory block allocated for the old hash table such that the buckets of the hash table remain unaltered. A common approach for amortized rehashing involves maintaining two hash functions and . The process of rehashing a bucket's items in accordance with the new hash function is termed as cleaning, which is implemented through command pattern by encapsulating the operations such as , and through a wrapper such that each element in the bucket gets rehashed and its procedure involve as follows: Clean bucket. Clean bucket. The command gets executed. Linear hashing Linear hashing is an implementation of the hash table which enables dynamic growths or shrinks of the table one bucket at a time. Performance The performance of a hash table is dependent on the hash function's ability in generating quasi-random numbers () for entries in the hash table where , and denotes the key, number of buckets and the hash function such that . If the hash function generates the same for distinct keys (), this results in collision, which is dealt with in a variety of ways. The constant time complexity () of the operation in a hash table is presupposed on the condition that the hash function doesn't generate colliding indices; thus, the performance of the hash table is directly proportional to the chosen hash function's ability to disperse the indices. However, construction of such a hash function is practically infeasible, that being so, implementations depend on case-specific collision resolution techniques in achieving higher performance. Applications Associative arrays Hash tables are commonly used to implement many types of in-memory tables. They are used to implement associative arrays. Database indexing Hash tables may also be used as disk-based data structures and database indices (such as in dbm) although B-trees are more popular in these applications. Caches Hash tables can be used to implement caches, auxiliary data tables that are used to speed up the access to data that is primarily stored in slower media. In this application, hash collisions can be handled by discarding one of the two colliding entries—usually erasing the old item that is currently stored in the table and overwriting it with the new item, so every item in the table has a unique hash value. Sets Hash tables can be used in the implementation of set data structure, which can store unique values without any particular order; set is typically used in testing the membership of a value in the collection, rather than element retrieval. 
Transposition table A transposition table is a complex hash table which stores information about each section that has been searched. Implementations Many programming languages provide hash table functionality, either as built-in associative arrays or as standard library modules. In JavaScript, an "object" is a mutable collection of key-value pairs (called "properties"), where each key is either a string or a guaranteed-unique "symbol"; any other value, when used as a key, is first coerced to a string. Aside from the seven "primitive" data types, every value in JavaScript is an object. ECMAScript 2015 also added the Map data structure, which accepts arbitrary values as keys. C++11 includes unordered_map in its standard library for storing keys and values of arbitrary types. Go's built-in map implements a hash table in the form of a type. The Java programming language includes the HashSet, HashMap, LinkedHashSet, and LinkedHashMap generic collections. Python's built-in dict implements a hash table in the form of a type. Ruby's built-in Hash uses the open addressing model from Ruby 2.4 onwards. The Rust programming language includes HashMap and HashSet as part of the Rust standard library. The .NET standard library includes HashSet and Dictionary, so they can be used from languages such as C# and VB.NET.
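For example, Python's built-in dict and set expose this hash-table behaviour directly:

```python
# Python's built-in dict and set are hash-table based.
inventory = {"bolt": 120, "nut": 75}     # hash map: key -> value
inventory["washer"] = 200                # amortized O(1) average insert
print(inventory.get("bolt"))             # O(1) average lookup -> 120

seen = {"bolt", "nut"}                   # hash set: membership testing
print("washer" in seen)                  # False
```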
Mathematics
Data structures and types
null
13899
https://en.wikipedia.org/wiki/Harmonic%20oscillator
Harmonic oscillator
In classical mechanics, a harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x: where k is a positive constant. If F is the only force acting on the system, the system is called a simple harmonic oscillator, and it undergoes simple harmonic motion: sinusoidal oscillations about the equilibrium point, with a constant amplitude and a constant frequency (which does not depend on the amplitude). If a frictional force (damping) proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can: Oscillate with a frequency lower than in the undamped case, and an amplitude decreasing with time (underdamped oscillator). Decay to the equilibrium position, without oscillations (overdamped oscillator). The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called critically damped. If an external time-dependent force is present, the harmonic oscillator is described as a driven oscillator. Mechanical examples include pendulums (with small angles of displacement), masses connected to springs, and acoustical systems. Other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is very important in physics, because any mass subject to a force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many manmade devices, such as clocks and radio circuits. They are the source of virtually all sinusoidal vibrations and waves. Simple harmonic oscillator A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force F, which pulls the mass in the direction of the point and depends only on the position x of the mass and a constant k. Balance of forces (Newton's second law) for the system is Solving this differential equation, we find that the motion is described by the function where The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude A. In addition to its amplitude, the motion of a simple harmonic oscillator is characterized by its period , the time for a single oscillation or its frequency , the number of cycles per unit time. The position at a given time t also depends on the phase φ, which determines the starting point on the sine wave. The period and frequency are determined by the size of the mass m and the force constant k, while the amplitude and phase are determined by the starting position and velocity. The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position, but with shifted phases. The velocity is maximal for zero displacement, while the acceleration is in the direction opposite to the displacement. The potential energy stored in a simple harmonic oscillator at position x is Damped harmonic oscillator In real oscillators, friction, or damping, slows the motion of the system. Due to frictional force, the velocity decreases in proportion to the acting frictional force. While in a simple undriven harmonic oscillator the only force acting on the mass is the restoring force, in a damped harmonic oscillator there is in addition a frictional force which is always in a direction to oppose the motion. 
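In the standard notation for the undamped oscillator described above (mass m, spring constant k, amplitude A and phase φ), these relations take the familiar textbook form:

```latex
m\ddot{x} = -kx,
\qquad
x(t) = A\cos(\omega t + \varphi),
\qquad
\omega = \sqrt{\frac{k}{m}},
\\
T = \frac{2\pi}{\omega} = 2\pi\sqrt{\frac{m}{k}},
\qquad
f = \frac{1}{T},
\qquad
U(x) = \tfrac{1}{2}kx^{2}.
```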
In many vibrating systems the frictional force Ff can be modeled as being proportional to the velocity v of the object: , where c is called the viscous damping coefficient. The balance of forces (Newton's second law) for damped harmonic oscillators is then which can be rewritten into the form where is called the "undamped angular frequency of the oscillator", is called the "damping ratio". The value of the damping ratio ζ critically determines the behavior of the system. A damped harmonic oscillator can be: Overdamped (ζ > 1): The system returns (exponentially decays) to steady state without oscillating. Larger values of the damping ratio ζ return to equilibrium more slowly. Critically damped (ζ = 1): The system returns to steady state as quickly as possible without oscillating (although overshoot can occur if the initial velocity is nonzero). This is often desired for the damping of systems such as doors. Underdamped (ζ < 1): The system oscillates (with a slightly different frequency than the undamped case) with the amplitude gradually decreasing to zero. The angular frequency of the underdamped harmonic oscillator is given by the exponential decay of the underdamped harmonic oscillator is given by The Q factor of a damped oscillator is defined as Q is related to the damping ratio by Driven harmonic oscillators Driven harmonic oscillators are damped oscillators further affected by an externally applied force F(t). Newton's second law takes the form It is usually rewritten into the form This equation can be solved exactly for any driving force, using the solutions z(t) that satisfy the unforced equation and which can be expressed as damped sinusoidal oscillations: in the case where . The amplitude A and phase φ determine the behavior needed to match the initial conditions. Step input In the case and a unit step input with : the solution is with phase φ given by The time an oscillator needs to adapt to changed external conditions is of the order . In physics, the adaptation is called relaxation, and τ is called the relaxation time. In electrical engineering, a multiple of τ is called the settling time, i.e. the time necessary to ensure the signal is within a fixed departure from final value, typically within 10%. The term overshoot refers to the extent the response maximum exceeds final value, and undershoot refers to the extent the response falls below final value for times following the response maximum. Sinusoidal driving force In the case of a sinusoidal driving force: where is the driving amplitude, and is the driving frequency for a sinusoidal driving mechanism. This type of system appears in AC-driven RLC circuits (resistor–inductor–capacitor) and driven spring systems having internal mechanical resistance or external air resistance. The general solution is a sum of a transient solution that depends on initial conditions, and a steady state that is independent of initial conditions and depends only on the driving amplitude , driving frequency , undamped angular frequency , and the damping ratio . The steady-state solution is proportional to the driving force with an induced phase change : where is the absolute value of the impedance or linear response function, and is the phase of the oscillation relative to the driving force. The phase value is usually taken to be between −180° and 0 (that is, it represents a phase lag, for both positive and negative values of the arctan argument). 
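In one common convention, with a sinusoidal driving force F(t) = F0 sin(ωt), the damped and driven quantities referred to above take the standard forms:

```latex
\omega_0 = \sqrt{\frac{k}{m}},
\qquad
\zeta = \frac{c}{2\sqrt{mk}},
\qquad
\omega_1 = \omega_0\sqrt{1-\zeta^{2}},
\qquad
Q = \frac{1}{2\zeta},
\\
\ddot{x} + 2\zeta\omega_0\dot{x} + \omega_0^{2}x = \frac{F_0}{m}\sin(\omega t),
\qquad
\omega_r = \omega_0\sqrt{1-2\zeta^{2}}
\quad \left(\zeta < \tfrac{1}{\sqrt{2}}\right).
```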
For a particular driving frequency called the resonance, or resonant frequency , the amplitude (for a given ) is maximal. This resonance effect only occurs when , i.e. for significantly underdamped systems. For strongly underdamped systems the value of the amplitude can become quite large near the resonant frequency. The transient solutions are the same as the unforced () damped harmonic oscillator and represent the systems response to other events that occurred previously. The transient solutions typically die out rapidly enough that they can be ignored. Parametric oscillators A parametric oscillator is a driven harmonic oscillator in which the drive energy is provided by varying the parameters of the oscillator, such as the damping or restoring force. A familiar example of parametric oscillation is "pumping" on a playground swing. A person on a moving swing can increase the amplitude of the swing's oscillations without any external drive force (pushes) being applied, by changing the moment of inertia of the swing by rocking back and forth ("pumping") or alternately standing and squatting, in rhythm with the swing's oscillations. The varying of the parameters drives the system. Examples of parameters that may be varied are its resonance frequency and damping . Parametric oscillators are used in many applications. The classical varactor parametric oscillator oscillates when the diode's capacitance is varied periodically. The circuit that varies the diode's capacitance is called the "pump" or "driver". In microwave electronics, waveguide/YAG based parametric oscillators operate in the same fashion. The designer varies a parameter periodically to induce oscillations. Parametric oscillators have been developed as low-noise amplifiers, especially in the radio and microwave frequency range. Thermal noise is minimal, since a reactance (not a resistance) is varied. Another common use is frequency conversion, e.g., conversion from audio to radio frequencies. For example, the Optical parametric oscillator converts an input laser wave into two output waves of lower frequency (). Parametric resonance occurs in a mechanical system when a system is parametrically excited and oscillates at one of its resonant frequencies. Parametric excitation differs from forcing, since the action appears as a time varying modification on a system parameter. This effect is different from regular resonance because it exhibits the instability phenomenon. Universal oscillator equation The equation is known as the universal oscillator equation, since all second-order linear oscillatory systems can be reduced to this form. This is done through nondimensionalization. If the forcing function is , where , the equation becomes The solution to this differential equation contains two parts: the "transient" and the "steady-state". Transient solution The solution based on solving the ordinary differential equation is for arbitrary constants c1 and c2 The transient solution is independent of the forcing function. 
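Written out in the usual nondimensional variables, and assuming the underdamped case ζ < 1, the universal oscillator equation and its transient solution read:

```latex
\frac{d^{2}q}{d\tau^{2}} + 2\zeta\frac{dq}{d\tau} + q = \cos(\omega\tau),
\\
q_t(\tau) = e^{-\zeta\tau}\left[c_1\cos\!\left(\sqrt{1-\zeta^{2}}\,\tau\right)
          + c_2\sin\!\left(\sqrt{1-\zeta^{2}}\,\tau\right)\right].
```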
Steady-state solution Apply the "complex variables method" by solving the auxiliary equation below and then finding the real part of its solution: Supposing the solution is of the form Its derivatives from zeroth to second order are Substituting these quantities into the differential equation gives Dividing by the exponential term on the left results in Equating the real and imaginary parts results in two independent equations Amplitude part Squaring both equations and adding them together gives Therefore, Compare this result with the theory section on resonance, as well as the "magnitude part" of the RLC circuit. This amplitude function is particularly important in the analysis and understanding of the frequency response of second-order systems. Phase part To solve for , divide both equations to get This phase function is particularly important in the analysis and understanding of the frequency response of second-order systems. Full solution Combining the amplitude and phase portions results in the steady-state solution The solution of original universal oscillator equation is a superposition (sum) of the transient and steady-state solutions: Equivalent systems Harmonic oscillators occurring in a number of areas of engineering are equivalent in the sense that their mathematical models are identical (see universal oscillator equation above). Below is a table showing analogous quantities in four harmonic oscillator systems in mechanics and electronics. If analogous parameters on the same line in the table are given numerically equal values, the behavior of the oscillatorstheir output waveform, resonant frequency, damping factor, etc.are the same. Application to a conservative force The problem of the simple harmonic oscillator occurs frequently in physics, because a mass at equilibrium under the influence of any conservative force, in the limit of small motions, behaves as a simple harmonic oscillator. A conservative force is one that is associated with a potential energy. The potential-energy function of a harmonic oscillator is Given an arbitrary potential-energy function , one can do a Taylor expansion in terms of around an energy minimum () to model the behavior of small perturbations from equilibrium. Because is a minimum, the first derivative evaluated at must be zero, so the linear term drops out: The constant term is arbitrary and thus may be dropped, and a coordinate transformation allows the form of the simple harmonic oscillator to be retrieved: Thus, given an arbitrary potential-energy function with a non-vanishing second derivative, one can use the solution to the simple harmonic oscillator to provide an approximate solution for small perturbations around the equilibrium point. Examples Simple pendulum Assuming no damping, the differential equation governing a simple pendulum of length , where is the local acceleration of gravity, is If the maximal displacement of the pendulum is small, we can use the approximation and instead consider the equation The general solution to this differential equation is where and are constants that depend on the initial conditions. Using as initial conditions and , the solution is given by where is the largest angle attained by the pendulum (that is, is the amplitude of the pendulum). The period, the time for one complete oscillation, is given by the expression which is a good approximation of the actual period when is small. Notice that in this approximation the period is independent of the amplitude . 
In the above equation, represents the angular frequency. Spring/mass system When a spring is stretched or compressed by a mass, the spring develops a restoring force. Hooke's law gives the relationship of the force exerted by the spring when the spring is compressed or stretched a certain length: where F is the force, k is the spring constant, and x is the displacement of the mass with respect to the equilibrium position. The minus sign in the equation indicates that the force exerted by the spring always acts in a direction that is opposite to the displacement (i.e. the force always acts towards the zero position), and so prevents the mass from flying off to infinity. By using either force balance or an energy method, it can be readily shown that the motion of this system is given by the following differential equation: the latter being Newton's second law of motion. If the initial displacement is A, and there is no initial velocity, the solution of this equation is given by Given an ideal massless spring, is the mass on the end of the spring. If the spring itself has mass, its effective mass must be included in . Energy variation in the spring–damping system In terms of energy, all systems have two types of energy: potential energy and kinetic energy. When a spring is stretched or compressed, it stores elastic potential energy, which is then transferred into kinetic energy. The potential energy within a spring is determined by the equation When the spring is stretched or compressed, kinetic energy of the mass gets converted into potential energy of the spring. By conservation of energy, assuming the datum is defined at the equilibrium position, when the spring reaches its maximal potential energy, the kinetic energy of the mass is zero. When the spring is released, it tries to return to equilibrium, and all its potential energy converts to kinetic energy of the mass. Definition of terms
Physical sciences
Basics_10
null
13926
https://en.wikipedia.org/wiki/Hyena
Hyena
Hyenas or hyaenas ( ; from Ancient Greek , ) are feliform carnivoran mammals belonging to the family Hyaenidae (). With just four extant species (each in its own genus), it is the fifth-smallest family in the order Carnivora and one of the smallest in the class Mammalia. Despite their low diversity, hyenas are unique and vital components of most African ecosystems. Although phylogenetically closer to felines and viverrids, hyenas are behaviourally and morphologically similar to canids in several elements due to convergent evolution: both hyenas and canines are non-arboreal, cursorial hunters that catch prey with their teeth rather than claws. Both eat food quickly and may store it, and their calloused feet with large, blunt, nonretractable claws are adapted for running and making sharp turns. However, hyenas' grooming, scent marking, defecation habits, mating and parental behavior are consistent with the behavior of other feliforms. Hyenas feature prominently in the folklore and mythology of human cultures that live alongside them. Hyenas are commonly viewed as frightening and worthy of contempt. In some cultures, hyenas are thought to influence people's spirits, rob graves, and steal livestock and children. Other cultures associate them with witchcraft, using their body parts in traditional medicine. Evolution Origins Hyenas originated in the jungles of Miocene Eurasia 22 million years ago, when most early feliform species were still largely arboreal. The first ancestral hyenas were likely similar to the modern African civet; one of the earliest hyena species described, Plioviverrops, was a lithe, civet-like animal that inhabited Eurasia 20–22 million years ago, and is identifiable as a hyaenid by the structure of the middle ear and dentition. The lineage of Plioviverrops prospered, and gave rise to descendants with longer legs and more pointed jaws, a direction similar to that taken by canids in North America. Hyenas then diversified into two distinct types: lightly built dog-like hyenas and robust bone-crushing hyenas. Although the dog-like hyenas thrived 15 million years ago (with one taxon having colonised North America), they became extinct after a change in climate, along with the arrival of canids into Eurasia. Of the dog-like hyena lineage, only the insectivorous aardwolf survived, while the bone-crushing hyenas (including the extant spotted, brown and striped hyenas) became the undisputed top scavengers of Eurasia and Africa. Rise and fall of the dog-like hyenas The descendants of Plioviverrops reached their peak 15 million years ago, with more than 30 species having been identified. Unlike most modern hyena species, which are specialised bone-crushers, these dog-like hyenas were nimble-bodied, wolfish animals; one species among them was Ictitherium viverrinum, which was similar to a jackal. The dog-like hyenas were numerous; in some Miocene fossil sites, the remains of Ictitherium and other dog-like hyenas outnumber those of all other carnivores combined. The decline of the dog-like hyenas began 5–7 million years ago during a period of climate change, exacerbated by canids crossing the Bering land bridge to Eurasia. One species, Chasmaporthetes ossifragus, managed to cross the land bridge into North America, being the only hyena to do so. Chasmaporthetes managed to survive for some time in North America by deviating from the endurance-running and bone-crushing niches monopolized by canids, and developing into a cheetah-like sprinter. 
Most of the dog-like hyenas had died off by 1.5 million years ago. Bone-crushing hyenas By 10–14 million years ago, the hyena family had split into two distinct groups: dog-like hyenas and bone-crushing hyenas. The arrival of the ancestral bone-crushing hyenas coincided with the decline of the similarly built family Percrocutidae. The bone-crushing hyenas survived the changes in climate and the arrival of canids, which wiped out the dog-like hyenas, though they never crossed into North America, as their niche there had already been taken by the dog subfamily Borophaginae. By 5 million years ago, the bone-crushing hyenas had become the dominant scavengers of Eurasia, primarily feeding on large herbivore carcasses felled by sabre-toothed cats. One genus, Pachycrocuta, was a mega-scavenger that could splinter the bones of elephants. Starting in the early Middle Pleistocene, Pachycrocuta was replaced by the smaller Crocuta and Hyaena, which corresponds to a general faunal change, perhaps in connection with the Mid-Pleistocene transition. Rise of modern hyenas The four extant species are the striped hyena (Hyaena hyaena), the brown hyena (Parahyaena brunnea), the spotted hyena (Crocuta crocuta), and the aardwolf (Proteles cristata). The aardwolf can trace its lineage directly back to Plioviverrops 15 million years ago, and is the only survivor of the dog-like hyena lineage. Its success is partly attributed to its insectivorous diet, for which it faced no competition from canids crossing from North America. It is likely that its unrivaled ability to digest the terpene excretions from soldier termites is a modification of the strong digestive system its ancestors used to consume fetid carrion. The striped hyena may have evolved from Hyaenictitherium namaquensis of Pliocene Africa. Striped hyena fossils are common in Africa, with records going back as far as the Villafranchian. As fossil striped hyenas are absent from the Mediterranean region, it is likely that the species is a relatively late invader to Eurasia, having spread outside Africa only after the extinction of spotted hyenas in Asia at the end of the Ice Age. The striped hyena occurred for some time in Europe during the Pleistocene, having been particularly widespread in France and Germany. It also occurred in Montmaurin, Hollabrunn in Austria, the Furninha Cave in Portugal and the Genista Caves in Gibraltar. The European form was similar in appearance to modern populations, but was larger, being comparable in size to the brown hyena. The spotted hyena (Crocuta crocuta) diverged from the striped and brown hyenas 10 million years ago. Its direct ancestor was the Indian Crocuta sivalensis, which lived during the Villafranchian. Ancestral spotted hyenas probably developed social behaviours in response to increased pressure from rivals on carcasses, thus forcing them to operate in teams. Spotted hyenas evolved sharp carnassials behind their crushing premolars, so they did not need to wait for their prey to die, and thus became pack hunters as well as scavengers. They began forming increasingly larger territories, necessitated by the fact that their prey was often migratory, and long chases in a small territory would have caused them to encroach into another clan's turf. Spotted hyenas spread from their original homeland during the Middle Pleistocene, and quickly colonised a very wide area from Europe to southern Africa and China. 
The eventual disappearance of the spotted hyena from Europe has traditionally been attributed to the end of the last glacial period and a subsequent displacement of open grassland by closed forests, which favoured wolves and humans instead. However, analyses have shown that climate change alone is insufficient to explain the spotted hyena's disappearance from Europe, suggesting that other factors – such as human pressure – must have played a role. This suggests that the events must be seen within the broader context of late-Quaternary extinctions, as the late Pleistocene and early Holocene saw the disappearance of many mammals, primarily large ones, from Europe and the rest of the world. Expansion or duplication of the olfactory receptor gene family has been found in all four extant species, which would have led to the evolution of the more specialised feeding habits of hyenas. Expansion in immune-related gene families was also found in the spotted hyena, striped hyena and brown hyena, which would have led to the evolution of scavenging in these species. Mutations and variants were also found in digestion-related genes (ASH1L, PTPN5, PKP3, AQP10). One of these digestion-related genes has variants also related to enhanced bone mineralisation (PTPN5), while others also play a role in inflammatory skin responses (PKP3). In aardwolves, expansions of genes related to toxin response were found (Lipocalin and UDP Glucuronosyltransferase gene families), which would have enabled the evolution of feeding on Trinervitermes termites in this species. Mutations and variants in genes related to craniofacial shape were also found (GARS, GMPR, STIP1, SMO and PAPSS2). Another gene is related to protective epidermis function (DSC1). Genera of the Hyaenidae (extinct and recent) The list follows McKenna and Bell's Classification of Mammals for prehistoric genera (1997) and Wozencraft (2005) in Wilson and Reeder's Mammal Species of the World for extant genera. The percrocutids are, in contrast to McKenna and Bell's classification, not included as a subfamily within the Hyaenidae, but as the separate family Percrocutidae (though they are generally grouped as sister-taxa to hyenas). Furthermore, the living brown hyena and its closest extinct relatives are not included in the genus Pachycrocuta, but in the genus Parahyaena. However, some research has suggested Parahyaena may be synonymous with Pachycrocuta, making the brown hyena the only extant member of this genus. 
Family Hyaenidae Subfamily Incertae sedis †Tongxinictis (Middle Miocene of Asia) †Subfamily Ictitheriinae †Herpestides (Early Miocene of Africa and Eurasia) †Plioviverrops (including Jordanictis, Protoviverrops, Mesoviverrops; Early Miocene to Early Pliocene of Europe, Late Miocene of Asia) †Ictitherium (=Galeotherium; including Lepthyaena, Sinictitherium, Paraictitherium; Middle Miocene of Africa, Late Miocene to Early Pliocene of Eurasia) †Thalassictis (including Palhyaena, Miohyaena, Hyaenictitherium, Hyaenalopex; Middle to Late Miocene of Asia, Late Miocene of Africa and Europe) †Hyaenotherium (Late Miocene to Early Pliocene of Eurasia) †Miohyaenotherium (Late Miocene of Europe) †Lycyaena (Late Miocene of Eurasia) †Tungurictis (Middle Miocene of Africa and Eurasia) †Protictitherium (Middle Miocene of Africa and Asia, Middle to Late Miocene of Europe) Subfamily Hyaeninae †Palinhyaena (Late Miocene of Asia) †Ikelohyaena (Early Pliocene of Africa) Hyaena (=Euhyaena, =Parahyaena; including striped hyena, Pliohyaena, Pliocrocuta, Anomalopithecus) Early Pliocene (?Middle Miocene) to Recent of Africa, Late Pliocene (?Late Miocene) to Late Pleistocene of Europe, Late Pliocene to recent in Asia Parahyaena (=Hyaena; brown hyena Pliocene to recent of Africa) †Hyaenictis (Late Miocene of Asia?, Late Miocene of Europe, Early Pliocene (?Early Pleistocene) of Africa) †Leecyaena (Late Miocene and/or Early Pliocene of Asia) †Chasmaporthetes (=Ailuriaena; including Lycaenops, Euryboas; Late Miocene to Early Pleistocene of Eurasia, Early Pliocene to Late Pliocene or Early Pleistocene of Africa, Late Pliocene to Early Pleistocene of North America) †Pachycrocuta (Pliocene and Pleistocene of Eurasia and Africa) †Adcrocuta (Late Miocene of Eurasia) Crocuta (=Crocotta; including Eucrocuta; spotted hyena and cave hyena. Late Pliocene to recent of Africa, Late Pliocene to Late Pleistocene of Eurasia) Subfamily Protelinae †Gansuyaena Proteles (=Geocyon; aardwolf. Pleistocene to Recent of Africa) Phylogeny The following cladogram illustrates the phylogenetic relationships between extant and extinct hyaenids based on the morphological analysis by Werdelin & Solounias (1991), as updated by Turner et al. (2008). A more recent molecular analysis agrees on the phylogenetic relationship between the four extant hyaenid species (Koepfli et al., 2006). Characteristics Build Hyenas have relatively short torsos and are fairly massive and wolf-like in build, but have lower hindquarters and high withers, and their backs slope noticeably downward towards their rumps. The forelegs are high, while the hind legs are very short and their necks are thick and short. Their skulls superficially resemble those of large canids, but are much larger and heavier, with shorter facial portions. Hyenas are digitigrade, with the fore and hind paws having four digits each and sporting bulging pawpads. Like canids, hyenas have short, blunt, non-retractable claws. Their pelage is sparse and coarse with poorly developed or absent underfur. Most species have a rich mane of long hair running from the withers or from the head. With the exception of the spotted hyena, hyaenids have striped coats, which they likely inherited from their viverrid ancestors. Their ears are large and have simple basal ridges and no marginal bursa. Their vertebral column, including the cervical region, is of limited mobility. Hyenas have no baculum. Hyenas have one more pair of ribs than canids do, and their tongues are rough like those of felids and viverrids. 
Males in most hyena species are larger than females, though the spotted hyena is an exception, as it is the female of the species that outweighs and dominates the male. Also, unlike other hyenas, the female spotted hyena's external genitalia closely resemble those of the male. Their dentition is similar to that of the canid, but is more specialised for consuming coarse food and crushing bones. The carnassials, especially the upper, are very powerful and are shifted far back to the point of exertion of peak pressure on the jaws. The other teeth, save for the underdeveloped upper molars, are powerful, with broad bases and cutting edges. The canines are short, but thick and robust. Labiolingually, their mandibles are much stronger at the canine teeth than in canids, reflecting the fact that hyenas crack bones with both their anterior dentition and premolars, unlike canids, which do so with their post-carnassial molars. The strength of their jaws is such that both striped and spotted hyenas have been recorded to kill dogs with a single bite to the neck without breaking the skin. The spotted hyena is renowned for its strong bite relative to its size, but a number of other animals (including the Tasmanian devil) are proportionately stronger. The aardwolf has greatly reduced cheek teeth, sometimes absent in the adult, but otherwise has the same dental formula as the other three species; all four hyena species thus share a single dental formula. Although hyenas lack perineal scent glands, they have a large pouch of naked skin located at the anal opening. Large anal glands above the anus open into this pouch. Several sebaceous glands are present between the openings of the anal glands and above them. These glands produce a white, creamy secretion that the hyenas paste onto grass stalks. The odor of this secretion is very strong, smelling of boiling cheap soap or burning, and can be detected by humans several meters downwind. The secretions are primarily used for territorial marking, though both the aardwolf and the striped hyena will spray them when attacked. Behavior Hyenas groom themselves often, like felids and viverrids, and their way of licking their genitals is very cat-like (sitting on the lower back, legs spread with one leg pointing vertically upward). They defecate in the same manner as other Carnivora, though they never raise their legs as canids do when urinating, as urination serves no territorial function for them. Instead, hyenas mark their territories using their anal glands, a trait found also in viverrids and mustelids, but not canids and felids. When attacked by lions or dogs, striped and brown hyenas will feign death, though the spotted hyena will defend itself ferociously. The spotted hyena is very vocal, producing a number of different sounds consisting of whoops, grunts, groans, lows, giggles, yells, growls, laughs and whines. The striped hyena is comparatively silent, its vocalizations being limited to a chattering laugh and howling. Mating between hyenas involves a number of short copulations with brief intervals, unlike canids, which generally engage in a single, drawn-out copulation. Spotted hyena cubs are born almost fully developed, with their eyes open and erupting incisors and canines, though lacking adult markings. In contrast, striped hyena cubs are born with adult markings, closed eyes and small ears. Hyenas do not regurgitate food for their young and male spotted hyenas play no part in raising their cubs, though male striped hyenas do so. 
The striped hyena is primarily a scavenger, though it will also attack and kill any animals it can overcome, and will supplement its diet with fruit. The spotted hyena, though it also scavenges occasionally, is an active pack hunter of medium to large sized ungulates, which it catches by wearing them down in long chases and dismembering them in a canid-like manner. Spotted hyenas may kill as many as 95% of the animals they eat. The aardwolf is primarily an insectivore, specialised for feeding on termites of the genus Trinervitermes and Hodotermes, which it consumes by licking them up with its long, broad tongue. An aardwolf can eat 300,000 Trinervitermes on a single outing. Except for the aardwolf, hyenas are known to drive off larger predators, like lions, from their kills, despite having a reputation in popular culture for being cowardly. Hyenas are primarily nocturnal animals, but sometimes venture from their lairs in the early-morning hours. With the exception of the highly social spotted hyena, hyenas are generally not gregarious animals, though the striped and brown hyenas may live in family groups and congregate at kills. Spotted hyenas are one of the few mammals other than bats known to survive infection with rabies virus and have shown little or no disease-induced mortality during outbreaks in sympatric carnivores, in part due to the high concentration of antibodies present in their saliva. Despite this perceived unique disease resistance, little is known about the immune system of spotted hyenas, and even less is known about other Hyaenidae species. Relationships with humans Folklore, mythology and literature Spotted hyenas vary in their folkloric and mythological depictions, depending on the ethnic group from which the tales originate. It is often difficult to know whether spotted hyenas are the specific hyena species featured in such stories, particularly in West Africa, as both spotted and striped hyenas are often given the same names. In West African tales, spotted hyenas are sometimes depicted as bad Muslims who challenge the local animism that exists among the Beng in Côte d’Ivoire. In East Africa, Tabwa mythology portrays the spotted hyena as a solar animal that first brought the sun to warm the cold earth, while West African folklore generally shows the hyena as symbolizing immorality, dirty habits, the reversal of normal activities, and other negative traits. In Tanzania, there is a belief that witches use spotted hyenas as mounts. In the Mtwara Region of Tanzania, it is believed that a child born at night while a hyena is crying will likely grow up to be a thief. In the same area, hyena feces are believed to enable a child to walk at an early age, thus it is not uncommon in that area to see children with hyena dung wrapped in their clothes. The Kaguru of Tanzania and the Kujamaat of southern Senegal view hyenas as inedible and greedy hermaphrodites. A mythical African tribe called the Bouda is reputed to have members able to transform into hyenas. A similar myth occurs in Mansôa. These "werehyenas" are killed when discovered, and do not revert to human form once dead. Striped hyenas are often referred to in Middle Eastern literature and folklore, typically as symbols of treachery and stupidity. In the Near and Middle East, striped hyenas are generally regarded as physical incarnations of jinns. Arab writer al-Qazwīnī (1204–1283) spoke of a tribe of people called al-Ḍabyūn meaning "hyena people". 
In his book ‘Ajā’ib Al-Makhlūqāt he wrote that should one of this tribe be in a group of 1,000 people, a hyena could pick him out and eat him. A Persian medical treatise written in 1376 tells how to cure cannibalistic people known as kaftar, who are said to be "half-man, half-hyena". al-Damīrī in his writings in Ḥawayān al-Kubrā (1406) wrote that striped hyenas were vampiric creatures that attacked people at night and sucked the blood from their necks. He also wrote that hyenas only attacked brave people. Arab folklore tells of how hyenas can mesmerise victims with their eyes or sometimes with their pheromones. In a similar vein to al-Damīrī, the Greeks until the end of the 19th century believed that the bodies of werewolves, if not destroyed, would haunt battlefields as vampiric hyenas that drank the blood of dying soldiers. The image of striped hyenas in Afghanistan, India and Palestine is more varied. Though feared, striped hyenas were also symbolic of love and fertility, leading to numerous varieties of love medicine derived from hyena body parts. Among the Baluch and in northern India, witches or magicians are said to ride striped hyenas at night. The striped hyena is mentioned in the Bible. The Arabic word for the hyena, ḍab` or ḍabu` (plural ḍibā`), is alluded to in a valley in Israel known as Shaqq-ud-Diba` (meaning "cleft of the hyenas") and Wadi-Abu-Diba` (meaning "valley of the hyenas"). Both places have been interpreted by some scholars as being the Biblical Valley of Tsebo`im mentioned in 1 Samuel 13:18. In modern Hebrew, the word for hyena and hypocrite are both the same: tsavua. Though the Authorized King James Version of the Bible interprets the term "`ayit tsavua`" (found in Jeremiah 12:9) as "speckled bird", Henry Baker Tristram argued that it was most likely a hyena being mentioned. The vocalization of the spotted hyena resembling hysterical human laughter has been alluded to in numerous works of literature: "to laugh like a hyæna" was a common simile, and is featured in The Cobbler's Prophecy (1594), Webster's Duchess of Malfy (1623) and Shakespeare's As You Like It, Act IV. Sc.1. Die Strandjutwolf (The brown hyena) is an allegorical poem by the renowned South African poet, N. P. van Wyk Louw, which evokes a sinister and ominous presence. Attacks on humans In ordinary circumstances, striped hyenas are extremely timid around humans, though they may show bold behaviors towards people at night. On rare occasions, striped hyenas have preyed on humans. Among hyenas, only the spotted and striped hyenas have been known to become man-eaters. Hyenas are known to have preyed on humans in prehistory: human hair has been found in fossilized hyena dung dating back 195,000 to 257,000 years. Some paleontologists believe that competition and predation by cave hyenas (Crocuta crocuta spelaea) in Siberia was a significant factor in delaying human colonization of Alaska. Hyenas may have occasionally stolen human kills, or entered campsites to drag off the young and weak, much like modern spotted hyenas in Africa. The oldest Alaskan human remains coincide with roughly the same time cave hyenas became extinct, leading some paleontologists to infer that hyena predation prevented humans from crossing the Bering Strait earlier. Hyenas readily scavenge from human corpses; in Ethiopia, hyenas were reported to feed extensively on the corpses of victims of the 1960 attempted coup and the Red Terror. 
Hyenas habituated to scavenging on human corpses may develop bold behaviors towards living people: hyena attacks on people in southern Sudan increased during the Second Sudanese Civil War, when human corpses were readily available to them. Although spotted hyenas have been known to prey on humans in modern times, such incidents are rare. However, attacks on humans by spotted hyenas are likely to be underreported. Man-eating spotted hyenas tend to be very large specimens; a pair of man-eating hyenas, responsible for killing 27 people in Mulanje, Malawi in 1962, weighed in at 72 kg (159 lb) and 77 kg (170 lb) after being shot. A 1903 report describes spotted hyenas in the Mzimba district of Angoniland waiting at dawn outside people's huts to attack them when they opened their doors. Victims of spotted hyenas tend to be women, children and sick or infirm men; Theodore Roosevelt wrote in 1908–1909 in Uganda that spotted hyenas regularly killed sufferers of African sleeping sickness as they slept outside in camps. Spotted hyenas are widely feared in Malawi, where they have been known to attack people at night, particularly during the hot season when people sleep outside. A spate of hyena attacks were reported in Malawi's Phalombe plain, with five deaths recorded in 1956, five in 1957 and six in 1958. This pattern continued until 1961, when eight people were killed. Attacks occurred most commonly in September, when people slept outdoors and bush fires made the hunting of wild game difficult for the hyenas. A 2004 news report stated that 35 people were killed by spotted hyenas in a 12-month period in Mozambique along a 20-km stretch of road near the Tanzanian border. In the 1880s, a hyena was reported to have attacked humans, especially sleeping children, over a three-year period in the Iğdır Province of Turkey, with 25 children and 3 adults being wounded in one year. The attacks provoked local authorities into announcing a reward of 100 rubles for every hyena killed. Further attacks were reported later in some parts of the South Caucasus, particularly in 1908. Instances are known in Azerbaijan of striped hyenas killing children sleeping in courtyards during the 1930s and 1940s. In 1942, a sleeping guard was mauled in his hut by a hyena in Qalıncaq (Golyndzhakh). Cases of children being taken by hyenas by night are known in southeast Turkmenistan's Bathyz Nature Reserve. A further attack on a child was reported around Serakhs in 1948. Several attacks have occurred in India; in 1962, 9 children were thought to have been taken by hyenas in the town of Bhagalpur in the Bihar State in a six-week period, and 19 children up to the age of four were killed by hyenas in Karnataka in 1974. A survey of wild animal attacks during a five-year period in the Indian state of Madhya Pradesh reported that hyenas had attacked three people, causing fewer deaths than wolves, gaur, boar, elephants, tigers, leopards and sloth bears. Hyenas as food and medicine Hyenas are used for food and medicinal purposes in Somalia. Some Somali may consider it halal in Islam. This practice dates back to the times of the Ancient Greeks and Romans, who believed that different parts of the hyena's body were effective means to ward off evil and to ensure love and fertility.
Biology and health sciences
Carnivora
null
13974
https://en.wikipedia.org/wiki/Hymenoptera
Hymenoptera
Hymenoptera is a large order of insects, comprising the sawflies, wasps, bees, and ants. Over 150,000 living species of Hymenoptera have been described, in addition to over 2,000 extinct ones. Many of the species are parasitic. Females typically have a special ovipositor for inserting eggs into hosts or places that are otherwise inaccessible. This ovipositor is often modified into a stinger. The young develop through holometabolism (complete metamorphosis)—that is, they have a wormlike larval stage and an inactive pupal stage before they reach adulthood. Etymology The name Hymenoptera refers to the wings of the insects, but the original derivation is ambiguous. All references agree that the derivation involves the Ancient Greek πτερόν (pteron) for wing. The Ancient Greek ὑμήν (hymen) for membrane provides a plausible etymology for the term because species in this order have membranous wings. However, a key characteristic of this order is that the hindwings are connected to the forewings by a series of hooks. Thus, another plausible etymology involves Hymen, the Ancient Greek god of marriage, as these insects have "married" wings in flight. Another suggestion for the inclusion of Hymen is the myth of Melissa, a nymph with a prominent role at the wedding of Zeus. Evolution Molecular analysis finds that Hymenoptera is the earliest branching group of Holometabola. Hymenoptera originated in the Triassic, with the oldest fossils belonging to the family Xyelidae. Social hymenopterans appeared during the Cretaceous. The evolution of this group has been intensively studied by Alex Rasnitsyn, Michael S. Engel, and others. Phylogenetic relationships within the Hymenoptera, based on both morphology and molecular data, have been intensively studied since 2000. In 2023, a molecular study based on the analysis of ultra-conserved elements confirmed many previous findings and produced a relatively robust phylogeny of the whole Order. Basal superfamilies are shown in the cladogram below. Anatomy Hymenopterans range in size from very small to large insects, and usually have two pairs of wings. Their mouthparts are adapted for chewing, with well-developed mandibles (ectognathous mouthparts). Many species have further developed the mouthparts into a lengthy proboscis, with which they can drink liquids, such as nectar. They have large compound eyes, and typically three simple eyes, ocelli. The forward margin of the hind wing bears a number of hooked bristles, or "hamuli", which lock onto the fore wing, keeping them held together. The smaller species may have only two or three hamuli on each side, but the largest wasps may have a considerable number, keeping the wings gripped together especially tightly. Hymenopteran wings have relatively few veins compared with many other insects, especially in the smaller species. In the more ancestral hymenopterans, the ovipositor is blade-like, and has evolved for slicing plant tissues. In the majority, however, it is modified for piercing, and, in some cases, is several times the length of the body. In some species, the ovipositor has become modified as a stinger, and the eggs are laid from the base of the structure, rather than from the tip, which is used only to inject venom. The sting is typically used to immobilize prey, but in some wasps and bees may be used in defense. Hymenopteran larvae typically have a distinct head region, three thoracic segments, and usually nine or 10 abdominal segments. 
In the suborder Symphyta, the eruciform larvae resemble caterpillars in appearance, and like them, typically feed on leaves. They have large chewing mandibles, three pairs of thoracic limbs, and, in most cases, six or eight abdominal prolegs. Unlike caterpillars, however, the prolegs have no grasping spines, and the antennae are reduced to mere stubs. Symphytan larvae that are wood borers or stem borers have no abdominal legs and the thoracic legs are smaller than those of non-borers. With rare exceptions, larvae of the suborder Apocrita have no legs and are maggotlike in form, and are adapted to life in a protected environment. This may be the body of a host organism, or a cell in a nest, where the adults will care for the larva. In parasitic forms, the head is often greatly reduced and partially withdrawn into the prothorax (anterior part of the thorax). Sense organs appear to be poorly developed, with no ocelli, very small or absent antennae, and toothlike, sicklelike, or spinelike mandibles. They are also unable to defecate until they reach adulthood due to having an incomplete digestive tract (a blind sac), presumably to avoid contaminating their environment. The larvae of stinging forms (Aculeata) generally have 10 pairs of spiracles, or breathing pores, whereas parasitic forms usually have nine pairs present. Reproduction Sex determination Among most or all hymenopterans, sex is determined by the number of chromosomes an individual possesses. Fertilized eggs get two sets of chromosomes (one from each parent's respective gametes) and develop into diploid females, while unfertilized eggs only contain one set (from the mother) and develop into haploid males. The act of fertilization is under the voluntary control of the egg-laying female, giving her control of the sex of her offspring. This phenomenon is called haplodiploidy. However, the actual genetic mechanisms of haplodiploid sex determination may be more complex than simple chromosome number. In many Hymenoptera, sex is determined by a single gene locus with many alleles. In these species, haploids are male and diploids heterozygous at the sex locus are female, but occasionally a diploid will be homozygous at the sex locus and develop as a male, instead. This is especially likely to occur in an individual whose parents were siblings or other close relatives. Diploid males are known to be produced by inbreeding in many ant, bee, and wasp species. Diploid biparental males are usually sterile but a few species that have fertile diploid males are known. One consequence of haplodiploidy is that females on average have more genes in common with their sisters than they do with their daughters. Because of this, cooperation among kindred females may be unusually advantageous and has been hypothesized to contribute to the multiple origins of eusociality within this order. In many colonies of bees, ants, and wasps, worker females will remove eggs laid by other workers due to increased relatedness to direct siblings, a phenomenon known as worker policing. Another consequence is that hymenopterans may be more resistant to the deleterious effects of inbreeding. As males are haploid, any recessive genes will automatically be expressed, exposing them to natural selection. Thus, the genetic load of deleterious genes is purged relatively quickly. Thelytoky Some hymenopterans take advantage of parthenogenesis, the creation of embryos without fertilization. 
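The haplodiploid relatedness asymmetry described above (full sisters sharing more genes with one another than a mother shares with her daughters) can be made concrete with a short, standard calculation; this is a worked illustration added here, not taken from the source. Under haplodiploidy, full sisters inherit an identical copy of their father's single haploid genome with certainty and, on average, share half of the alleles received from their mother, whereas a mother passes only half of her genome to each daughter:

```latex
\[
r_{\text{full sisters}}
  = \underbrace{\tfrac{1}{2}\,(1)}_{\text{paternal half, always shared}}
  + \underbrace{\tfrac{1}{2}\left(\tfrac{1}{2}\right)}_{\text{maternal half, shared on average half the time}}
  = \tfrac{3}{4},
\qquad
r_{\text{mother--daughter}} = \tfrac{1}{2}.
\]
```

Because 3/4 exceeds 1/2, a female can in principle propagate more copies of her genes by helping to rear sisters than by producing daughters, which is the asymmetry invoked above in connection with the multiple origins of eusociality and with worker policing.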
Thelytoky is a particular form of parthenogenesis in which female embryos are created (without fertilisation). The form of thelytoky in hymenopterans is a kind of automixis in which two haploid products (proto-eggs) from the same meiosis fuse to form a diploid zygote. This process tends to maintain heterozygosity in the passage of the genome from mother to daughter. It is found in several ant species including the desert ant Cataglyphis cursor, the clonal raider ant Cerapachys biroi, the predaceous ant Platythyrea punctata, and the electric ant (little fire ant) Wasmannia auropunctata. It also occurs in the Cape honey bee Apis mellifera capensis. Oocytes that undergo automixis with central fusion often have a reduced rate of crossover recombination, which helps to maintain heterozygosity and avoid inbreeding depression. Species that display central fusion with reduced recombination include the ants Platythyrea punctata and Wasmannia auropunctata and the Cape honey bee Apis mellifera capensis. In A. m. capensis, the recombination rate during meiosis is reduced more than tenfold. In W. auropunctata the reduction is 45 fold. Single queen colonies of the narrow headed ant Formica exsecta illustrate the possible deleterious effects of increased homozygosity. Colonies of this species which have more homozygous queens will age more rapidly, resulting in reduced colony survival. Diet Different species of Hymenoptera show a wide range of feeding habits. The most primitive forms are typically phytophagous, feeding on flowers, pollen, foliage, or stems. Stinging wasps are predators, and will provision their larvae with immobilised prey, while bees feed on nectar and pollen. A huge number of species are parasitoids as larvae. The adults inject the eggs into a host, which they begin to consume after hatching. For example, the eggs of the endangered Papilio homerus are parasitized at a rate of 77%, mainly by Hymenoptera species. Some species are even hyperparasitoid, with the host itself being another parasitoid insect. Habits intermediate between those of the herbivorous and parasitoid forms are shown in some hymenopterans, which inhabit the galls or nests of other insects, stealing their food, and eventually killing and eating the occupant. Classification The Hymenoptera are divided into two groups; the Symphyta which have no waist, and the Apocrita which have a narrow waist. Symphyta The suborder Symphyta includes the sawflies, horntails, and parasitic wood wasps. The group may be paraphyletic, as it has been suggested that the family Orussidae may be the group from which the Apocrita arose. They have an unconstricted junction between the thorax and abdomen. The larvae are herbivorous, free-living, and eruciform, usually with three pairs of true legs, prolegs (on every segment, unlike Lepidoptera) and ocelli. The prolegs do not have crochet hooks at the ends unlike the larvae of the Lepidoptera. The legs and prolegs tend to be reduced or absent in larvae that mine or bore plant tissue, as well as in larvae of Pamphiliidae. Apocrita The wasps, bees, and ants together make up the suborder (and clade) Apocrita, characterized by a constriction between the first and second abdominal segments called a wasp-waist (petiole), also involving the fusion of the first abdominal segment to the thorax. Also, the larvae of all Apocrita lack legs, prolegs, or ocelli. 
The hindgut of the larvae also remains closed during development, with feces being stored inside the body, with the exception of some bee larvae where the larval anus has reappeared through developmental reversion. In general, the anus only opens at the completion of larval growth. Threats Hymenoptera as a group are highly susceptible to habitat loss, which can lead to substantial decreases in species richness and have major ecological implications due to their pivotal role as plant pollinators.
Biology and health sciences
Hymenoptera
null
13980
https://en.wikipedia.org/wiki/Homeostasis
Homeostasis
In biology, homeostasis (British also homoeostasis) is the state of steady internal physical and chemical conditions maintained by living systems. This is the condition of optimal functioning for the organism and includes many variables, such as body temperature and fluid balance, being kept within certain pre-set limits (homeostatic range). Other variables include the pH of extracellular fluid, the concentrations of sodium, potassium, and calcium ions, as well as the blood sugar level, and these need to be regulated despite changes in the environment, diet, or level of activity. Each of these variables is controlled by one or more regulators or homeostatic mechanisms, which together maintain life. Homeostasis is brought about by a natural resistance to change when already in optimal conditions, and equilibrium is maintained by many regulatory mechanisms; it is thought to be the central motivation for all organic action. All homeostatic control mechanisms have at least three interdependent components for the variable being regulated: a receptor, a control center, and an effector. The receptor is the sensing component that monitors and responds to changes in the environment, either external or internal. Receptors include thermoreceptors and mechanoreceptors. Control centers include the respiratory center and the renin-angiotensin system. An effector is the target acted on to bring about the change back to the normal state. At the cellular level, effectors include nuclear receptors that bring about changes in gene expression through up-regulation or down-regulation and act in negative feedback mechanisms. An example of this is in the control of bile acids in the liver. Some centers, such as the renin–angiotensin system, control more than one variable. When the receptor senses a stimulus, it reacts by sending action potentials to a control center. The control center sets the maintenance range—the acceptable upper and lower limits—for the particular variable, such as temperature. The control center responds to the signal by determining an appropriate response and sending signals to an effector, which can be one or more muscles, an organ, or a gland. When the signal is received and acted on, negative feedback is provided to the receptor that stops the need for further signaling. The cannabinoid receptor type 1, located at the presynaptic neuron, is a receptor that can stop stressful neurotransmitter release to the postsynaptic neuron; it is activated by endocannabinoids such as anandamide (N-arachidonoylethanolamide) and 2-arachidonoylglycerol via a retrograde signaling process in which these compounds are synthesized by and released from postsynaptic neurons, and travel back to the presynaptic terminal to bind to the CB1 receptor for modulation of neurotransmitter release to obtain homeostasis. The polyunsaturated fatty acids are lipid derivatives of omega-3 (docosahexaenoic acid and eicosapentaenoic acid) or of omega-6 (arachidonic acid). They are synthesized from membrane phospholipids and used as precursors for endocannabinoids to mediate significant effects in the fine-tuning adjustment of body homeostasis. Etymology The word homeostasis uses combining forms of homeo- and -stasis, Neo-Latin from Greek: ὅμοιος homoios, "similar" and στάσις stasis, "standing still", yielding the idea of "staying the same". 
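The receptor, control center and effector arrangement described above is, in engineering terms, a negative-feedback controller. The following minimal sketch is an illustration added here, not part of the source; the function names, the set point, and the gain are invented for the example. It shows how such a loop drives a disturbed variable back toward its set point and then stops signaling:

```python
# Minimal illustration of a homeostatic negative-feedback loop.
# All names and numbers are hypothetical; they only mirror the
# receptor -> control center -> effector scheme described above.

def receptor(value: float) -> float:
    """Sense the regulated variable (e.g. core temperature in deg C)."""
    return value

def control_center(sensed: float, set_point: float = 37.0,
                   tolerance: float = 0.5) -> float:
    """Compare the sensed value with the set point and emit an error
    signal only while the value lies outside the acceptable range."""
    error = set_point - sensed
    return error if abs(error) > tolerance else 0.0

def effector(value: float, error: float, gain: float = 0.3) -> float:
    """Respond in proportion to the error, opposing the disturbance."""
    return value + gain * error

temperature = 39.0  # hypothetical disturbance, e.g. an external heat load
for step in range(12):
    error = control_center(receptor(temperature))
    if error == 0.0:
        break  # back within the homeostatic range: no further signaling
    temperature = effector(temperature, error)
    print(f"step {step}: temperature = {temperature:.2f}")
```

The loop corrects only the detected error, not the disturbance itself, and its output is graded with the size of that error, which mirrors the point made in the following overview that effector responses are roughly proportional, in the opposite direction, to the error detected by the sensor.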
History The concept of the regulation of the internal environment was described by French physiologist Claude Bernard in 1849, and the word homeostasis was coined by Walter Bradford Cannon in 1926. In 1932, Joseph Barcroft a British physiologist, was the first to say that higher brain function required the most stable internal environment. Thus, to Barcroft homeostasis was not only organized by the brain—homeostasis served the brain. Homeostasis is an almost exclusively biological term, referring to the concepts described by Bernard and Cannon, concerning the constancy of the internal environment in which the cells of the body live and survive. The term cybernetics is applied to technological control systems such as thermostats, which function as homeostatic mechanisms but are often defined much more broadly than the biological term of homeostasis. Overview The metabolic processes of all organisms can only take place in very specific physical and chemical environments. The conditions vary with each organism, and with whether the chemical processes take place inside the cell or in the interstitial fluid bathing the cells. The best-known homeostatic mechanisms in humans and other mammals are regulators that keep the composition of the extracellular fluid (or the "internal environment") constant, especially with regard to the temperature, pH, osmolality, and the concentrations of sodium, potassium, glucose, carbon dioxide, and oxygen. However, a great many other homeostatic mechanisms, encompassing many aspects of human physiology, control other entities in the body. Where the levels of variables are higher or lower than those needed, they are often prefixed with hyper- and hypo-, respectively such as hyperthermia and hypothermia or hypertension and hypotension. If an entity is homeostatically controlled it does not imply that its value is necessarily absolutely steady in health. Core body temperature is, for instance, regulated by a homeostatic mechanism with temperature sensors in, amongst others, the hypothalamus of the brain. However, the set point of the regulator is regularly reset. For instance, core body temperature in humans varies during the course of the day (i.e. has a circadian rhythm), with the lowest temperatures occurring at night, and the highest in the afternoons. Other normal temperature variations include those related to the menstrual cycle. The temperature regulator's set point is reset during infections to produce a fever. Organisms are capable of adjusting somewhat to varied conditions such as temperature changes or oxygen levels at altitude, by a process of acclimatisation. Homeostasis does not govern every activity in the body. For instance, the signal (be it via neurons or hormones) from the sensor to the effector is, of necessity, highly variable in order to convey information about the direction and magnitude of the error detected by the sensor. Similarly, the effector's response needs to be highly adjustable to reverse the error – in fact it should be very nearly in proportion (but in the opposite direction) to the error that is threatening the internal environment. For instance, arterial blood pressure in mammals is homeostatically controlled and measured by stretch receptors in the walls of the aortic arch and carotid sinuses at the beginnings of the internal carotid arteries. The sensors send messages via sensory nerves to the medulla oblongata of the brain indicating whether the blood pressure has fallen or risen, and by how much. 
The medulla oblongata then distributes messages along motor or efferent nerves belonging to the autonomic nervous system to a wide variety of effector organs, whose activity is consequently changed to reverse the error in the blood pressure. One of the effector organs is the heart, whose rate is stimulated to rise (tachycardia) when the arterial blood pressure falls, or to slow down (bradycardia) when the pressure rises above the set point. Thus the heart rate (for which there is no sensor in the body) is not homeostatically controlled but is one of the effector responses to errors in arterial blood pressure. Another example is the rate of sweating. This is one of the effectors in the homeostatic control of body temperature, and therefore highly variable in rough proportion to the heat load that threatens to destabilize the body's core temperature, for which there is a sensor in the hypothalamus of the brain. Controls of variables Core temperature Mammals regulate their core temperature using input from thermoreceptors in the hypothalamus, brain, spinal cord, internal organs, and great veins. Apart from the internal regulation of temperature, a process called allostasis can come into play that adjusts behaviour to adapt to the challenge of very hot or cold extremes (and to other challenges). These adjustments may include seeking shade and reducing activity, seeking warmer conditions and increasing activity, or huddling. Behavioral thermoregulation takes precedence over physiological thermoregulation since necessary changes can be effected more quickly and physiological thermoregulation is limited in its capacity to respond to extreme temperatures. When the core temperature falls, the blood supply to the skin is reduced by intense vasoconstriction. The blood flow to the limbs (which have a large surface area) is similarly reduced and returned to the trunk via the deep veins which lie alongside the arteries (forming venae comitantes). This acts as a counter-current exchange system that short-circuits the warmth from the arterial blood directly into the venous blood returning into the trunk, causing minimal heat loss from the extremities in cold weather. The subcutaneous limb veins are tightly constricted, not only reducing heat loss from this source but also forcing the venous blood into the counter-current system in the depths of the limbs. The metabolic rate is increased, initially by non-shivering thermogenesis, followed by shivering thermogenesis if the earlier reactions are insufficient to correct the hypothermia. When a rise in core temperature is detected by thermoreceptors, the sweat glands in the skin are stimulated via cholinergic sympathetic nerves to secrete sweat onto the skin, which, when it evaporates, cools the skin and the blood flowing through it. Panting is an alternative effector in many vertebrates, which cools the body also by the evaporation of water, but this time from the mucous membranes of the throat and mouth. Blood glucose Blood sugar levels are regulated within fairly narrow limits. In mammals, the primary sensors for this are the beta cells of the pancreatic islets. The beta cells respond to a rise in the blood sugar level by secreting insulin into the blood and simultaneously inhibiting their neighboring alpha cells from secreting glucagon into the blood. This combination (high blood insulin levels and low glucagon levels) acts on effector tissues, the chief of which are the liver, fat cells, and muscle cells. 
The liver is inhibited from producing glucose, taking it up instead, and converting it to glycogen and triglycerides. The glycogen is stored in the liver, but the triglycerides are secreted into the blood as very low-density lipoprotein (VLDL) particles which are taken up by adipose tissue, there to be stored as fats. The fat cells take up glucose through special glucose transporters (GLUT4), whose numbers in the cell membrane are increased as a direct effect of insulin acting on these cells. The glucose that enters the fat cells in this manner is converted into triglycerides (via the same metabolic pathways as are used by the liver) and then stored in those fat cells together with the VLDL-derived triglycerides that were made in the liver. Muscle cells also take glucose up through insulin-sensitive GLUT4 glucose channels, and convert it into muscle glycogen. A fall in blood glucose causes insulin secretion to stop, and glucagon to be secreted from the alpha cells into the blood. This inhibits the uptake of glucose from the blood by the liver, fat cells, and muscle. Instead, the liver is strongly stimulated to manufacture glucose from glycogen (through glycogenolysis) and from non-carbohydrate sources (such as lactate and de-aminated amino acids) using a process known as gluconeogenesis. The glucose thus produced is discharged into the blood, correcting the detected error (hypoglycemia). The glycogen stored in muscles remains in the muscles, and is only broken down, during exercise, to glucose-6-phosphate and thence to pyruvate to be fed into the citric acid cycle or turned into lactate. It is only the lactate and the waste products of the citric acid cycle that are returned to the blood. The liver can take up only the lactate, and, by the process of energy-consuming gluconeogenesis, convert it back to glucose. Iron levels Controlling iron levels in the body is a critically important part of many aspects of human health and disease. In humans, iron is both necessary to the body and potentially harmful. Copper regulation Copper is absorbed, transported, distributed, stored, and excreted in the body according to complex homeostatic processes which ensure a constant and sufficient supply of the micronutrient while simultaneously avoiding excess levels. If an insufficient amount of copper is ingested for a short period of time, copper stores in the liver will be depleted. Should this depletion continue, a copper health deficiency condition may develop. If too much copper is ingested, an excess condition can result. Both of these conditions, deficiency and excess, can lead to tissue injury and disease. However, due to homeostatic regulation, the human body is capable of balancing a wide range of copper intakes for the needs of healthy individuals. Many aspects of copper homeostasis are known at the molecular level. Copper's essentiality is due to its ability to act as an electron donor or acceptor as its oxidation state fluxes between Cu1+ (cuprous) and Cu2+ (cupric). As a component of about a dozen cuproenzymes, copper is involved in key redox (i.e., oxidation-reduction) reactions in essential metabolic processes such as mitochondrial respiration, synthesis of melanin, and cross-linking of collagen. Copper is an integral part of the antioxidant enzyme copper-zinc superoxide dismutase, and has a role in iron homeostasis as a cofactor in ceruloplasmin. 
Levels of blood gases Information about changes in the levels of oxygen, carbon dioxide, and plasma pH is sent to the respiratory center in the brainstem, where these variables are regulated. The partial pressure of oxygen and carbon dioxide in the arterial blood is monitored by the peripheral chemoreceptors (PNS) in the carotid artery and aortic arch. A change in the partial pressure of carbon dioxide is detected as altered pH in the cerebrospinal fluid by central chemoreceptors (CNS) in the medulla oblongata of the brainstem. Information from these sets of sensors is sent to the respiratory center, which activates the effector organs – the diaphragm and other muscles of respiration. An increased level of carbon dioxide in the blood, or a decreased level of oxygen, will result in a deeper breathing pattern and increased respiratory rate to bring the blood gases back to equilibrium. Too little carbon dioxide, and, to a lesser extent, too much oxygen in the blood can temporarily halt breathing, a condition known as apnea, which freedivers use to prolong the time they can stay underwater. The partial pressure of carbon dioxide is more of a deciding factor in the monitoring of pH. However, at high altitude (above 2500 m) the monitoring of the partial pressure of oxygen takes priority, and hyperventilation keeps the oxygen level constant. With the lower level of carbon dioxide, to keep the pH at 7.4 the kidneys secrete hydrogen ions into the blood and excrete bicarbonate into the urine. This is important in acclimatization to high altitude. Blood oxygen content The kidneys measure the oxygen content rather than the partial pressure of oxygen in the arterial blood. When the oxygen content of the blood is chronically low, oxygen-sensitive cells secrete erythropoietin (EPO) into the blood. The effector tissue is the red bone marrow, which produces red blood cells (RBCs, also called erythrocytes). The increase in RBCs leads to an increased hematocrit in the blood, and a subsequent increase in hemoglobin that increases the oxygen carrying capacity. This is the mechanism whereby high-altitude dwellers have higher hematocrits than sea-level residents, and also why persons with pulmonary insufficiency or right-to-left shunts in the heart (through which venous blood by-passes the lungs and goes directly into the systemic circulation) have similarly high hematocrits. Regardless of the partial pressure of oxygen in the blood, the amount of oxygen that can be carried depends on the hemoglobin content. The partial pressure of oxygen may be sufficient, for example in anemia, but the hemoglobin content, and consequently the oxygen content, will be insufficient. Given enough supply of iron, vitamin B12 and folic acid, EPO can stimulate RBC production, restoring the hemoglobin and oxygen content to normal. 
This information is then conveyed, via afferent nerve fibers, to the solitary nucleus in the medulla oblongata. From here motor nerves belonging to the autonomic nervous system are stimulated to influence the activity of chiefly the heart and the smallest diameter arteries, called arterioles. The arterioles are the main resistance vessels in the arterial tree, and small changes in diameter cause large changes in the resistance to flow through them. When the arterial blood pressure rises the arterioles are stimulated to dilate making it easier for blood to leave the arteries, thus deflating them, and bringing the blood pressure down, back to normal. At the same time, the heart is stimulated via cholinergic parasympathetic nerves to beat more slowly (called bradycardia), ensuring that the inflow of blood into the arteries is reduced, thus adding to the reduction in pressure, and correcting the original error. Low pressure in the arteries, causes the opposite reflex of constriction of the arterioles, and a speeding up of the heart rate (called tachycardia). If the drop in blood pressure is very rapid or excessive, the medulla oblongata stimulates the adrenal medulla, via "preganglionic" sympathetic nerves, to secrete epinephrine (adrenaline) into the blood. This hormone enhances the tachycardia and causes severe vasoconstriction of the arterioles to all but the essential organ in the body (especially the heart, lungs, and brain). These reactions usually correct the low arterial blood pressure (hypotension) very effectively. Calcium levels The plasma ionized calcium (Ca2+) concentration is very tightly controlled by a pair of homeostatic mechanisms. The sensor for the first one is situated in the parathyroid glands, where the chief cells sense the Ca2+ level by means of specialized calcium receptors in their membranes. The sensors for the second are the parafollicular cells in the thyroid gland. The parathyroid chief cells secrete parathyroid hormone (PTH) in response to a fall in the plasma ionized calcium level; the parafollicular cells of the thyroid gland secrete calcitonin in response to a rise in the plasma ionized calcium level. The effector organs of the first homeostatic mechanism are the bones, the kidney, and, via a hormone released into the blood by the kidney in response to high PTH levels in the blood, the duodenum and jejunum. Parathyroid hormone (in high concentrations in the blood) causes bone resorption, releasing calcium into the plasma. This is a very rapid action which can correct a threatening hypocalcemia within minutes. High PTH concentrations cause the excretion of phosphate ions via the urine. Since phosphates combine with calcium ions to form insoluble salts (see also bone mineral), a decrease in the level of phosphates in the blood, releases free calcium ions into the plasma ionized calcium pool. PTH has a second action on the kidneys. It stimulates the manufacture and release, by the kidneys, of calcitriol into the blood. This steroid hormone acts on the epithelial cells of the upper small intestine, increasing their capacity to absorb calcium from the gut contents into the blood. The second homeostatic mechanism, with its sensors in the thyroid gland, releases calcitonin into the blood when the blood ionized calcium rises. This hormone acts primarily on bone, causing the rapid removal of calcium from the blood and depositing it, in insoluble form, in the bones. 
The two homeostatic mechanisms working through PTH on the one hand, and calcitonin on the other can very rapidly correct any impending error in the plasma ionized calcium level by either removing calcium from the blood and depositing it in the skeleton, or by removing calcium from it. The skeleton acts as an extremely large calcium store (about 1 kg) compared with the plasma calcium store (about 180 mg). Longer term regulation occurs through calcium absorption or loss from the gut. Another example are the most well-characterised endocannabinoids like anandamide (N-arachidonoylethanolamide; AEA) and 2-arachidonoylglycerol (2-AG), whose synthesis occurs through the action of a series of intracellular enzymes activated in response to a rise in intracellular calcium levels to introduce homeostasis and prevention of tumor development through putative protective mechanisms that prevent cell growth and migration by activation of CB1 and/or CB2 and adjoining receptors. Sodium concentration The homeostatic mechanism which controls the plasma sodium concentration is rather more complex than most of the other homeostatic mechanisms described on this page. The sensor is situated in the juxtaglomerular apparatus of kidneys, which senses the plasma sodium concentration in a surprisingly indirect manner. Instead of measuring it directly in the blood flowing past the juxtaglomerular cells, these cells respond to the sodium concentration in the renal tubular fluid after it has already undergone a certain amount of modification in the proximal convoluted tubule and loop of Henle. These cells also respond to rate of blood flow through the juxtaglomerular apparatus, which, under normal circumstances, is directly proportional to the arterial blood pressure, making this tissue an ancillary arterial blood pressure sensor. In response to a lowering of the plasma sodium concentration, or to a fall in the arterial blood pressure, the juxtaglomerular cells release renin into the blood. Renin is an enzyme which cleaves a decapeptide (a short protein chain, 10 amino acids long) from a plasma α-2-globulin called angiotensinogen. This decapeptide is known as angiotensin I. It has no known biological activity. However, when the blood circulates through the lungs a pulmonary capillary endothelial enzyme called angiotensin-converting enzyme (ACE) cleaves a further two amino acids from angiotensin I to form an octapeptide known as angiotensin II. Angiotensin II is a hormone which acts on the adrenal cortex, causing the release into the blood of the steroid hormone, aldosterone. Angiotensin II also acts on the smooth muscle in the walls of the arterioles causing these small diameter vessels to constrict, thereby restricting the outflow of blood from the arterial tree, causing the arterial blood pressure to rise. This, therefore, reinforces the measures described above (under the heading of "Arterial blood pressure"), which defend the arterial blood pressure against changes, especially hypotension. The angiotensin II-stimulated aldosterone released from the zona glomerulosa of the adrenal glands has an effect on particularly the epithelial cells of the distal convoluted tubules and collecting ducts of the kidneys. Here it causes the reabsorption of sodium ions from the renal tubular fluid, in exchange for potassium ions which are secreted from the blood plasma into the tubular fluid to exit the body via the urine. 
The reabsorption of sodium ions from the renal tubular fluid halts further sodium ion losses from the body, and therefore preventing the worsening of hyponatremia. The hyponatremia can only be corrected by the consumption of salt in the diet. However, it is not certain whether a "salt hunger" can be initiated by hyponatremia, or by what mechanism this might come about. When the plasma sodium ion concentration is higher than normal (hypernatremia), the release of renin from the juxtaglomerular apparatus is halted, ceasing the production of angiotensin II, and its consequent aldosterone-release into the blood. The kidneys respond by excreting sodium ions into the urine, thereby normalizing the plasma sodium ion concentration. The low angiotensin II levels in the blood lower the arterial blood pressure as an inevitable concomitant response. The reabsorption of sodium ions from the tubular fluid as a result of high aldosterone levels in the blood does not, of itself, cause renal tubular water to be returned to the blood from the distal convoluted tubules or collecting ducts. This is because sodium is reabsorbed in exchange for potassium and therefore causes only a modest change in the osmotic gradient between the blood and the tubular fluid. Furthermore, the epithelium of the distal convoluted tubules and collecting ducts is impermeable to water in the absence of antidiuretic hormone (ADH) in the blood. ADH is part of the control of fluid balance. Its levels in the blood vary with the osmolality of the plasma, which is measured in the hypothalamus of the brain. Aldosterone's action on the kidney tubules prevents sodium loss to the extracellular fluid (ECF). So there is no change in the osmolality of the ECF, and therefore no change in the ADH concentration of the plasma. However, low aldosterone levels cause a loss of sodium ions from the ECF, which could potentially cause a change in extracellular osmolality and therefore of ADH levels in the blood. Potassium concentration High potassium concentrations in the plasma cause depolarization of the zona glomerulosa cells' membranes in the outer layer of the adrenal cortex. This causes the release of aldosterone into the blood. Aldosterone acts primarily on the distal convoluted tubules and collecting ducts of the kidneys, stimulating the excretion of potassium ions into the urine. It does so, however, by activating the basolateral Na+/K+ pumps of the tubular epithelial cells. These sodium/potassium exchangers pump three sodium ions out of the cell, into the interstitial fluid and two potassium ions into the cell from the interstitial fluid. This creates an ionic concentration gradient which results in the reabsorption of sodium (Na+) ions from the tubular fluid into the blood, and secreting potassium (K+) ions from the blood into the urine (lumen of collecting duct). Fluid balance The total amount of water in the body needs to be kept in balance. Fluid balance involves keeping the fluid volume stabilized, and also keeping the levels of electrolytes in the extracellular fluid stable. Fluid balance is maintained by the process of osmoregulation and by behavior. Osmotic pressure is detected by osmoreceptors in the median preoptic nucleus in the hypothalamus. 
Measurement of the plasma osmolality to give an indication of the water content of the body, relies on the fact that water losses from the body, (through unavoidable water loss through the skin which is not entirely waterproof and therefore always slightly moist, water vapor in the exhaled air, sweating, vomiting, normal feces and especially diarrhea) are all hypotonic, meaning that they are less salty than the body fluids (compare, for instance, the taste of saliva with that of tears. The latter has almost the same salt content as the extracellular fluid, whereas the former is hypotonic with respect to the plasma. Saliva does not taste salty, whereas tears are decidedly salty). Nearly all normal and abnormal losses of body water therefore cause the extracellular fluid to become hypertonic. Conversely, excessive fluid intake dilutes the extracellular fluid causing the hypothalamus to register hypotonic hyponatremia conditions. When the hypothalamus detects a hypertonic extracellular environment, it causes the secretion of an antidiuretic hormone (ADH) called vasopressin which acts on the effector organ, which in this case is the kidney. The effect of vasopressin on the kidney tubules is to reabsorb water from the distal convoluted tubules and collecting ducts, thus preventing aggravation of the water loss via the urine. The hypothalamus simultaneously stimulates the nearby thirst center causing an almost irresistible (if the hypertonicity is severe enough) urge to drink water. The cessation of urine flow prevents the hypovolemia and hypertonicity from getting worse; the drinking of water corrects the defect. Hypo-osmolality results in very low plasma ADH levels. This results in the inhibition of water reabsorption from the kidney tubules, causing high volumes of very dilute urine to be excreted, thus getting rid of the excess water in the body. Urinary water loss, when the body water homeostat is intact, is a compensatory water loss, correcting any water excess in the body. However, since the kidneys cannot generate water, the thirst reflex is the all-important second effector mechanism of the body water homeostat, correcting any water deficit in the body. Blood pH The plasma pH can be altered by respiratory changes in the partial pressure of carbon dioxide; or altered by metabolic changes in the carbonic acid to bicarbonate ion ratio. The bicarbonate buffer system regulates the ratio of carbonic acid to bicarbonate to be equal to 1:20, at which ratio the blood pH is 7.4 (as explained in the Henderson–Hasselbalch equation). A change in the plasma pH gives an acid–base imbalance. In acid–base homeostasis there are two mechanisms that can help regulate the pH. Respiratory compensation a mechanism of the respiratory center, adjusts the partial pressure of carbon dioxide by changing the rate and depth of breathing, to bring the pH back to normal. The partial pressure of carbon dioxide also determines the concentration of carbonic acid, and the bicarbonate buffer system can also come into play. Renal compensation can help the bicarbonate buffer system. The sensor for the plasma bicarbonate concentration is not known for certain. It is very probable that the renal tubular cells of the distal convoluted tubules are themselves sensitive to the pH of the plasma. The metabolism of these cells produces carbon dioxide, which is rapidly converted to hydrogen and bicarbonate through the action of carbonic anhydrase. 
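As a worked check of the 1:20 carbonic acid to bicarbonate ratio quoted above (an illustrative calculation, not part of the original article), the Henderson–Hasselbalch equation with the carbonic acid pKa of about 6.1 gives the stated pH:

pH = pK_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{H_2CO_3}]} \approx 6.1 + \log_{10}(20) \approx 6.1 + 1.3 = 7.4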
When the ECF pH falls (becoming more acidic) the renal tubular cells excrete hydrogen ions into the tubular fluid to leave the body via urine. Bicarbonate ions are simultaneously secreted into the blood that decreases the carbonic acid, and consequently raises the plasma pH. The converse happens when the plasma pH rises above normal: bicarbonate ions are excreted into the urine, and hydrogen ions released into the plasma. When hydrogen ions are excreted into the urine, and bicarbonate into the blood, the latter combines with the excess hydrogen ions in the plasma that stimulated the kidneys to perform this operation. The resulting reaction in the plasma is the formation of carbonic acid which is in equilibrium with the plasma partial pressure of carbon dioxide. This is tightly regulated to ensure that there is no excessive build-up of carbonic acid or bicarbonate. The overall effect is therefore that hydrogen ions are lost in the urine when the pH of the plasma falls. The concomitant rise in the plasma bicarbonate mops up the increased hydrogen ions (caused by the fall in plasma pH) and the resulting excess carbonic acid is disposed of in the lungs as carbon dioxide. This restores the normal ratio between bicarbonate and the partial pressure of carbon dioxide and therefore the plasma pH. The converse happens when a high plasma pH stimulates the kidneys to secrete hydrogen ions into the blood and to excrete bicarbonate into the urine. The hydrogen ions combine with the excess bicarbonate ions in the plasma, once again forming an excess of carbonic acid which can be exhaled, as carbon dioxide, in the lungs, keeping the plasma bicarbonate ion concentration, the partial pressure of carbon dioxide and, therefore, the plasma pH, constant. Cerebrospinal fluid Cerebrospinal fluid (CSF) allows for regulation of the distribution of substances between cells of the brain, and neuroendocrine factors, to which slight changes can cause problems or damage to the nervous system. For example, high glycine concentration disrupts temperature and blood pressure control, and high CSF pH causes dizziness and syncope. Neurotransmission Inhibitory neurons in the central nervous system play a homeostatic role in the balance of neuronal activity between excitation and inhibition. Inhibitory neurons using GABA, make compensating changes in the neuronal networks preventing runaway levels of excitation. An imbalance between excitation and inhibition is seen to be implicated in a number of neuropsychiatric disorders. Neuroendocrine system The neuroendocrine system is the mechanism by which the hypothalamus maintains homeostasis, regulating metabolism, reproduction, eating and drinking behaviour, energy utilization, osmolarity and blood pressure. The regulation of metabolism, is carried out by hypothalamic interconnections to other glands. Three endocrine glands of the hypothalamic–pituitary–gonadal axis (HPG axis) often work together and have important regulatory functions. Two other regulatory endocrine axes are the hypothalamic–pituitary–adrenal axis (HPA axis) and the hypothalamic–pituitary–thyroid axis (HPT axis). The liver also has many regulatory functions of the metabolism. An important function is the production and control of bile acids. Too much bile acid can be toxic to cells and its synthesis can be inhibited by activation of FXR a nuclear receptor. 
Gene regulation At the cellular level, homeostasis is carried out by several mechanisms including transcriptional regulation that can alter the activity of genes in response to changes. Energy balance The amount of energy taken in through nutrition needs to match the amount of energy used. To achieve energy homeostasis appetite is regulated by two hormones, ghrelin and leptin. Ghrelin stimulates hunger and the intake of food and leptin acts to signal satiety (fullness). A 2019 review of weight-change interventions, including dieting, exercise and overeating, found that body weight homeostasis could not precisely correct for "energetic errors", the loss or gain of calories, in the short term. Clinical significance Many diseases are the result of a homeostatic failure. Almost any homeostatic component can malfunction either as a result of an inherited defect, an inborn error of metabolism, or an acquired disease. Some homeostatic mechanisms have inbuilt redundancies, which ensures that life is not immediately threatened if a component malfunctions; but sometimes a homeostatic malfunction can result in serious disease, which can be fatal if not treated. A well-known example of a homeostatic failure is shown in type 1 diabetes mellitus. Here blood sugar regulation is unable to function because the beta cells of the pancreatic islets are destroyed and cannot produce the necessary insulin. The blood sugar rises in a condition known as hyperglycemia. The plasma ionized calcium homeostat can be disrupted by the constant, unchanging over-production of parathyroid hormone by a parathyroid adenoma, resulting in the typical features of hyperparathyroidism, namely high plasma ionized Ca2+ levels and the resorption of bone, which can lead to spontaneous fractures. The abnormally high plasma ionized calcium concentrations cause conformational changes in many cell-surface proteins (especially ion channels and hormone or neurotransmitter receptors) giving rise to lethargy, muscle weakness, anorexia, constipation and labile emotions. The body water homeostat can be compromised by the inability to secrete ADH in response to even the normal daily water losses via the exhaled air, the feces, and insensible sweating. On receiving a zero blood ADH signal, the kidneys produce huge unchanging volumes of very dilute urine, causing dehydration and death if not treated. As organisms age, the efficiency of their control systems becomes reduced. The inefficiencies gradually result in an unstable internal environment that increases the risk of illness, and leads to the physical changes associated with aging. Various chronic diseases are kept under control by homeostatic compensation, which masks a problem by compensating for it (making up for it) in another way. However, the compensating mechanisms eventually wear out or are disrupted by a new complicating factor (such as the advent of a concurrent acute viral infection), which sends the body reeling through a new cascade of events. Such decompensation unmasks the underlying disease, worsening its symptoms. Common examples include decompensated heart failure, kidney failure, and liver failure. Biosphere In the Gaia hypothesis, James Lovelock stated that the entire mass of living matter on Earth (or any planet with life) functions as a vast homeostatic superorganism that actively modifies its planetary environment to produce the environmental conditions necessary for its own survival. 
In this view, the entire planet maintains several homeostasis (the primary one being temperature homeostasis). Whether this sort of system is present on Earth is open to debate. However, some relatively simple homeostatic mechanisms are generally accepted. For example, it is sometimes claimed that when atmospheric carbon dioxide levels rise, certain plants may be able to grow better and thus act to remove more carbon dioxide from the atmosphere. However, warming has exacerbated droughts, making water the actual limiting factor on land. When sunlight is plentiful and the atmospheric temperature climbs, it has been claimed that the phytoplankton of the ocean surface waters, acting as global sunshine, and therefore heat sensors, may thrive and produce more dimethyl sulfide (DMS). The DMS molecules act as cloud condensation nuclei, which produce more clouds, and thus increase the atmospheric albedo, and this feeds back to lower the temperature of the atmosphere. However, rising sea temperature has stratified the oceans, separating warm, sunlit waters from cool, nutrient-rich waters. Thus, nutrients have become the limiting factor, and plankton levels have actually fallen over the past 50 years, not risen. As scientists discover more about Earth, vast numbers of positive and negative feedback loops are being discovered, that, together, maintain a metastable condition, sometimes within a very broad range of environmental conditions. Predictive Predictive homeostasis is an anticipatory response to an expected challenge in the future, such as the stimulation of insulin secretion by gut hormones which enter the blood in response to a meal. This insulin secretion occurs before the blood sugar level rises, lowering the blood sugar level in anticipation of a large influx into the blood of glucose resulting from the digestion of carbohydrates in the gut. Such anticipatory reactions are open loop systems which are based, essentially, on "guess work", and are not self-correcting. Anticipatory responses always require a closed loop negative feedback system to correct the 'over-shoots' and 'under-shoots' to which the anticipatory systems are prone. Other fields The term has come to be used in other fields, for example: Risk An actuary may refer to risk homeostasis, where (for example) people who have anti-lock brakes have no better safety record than those without anti-lock brakes, because the former unconsciously compensate for the safer vehicle via less-safe driving habits. Previous to the innovation of anti-lock brakes, certain maneuvers involved minor skids, evoking fear and avoidance: Now the anti-lock system moves the boundary for such feedback, and behavior patterns expand into the no-longer punitive area. It has also been suggested that ecological crises are an instance of risk homeostasis in which a particular behavior continues until proven dangerous or dramatic consequences actually occur. Stress Sociologists and psychologists may refer to stress homeostasis, the tendency of a population or an individual to stay at a certain level of stress, often generating artificial stresses if the "natural" level of stress is not enough. Jean-François Lyotard, a postmodern theorist, has applied this term to societal 'power centers' that he describes in The Postmodern Condition, as being 'governed by a principle of homeostasis,' for example, the scientific hierarchy, which will sometimes ignore a radical new discovery for years because it destabilizes previously accepted norms. 
Technology Familiar technological homeostatic mechanisms include: A thermostat operates by switching heaters or air-conditioners on and off in response to the output of a temperature sensor. Cruise control adjusts a car's throttle in response to changes in speed. An autopilot operates the steering controls of an aircraft or ship in response to deviation from a pre-set compass bearing or route. Process control systems in a chemical plant or oil refinery maintain fluid levels, pressures, temperature, chemical composition, etc. by controlling heaters, pumps and valves. The centrifugal governor of a steam engine, as designed by James Watt in 1788, reduces the throttle valve in response to increases in the engine speed, or opens the valve if the speed falls below the pre-set rate. Society and culture The use of sovereign power, codes of conduct, religious and cultural practices and other dynamic processes in a society can be described as a part of an evolved homeostatic system of regularizing life and maintaining an overall equilibrium that protects the security of the whole from internal and external imbalances or dangers. Healthy civic cultures can be said to have achieved an optimal homeostatic balance between multiple contradictory concerns such as in the tension between respect for individual rights and concern for the public good, or that between governmental effectiveness and responsiveness to the interests of citizens.
Biology and health sciences
Basics
Biology
13995
https://en.wikipedia.org/wiki/Heapsort
Heapsort
In computer science, heapsort is an efficient, comparison-based sorting algorithm that reorganizes an input array into a heap (a data structure where each node is greater than its children) and then repeatedly removes the largest node from that heap, placing it at the end of the array. Although somewhat slower in practice on most machines than a well-implemented quicksort, it has the advantages of a very simple implementation and a more favorable O(n log n) worst-case runtime. Most real-world quicksort variants include an implementation of heapsort as a fallback should they detect that quicksort is becoming degenerate. Heapsort is an in-place algorithm, but it is not a stable sort. Heapsort was invented by J. W. J. Williams in 1964. The same paper also introduced the binary heap as a useful data structure in its own right. In the same year, Robert W. Floyd published an improved version that could sort an array in-place, continuing his earlier research into the treesort algorithm. Overview The heapsort algorithm can be divided into two phases: heap construction, and heap extraction. The heap is an implicit data structure which takes no space beyond the array of objects to be sorted; the array is interpreted as a complete binary tree where each array element is a node and each node's parent and child links are defined by simple arithmetic on the array indexes. For a zero-based array, the root node is stored at index 0, and the nodes linked to node i are iLeftChild(i) = 2⋅i + 1, iRightChild(i) = 2⋅i + 2, and iParent(i) = floor((i−1) / 2), where the floor function rounds down to the nearest integer. For a more detailed explanation, see the binary heap article. This binary tree is a max-heap when each node is greater than or equal to both of its children. Equivalently, each node is less than or equal to its parent. This rule, applied throughout the tree, results in the maximum node being located at the root of the tree. In the first phase, a heap is built out of the data (see the heapify procedure below). In the second phase, the heap is converted to a sorted array by repeatedly removing the largest element from the heap (the root of the heap), and placing it at the end of the array. The heap is updated after each removal to maintain the heap property. Once all objects have been removed from the heap, the result is a sorted array. Heapsort is normally performed in place. During the first phase, the array is divided into an unsorted prefix and a heap-ordered suffix (initially empty). Each step shrinks the prefix and expands the suffix. When the prefix is empty, this phase is complete. During the second phase, the array is divided into a heap-ordered prefix and a sorted suffix (initially empty). Each step shrinks the prefix and expands the suffix. When the prefix is empty, the array is sorted. Algorithm The heapsort algorithm begins by rearranging the array into a binary max-heap. The algorithm then repeatedly swaps the root of the heap (the greatest element remaining in the heap) with its last element, which is then declared to be part of the sorted suffix. Then the heap, which was damaged by replacing the root, is repaired so that the greatest element is again at the root. This repeats until only one value remains in the heap. The steps are: (1) Call the heapify() function on the array. This builds a heap from an array in O(n) operations. (2) Swap the first element of the array (the largest element in the heap) with the final element of the heap. (3) Decrease the considered range of the heap by one. (4) Call the siftDown() function on the array to move the new first element to its correct place in the heap. 
(5) Go back to step (2) until the remaining array is a single element. The heapify() operation is run once, and is O(n) in performance. The siftDown() function is called n − 1 times and requires O(log n) work each time, due to its traversal starting from the root node. Therefore, the performance of this algorithm is O(n log n). The heart of the algorithm is the siftDown() function. This constructs binary heaps out of smaller heaps, and may be thought of in two equivalent ways: given two binary heaps, and a shared parent node which is not part of either heap, merge them into a single larger binary heap; or given a "damaged" binary heap, where the max-heap property (no child is greater than its parent) holds everywhere except possibly between the root node and its children, repair it to produce an undamaged heap. To establish the max-heap property at the root, up to three nodes must be compared (the root and its two children), and the greatest must be made the root. This is most easily done by finding the greatest child, then comparing that child to the root. There are three cases: If there are no children (the two original heaps are empty), the heap property trivially holds, and no further action is required. If the root is greater than or equal to the greatest child, the heap property holds and likewise, no further action is required. If the root is less than the greatest child, exchange the two nodes. The heap property now holds at the newly promoted node (it is greater than or equal to both of its children, and in fact greater than any descendant), but may be violated between the newly demoted ex-root and its new children. To correct this, repeat the operation on the subtree rooted at the newly demoted ex-root. The number of iterations in any one siftDown() call is bounded by the height of the tree, which is ⌊log₂ n⌋. Pseudocode The following is a simple way to implement the algorithm in pseudocode. Arrays are zero-based and swap is used to exchange two elements of the array. Movement 'down' means from the root towards the leaves, or from lower indices to higher. Note that during the sort, the largest element is at the root of the heap at a[0], while at the end of the sort, the largest element is in a[end].

procedure heapsort(a, count) is
    input: an unordered array a of length count

    (Build the heap in array a so that largest value is at the root)
    heapify(a, count)

    (The following loop maintains the invariants that a[0:end−1] is a heap, and every element
     a[end:count−1] beyond end is greater than everything before it, i.e. a[end:count−1] is in sorted order.)
    end ← count
    while end > 1 do
        (the heap size is reduced by one)
        end ← end − 1
        (a[0] is the root and largest value. The swap moves it in front of the sorted elements.)
        swap(a[end], a[0])
        (the swap ruined the heap property, so restore it)
        siftDown(a, 0, end)

The sorting routine uses two subroutines, heapify and siftDown. The former is the common in-place heap construction routine, while the latter is a common subroutine for implementing heapify. 
(Put elements of 'a' in heap order, in-place)
procedure heapify(a, count) is
    (start is initialized to the first leaf node)
    (the last element in a 0-based array is at index count−1; find the parent of that element)
    start ← iParent(count−1) + 1

    while start > 0 do
        (go to the last non-heap node)
        start ← start − 1
        (sift down the node at index 'start' to the proper place such that all nodes below the start index are in heap order)
        siftDown(a, start, count)
    (after sifting down the root all nodes/elements are in heap order)

(Repair the heap whose root element is at index 'start', assuming the heaps rooted at its children are valid)
procedure siftDown(a, root, end) is
    while iLeftChild(root) < end do    (While the root has at least one child)
        child ← iLeftChild(root)       (Left child of root)
        (If there is a right child and that child is greater)
        if child+1 < end and a[child] < a[child+1] then
            child ← child + 1
        if a[root] < a[child] then
            swap(a[root], a[child])
            root ← child               (repeat to continue sifting down the child now)
        else
            (The root holds the largest element. Since we may assume the heaps rooted at the children are valid, this means that we are done.)
            return

The heapify procedure operates by building small heaps and repeatedly merging them using siftDown. It starts with the leaves, observing that they are trivial but valid heaps by themselves, and then adds parents. Starting with the last internal node (at index iParent(count−1)) and working backwards, each internal node is made the root of a valid heap by sifting down. The last step is sifting down the first element, after which the entire array obeys the heap property. To see that this takes O(n) time, count the worst-case number of siftDown iterations. The last half of the array requires zero iterations, the preceding quarter requires at most one iteration, the eighth before that requires at most two iterations, the sixteenth before that requires at most three, and so on. Looked at a different way, if we assume every siftDown call requires the maximum number of iterations, the first half of the array requires one iteration, the first quarter requires one more (total 2), the first eighth requires yet another (total 3), and so on. This totals n/2 + n/4 + n/8 + ⋯ = n⋅(1/2 + 1/4 + 1/8 + ⋯), where the infinite sum is a well-known geometric series whose sum is 1, thus the product is simply n. The above is an approximation. The exact worst-case number of comparisons during the heap-construction phase of heapsort is known to be equal to 2n − 2s₂(n) − e₂(n), where s₂(n) is the number of 1 bits in the binary representation of n and e₂(n) is the number of trailing 0 bits. Standard implementation Although it is convenient to think of the two phases separately, most implementations combine the two, allowing a single instance of siftDown to be expanded inline. Two variables (here, start and end) keep track of the bounds of the heap area. The portion of the array before start has not yet been put in heap order, while the portion beginning at end is sorted. Heap construction decreases start until it is zero, after which heap extraction decreases end until it is 1 and the array is entirely sorted. 
procedure heapsort(a, count) is
    input: an unordered array a of length count

    start ← floor(count/2)
    end ← count
    while end > 1 do
        if start > 0 then    (Heap construction)
            start ← start − 1
        else                 (Heap extraction)
            end ← end − 1
            swap(a[end], a[0])

        (The following is siftDown(a, start, end))
        root ← start
        while iLeftChild(root) < end do
            child ← iLeftChild(root)
            (If there is a right child and that child is greater)
            if child+1 < end and a[child] < a[child+1] then
                child ← child + 1
            if a[root] < a[child] then
                swap(a[root], a[child])
                root ← child    (repeat to continue sifting down the child now)
            else
                break           (return to outer loop)

Variations Williams' heap construction The description above uses Floyd's improved heap-construction algorithm, which operates in O(n) time and uses the same siftDown primitive as the heap-extraction phase. Although this algorithm, being both faster and simpler to program, is used by all practical heapsort implementations, Williams' original algorithm may be easier to understand, and is needed to implement a more general binary heap priority queue. Rather than merging many small heaps, Williams' algorithm maintains one single heap at the front of the array and repeatedly appends an additional element using a siftUp primitive. Being at the end of the array, the new element is a leaf and has no children to worry about, but may violate the heap property by being greater than its parent. In this case, exchange it with its parent and repeat the test until the parent is greater or there is no parent (we have reached the root). In pseudocode, this is:

procedure siftUp(a, end) is
    input: a is the array, which is heap-ordered up to end−1.
           end is the node to sift up.
    while end > 0
        parent := iParent(end)
        if a[parent] < a[end] then    (out of max-heap order)
            swap(a[parent], a[end])
            end := parent             (continue sifting up)
        else
            return

procedure heapify(a, count) is
    (start with a trivial single-element heap)
    end := 1

    while end < count
        (sift up the node at index end to the proper place such that all nodes above the end index are in heap order)
        siftUp(a, end)
        end := end + 1
    (after sifting up the last node all nodes are in heap order)

To understand why this algorithm can take asymptotically more time to build a heap (O(n log n) vs. O(n) worst case), note that in Floyd's algorithm, almost all the calls to siftDown apply to very small heaps. Half the heaps are height-1 trivial heaps and can be skipped entirely, half of the remainder are height-2, and so on. Only two calls are on heaps of size n/2, and only one siftDown operation is done on the full n-element heap. The overall average siftDown operation takes O(1) time. In contrast, in Williams' algorithm most of the calls to siftUp are made on large heaps of height O(log n). Half of the calls are made with a heap size of n/2 or more, three-quarters are made with a heap size of n/4 or more, and so on. Although the average number of steps is similar to Floyd's technique, pre-sorted input will cause the worst case: each added node is sifted up to the root, so the average call to siftUp will require approximately log₂ n iterations. Because it is dominated by the second heap-extraction phase, the heapsort algorithm itself has O(n log n) time complexity using either version of heapify. Bottom-up heapsort Bottom-up heapsort is a variant that reduces the number of comparisons required by a significant factor. While ordinary "top-down" heapsort requires roughly 2n log₂ n comparisons worst-case and on average, the bottom-up variant requires about n log₂ n comparisons on average, and about 1.5n log₂ n in the worst case. If comparisons are cheap (e.g. 
integer keys) then the difference is unimportant, as top-down heapsort compares values that have already been loaded from memory. If, however, comparisons require a function call or other complex logic, then bottom-up heapsort is advantageous. This is accomplished by using a more elaborate siftDown procedure. The change improves the linear-time heap-building phase slightly, but is more significant in the second phase. Like top-down heapsort, each iteration of the second phase extracts the top of the heap, a[0], and fills the gap it leaves with a[end], then sifts this latter element down the heap. But this element came from the lowest level of the heap, meaning it is one of the smallest elements in the heap, so the sift-down will likely take many steps to move it back down. In top-down heapsort, each step of the sift-down requires two comparisons, to find the minimum of three elements: the new node and its two children. Bottom-up heapsort conceptually replaces the root with a value of −∞ and sifts it down using only one comparison per level (since no child can possibly be less than −∞) until the leaves are reached, then replaces the −∞ with the correct value and sifts it up (again, using one comparison per level) until the correct position is found. This places the root in the same location as top-down siftDown, but fewer comparisons are required to find that location. For any single siftDown operation, the bottom-up technique is advantageous if the number of downward movements is at least 2⁄3 of the height of the tree (when the number of comparisons is 4⁄3 times the height for both techniques), and it turns out that this is more than true on average, even for worst-case inputs. A naïve implementation of this conceptual algorithm would cause some redundant data copying, as the sift-up portion undoes part of the sifting down. A practical implementation searches downward for a leaf where −∞ would be placed, then upward for where the root should be placed. Finally, the upward traversal continues to the root's starting position, performing no more comparisons but exchanging nodes to complete the necessary rearrangement. This optimized form performs the same number of exchanges as top-down siftDown. Because it goes all the way to the bottom and then comes back up, it is called heapsort with bounce by some authors.

function leafSearch(a, i, end) is
    j ← i
    while iRightChild(j) < end do
        (Determine which of j's two children is the greater)
        if a[iRightChild(j)] > a[iLeftChild(j)] then
            j ← iRightChild(j)
        else
            j ← iLeftChild(j)
    (At the last level, there might be only one child)
    if iLeftChild(j) < end then
        j ← iLeftChild(j)
    return j

The return value of the leafSearch is used in the modified siftDown routine:

procedure siftDown(a, i, end) is
    j ← leafSearch(a, i, end)
    while a[i] > a[j] do
        j ← iParent(j)
    while j > i do
        swap(a[i], a[j])
        j ← iParent(j)

Bottom-up heapsort was announced as beating quicksort (with median-of-three pivot selection) on arrays of size ≥16000. A 2008 re-evaluation of this algorithm showed it to be no faster than top-down heapsort for integer keys, presumably because modern branch prediction nullifies the cost of the predictable comparisons that bottom-up heapsort manages to avoid. A further refinement does a binary search in the upward search, and sorts in a worst case of approximately n log₂ n comparisons, approaching the information-theoretic lower bound of log₂(n!) ≈ n log₂ n − 1.44n comparisons. 
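To make the control flow of the bounce technique concrete, the following is a minimal Python sketch of the leaf-search and bottom-up sift-down pseudocode above; the function names (leaf_search, sift_down_bottom_up) and the index helpers are illustrative choices for this example, not part of any standard library.

# Minimal Python sketch of the bottom-up (bounce) sift-down described above.
def i_left_child(i):  return 2 * i + 1
def i_right_child(i): return 2 * i + 2
def i_parent(i):      return (i - 1) // 2

def leaf_search(a, i, end):
    # Descend from i, always following the greater child, until a leaf of the heap a[0:end].
    j = i
    while i_right_child(j) < end:
        j = i_right_child(j) if a[i_right_child(j)] > a[i_left_child(j)] else i_left_child(j)
    if i_left_child(j) < end:   # the last level may have only a left child
        j = i_left_child(j)
    return j

def sift_down_bottom_up(a, i, end):
    # Restore the max-heap property at i using one comparison per level on the way down.
    j = leaf_search(a, i, end)
    while a[i] > a[j]:          # climb back up to where a[i] belongs
        j = i_parent(j)
    # Rotate the values on the path: a[i] drops to position j, everything between moves up one level.
    while j > i:
        a[i], a[j] = a[j], a[i]
        j = i_parent(j)

For example, calling sift_down_bottom_up(a, 0, len(a)) on a = [1, 9, 8, 5, 4, 7, 3] yields [9, 5, 8, 1, 4, 7, 3], with the displaced root value 1 placed at a leaf and every value on the search path promoted one level.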
A variant that uses two extra bits per internal node (n−1 bits total for an n-element heap) to cache information about which child is greater (two bits are required to store three cases: left, right, and unknown) uses fewer than n log₂ n + 1.1n compares. Other variations Ternary heapsort uses a ternary heap instead of a binary heap; that is, each element in the heap has three children. It is more complicated to program but does a constant factor fewer swap and comparison operations. This is because each sift-down step in a ternary heap requires three comparisons and one swap, whereas in a binary heap, two comparisons and one swap are required. Two levels in a ternary heap cover 3² = 9 elements, doing more work with the same number of comparisons as three levels in the binary heap, which only cover 2³ = 8. This is primarily of academic interest, or as a student exercise, as the additional complexity is not worth the minor savings, and bottom-up heapsort beats both. Memory-optimized heapsort improves heapsort's locality of reference by increasing the number of children even more. This increases the number of comparisons, but because all children are stored consecutively in memory, reduces the number of cache lines accessed during heap traversal, a net performance improvement. The standard implementation of Floyd's heap-construction algorithm causes a large number of cache misses once the size of the data exceeds that of the CPU cache. Better performance on large data sets can be obtained by merging in depth-first order, combining subheaps as soon as possible, rather than combining all subheaps on one level before proceeding to the one above. Out-of-place heapsort improves on bottom-up heapsort by eliminating the worst case, guaranteeing n log₂ n + O(n) comparisons. When the maximum is taken, rather than fill the vacated space with an unsorted data value, fill it with a sentinel value, which never "bounces" back up. It turns out that this can be used as a primitive in an in-place (and non-recursive) "QuickHeapsort" algorithm. First, you perform a quicksort-like partitioning pass, but reversing the order of the partitioned data in the array. Suppose (without loss of generality) that the smaller partition is the one greater than the pivot, which should go at the end of the array, but our reversed partitioning step places it at the beginning. Form a heap out of the smaller partition and do out-of-place heapsort on it, exchanging the extracted maxima with values from the end of the array. These are less than the pivot, meaning less than any value in the heap, so they serve as sentinel values. Once the heapsort is complete (and the pivot moved to just before the now-sorted end of the array), the order of the partitions has been reversed, and the larger partition at the beginning of the array may be sorted in the same way. (Because there is no non-tail recursion, this also eliminates quicksort's stack usage.) The smoothsort algorithm is a variation of heapsort developed by Edsger W. Dijkstra in 1981. Like heapsort, smoothsort's upper bound is O(n log n). The advantage of smoothsort is that it comes closer to O(n) time if the input is already sorted to some degree, whereas heapsort averages O(n log n) regardless of the initial sorted state. Due to its complexity, smoothsort is rarely used. Levcopoulos and Petersson describe a variation of heapsort based on a heap of Cartesian trees. First, a Cartesian tree is built from the input in O(n) time, and its root is placed in a 1-element binary heap. 
Then we repeatedly extract the minimum from the binary heap, output the tree's root element, and add its left and right children (if any), which are themselves Cartesian trees, to the binary heap. As they show, if the input is already nearly sorted, the Cartesian trees will be very unbalanced, with few nodes having left and right children, resulting in the binary heap remaining small, and allowing the algorithm to sort more quickly than O(n log n) for inputs that are already nearly sorted. Several variants such as weak heapsort require n log₂ n + O(n) comparisons in the worst case, close to the theoretical minimum, using one extra bit of state per node. While this extra bit makes the algorithms not truly in-place, if space for it can be found inside the element, these algorithms are simple and efficient, but still slower than binary heaps if key comparisons are cheap enough (e.g. integer keys) that a constant factor does not matter. Katajainen's "ultimate heapsort" requires no extra storage, performs n log₂ n + O(n) comparisons, and a similar number of element moves. It is, however, even more complex and not justified unless comparisons are very expensive. Comparison with other sorts Heapsort primarily competes with quicksort, another very efficient general-purpose in-place unstable comparison-based sort algorithm. Heapsort's primary advantages are its simple, non-recursive code, minimal auxiliary storage requirement, and reliably good performance: its best and worst cases are within a small constant factor of each other, and of the theoretical lower bound on comparison sorts. While it cannot do better than O(n log n) for pre-sorted inputs, it does not suffer from quicksort's O(n²) worst case, either. Real-world quicksort implementations use a variety of heuristics to avoid the worst case, but that makes their implementation far more complex, and implementations such as introsort and pattern-defeating quicksort use heapsort as a last-resort fallback if they detect degenerate behaviour. Thus, their worst-case performance is slightly worse than if heapsort had been used from the beginning. Heapsort's primary disadvantages are its poor locality of reference and its inherently serial nature; the accesses to the implicit tree are widely scattered and mostly random, and there is no straightforward way to convert it to a parallel algorithm. The worst-case performance guarantees make heapsort popular in real-time computing, and in systems concerned with maliciously chosen inputs such as the Linux kernel. The combination of small implementation and dependably "good enough" performance makes it popular in embedded systems, and generally in any application where sorting is not a performance bottleneck. For example, heapsort is ideal for sorting a list of filenames for display, but a database management system would probably want a more aggressively optimized sorting algorithm. A well-implemented quicksort is usually 2–3 times faster than heapsort. Although quicksort requires fewer comparisons, this is a minor factor. (Results claiming twice as many comparisons are measuring the top-down version; see the discussion of bottom-up heapsort above.) The main advantage of quicksort is its much better locality of reference: partitioning is a linear scan with good spatial locality, and the recursive subdivision has good temporal locality. With additional effort, quicksort can also be implemented in mostly branch-free code, and multiple CPUs can be used to sort subpartitions in parallel. Thus, quicksort is preferred when the additional performance justifies the implementation effort. 
The other major sorting algorithm is merge sort, but that rarely competes directly with heapsort because it is not in-place. Merge sort's requirement for extra space (roughly half the size of the input) is usually prohibitive except in the situations where merge sort has a clear advantage: when a stable sort is required; when taking advantage of (partially) pre-sorted input; when sorting linked lists (in which case merge sort requires minimal extra space); for parallel sorting (merge sort parallelizes even better than quicksort and can easily achieve close to linear speedup); and for external sorting (merge sort has excellent locality of reference). Example The examples sort the values { 6, 5, 3, 1, 8, 7, 2, 4 } in increasing order using both heap-construction algorithms, with the elements being compared at each step shown in a bold font; there are typically two such elements when sifting up, and three when sifting down, although there may be fewer when the top or bottom of the tree is reached. [Step-by-step tables: Heap construction (Williams' algorithm); Heap construction (Floyd's algorithm); Heap extraction] 
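Since the step-by-step tables cannot be reproduced here, the following self-contained Python sketch runs the same example end to end using Floyd's heap construction followed by heap extraction; the function and variable names are illustrative choices, not taken from any particular library.

# Illustrative, self-contained heapsort (Floyd's heap construction + repeated extraction).
def heapsort(a):
    n = len(a)

    def sift_down(root, end):
        # Repair the max-heap rooted at 'root', considering only a[0:end].
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1                      # pick the greater child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child                    # continue sifting down
            else:
                return

    # Heap construction: sift down every internal node, last one first (Floyd's algorithm).
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n)

    # Heap extraction: move the current maximum to the end of the shrinking heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)

values = [6, 5, 3, 1, 8, 7, 2, 4]
heapsort(values)
print(values)   # [1, 2, 3, 4, 5, 6, 7, 8]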
Mathematics
Algorithms
null
13996
https://en.wikipedia.org/wiki/Heap%20%28data%20structure%29
Heap (data structure)
In computer science, a heap is a tree-based data structure that satisfies the heap property: In a max heap, for any given node C, if P is the parent node of C, then the key (the value) of P is greater than or equal to the key of C. In a min heap, the key of P is less than or equal to the key of C. The node at the "top" of the heap (with no parents) is called the root node. The heap is one maximally efficient implementation of an abstract data type called a priority queue, and in fact, priority queues are often referred to as "heaps", regardless of how they may be implemented. In a heap, the highest (or lowest) priority element is always stored at the root. However, a heap is not a sorted structure; it can be regarded as being partially ordered. A heap is a useful data structure when it is necessary to repeatedly remove the object with the highest (or lowest) priority, or when insertions need to be interspersed with removals of the root node. A common implementation of a heap is the binary heap, in which the tree is a complete binary tree (see figure). The heap data structure, specifically the binary heap, was introduced by J. W. J. Williams in 1964, as a data structure for the heapsort sorting algorithm. Heaps are also crucial in several efficient graph algorithms such as Dijkstra's algorithm. When a heap is a complete binary tree, it has the smallest possible height—a heap with N nodes and a branches for each node always has loga N height. Note that, as shown in the graphic, there is no implied ordering between siblings or cousins and no implied sequence for an in-order traversal (as there would be in, e.g., a binary search tree). The heap relation mentioned above applies only between nodes and their parents, grandparents. The maximum number of children each node can have depends on the type of heap. Heaps are typically constructed in-place in the same array where the elements are stored, with their structure being implicit in the access pattern of the operations. Heaps differ in this way from other data structures with similar or in some cases better theoretic bounds such as Radix trees in that they require no additional memory beyond that used for storing the keys. Operations The common operations involving heaps are: Basic find-max (or find-min): find a maximum item of a max-heap, or a minimum item of a min-heap, respectively (a.k.a. peek) insert: adding a new key to the heap (a.k.a., push) extract-max (or extract-min): returns the node of maximum value from a max heap [or minimum value from a min heap] after removing it from the heap (a.k.a., pop) delete-max (or delete-min): removing the root node of a max heap (or min heap), respectively replace: pop root and push a new key. This is more efficient than a pop followed by a push, since it only needs to balance once, not twice, and is appropriate for fixed-size heaps. Creation create-heap: create an empty heap heapify: create a heap out of given array of elements merge (union): joining two heaps to form a valid new heap containing all the elements of both, preserving the original heaps. meld: joining two heaps to form a valid new heap containing all the elements of both, destroying the original heaps. Inspection size: return the number of items in the heap. is-empty: return true if the heap is empty, false otherwise. 
Internal increase-key or decrease-key: updating a key within a max- or min-heap, respectively delete: delete an arbitrary node (followed by moving the last node and sifting to maintain the heap) sift-up: move a node up in the tree, as long as needed; used to restore the heap condition after insertion. Called "sift" because the node moves up the tree until it reaches the correct level, as in a sieve. sift-down: move a node down in the tree, similar to sift-up; used to restore the heap condition after deletion or replacement. Implementation Heaps are usually implemented with an array, as follows: Each element in the array represents a node of the heap, and the parent / child relationship is defined implicitly by the elements' indices in the array. For a binary heap, in the array, the first index contains the root element. The next two indices of the array contain the root's children. The next four indices contain the four children of the root's two child nodes, and so on. Therefore, given a node at index i, its children are at indices 2i + 1 and 2i + 2, and its parent is at index floor((i − 1)/2). This simple indexing scheme makes it efficient to move "up" or "down" the tree. Balancing a heap is done by sift-up or sift-down operations (swapping elements which are out of order). As we can build a heap from an array without requiring extra memory (for the nodes, for example), heapsort can be used to sort an array in-place. After an element is inserted into or deleted from a heap, the heap property may be violated, and the heap must be re-balanced by swapping elements within the array. Although different types of heaps implement the operations differently, the most common way is as follows: Insertion: Add the new element at the end of the heap, in the first available free space. If this will violate the heap property, sift up the new element (swim operation) until the heap property has been re-established. Extraction: Remove the root and insert the last element of the heap in the root. If this will violate the heap property, sift down the new root (sink operation) to re-establish the heap property. Replacement: Remove the root and put the new element in the root and sift down. When compared to extraction followed by insertion, this avoids a sift-up step. Construction of a binary (or d-ary) heap out of a given array of elements may be performed in linear time using the classic Floyd algorithm, with the worst-case number of comparisons equal to 2N − 2s₂(N) − e₂(N) (for a binary heap), where s₂(N) is the sum of all digits of the binary representation of N and e₂(N) is the exponent of 2 in the prime factorization of N. This is faster than a sequence of consecutive insertions into an originally empty heap, which is log-linear. Variants 2–3 heap B-heap Beap Binary heap Binomial heap Brodal queue d-ary heap Fibonacci heap K-D heap Leaf heap Leftist heap Skew binomial heap Strict Fibonacci heap Min-max heap Pairing heap Radix heap Randomized meldable heap Skew heap Soft heap Ternary heap Treap Weak heap Comparison of theoretic bounds for variants Applications The heap data structure has many applications. Heapsort: One of the best sorting methods being in-place and with no quadratic worst-case scenarios. Selection algorithms: A heap allows access to the min or max element in constant time, and other selections (such as median or kth-element) can be done in sub-linear time on data that is in a heap. Graph algorithms: By using heaps as internal traversal data structures, run time will be reduced by a polynomial order. 
Examples of such problems are Prim's minimal-spanning-tree algorithm and Dijkstra's shortest-path algorithm. Priority queue: A priority queue is an abstract concept like "a list" or "a map"; just as a list can be implemented with a linked list or an array, a priority queue can be implemented with a heap or a variety of other methods. K-way merge: A heap data structure is useful to merge many already-sorted input streams into a single sorted output stream. Examples of the need for merging include external sorting and streaming results from distributed data such as a log-structured merge tree. The inner loop is obtaining the min element, replacing it with the next element from the corresponding input stream, then doing a sift-down heap operation. (Alternatively, the replace function.) (Using the extract and insert functions of a priority queue is much less efficient.) Programming language implementations The C++ Standard Library provides the make_heap, push_heap and pop_heap algorithms for heaps (usually implemented as binary heaps), which operate on arbitrary random-access iterators. It treats the iterators as a reference to an array, and uses the array-to-heap conversion. It also provides the container adaptor priority_queue, which wraps these facilities in a container-like class. However, there is no standard support for the replace, sift-up/sift-down, or decrease/increase-key operations. The Boost C++ libraries include a heaps library. Unlike the STL, it supports decrease and increase operations, and supports additional types of heap: specifically, it supports d-ary, binomial, Fibonacci, pairing and skew heaps. There is a generic heap implementation for C and C++ with D-ary heap and B-heap support. It provides an STL-like API. The standard library of the D programming language includes a binary heap, which is implemented in terms of D's ranges. Instances can be constructed from any random-access range and expose an input range interface that allows iteration with D's built-in foreach statements and integration with D's range-based APIs. For Haskell there is the Data.Heap module. The Java platform (since version 1.5) provides a binary heap implementation with the class java.util.PriorityQueue in the Java Collections Framework. This class implements by default a min-heap; to implement a max-heap, a programmer should write a custom comparator. There is no support for the replace, sift-up/sift-down, or decrease/increase-key operations. Python has a heapq module that implements a priority queue using a binary heap. The library exposes a heapreplace function to support k-way merging. PHP has both a max-heap (SplMaxHeap) and a min-heap (SplMinHeap) as of version 5.3 in the Standard PHP Library. Perl has implementations of binary, binomial, and Fibonacci heaps in the Heap distribution available on CPAN. The Go language contains a container/heap package with heap algorithms that operate on an arbitrary type that satisfies a given interface. That package does not support the replace, sift-up/sift-down, or decrease/increase-key operations. Apple's Core Foundation library contains a CFBinaryHeap structure. Pharo has an implementation of a heap in the Collections-Sequenceable package along with a set of test cases. A heap is used in the implementation of the timer event loop. The Rust programming language has a binary max-heap implementation, BinaryHeap, in the collections module of its standard library. .NET has a PriorityQueue class which uses a quaternary (d-ary) min-heap implementation. It is available from .NET 6.
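As a concrete illustration of the priority-queue and k-way-merge uses described above, here is a short sketch using Python's standard heapq module; the task names and stream values are made up for the example.

import heapq

# Priority queue: heapq maintains a min-heap, so the smallest item is always popped first.
tasks = [(3, "write report"), (1, "fix outage"), (2, "review patch")]  # (priority, task)
heapq.heapify(tasks)                  # Floyd-style O(n) heap construction
while tasks:
    priority, task = heapq.heappop(tasks)
    print(priority, task)             # pops in priority order: 1, 2, 3

# K-way merge: combine several already-sorted streams into one sorted stream.
streams = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
print(list(heapq.merge(*streams)))    # [1, 2, 3, 4, 5, 6, 7, 8, 9]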
Mathematics
Data structures and types
null
14004
https://en.wikipedia.org/wiki/Hour
Hour
An hour (symbol: h; also abbreviated hr) is a unit of time historically reckoned as 1⁄24 of a day and defined contemporarily as exactly 3,600 seconds (SI). There are 60 minutes in an hour, and 24 hours in a day. The hour was initially established in the ancient Near East as a variable measure of 1⁄12 of the night or daytime. Such seasonal hours, also known as temporal hours or unequal hours, varied by season and latitude. Equal hours or equinoctial hours were taken as 1⁄24 of the day as measured from noon to noon; the minor seasonal variations of this unit were eventually smoothed by making it 1⁄24 of the mean solar day. Since this unit was not constant due to long-term variations in the Earth's rotation, the hour was finally separated from the Earth's rotation and defined in terms of the atomic or physical second. In the modern metric system, hours are an accepted unit of time defined as 3,600 atomic seconds. However, on rare occasions an hour may incorporate a positive or negative leap second, effectively making it appear to last 3,599 or 3,601 seconds, in order to keep UTC within 0.9 seconds of UT1, the latter of which is based on measurements of the mean solar day. Etymology Hour is a development of an Anglo-Norman and Middle English borrowing first attested in the 13th century. It displaced the native English words tide (Old English tīd, 'time') and stound (Old English stund, 'span of time'). The Anglo-Norman term was a borrowing from Old French, which in turn derived from Latin hōra and Greek hṓrā. Like Old English tīd and stund, hṓrā was originally a vaguer word for any span of time, including seasons and years. Its Proto-Indo-European root has been reconstructed as a word meaning "year, summer", making hour distantly cognate with year. The time of day is typically expressed in English in terms of hours. Whole hours on a 12-hour clock are expressed using the contracted phrase o'clock, from the older of the clock. (10 am and 10 pm are both read as "ten o'clock".) Hours on a 24-hour clock ("military time") are expressed as "hundred" or "hundred hours". (1000 is read "ten hundred" or "ten hundred hours"; 10 pm would be "twenty-two hundred".) Fifteen and thirty minutes past the hour are expressed as "a quarter past" or "after" and "half past", respectively, from their fraction of the hour. Fifteen minutes before the hour may be expressed as "a quarter to", "of", "till", or "before" the hour. (9:45 may be read "nine forty-five" or "a quarter till ten".) History Antiquity Ancient Egypt In ancient Egypt the flooding of the Nile was, and still is, an important annual event, crucial for agriculture. It was accompanied by the rise of Sirius before the sunrise, and the appearance of 12 constellations across the night sky, to which the Egyptians assigned some significance. Influenced by this, the Egyptians divided the night into 12 equal intervals. These were seasonal hours, shorter in the summer than in the winter. Subsequently, the day was likewise divided into 12 intervals, which eventually became more important than the nightly intervals. These subdivisions of a day spread to Greece, and later to Rome. Ancient Greece The ancient Greeks kept time differently than is done today. Instead of dividing the time between one midnight and the next into 24 equal hours, they divided the time from sunrise to sunset into 12 "seasonal hours" (their actual duration depending on season), and the time from sunset to the next sunrise again into 12 "seasonal hours". Initially, only the day was divided into 12 seasonal hours and the night into three or four night watches. 
By the Hellenistic period the night was also divided into 12 hours. The day-and-night () was probably first divided into 24 hours by Hipparchus of Nicaea. The Greek astronomer Andronicus of Cyrrhus oversaw the construction of a horologion called the Tower of the Winds in Athens during the first century BCE. This structure tracked a 24-hour day using both sundials and mechanical hour indicators. The canonical hours were inherited into early Christianity from Second Temple Judaism. By AD 60, the Didache recommends disciples to pray the Lord's Prayer three times a day; this practice found its way into the canonical hours as well. By the second and third centuries, such Church Fathers as Clement of Alexandria, Origen, and Tertullian wrote of the practice of Morning and Evening Prayer, and of the prayers at the third, sixth and ninth hours. In the early church, during the night before every feast, a vigil was kept. The word "Vigils", at first applied to the Night Office, comes from a Latin source, namely the Vigiliae or nocturnal watches or guards of the soldiers. The night from six o'clock in the evening to six o'clock in the morning was divided into four watches or vigils of three hours each, the first, the second, the third, and the fourth vigil. The Horae were originally personifications of seasonal aspects of nature, not of the time of day. The list of 12 Horae representing the 12 hours of the day is recorded only in Late Antiquity, by Nonnus. The first and twelfth of the Horae were added to the original set of ten: Auge (first light) Anatole (sunrise) Mousike (morning hour of music and study) Gymnastike (morning hour of exercise) Nymphe (morning hour of ablutions) Mesembria (noon) Sponde (libations poured after lunch) Elete (prayer) Akte (eating and pleasure) Hesperis (start of evening) Dysis (sunset) Arktos (night sky) Middle Ages Medieval astronomers such as al-Biruni and Sacrobosco, divided the hour into 60 minutes, each of 60 seconds; this derives from Babylonian astronomy, where the corresponding terms denoted the time required for the Sun's apparent motion through the ecliptic to describe one minute or second of arc, respectively. In present terms, the Babylonian degree of time was thus four minutes long, the "minute" of time was thus four seconds long and the "second" 1/15 of a second. In medieval Europe, the Roman hours continued to be marked on sundials but the more important units of time were the canonical hours of the Orthodox and Catholic Church. During daylight, these followed the pattern set by the three-hour bells of the Roman markets, which were succeeded by the bells of local churches. They rang prime at about 6am, terce at about 9am, sext at noon, nones at about 3pm, and vespers at either 6pm or sunset. Matins and lauds precede these irregularly in the morning hours; compline follows them irregularly before sleep; and the midnight office follows that. Vatican II ordered their reformation for the Catholic Church in 1963, though they continue to be observed in the Orthodox churches. When mechanical clocks began to be used to show hours of daylight or nighttime, their period needed to be changed every morning and evening (for example, by changing the length of their pendula). The use of 24 hours for the entire day meant hours varied much less and the clocks needed to be adjusted only a few times a month. 
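The Babylonian time units mentioned above follow directly from equating the full circle of the Sun's apparent daily motion with 24 hours: one degree of arc corresponds to 4 minutes of time, one arcminute to 4 seconds, and one arcsecond to 1/15 of a second. A small sketch of that conversion (the sample angles are arbitrary):

def arc_to_time_seconds(degrees: float, arcminutes: float = 0, arcseconds: float = 0) -> float:
    """Convert an arc of the Sun's apparent daily motion to mean solar time.

    360 degrees of arc correspond to 24 hours, so 1 degree = 4 minutes of time,
    1 arcminute = 4 seconds, and 1 arcsecond = 1/15 second.
    """
    total_degrees = degrees + arcminutes / 60 + arcseconds / 3600
    return total_degrees * 240  # 240 seconds of time per degree of arc

print(arc_to_time_seconds(1))        # 240.0  (4 minutes)
print(arc_to_time_seconds(0, 1))     # 4.0    (4 seconds)
print(arc_to_time_seconds(0, 0, 1))  # 0.0666... (1/15 second)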
Modernity The minor irregularities of the apparent solar day were smoothed by measuring time using the mean solar day, which follows the Sun's movement along the celestial equator rather than along the ecliptic. The irregularities of this time system were so minor that most clocks reckoning such hours did not need adjustment. However, scientific measurements eventually became precise enough to note the effect of tidal deceleration of the Earth by the Moon, which gradually lengthens the Earth's days. During the French Revolution, a general decimalisation of measures was enacted, including decimal time between 1794 and 1800. Under its provisions, the French hour was 1/10 of the day and was divided formally into 100 decimal minutes and informally into 10 tenths. Mandatory use for all public records began in 1794, but it was suspended six months later by the same 1795 legislation that first established the metric system. In spite of this, a few localities continued to use decimal time for civil status records for six more years, until 1800, after Napoleon's Coup of 18 Brumaire. The metric system bases its measurements of time upon the second, defined since 1952 in terms of the Earth's rotation in AD 1900. Its hours are a secondary unit computed as precisely 3,600 seconds. However, an hour of Coordinated Universal Time (UTC), used as the basis of most civil time, has lasted 3,601 seconds 27 times since 1972 in order to keep it within 0.9 seconds of universal time, which is based on measurements of the mean solar day at 0° longitude. The addition of these seconds accommodates the very gradual slowing of the rotation of the Earth. In modern life, the ubiquity of clocks and other timekeeping devices means that segmentation of days according to their hours is commonplace. Most forms of employment, whether wage or salaried labour, involve compensation based upon measured or expected hours worked. The fight for an eight-hour day was a part of labour movements around the world. Informal rush hours and happy hours cover the times of day when commuting slows down due to congestion or when alcoholic drinks are available at discounted prices. The hour record for the greatest distance travelled by a cyclist within the span of an hour is one of cycling's greatest honours. Counting hours Many different ways of counting the hours have been used. Because sunrise, sunset, and, to a lesser extent, noon are the conspicuous points in the day, starting to count at these times was, for most people in most early societies, much easier than starting at midnight. However, with accurate clocks and modern astronomical equipment (and the telegraph or similar means to transfer a time signal in a split-second), this issue is much less relevant. Astrolabes, sundials, and astronomical clocks sometimes show the hour length and count using some of these older definitions and counting methods. Counting from dawn In ancient and medieval cultures, the counting of hours generally started with sunrise. Before the widespread use of artificial light, societies were more concerned with the division between night and day, and daily routines often began when light was sufficient. "Babylonian hours" divide the day and night into 24 equal hours, reckoned from the time of sunrise. They are so named from the false belief of ancient authors that the Babylonians divided the day into 24 parts, beginning at sunrise. In fact, they divided the day into 12 parts (called kaspu or "double hours") or into 60 equal parts. 
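Under the French decimal scheme described above, the day ran to 10 decimal hours of 100 decimal minutes, each of 100 decimal seconds. A minimal sketch converting an ordinary clock time into that system (purely illustrative):

def to_decimal_time(hour: int, minute: int, second: int = 0) -> tuple:
    """Convert a conventional 24-hour clock time to French decimal time
    (10 decimal hours x 100 decimal minutes x 100 decimal seconds per day)."""
    day_fraction = (hour * 3600 + minute * 60 + second) / 86400
    decimal_seconds_total = round(day_fraction * 100000)  # 100,000 decimal seconds per day
    dh, rem = divmod(decimal_seconds_total, 10000)
    dm, ds = divmod(rem, 100)
    return dh, dm, ds

print(to_decimal_time(12, 0))   # (5, 0, 0)  -- noon is 5 decimal hours
print(to_decimal_time(18, 0))   # (7, 50, 0) -- 6 pm is 7.5 decimal hours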
Unequal hours Sunrise marked the beginning of the first hour, the middle of the day was at the end of the sixth hour and sunset at the end of the twelfth hour. This meant that the duration of hours varied with the season. In the Northern hemisphere, particularly in the more northerly latitudes, summer daytime hours were longer than winter daytime hours, each being one twelfth of the time between sunrise and sunset. These variable-length hours were variously known as temporal, unequal, or seasonal hours and were in use until the appearance of the mechanical clock, which furthered the adoption of equal length hours. This is also the system used in Jewish law and frequently called "Talmudic hour" (Sha'a Zemanit) in a variety of texts. The Talmudic hour is one twelfth of time elapsed from sunrise to sunset, day hours therefore being longer than night hours in the summer; in winter they reverse. The Indic day began at sunrise. The term hora was used to indicate an hour. The time was measured based on the length of the shadow at day time. A hora translated to 2.5 pe. There are 60 pe per day, 60 minutes per pe and 60 kshana (snap of a finger or instant) per minute. Pe was measured with a bowl with a hole placed in still water. Time taken for this graduated bowl was one pe. Kings usually had an officer in charge of this clock. Counting from sunset In so-called "Italian time", "Italian hours", or "old Czech time", the first hour started with the sunset Angelus bell (or at the end of dusk, i.e., half an hour after sunset, depending on local custom and geographical latitude). The hours were numbered from 1 to 24. For example, in Lugano, the sun rose in December during the 14th hour and noon was during the 19th hour; in June the sun rose during the 7th hour and noon was in the 15th hour. Sunset was always at the end of the 24th hour. The clocks in church towers struck only from 1 to 12, thus only during night or early morning hours. This manner of counting hours had the advantage that everyone could easily know how much time they had to finish their day's work without artificial light. It was already widely used in Italy by the 14th century and lasted until the mid-18th century; it was officially abolished in 1755, or in some regions customary until the mid-19th century. The system of Italian hours can be seen on a number of clocks in Europe, where the dial is numbered from 1 to 24 in either Roman or Arabic numerals. The St Mark's Clock in Venice, and the Orloj in Prague are famous examples. It was also used in Poland, Silesia, and Bohemia until the 17th century. Its replacement by the more practical division into twice twelve (equinoctial) hours (also called small clock or civic hours) began as early as the 16th century. The Islamic day begins at sunset. The first prayer of the day (maghrib) is to be performed between just after sunset and the end of twilight. Until 1968 Saudi Arabia used the system of counting 24 equal hours with the first hour starting at sunset. Counting from noon For many centuries, up to 1925, astronomers counted the hours and days from noon, because it was the easiest solar event to measure accurately. An advantage of this method (used in the Julian Date system, in which a new Julian Day begins at noon) is that the date doesn't change during a single night's observing. Counting from midnight In the modern 12-hour clock, counting the hours starts at midnight and restarts at noon. Hours are numbered 12, 1, 2, ..., 11. 
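The quirk noted above, that the modern 12-hour count runs 12, 1, 2, ..., 11 in each half-day, is easy to get wrong when converting from 24-hour notation; a short sketch:

def to_12_hour(hour_24: int) -> str:
    """Render a 24-hour clock hour (0-23) in 12-hour notation.

    Midnight (0) and noon (12) are both written as 12, distinguished by
    am/pm; the count then runs 1, 2, ..., 11.
    """
    suffix = "am" if hour_24 < 12 else "pm"
    hour_12 = hour_24 % 12
    if hour_12 == 0:
        hour_12 = 12
    return f"{hour_12} {suffix}"

print(to_12_hour(0))   # 12 am (midnight)
print(to_12_hour(12))  # 12 pm (noon)
print(to_12_hour(22))  # 10 pm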
Solar noon is always close to 12 noon (ignoring artificial adjustments due to time zones and daylight saving time), differing according to the equation of time by as much as fifteen minutes either way. At the equinoxes sunrise is around 6 a.m. (, before noon), and sunset around 6 p.m. (, after noon). In the modern 24-hour clock, counting the hours starts at midnight, and hours are numbered from 0 to 23. Solar noon is always close to 12:00, again differing according to the equation of time. At the equinoxes sunrise is around 06:00, and sunset around 18:00. History of timekeeping in other cultures Egypt The ancient Egyptians began dividing the night into at some time before the compilation of the Dynasty V Pyramid Texts in the 24thcenturyBC. By 2150BC (Dynasty IX), diagrams of stars inside Egyptian coffin lids—variously known as "diagonal calendars" or "star clocks"—attest that there were exactly 12 of these. Clagett writes that it is "certain" this duodecimal division of the night followed the adoption of the Egyptian civil calendar, usually placed BC on the basis of analyses of the Sothic cycle, but a lunar calendar presumably long predated this and also would have had 12 months in each of its years. The coffin diagrams show that the Egyptians took note of the heliacal risings of 36 stars or constellations (now known as "decans"), one for each of the ten-day "weeks" of their civil calendar. (12 sets of alternate "triangle decans" were used for the 5 epagomenal days between years.) Each night, the rising of eleven of these decans were noted, separating the night into 12 divisions whose middle terms would have lasted about 40minutes each. (Another seven stars were noted by the Egyptians during the twilight and predawn periods, although they were not important for the hour divisions.) The original decans used by the Egyptians would have fallen noticeably out of their proper places over a span of several centuries. By the time of (BC), the priests at Karnak were using water clocks to determine the hours. These were filled to the brim at sunset and the hour determined by comparing the water level against one of its 12 gauges, one for each month of the year. During the New Kingdom, another system of decans was used, made up of 24 stars over the course of the year and 12 within any one night. The later division of the day into 12 hours was accomplished by sundials marked with ten equal divisions. The morning and evening periods when the sundials failed to note time were observed as the first and last hours. The Egyptian hours were closely connected both with the priesthood of the gods and with their divine services. By the New Kingdom, each hour was conceived as a specific region of the sky or underworld through which Ra's solar barge travelled. Protective deities were assigned to each and were used as the names of the hours. As the protectors and resurrectors of the sun, the goddesses of the night hours were considered to hold power over all lifespans and thus became part of Egyptian funerary rituals. Two fire-spitting cobras were said to guard the gates of each hour of the underworld, and Wadjet and the rearing cobra (uraeus) were also sometimes referenced as from their role protecting the dead through these gates. The Egyptian word for astronomer, used as a synonym for priest, was , "one of the wnwt", as it were "one of the hours". The earliest forms of include one or three stars, with the later solar hours including the determinative hieroglyph for "sun". 
East Asia Ancient China divided its day into 100 "marks" running from midnight to midnight. The system is said to have been used since remote antiquity, credited to the legendary Yellow Emperor, but is first attested in Han-era water clocks and in the 2nd-century history of that dynasty. It was measured with sundials and water clocks. Into the Eastern Han, the Chinese measured their day schematically, adding the 20-ke difference between the solstices evenly throughout the year, one every nine days. During the night, time was more commonly reckoned during the night by the "watches" of the guard, which were reckoned as a fifth of the time from sunset to sunrise. Imperial China continued to use ke and geng but also began to divide the day into 12 "double hours" named after the earthly branches and sometimes also known by the name of the corresponding animal of the Chinese zodiac. The first shi originally ran from 11pm to 1am but was reckoned as starting at midnight by the time of the History of Song, compiled during the early Yuan. These apparently began to be used during the Eastern Han that preceded the Three Kingdoms era, but the sections that would have covered them are missing from their official histories; they first appear in official use in the Tang-era Book of Sui. Variations of all these units were subsequently adopted by Japan and the other countries of the Sinosphere. The 12 shi supposedly began to be divided into 24 hours under the Tang, although they are first attested in the Ming-era Book of Yuan. In that work, the hours were known by the same earthly branches as the shi, with the first half noted as its "starting" and the second as "completed" or "proper" shi. In modern China, these are instead simply numbered and described as "little shi". The modern ke is now used to count quarter-hours, rather than a separate unit. As with the Egyptian night and daytime hours, the division of the day into 12 shi has been credited to the example set by the rough number of lunar cycles in a solar year, although the 12-year Jovian orbital cycle was more important to traditional Chinese and Babylonian reckoning of the zodiac. Southeast Asia In Thailand, Laos, and Cambodia, the traditional system of noting hours is the six-hour clock. This reckons each of a day's 24 hours apart from noon as part of a fourth of the day. The first hour of the first half of daytime was 7 am; 1 pm the first hour of the latter half of daytime; 7 pm the first hour of the first half of nighttime; and 1 am the first hour of the latter half of nighttime. This system existed in the Ayutthaya Kingdom, deriving its current phrasing from the practice of publicly announcing the daytime hours with a gong and the nighttime hours with a drum. It was abolished in Laos and Cambodia during their French occupation and is uncommon there now. The Thai system remains in informal use in the form codified in 1901 by King Chulalongkorn. India The Vedas and Puranas employed units of time based on the sidereal day (nakṣatra ahorātra). This was variously divided into 30 muhūrta-s of 48 minutes each or 60 dandas or nadī-s of 24 minutes each. The solar day was later similarly divided into 60 ghaṭikás of about the same duration, each divided in turn into 60 vinadis. The Sinhalese followed a similar system but called their sixtieth of a day a peya. 
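The Indian units mentioned above divide the day into 60 ghaṭikás of 24 minutes, each of 60 vinadis of 24 seconds. A small sketch converting a modern clock time into those units, counting from midnight for simplicity (the traditional Indic day began at sunrise, so a real reckoning would offset by the sunrise time):

def to_ghatika(hour: int, minute: int, second: int = 0) -> tuple:
    """Express a time of day in ghatikas (1/60 day = 24 min) and vinadis (1/60 ghatika = 24 s).

    Counted here from midnight for simplicity; historically the day began at sunrise.
    """
    seconds = hour * 3600 + minute * 60 + second
    ghatika, rem = divmod(seconds, 24 * 60)   # 1,440 seconds per ghatika
    vinadi = rem // 24                        # 24 seconds per vinadi
    return ghatika, vinadi

print(to_ghatika(6, 0))   # (15, 0) -- a quarter of the day
print(to_ghatika(12, 0))  # (30, 0) -- half the day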
Derived measures air changes per hour (ACH), a measure of the replacements of air within a defined space used for indoor air quality ampere hour (Ah), a measure of electrical charge used in electrochemistry BTU-hour, a measure of power used in the power industry and for air conditioners and heaters credit hour, a measure of an academic course's contracted instructional time per week for a semester horsepower-hour (hph), a measure of energy used in the railroad industry hour angle, a measure of the angle between the meridian plane and the hour circle passing through a certain point used in the equatorial coordinate system kilometres per hour (km/h), a measure of land speed kilowatt-hour (kWh), a measure of energy commonly used as an electrical billing unit knot (kn), a measure of nautical miles per hour, used for maritime and aerial speed man-hour, the amount of work performed by the average worker in one hour, used in productivity analysis metre per hour (m/h), a measure of slow speeds mile per hour (mph), a measure of land speed passengers per hour per direction (p/h/d), a measure of the capacity of public transportation systems pound per hour (PPH), a measure of mass flow rate used for engines' fuel flow work or working hour, a measure of working time used in various regulations, such as those distinguishing part- and full-time employment and those limiting truck drivers' working hours or hours of service
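Several of the per-hour units listed above are fixed ratios against SI units, so conversion reduces to a single multiplication. A few illustrative conversions (the sample quantities are arbitrary):

KWH_TO_JOULES = 3_600_000   # 1 kilowatt-hour = 3.6 MJ
KMH_TO_MS = 1000 / 3600     # kilometres per hour to metres per second
KNOT_TO_KMH = 1.852         # 1 knot = 1 nautical mile per hour

print(5 * KWH_TO_JOULES)    # joules in 5 kWh
print(90 * KMH_TO_MS)       # 25.0 m/s for 90 km/h
print(20 * KNOT_TO_KMH)     # 37.04 km/h for 20 knots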
Physical sciences
Time
null
14006
https://en.wikipedia.org/wiki/Haemophilia
Haemophilia
Haemophilia (British English), or hemophilia (American English) (), is a mostly inherited genetic disorder that impairs the body's ability to make blood clots, a process needed to stop bleeding. This results in people bleeding for a longer time after an injury, easy bruising, and an increased risk of bleeding inside joints or the brain. Those with a mild case of the disease may have symptoms only after an accident or during surgery. Bleeding into a joint can result in permanent damage while bleeding in the brain can result in long term headaches, seizures, or an altered level of consciousness. There are two main types of haemophilia: haemophilia A, which occurs due to low amounts of clotting factor VIII, and haemophilia B, which occurs due to low levels of clotting factor IX. They are typically inherited from one's parents through an X chromosome carrying a nonfunctional gene. Most commonly found in men, haemophilia can affect women too, though very rarely. A woman would need to inherit two affected X chromosomes to be affected, whereas a man would only need one X chromosome affected. It is possible for a new mutation to occur during early development, or haemophilia may develop later in life due to antibodies forming against a clotting factor. Other types include haemophilia C, which occurs due to low levels of factor XI, Von Willebrand disease, which occurs due to low levels of a substance called von Willebrand factor, and parahaemophilia, which occurs due to low levels of factor V. Haemophilia A, B, and C prevent the intrinsic pathway from functioning properly; this clotting pathway is necessary when there is damage to the endothelium of a blood vessel. Acquired haemophilia is associated with cancers, autoimmune disorders, and pregnancy. Diagnosis is by testing the blood for its ability to clot and its levels of clotting factors. Prevention may occur by removing an egg, fertilising it, and testing the embryo before transferring it to the uterus. Human embryos in research can be regarded as the technical object/process. Missing blood clotting factors are replaced to treat haemophilia. This may be done on a regular basis or during bleeding episodes. Replacement may take place at home or in hospital. The clotting factors are made either from human blood or by recombinant methods. Up to 20% of people develop antibodies to the clotting factors which makes treatment more difficult. The medication desmopressin may be used in those with mild haemophilia A. Studies of gene therapy are in early human trials. Haemophilia A affects about 1 in 5,000–10,000, while haemophilia B affects about 1 in 40,000 males at birth. As haemophilia A and B are both X-linked recessive disorders, females are rarely severely affected. Some females with a nonfunctional gene on one of the X chromosomes may be mildly symptomatic. Haemophilia C occurs equally in both sexes and is mostly found in Ashkenazi Jews. In the 1800s haemophilia B was common within the royal families of Europe. The difference between haemophilia A and B was determined in 1952. Signs and symptoms Characteristic symptoms vary with severity. In general symptoms are internal or external bleeding episodes, which are called "bleeds". People with more severe haemophilia experience more severe and more frequent bleeds, while people with mild haemophilia usually experience more minor symptoms except after surgery or serious trauma. In cases of moderate haemophilia symptoms are variable which manifest along a spectrum between severe and mild forms. 
In both haemophilia A and B, there is spontaneous bleeding but a normal bleeding time, normal prothrombin time, normal thrombin time, but prolonged partial thromboplastin time. Internal bleeding is common in people with severe haemophilia and some individuals with moderate haemophilia. The most characteristic type of internal bleed is a joint bleed where blood enters into the joint spaces. This is most common with severe haemophiliacs and can occur spontaneously (without evident trauma). If not treated promptly, joint bleeds can lead to permanent joint damage and disfigurement. Bleeding into soft tissues such as muscles and subcutaneous tissues is less severe but can lead to damage and requires treatment. Children with mild to moderate haemophilia may not have any signs or symptoms at birth, especially if they do not undergo circumcision. Their first symptoms are often frequent and large bruises and haematomas from frequent bumps and falls as they learn to walk. Swelling and bruising from bleeding in the joints, soft tissue, and muscles may also occur. Children with mild haemophilia may not have noticeable symptoms for many years. Often, the first sign in very mild haemophiliacs is heavy bleeding from a dental procedure, an accident, or surgery. Females who are carriers usually have enough clotting factors from their one normal gene to prevent serious bleeding problems, though some may present as mild haemophiliacs. Complications Severe complications are much more common in cases of severe and moderate haemophilia. Complications may arise from the disease itself or from its treatment: Deep internal bleeding, e.g. deep-muscle bleeding, leading to swelling, numbness or pain of a limb. Joint damage from haemarthrosis (haemophilic arthropathy), potentially with severe pain, disfigurement, and even destruction of the joint and development of debilitating arthritis. Transfusion transmitted infection from blood transfusions that are given as treatment. Adverse reactions to clotting factor treatment, including the development of an immune inhibitor which renders factor replacement less effective. Intracranial haemorrhage is a serious medical emergency caused by the buildup of pressure inside the skull. It can cause disorientation, nausea, loss of consciousness, brain damage, and death. Haemophilic arthropathy is characterised by chronic proliferative synovitis and cartilage destruction. If an intra-articular bleed is not drained early, it may cause apoptosis of chondrocytes and affect the synthesis of proteoglycans. The hypertrophied and fragile synovial lining while attempting to eliminate excessive blood may be more likely to easily rebleed, leading to a vicious cycle of hemarthrosis-synovitis-hemarthrosis. In addition, iron deposition in the synovium may induce an inflammatory response activating the immune system and stimulating angiogenesis, resulting in cartilage and bone destruction. Genetics Typically, females possess two X-chromosomes, and males have one X and one Y-chromosome. Since the mutations causing the disease are X-linked recessive, a female carrying the defect on one of her X-chromosomes may not be affected by it, as the equivalent dominant allele on her other chromosome should express itself to produce the necessary clotting factors, due to X inactivation. Therefore, heterozygous females are just carriers of this genetic disposition. However, the Y-chromosome in the male has no gene for factors VIII or IX. 
If the genes responsible for production of factor VIII or factor IX present on a male's X-chromosome are deficient there is no equivalent on the Y-chromosome to cancel it out, so the deficient gene is not masked and the disorder will develop. Since a male receives his single X-chromosome from his mother, the son of a healthy female silently carrying the deficient gene will have a 50% chance of inheriting that gene from her and with it the disease; and if his mother is affected with haemophilia, he will have a 100% chance of being a haemophiliac. In contrast, for a female to inherit the disease, she must receive two deficient X-chromosomes, one from her mother and the other from her father (who must therefore be a haemophiliac himself). Hence, haemophilia is expressed far more commonly among males than females, while females, who must have two deficient X-chromosomes in order to have haemophilia, are far more likely to be silent carriers, survive childhood and to submit each of her genetic children to an at least 50% risk of receiving the deficient gene. However, it is possible for female carriers to become mild haemophiliacs due to lyonisation (inactivation) of the X-chromosomes. Haemophiliac daughters are more common than they once were, as improved treatments for the disease have allowed more haemophiliac males to survive to adulthood and become parents. Adult females may experience menorrhagia (heavy periods) due to the bleeding tendency. The pattern of inheritance is X-linked recessive. This type of pattern is also seen in colour blindness. A mother who is a carrier has a 50% chance of passing the faulty X-chromosome to her daughter, while an affected father will always pass on the affected gene to his daughters. A son cannot inherit the defective gene from his father. Genetic testing and genetic counselling is recommended for families with haemophilia. Prenatal testing, such as amniocentesis and chorionic villus sampling are available to pregnant women who may be carriers of the condition. As with all genetic disorders, it is also possible for a human to acquire it spontaneously through mutation, rather than inheriting it, because of a new mutation in one of their parents' gametes. Spontaneous mutations account for about 33% of all cases of haemophilia A. The most common mutation that causes severe cases of haemophilia A is an inversion within intron 22 of the factor VIII gene (F8) which is located near the tip of the X chromosome, leading to an abnormal crossover during meiosis. About 30% of cases of haemophilia B are the result of a spontaneous gene mutation. If a female gives birth to a haemophiliac son, either the female is a carrier for the blood disorder or the haemophilia was the result of a spontaneous mutation. Until modern direct DNA testing, however, it was impossible to determine if a female with only healthy children was a carrier or not. If a male has the disease and has children with a female who is not a carrier, his daughters will be carriers of haemophilia. His sons, however, will not be affected with the disease. The disease is X-linked and the father cannot pass haemophilia through the Y-chromosome. Males with the disorder are then no more likely to pass on the gene to their children than carrier females, though all daughters they sire will be carriers and all sons they father will not have haemophilia (unless the mother is a carrier) Severity There are numerous different mutations which cause each type of haemophilia. 
Due to differences in changes to the genes involved, people with haemophilia often have some level of active clotting factor. Individuals with less than 1% active factor are classified as having severe haemophilia, those with 1–5% active factor have moderate haemophilia, and those with mild haemophilia have between 5% and 40% of normal levels of active clotting factor. Diagnosis Haemophilia can be diagnosed before, during or after birth if there is a family history of the condition. Several options are available to parents. If there is no family history of haemophilia, it is usually only diagnosed when a child begins to walk or crawl. Affected children may experience joint bleeds or easy bruising. Mild haemophilia may only be discovered later, usually after an injury or a dental or surgical procedure. Before pregnancy Genetic testing and counselling are available to help determine the risk of passing the condition onto a child. This may involve testing a sample of tissue or blood to look for signs of the genetic mutation that causes haemophilia. During pregnancy A pregnant woman with a history of haemophilia in her family can test for the haemophilia gene. Such tests include: chorionic villus sampling (CVS): a small sample of the placenta is removed from the womb and tested for the haemophilia gene, usually during weeks 11–14 of pregnancy amniocentesis: a sample of amniotic fluid is taken for testing, usually during weeks 15–20 of pregnancy There is a small risk of these procedures causing problems such as miscarriage or premature labour, so the woman may discuss this with the doctor in charge of her care. After birth If haemophilia is suspected after a child has been born, a blood test can usually confirm the diagnosis. Blood from the umbilical cord can be tested at birth if there's a family history of haemophilia. A blood test will also be able to identify whether a child has haemophilia A or B, and how severe it is. Classification There are several types of haemophilia: haemophilia A, haemophilia B, haemophilia C, parahaemophilia, acquired haemophilia A, and acquired haemophilia B. Haemophilia A is a recessive X-linked genetic disorder resulting in a deficiency of functional clotting Factor VIII. Haemophilia B is also a recessive X-linked genetic disorder involving a lack of functional clotting Factor IX. Haemophilia C is an autosomal genetic disorder involving a lack of functional clotting Factor XI. Haemophilia C is not completely recessive, as heterozygous individuals also show increased bleeding. The type of haemophilia known as parahaemophilia is a mild and rare form and is due to a deficiency in factor V. This type can be inherited or acquired. A non-genetic form of haemophilia is caused by autoantibodies against factor VIII and so is known as acquired haemophilia A. It is a rare but potentially life-threatening bleeding disorder caused by the development of autoantibodies (inhibitors) directed against plasma coagulation factors. Acquired haemophilia can be associated with cancers, autoimmune disorders and following childbirth. Management There is no long-term cure. Treatment and prevention of bleeding episodes is done primarily by replacing the missing blood clotting factors. Clotting factors Clotting factors are usually not needed in mild haemophilia. In moderate haemophilia clotting factors are typically only needed when bleeding occurs or to prevent bleeding with certain events. 
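The severity bands quoted above (under 1% of normal factor activity is severe, 1–5% moderate, 5–40% mild) can be summarized in a short sketch. The cut-offs are those given in the text; real classification is a clinical judgement, not a lookup:

def haemophilia_severity(factor_activity_percent: float) -> str:
    """Rough severity band by residual clotting-factor activity (% of normal).

    Thresholds follow the commonly quoted bands: <1% severe, 1-5% moderate,
    5-40% mild; 40% and above is treated here as within the normal range.
    """
    if factor_activity_percent < 1:
        return "severe"
    if factor_activity_percent <= 5:
        return "moderate"
    if factor_activity_percent <= 40:
        return "mild"
    return "normal range"

print(haemophilia_severity(0.5))  # severe
print(haemophilia_severity(3))    # moderate
print(haemophilia_severity(25))   # mild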
In severe haemophilia preventive use is often recommended two or three times a week and may continue for life. Rapid treatment of bleeding episodes decreases damage to the body. Factor VIII is used in haemophilia A and factor IX in haemophilia B. Factor replacement can be either isolated from human plasma, recombinant, or a combination of the two. Some people develop antibodies (inhibitors) against the replacement factors given to them, so the amount of the factor has to be increased or non-human replacement products must be given, such as porcine factor VIII. If a person becomes refractory to replacement coagulation factor as a result of high levels of circulating inhibitors, this may be partially overcome with recombinant human factor VIII. In early 2008, the US Food and Drug Administration (FDA) approved an anti-haemophilic drug completely free of albumin, which made it the first anti-haemophilic drug in the US to use an entirely synthetic purification process. Since 1993 recombinant factor products (which are typically cultured in Chinese hamster ovary (CHO) tissue culture cells and involve little, if any human plasma products) have been available and have been widely used in wealthier western countries. While recombinant clotting factor products offer higher purity and safety, they are, like concentrate, extremely expensive, and not generally available in the developing world. In many cases, factor products of any sort are difficult to obtain in developing countries. Clotting factors are either given preventively or on-demand. Preventive use involves the infusion of clotting factor on a regular schedule in order to keep clotting levels sufficiently high to prevent spontaneous bleeding episodes. On-demand (or episodic) treatment involves treating bleeding episodes once they arise. In 2007, a trial comparing on-demand treatment of boys (< 30 months) with haemophilia A with prophylactic treatment (infusions of 25 IU/kg body weight of Factor VIII every other day) in respect to its effect on the prevention of joint-diseases. When the boys reached 6 years of age, 93% of those in the prophylaxis group and 55% of those in the episodic-therapy group had a normal index joint-structure on MRI. Preventative treatment, however, resulted in average costs of $300,000 per year. The author of an editorial published in the same issue of the NEJM supports the idea that prophylactic treatment not only is more effective than on demand treatment but also suggests that starting after the first serious joint-related haemorrhage may be more cost effective than waiting until the fixed age to begin. Most haemophiliacs in third world countries have limited or no access to commercial blood clotting factor products. Other Desmopressin (DDAVP) may be used in those with mild haemophilia A. Tranexamic acid or epsilon aminocaproic acid may be given along with clotting factors to prevent breakdown of clots. Pain medicines, steroids, and physical therapy may be used to reduce pain and swelling in an affected joint. In those with severe haemophilia A already receiving FVIII, emicizumab may provide some benefit. Different treatments are used to help those with an acquired form of haemophilia in addition to the normal clotting factors. Often the most effective treatment is corticosteroids which remove the auto-antibodies in half of people. As a secondary route of treatment, cyclophosphamide and cyclosporine are used and are proven effective for those who did not respond to the steroid treatments. 
In rare cases a third route or treatment is used, high doses of intravenous immunoglobulin or immunosorbent that works to help control bleeding instead of battling the auto-antibodies. Contraindications Anticoagulants such as heparin and warfarin are contraindicated for people with haemophilia as these can aggravate clotting difficulties. Also contraindicated are those drugs which have "blood thinning" side effects. For instance, medicines which contain aspirin, ibuprofen, or naproxen sodium should not be taken because they are well known to have the side effect of prolonged bleeding. Also contraindicated are activities with a high likelihood of trauma, such as motorcycling and skateboarding. Popular sports with very high rates of physical contact and injuries such as American football, hockey, boxing, wrestling, and rugby should be avoided by people with haemophilia. Other active sports like soccer, baseball, and basketball also have a high rate of injuries, but have overall less contact and should be undertaken cautiously and only in consultation with a doctor. Prognosis Like most aspects of the disorder, life expectancy varies with severity and adequate treatment. People with severe haemophilia who do not receive adequate, modern treatment have greatly shortened lifespans and often do not reach maturity. Prior to the 1960s when effective treatment became available, average life expectancy was only 11 years. By the 1980s the life span of the average haemophiliac receiving appropriate treatment was 50–60 years. Today with appropriate treatment, males with haemophilia typically have a near normal quality of life with an average lifespan approximately 10 years shorter than an unaffected male. Since the 1980s the primary leading cause of death of people with severe haemophilia has shifted from haemorrhage to HIV/AIDS acquired through treatment with contaminated blood products. The second leading cause of death related to severe haemophilia complications is intracranial haemorrhage which today accounts for one third of all deaths of people with haemophilia. Two other major causes of death include hepatitis infections causing cirrhosis and obstruction of air or blood flow due to soft tissue haemorrhage. Epidemiology Haemophilia frequency is about 1 instance in every 10,000 births (or 1 in 5,000 male births) for haemophilia A and 1 in 50,000 births for haemophilia B. About 18,000 people in the United States have haemophilia. Each year in the US, about 400 babies are born with the disorder. Haemophilia usually occurs in males and less often in females. It is estimated that about 2,500 Canadians have haemophilia A, and about 500 Canadians have haemophilia B. History Scientific discovery The excessive bleeding was known to ancient people. The Talmud instructs that a boy must not be circumcised if he had two brothers who died due to complications arising from their circumcisions, and Maimonides says that this excluded paternal half-brothers. This may have been due to a concern about haemophilia. The tenth century Arab surgeon Al-Zahrawi noted cases of excessive bleeding among men in a village. Several similar references to the disease later known as haemophilia appear throughout historical writings, though no term for inherited abnormal bleeding tendencies existed until the nineteenth century. In 1803, John Conrad Otto, a Philadelphian physician, wrote an account about "a hemorrhagic disposition existing in certain families" in which he called the affected males "bleeders". 
He recognised that the disorder was hereditary and that it affected mostly males and was passed down by healthy females. His paper was the second paper to describe important characteristics of an X-linked genetic disorder (the first paper being a description of colour blindness by John Dalton who studied his own family). Otto was able to trace the disease back to a woman who settled near Plymouth, New Hampshire, in 1720. The idea that affected males could pass the trait onto their unaffected daughters was not described until 1813 when John F. Hay published an account in The New England Journal of Medicine. In 1924, a Finnish doctor discovered a hereditary bleeding disorder similar to haemophilia localised in Åland, southwest of Finland. This bleeding disorder is called "Von Willebrand Disease". The term "haemophilia" is derived from the term "haemorrhaphilia" which was used in a description of the condition written by Friedrich Hopff in 1828, while he was a student at the University of Zurich. In 1937, Patek and Taylor, two doctors from Harvard University, discovered anti-haemophilic globulin. In 1947, Alfredo Pavlovsky, a doctor from Buenos Aires, found haemophilia A and haemophilia B to be separate diseases by doing a lab test. This test was done by transferring the blood of one haemophiliac to another haemophiliac. The fact that this corrected the clotting problem showed that there was more than one form of haemophilia. European royalty Haemophilia has featured prominently in European royalty and thus is sometimes known as 'the royal disease'. Queen Victoria passed the mutation for haemophilia B to her son Leopold and, through two of her daughters, Alice and Beatrice, to various royals across the continent, including the royal families of Spain, Germany, and Russia. In Russia, Tsarevich Alexei, the son and heir of Tsar Nicholas II, famously had haemophilia, which he had inherited from his mother, Empress Alexandra, one of Queen Victoria's granddaughters. The haemophilia of Alexei would result in the rise to prominence of the Russian mystic Grigori Rasputin at the imperial court. It was claimed that Rasputin was successful at treating Tsarevich Alexei's haemophilia. At the time, a common treatment administered by professional doctors was to use aspirin, which worsened rather than lessened the problem. It is believed that, by simply advising against the medical treatment, Rasputin could bring visible and significant improvement to the condition of Tsarevich Alexei. In Spain, Queen Victoria's youngest daughter, Princess Beatrice, had a daughter Victoria Eugenie of Battenberg, who later became Queen of Spain. Two of her sons were haemophiliacs and both died from minor car accidents. Her eldest son, Prince Alfonso of Spain, Prince of Asturias, died at the age of 31 from internal bleeding after his car hit a telephone booth. Her youngest son, Infante Gonzalo, died at age 19 from abdominal bleeding following a minor car accident in which he and his sister hit a wall while avoiding a cyclist. Neither appeared injured or sought immediate medical care and Gonzalo died two days later from internal bleeding. Treatment The method for the production of an antihaemophilic factor was discovered by Judith Graham Pool from Stanford University in 1964, and approved for commercial use in 1971 in the United States under the name Cryoprecipitated AHF. 
Together with the development of a system for transportation and storage of human plasma in 1965, this was the first time an efficient treatment for haemophilia became available. Blood contamination Up until late 1985 many people with haemophilia received clotting factor products that posed a risk of HIV and hepatitis C infection. The plasma used to create the products was not screened or tested, nor had most of the products been subject to any form of viral inactivation. Tens of thousands worldwide were infected as a result of contaminated factor products including more than 10,000 people in the United States, 3,500 British, 1,400 Japanese, 700 Canadians, 250 Irish, and 115 Iraqis. Infection via the tainted factor products had mostly stopped by 1986 by which time viral inactivation methods had largely been put into place, although some products were shown to still be dangerous in 1987. Research Gene therapy In those with severe haemophilia, gene therapy may reduce symptoms to those that a person with mild or moderate haemophilia might have. The best results have been found in haemophilia B. In 2016 early stage human research was ongoing with a few sites recruiting participants. In 2017 a gene therapy trial on nine people with haemophilia A reported that high doses did better than low doses. It is not currently an accepted treatment for haemophilia. In July 2022 results of a gene therapy candidate for haemophilia B called FLT180 were announced, it works using an adeno-associated virus (AAV) to restore the clotting factor IX (FIX) protein, normal levels of the protein were observed with low doses of the therapy but immunosuppression was necessitated to decrease the risk of vector-related immune responses. In November 2022, the first gene therapy treatment for haemophilia B was approved by the U.S. Food and Drug Administration, called Hemgenix. It is a single-dose treatment that gives the patient the genetic information required to produce Factor IX. In June 2023, the FDA approved the first gene therapy treatment for haemophilia A, called Roctavian. It was only approved for patients with severe cases, but it has been shown to reduce yearly bleeding episodes by 50%. It works similarly to Hemgenix, being administered by intravenous infusion that contains a gene for Factor VIII.
Biology and health sciences
Specific diseases
Health
14019
https://en.wikipedia.org/wiki/Horsepower
Horsepower
Horsepower (hp) is a unit of measurement of power, or the rate at which work is done, usually in reference to the output of engines or motors. There are many different standards and types of horsepower. Two common definitions used today are the imperial horsepower as in "hp" or "bhp", which is about 745.7 watts, and the metric horsepower as in "cv" or "PS", which is approximately 735.5 watts. The electric horsepower "hpE" is exactly 746 watts, while the boiler horsepower is 9,809.5 or 9,811 watts, depending on the definition used. The term was adopted in the late 18th century by Scottish engineer James Watt to compare the output of steam engines with the power of draft horses. It was later expanded to include the output power of other power-generating machinery such as piston engines, turbines, and electric motors. The definition of the unit varied among geographical regions. Most countries now use the SI unit watt for measurement of power. With the implementation of the EU Directive 80/181/EEC on 1 January 2010, the use of horsepower in the EU is permitted only as a supplementary unit. History The development of the steam engine provided a reason to compare the output of horses with that of the engines that could replace them. In 1702, Thomas Savery wrote in The Miner's Friend: So that an engine which will raise as much water as two horses, working together at one time in such a work, can do, and for which there must be constantly kept ten or twelve horses for doing the same. Then I say, such an engine may be made large enough to do the work required in employing eight, ten, fifteen, or twenty horses to be constantly maintained and kept for doing such a work... The idea was later used by James Watt to help market his improved steam engine. He had previously agreed to take royalties of one-third of the savings in coal from the older Newcomen steam engines. This royalty scheme did not work with customers who did not have existing steam engines but used horses instead. Watt determined that a horse could turn a mill wheel 144 times in an hour (or 2.4 times a minute). The wheel was 12 feet in radius; therefore, the horse travelled 2.4 × 2π × 12 feet, or about 181 feet, in one minute. Watt judged that the horse could pull with a force of 180 pounds-force. So the rate of work was roughly 180 lbf × 181 ft/min ≈ 32,600 ft⋅lbf/min. Engineering in History recounts that John Smeaton, John Desaguliers, and Thomas Tredgold had each estimated somewhat different figures for the work a horse could produce per minute, and that Watt found by experiment in 1782 a comparable figure for what a "brewery horse" could produce. James Watt and Matthew Boulton standardized the unit at 33,000 foot-pounds per minute the next year. A common legend states that the unit was created when one of Watt's first customers, a brewer, specifically demanded an engine that would match a horse, choosing the strongest horse he had and driving it to the limit. In that legend, Watt accepted the challenge and built a machine that was actually even stronger than the figure achieved by the brewer, and the output of that machine became the horsepower. In 1993, R. D. Stevenson and R. J. Wassersug published correspondence in Nature summarizing measurements and calculations of peak and sustained work rates of a horse. 
Citing measurements made at the 1926 Iowa State Fair, they reported that the peak power over a few seconds has been measured to be as high as 14.9 hp and also observed that for sustained activity, a work rate of about 1 hp per horse is consistent with agricultural advice from both the 19th and 20th centuries and also consistent with a work rate of about four times the basal rate expended by other vertebrates for sustained activity. When considering human-powered equipment, a healthy human can produce about 1.2 hp briefly (see orders of magnitude) and sustain about 0.1 hp indefinitely; trained athletes can manage up to about 2.5 hp briefly and about 0.35 hp for a period of several hours. The Jamaican sprinter Usain Bolt produced a maximum of about 3.5 hp 0.89 seconds into his 9.58-second sprint world record in 2009. In 2023 a group of engineers modified a dynamometer to be able to measure how much power a horse can produce and tested it on a single horse. Calculating power When torque T is in pound-foot units and rotational speed N is in rpm, the resulting power in horsepower is P = T × N / 5252. The constant 5252 is the rounded value of (33,000 ft⋅lbf/min)/(2π rad/rev). When torque is in inch-pounds, P = T × N / 63,025. The constant 63,025 is the approximation of (33,000 ft⋅lbf/min)(12 in/ft)/(2π rad/rev). Definitions Imperial horsepower Assuming the third CGPM (1901, CR 70) definition of standard gravity, g = 9.80665 m/s², is used to define the pound-force as well as the kilogram force, and the international avoirdupois pound (1959), one imperial horsepower is: 1 hp ≡ 33,000 ft·lbf/min (by definition) = 550 ft⋅lbf/s (since 1 min = 60 s) = 550 × 0.3048 × 0.45359237 m⋅kgf/s (since 1 ft ≡ 0.3048 m and 1 lb ≡ 0.45359237 kg) = 76.0402249068 kgf⋅m/s = 76.0402249068 × 9.80665 kg⋅m²/s³ (since g = 9.80665 m/s²) = 745.69987158227022 W ≈ 745.700 W (since 1 W ≡ 1 J/s = 1 N⋅m/s = 1 (kg⋅m/s²)⋅(m/s)). Or given that 1 hp = 550 ft⋅lbf/s, 1 ft = 0.3048 m, 1 lbf ≈ 4.448 N, 1 J = 1 N⋅m, 1 W = 1 J/s: 1 hp ≈ 745.7 W. Metric horsepower (PS, KM, cv, hk, pk, k, ks, ch) The various units used to indicate this definition (PS, KM, cv, hk, pk, k, ks and ch) all translate to horse power in English. British manufacturers often intermix metric horsepower and mechanical horsepower depending on the origin of the engine in question. DIN 66036 defines one metric horsepower (Pferdestärke, or PS) as the power to raise a mass of 75 kilograms against the Earth's gravitational force over a distance of one metre in one second: 75 kgf⋅m/s = 1 PS. This is equivalent to 735.49875 W, or 98.6% of an imperial horsepower. In 1972, the PS was replaced by the kilowatt as the official power-measuring unit in EEC directives. The native names for the metric horsepower in Italian, Dutch, French, Spanish, Portuguese, Russian, Swedish, Finnish, Estonian, Norwegian, Danish, Hungarian, Czech, Slovak, Serbo-Croatian, Bulgarian, Macedonian, Polish, Slovenian, Ukrainian, Romanian, and German likewise all translate to "horse power" or "horse strength". In the 19th century, revolutionary-era France had its own unit used to replace the cheval vapeur (horsepower); based on a 100 kgf⋅m/s standard, it was called the poncelet and was abbreviated p. Tax horsepower Tax or fiscal horsepower is a non-linear rating of a motor vehicle for tax purposes. Tax horsepower ratings were originally more or less directly related to the size of the engine; but as of 2000, many countries changed over to systems based on emissions, so are not directly comparable to older ratings. 
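The torque relations given under "Calculating power" above reduce to a single multiplication and division by 5,252 (pound-feet) or 63,025 (inch-pounds). A minimal Python sketch, with the exact form shown for comparison (the sample figures are arbitrary):

import math

def hp_from_lbft(torque_lbft: float, rpm: float) -> float:
    """Horsepower from torque in pound-feet and speed in rpm."""
    return torque_lbft * rpm / 5252   # 5252 is roughly 33,000 / (2*pi)

def hp_from_inlb(torque_inlb: float, rpm: float) -> float:
    """Horsepower from torque in inch-pounds and speed in rpm."""
    return torque_inlb * rpm / 63025  # 63,025 is roughly 33,000 * 12 / (2*pi)

# Exact form for comparison: P = 2*pi*N*T / 33,000 with T in lb-ft.
print(hp_from_lbft(300, 5252))           # 300.0 -- hp and lb-ft are numerically equal at 5,252 rpm
print(2 * math.pi * 5252 * 300 / 33000)  # ~300.0 using the unrounded constant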
The Citroën 2CV is named for its French fiscal horsepower rating, "deux chevaux" (2CV). Electrical horsepower Nameplates on electrical motors show their power output, not the power input (the power delivered at the shaft, not the power consumed to drive the motor). This power output is ordinarily stated in watts or kilowatts. In the United States, the power output is stated in horsepower which, for this purpose, is defined as exactly 746 watts. Wattage is calculated by multiplying voltage by amperage. Hydraulic horsepower Hydraulic horsepower can represent the power available within hydraulic machinery, power through the down-hole nozzle of a drilling rig, or can be used to estimate the mechanical power needed to generate a known hydraulic flow rate. It may be calculated as hydraulic horsepower = pressure × flow rate / 1,714, where pressure is in psi and flow rate is in US gallons per minute. Drilling rigs are powered mechanically by rotating the drill pipe from above. Hydraulic power is still needed though, as 1,500 to 5,000 W are required to push mud through the drill bit to clear waste rock. Additional hydraulic power may also be used to drive a down-hole mud motor to power directional drilling. When using SI units, the equation becomes coherent and there is no dividing constant: power (W) = pressure × flow rate, where pressure is in pascals (Pa) and flow rate is in cubic metres per second (m³/s). Boiler horsepower Boiler horsepower is a boiler's capacity to deliver steam to a steam engine and is not the same unit of power as the 550 ft⋅lbf/s definition. One boiler horsepower is equal to the thermal energy rate required to evaporate 34.5 lb of fresh water at 212 °F in one hour. In the early days of steam use, the boiler horsepower was roughly comparable to the horsepower of engines fed by the boiler. The term "boiler horsepower" was originally developed at the Philadelphia Centennial Exhibition in 1876, where the best steam engines of that period were tested. The average steam consumption of those engines (per output horsepower) was determined as the weight of water evaporated per hour from feed water at a stated temperature into saturated steam at a stated pressure, and that evaporation rate defined the original unit. A few years later in 1884, the ASME re-defined the boiler horsepower as the thermal output equal to the evaporation of 34.5 pounds per hour of water "from and at" 212 °F. This considerably simplified boiler testing, and provided more accurate comparisons of the boilers at that time. This revised definition is equivalent to a boiler heat output of about 33,475 BTU/h (9.81 kW). Present industrial practice is to define "boiler horsepower" as a boiler thermal output of approximately 9,810 W, which is very close to the original and revised definitions. Boiler horsepower is still used to measure boiler output in industrial boiler engineering in the US. Boiler horsepower is abbreviated BHP, which is also used in many places to symbolize brake horsepower. Drawbar power Drawbar power (dbp) is the power a railway locomotive has available to haul a train or an agricultural tractor to pull an implement. This is a measured figure rather than a calculated one. A special railway car called a dynamometer car coupled behind the locomotive keeps a continuous record of the drawbar pull exerted, and the speed. From these, the power generated can be calculated. To determine the maximum power available, a controllable load is required; it is normally a second locomotive with its brakes applied, in addition to a static load. 
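A sketch of the hydraulic-horsepower rule of thumb described above (pressure in psi times flow in US gallons per minute, divided by 1,714), alongside the coherent SI form; the example figures are illustrative only:

def hydraulic_hp(pressure_psi: float, flow_gpm: float) -> float:
    """Hydraulic horsepower from pressure (psi) and flow rate (US gal/min)."""
    return pressure_psi * flow_gpm / 1714

def hydraulic_watts(pressure_pa: float, flow_m3s: float) -> float:
    """Coherent SI form: power (W) = pressure (Pa) x flow rate (m^3/s)."""
    return pressure_pa * flow_m3s

print(hydraulic_hp(3000, 500))            # ~875 hp for an assumed 3,000 psi at 500 gal/min
print(hydraulic_watts(20_000_000, 0.03))  # 600,000 W for an assumed 20 MPa at 0.03 m^3/s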
If the drawbar force (F) is measured in pounds-force (lbf) and speed (v) is measured in miles per hour (mph), then the drawbar power (P) in horsepower (hp) is P = F × v / 375. Example: How much power is needed to pull a drawbar load of 2,025 pounds-force at 5 miles per hour? P = 2,025 × 5 / 375 = 27 hp. The constant 375 is because 1 hp = 375 lbf⋅mph. If other units are used, the constant is different. When using coherent SI units (watts, newtons, and metres per second), no constant is needed, and the formula becomes P = F × v. This formula may also be used to calculate the power of a jet engine, using the speed of the jet and the thrust required to maintain that speed. Example: how much power is generated with a thrust of 4,000 pounds at 400 miles per hour? P = 4,000 × 400 / 375 ≈ 4,267 hp. RAC horsepower (taxable horsepower) This measure was instituted by the Royal Automobile Club and was used to denote the power of early 20th-century British cars. Many cars took their names from this figure (hence the Austin Seven and Riley Nine), while others had names such as "40/50 hp", which indicated the RAC figure followed by the true measured power. Taxable horsepower does not reflect developed horsepower; rather, it is a calculated figure based on the engine's bore size, number of cylinders, and a (now archaic) presumption of engine efficiency. As new engines were designed with ever-increasing efficiency, it was no longer a useful measure, but was kept in use by UK regulations, which used the rating for tax purposes. The United Kingdom was not the only country that used the RAC rating; many states in Australia used RAC hp to determine taxation. The RAC formula was sometimes applied in British colonies as well, such as Kenya (British East Africa). The rating is calculated as RAC hp = D² × n / 2.5, where D is the diameter (or bore) of the cylinder in inches and n is the number of cylinders. Since taxable horsepower was computed based on bore and number of cylinders, not based on actual displacement, it gave rise to engines with "undersquare" dimensions (bore smaller than stroke), which tended to impose an artificially low limit on rotational speed, hampering the potential power output and efficiency of the engine. The situation persisted for several generations of four- and six-cylinder British engines: for example, Jaguar's 3.4-litre XK engine of the 1950s had six cylinders with a bore markedly smaller than its stroke, whereas most American automakers had long since moved to oversquare (large bore, short stroke) V8 engines. See, for example, the early Chrysler Hemi engine. Measurement The power of an engine may be measured or estimated at several points in the transmission of the power from its generation to its application. A number of names are used for the power developed at various stages in this process, but none is a clear indicator of either the measurement system or definition used. 
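The drawbar and RAC formulas above are both one-line calculations; a combined sketch using the constants given in the text (375 for drawbar power, 2.5 for the RAC rating), with the RAC example figures chosen purely for illustration:

def drawbar_hp(force_lbf: float, speed_mph: float) -> float:
    """Drawbar power in hp from pull (lbf) and speed (mph); 1 hp = 375 lbf*mph."""
    return force_lbf * speed_mph / 375

def rac_hp(bore_inches: float, cylinders: int) -> float:
    """RAC (taxable) horsepower from cylinder bore (inches) and cylinder count."""
    return bore_inches ** 2 * cylinders / 2.5

print(drawbar_hp(2025, 5))    # 27.0 hp, the locomotive example in the text
print(drawbar_hp(4000, 400))  # ~4266.7 hp, the jet-thrust example in the text
print(rac_hp(2.5, 4))         # 10.0 -- a hypothetical four-cylinder engine with a 2.5 in bore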
Measurement The power of an engine may be measured or estimated at several points in the transmission of the power from its generation to its application. A number of names are used for the power developed at various stages in this process, but none is a clear indicator of either the measurement system or definition used. In general: nominal horsepower is derived from the size of the engine and the piston speed and is only accurate at a steam pressure of ; indicated or gross horsepower is the theoretical capability of the engine, calculated as PLAN/33,000; brake/net/crankshaft horsepower (power delivered directly to and measured at the engine's crankshaft) equals indicated horsepower minus frictional losses within the engine (bearing drag, rod and crankshaft windage losses, oil film drag, etc.); shaft horsepower (power delivered to and measured at the output shaft of the transmission, when present in the system) equals crankshaft horsepower minus frictional losses in the transmission (bearings, gears, oil drag, windage, etc.); effective or true horsepower (thp), commonly referred to as wheel horsepower (whp), equals shaft horsepower minus frictional losses in the universal joint(s), differential, wheel bearings, tire and chain (if present). All the above assumes that no power inflation factors have been applied to any of the readings. Engine designers use expressions other than horsepower to denote objective targets or performance, such as brake mean effective pressure (BMEP). This is a coefficient of theoretical brake horsepower and cylinder pressures during combustion. Nominal horsepower Nominal horsepower (nhp) is an early 19th-century rule of thumb used to estimate the power of steam engines. It assumed a steam pressure of . Nominal horsepower = 7 × area of piston in square inches × equivalent piston speed in feet per minute / 33,000. For paddle ships, the Admiralty rule was that the piston speed in feet per minute was taken as 129.7 × (stroke)^(1/3.38). For screw steamers, the intended piston speed was used. The stroke (or length of stroke) was the distance moved by the piston measured in feet. For the nominal horsepower to equal the actual power it would be necessary for the mean steam pressure in the cylinder during the stroke to be and for the piston speed to be that generated by the assumed relationship for paddle ships. The French Navy used the same definition of nominal horsepower as the Royal Navy. Indicated horsepower Indicated horsepower (ihp) is the theoretical power of a reciprocating engine if it is completely frictionless in converting the expanding gas energy (piston pressure × displacement) in the cylinders. It is calculated from the pressures developed in the cylinders, measured by a device called an engine indicator – hence indicated horsepower. As the piston advances through its stroke, the pressure against the piston generally decreases, and the indicator device usually generates a graph of pressure vs stroke within the working cylinder. From this graph the amount of work performed during the piston stroke may be calculated. Indicated horsepower was a better measure of engine power than nominal horsepower (nhp) because it took account of steam pressure. But unlike later measures such as shaft horsepower (shp) and brake horsepower (bhp), it did not take into account power losses due to the machinery's internal friction, such as a piston sliding within the cylinder, plus bearing friction, transmission and gearbox friction, etc.
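As a rough illustration of the two steam-era formulas above, here is a minimal Python sketch. The reading of PLAN/33,000 as mean effective pressure (psi) × stroke length (ft) × piston area (in²) × power strokes per minute is the conventional one and is an assumption here, as are the example engine dimensions.

```python
# Minimal sketch of nominal horsepower (nhp) and indicated horsepower (ihp) as
# described above. The PLAN variable names follow the conventional reading of
# PLAN/33,000; the example engine dimensions are made up for illustration.
import math

def admiralty_piston_speed(stroke_ft: float) -> float:
    """Assumed piston speed (ft/min) for paddle ships: 129.7 x stroke^(1/3.38)."""
    return 129.7 * stroke_ft ** (1 / 3.38)

def nominal_hp(piston_area_in2: float, piston_speed_ft_min: float) -> float:
    """nhp = 7 x piston area (in^2) x equivalent piston speed (ft/min) / 33,000."""
    return 7 * piston_area_in2 * piston_speed_ft_min / 33_000

def indicated_hp(mean_pressure_psi: float, stroke_ft: float,
                 piston_area_in2: float, power_strokes_per_min: float) -> float:
    """ihp = P x L x A x N / 33,000 (work delivered to the pistons per minute)."""
    return (mean_pressure_psi * stroke_ft * piston_area_in2
            * power_strokes_per_min) / 33_000

if __name__ == "__main__":
    area = math.pi * (30 / 2) ** 2                 # hypothetical 30-inch bore
    speed = admiralty_piston_speed(4.0)            # hypothetical 4-foot stroke
    print(f"nominal hp   ~ {nominal_hp(area, speed):.1f}")
    print(f"indicated hp ~ {indicated_hp(20, 4.0, area, 40):.1f}")
```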
Brake horsepower Brake horsepower (bhp) is the power measured using a brake-type (load) dynamometer at a specified location, such as the crankshaft, output shaft of the transmission, rear axle or rear wheels. In Europe, the DIN 70020 standard tests the engine fitted with all ancillaries and the exhaust system as used in the car. The older American standard (SAE gross horsepower, referred to as bhp) used an engine without alternator, water pump, and other auxiliary components such as power steering pump, muffled exhaust system, etc., so the figures were higher than the European figures for the same engine. The newer American standard (referred to as SAE net horsepower) tests an engine with all the auxiliary components (see "Engine power test standards" below). Brake refers to the device used to apply a load that balances the engine's output force and holds it at a desired rotational speed. During testing, the output torque and rotational speed are measured to determine the brake horsepower. Horsepower was originally measured and calculated by use of the "indicator diagram" (a James Watt invention of the late 18th century), and later by means of a Prony brake connected to the engine's output shaft. Modern dynamometers use any of several braking methods to measure the engine's brake horsepower, the actual output of the engine itself, before losses to the drivetrain. Shaft horsepower Shaft horsepower (shp) is the power delivered to a propeller or turbine shaft. Shaft horsepower is a common rating for turboshaft and turboprop engines, industrial turbines, and some marine applications. Equivalent shaft horsepower (eshp) is sometimes used to rate turboprop engines. It includes the equivalent power derived from residual jet thrust from the turbine exhaust, with a given amount of residual jet thrust counted as equivalent to one unit of horsepower. Engine power test standards There exist a number of different standards determining how the power and torque of an automobile engine are measured and corrected. Correction factors are used to adjust power and torque measurements to standard atmospheric conditions, to provide a more accurate comparison between engines as they are affected by the pressure, humidity, and temperature of ambient air. Some standards are described below. Society of Automotive Engineers/SAE International Early "SAE horsepower" In the early twentieth century, a so-called "SAE horsepower" was sometimes quoted for U.S. automobiles. This long predates the Society of Automotive Engineers (SAE) horsepower measurement standards and was another name for the industry standard ALAM or NACC horsepower figure and the same as the British RAC horsepower also used for tax purposes. The Alliance for Automotive Innovation is the current successor of ALAM and NACC. SAE gross power Prior to the 1972 model year, American automakers rated and advertised their engines in brake horsepower, bhp, which was a version of brake horsepower called SAE gross horsepower because it was measured according to Society of Automotive Engineers (SAE) standards (J245 and J1995) that call for a stock test engine without accessories (such as dynamo/alternator, radiator fan, water pump), and sometimes fitted with long tube test headers in lieu of the OEM exhaust manifolds. This contrasts with both SAE net power and DIN 70020 standards, which account for engine accessories (but not transmission losses). The atmospheric correction standards for barometric pressure, humidity and temperature for SAE gross power testing were relatively idealistic. SAE net power In the United States, the term bhp fell into disuse in 1971–1972, as automakers began to quote power in terms of SAE net horsepower in accord with SAE standard J1349.
Like SAE gross and other brake horsepower protocols, SAE net hp is measured at the engine's crankshaft, and so does not account for transmission losses. However, similar to the DIN 70020 standard, SAE net power testing protocol calls for standard production-type belt-driven accessories, air cleaner, emission controls, exhaust system, and other power-consuming accessories. This produces ratings in closer alignment with the power produced by the engine as it is actually configured and sold. SAE certified power In 2005, the SAE introduced "SAE Certified Power" with SAE J2723. To attain certification the test must follow the SAE standard in question, take place in an ISO 9000/9002 certified facility and be witnessed by an SAE approved third party. A few manufacturers such as Honda and Toyota switched to the new ratings immediately. The rating for Toyota's Camry 3.0 L 1MZ-FE V6 fell from . The company's Lexus ES 330 and Camry SE V6 (3.3 L V6) were previously rated at but the ES 330 dropped to while the Camry declined to . The first engine certified under the new program was the 7.0 L LS7 used in the 2006 Chevrolet Corvette Z06. Certified power rose slightly from . While Toyota and Honda are retesting their entire vehicle lineups, other automakers generally are retesting only those with updated powertrains. For example, the 2006 Ford Five Hundred is rated at , the same as that of 2005 model. However, the 2006 rating does not reflect the new SAE testing procedure, as Ford did not opt to incur the extra expense of retesting its existing engines. Over time, most automakers are expected to comply with the new guidelines. SAE tightened its horsepower rules to eliminate the opportunity for engine manufacturers to manipulate factors affecting performance such as how much oil was in the crankcase, engine control system calibration, and whether an engine was tested with high octane fuel. In some cases, such can add up to a change in horsepower ratings. Deutsches Institut für Normung 70020 (DIN 70020) DIN 70020 is a German DIN standard for measuring road vehicle horsepower. DIN hp is measured at the engine's output shaft as a form of metric horsepower rather than mechanical horsepower. Similar to SAE net power rating, and unlike SAE gross power, DIN testing measures the engine as installed in the vehicle, with cooling system, charging system and stock exhaust system all connected. DIN hp is often abbreviated as "PS", derived from the German word Pferdestärke (literally, "horsepower"). CUNA A test standard by Italian CUNA (Commissione Tecnica per l'Unificazione nell'Automobile, Technical Commission for Automobile Unification), a federated entity of standards organisation UNI, was formerly used in Italy. CUNA prescribed that the engine be tested with all accessories necessary to its running fitted (such as the water pump), while all others – such as alternator/dynamo, radiator fan, and exhaust manifold – could be omitted. All calibration and accessories had to be as on production engines. Economic Commission for Europe R24 ECE R24 is a UN standard for the approval of compression ignition engine emissions, installation and measurement of engine power. It is similar to DIN 70020 standard, but with different requirements for connecting an engine's fan during testing causing it to absorb less power from the engine. Economic Commission for Europe R85 ECE R85 is a UN standard for the approval of internal combustion engines with regard to the measurement of the net power. 
80/1269/EEC 80/1269/EEC of 16 December 1980 is a European Union standard for road vehicle engine power. International Organization for Standardization The International Organization for Standardization (ISO) publishes several standards for measuring engine horsepower. ISO 14396 specifies the additional requirements and test methods for determining the power of reciprocating internal combustion engines when presented for an ISO 8178 exhaust emission test. It applies to reciprocating internal combustion engines for land, rail and marine use, excluding engines of motor vehicles primarily designed for road use. ISO 1585 is an engine net power test code intended for road vehicles. ISO 2534 is an engine gross power test code intended for road vehicles. ISO 4164 is an engine net power test code intended for mopeds. ISO 4106 is an engine net power test code intended for motorcycles. ISO 9249 is an engine net power test code intended for earth moving machines. Japanese Industrial Standard D 1001 JIS D 1001 is a Japanese net and gross engine power test code for automobiles and trucks with spark-ignition, diesel, or fuel-injected engines.
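The standards above quote output variously in mechanical horsepower, metric horsepower (the PS of DIN ratings), or kilowatts, so published figures for the same engine can differ simply through unit choice. A minimal conversion sketch, using the standard definitions 1 hp = 550 ft·lbf/s ≈ 745.70 W and 1 PS = 75 kgf·m/s = 735.49875 W:

```python
# Minimal unit-conversion sketch for comparing figures quoted under different
# standards. 1 mechanical hp = 550 ft·lbf/s ≈ 745.6999 W; 1 metric horsepower
# (PS, used by DIN ratings) = 75 kgf·m/s = 735.49875 W.

HP_WATTS = 745.699872   # mechanical horsepower
PS_WATTS = 735.49875    # metric horsepower (Pferdestärke)

def kw_to_hp(kw: float) -> float:
    return kw * 1000 / HP_WATTS

def kw_to_ps(kw: float) -> float:
    return kw * 1000 / PS_WATTS

def ps_to_hp(ps: float) -> float:
    return ps * PS_WATTS / HP_WATTS

if __name__ == "__main__":
    # A 100 kW engine, expressed under both conventions:
    print(f"{kw_to_hp(100):.1f} hp, {kw_to_ps(100):.1f} PS")   # ~134.1 hp, ~136.0 PS
    print(f"100 PS = {ps_to_hp(100):.1f} hp")                   # ~98.6 hp
```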
Physical sciences
Power
Basics and measurement
14022
https://en.wikipedia.org/wiki/Haber%20process
Haber process
The Haber process, also called the Haber–Bosch process, is the main industrial procedure for the production of ammonia. It converts atmospheric nitrogen (N2) to ammonia (NH3) by a reaction with hydrogen (H2) using finely divided iron metal as a catalyst: This reaction is thermodynamically favorable at room temperature, but the kinetics are prohibitively slow. At high temperatures at which catalysts are active enough that the reaction proceeds to equilibrium, the reaction is reactant-favored rather than product-favored. As a result, high pressures are needed to drive the reaction forward. The German chemists Fritz Haber and Carl Bosch developed the process in the first decade of the 20th century, and its improved efficiency over existing methods such as the Birkeland-Eyde and Frank-Caro processes was a major advancement in the industrial production of ammonia. The Haber process can be combined with steam reforming to produce ammonia with just three chemical inputs: water, natural gas, and atmospheric nitrogen. Both Haber and Bosch were eventually awarded the Nobel Prize in Chemistry: Haber in 1918 for ammonia synthesis specifically, and Bosch in 1931 for related contributions to high-pressure chemistry. History During the 19th century, the demand rapidly increased for nitrates and ammonia for use as fertilizers, which supply plants with the nutrients they need to grow, and for industrial feedstocks. The main source was mining niter deposits and guano from tropical islands. At the beginning of the 20th century these reserves were thought insufficient to satisfy future demands, and research into new potential sources of ammonia increased. Although atmospheric nitrogen (N2) is abundant, comprising ~78% of the air, it is exceptionally stable and does not readily react with other chemicals. Haber, with his assistant Robert Le Rossignol, developed the high-pressure devices and catalysts needed to demonstrate the Haber process at a laboratory scale. They demonstrated their process in the summer of 1909 by producing ammonia from the air, drop by drop, at the rate of about per hour. The process was purchased by the German chemical company BASF, which assigned Carl Bosch the task of scaling up Haber's tabletop machine to industrial scale. He succeeded in 1910. Haber and Bosch were later awarded Nobel Prizes, in 1918 and 1931 respectively, for their work in overcoming the chemical and engineering problems of large-scale, continuous-flow, high-pressure technology. Ammonia was first manufactured using the Haber process on an industrial scale in 1913 in BASF's Oppau plant in Germany, reaching 20 tonnes/day in 1914. During World War I, the production of munitions required large amounts of nitrate. The Allied powers had access to large deposits of sodium nitrate in Chile (Chile saltpetre) controlled by British companies. India had large supplies too, but it was also controlled by the British. Moreover, even if German commercial interests had nominal legal control of such resources, the Allies controlled the sea lanes and imposed a highly effective blockade which would have prevented such supplies from reaching Germany. The Haber process proved so essential to the German war effort that it is considered virtually certain Germany would have been defeated in a matter of months without it. Synthetic ammonia from the Haber process was used for the production of nitric acid, a precursor to the nitrates used in explosives. 
The original Haber–Bosch reaction chambers used osmium as the catalyst, but this was available in extremely small quantities. Haber noted that uranium was almost as effective and easier to obtain than osmium. In 1909, BASF researcher Alwin Mittasch discovered a much less expensive iron-based catalyst that is still used. Gerhard Ertl later made major contributions to elucidating the mechanism of this catalysis. The most popular catalysts are based on iron promoted with K2O, CaO, SiO2, and Al2O3. During the interwar years, alternative processes were developed, most notably the Casale process, the Claude process, and the Mont-Cenis process developed by the Friedrich Uhde Ingenieurbüro. Luigi Casale and Georges Claude proposed to increase the pressure of the synthesis loop to , thereby increasing the single-pass ammonia conversion and making nearly complete liquefaction at ambient temperature feasible. Claude proposed to have three or four converters with liquefaction steps in series, thereby avoiding recycling. Most plants continue to use the original Haber process ( and ), albeit with improved single-pass conversion and lower energy consumption due to process and catalyst optimization. Process Combined with the energy needed to produce hydrogen and purified atmospheric nitrogen, ammonia production is energy-intensive, accounting for 1% to 2% of global energy consumption, 3% of global carbon emissions, and 3% to 5% of natural gas consumption. Hydrogen required for ammonia synthesis is most often produced through gasification of carbon-containing material, mostly natural gas, but other potential carbon sources include coal, petroleum, peat, biomass, or waste. As of 2012, 72% of global ammonia production was based on natural gas using the steam reforming process; in China, by contrast, natural gas and coal accounted for 20% and 75% respectively as of 2022. Hydrogen can also be produced from water and electricity using electrolysis: at one time, most of Europe's ammonia was produced from the Hydro plant at Vemork. Other possibilities include biological hydrogen production or photolysis, but at present, steam reforming of natural gas is the most economical means of mass-producing hydrogen. The choice of catalyst is important for synthesizing ammonia. In 2012, Hideo Hosono's group found that ruthenium loaded on the calcium aluminum oxide electride C12A7 works well as a catalyst and pursued more efficient formation. This method is implemented in a small plant for ammonia synthesis in Japan. In 2019, Hosono's group found another catalyst, a novel perovskite oxynitride-hydride that works at lower temperature and without costly ruthenium. Hydrogen production The major source of hydrogen is methane. Steam reforming of natural gas extracts hydrogen from methane in a high-temperature, high-pressure tube inside a reformer with a nickel catalyst. Other fossil fuel sources include coal, heavy fuel oil and naphtha. Green hydrogen is produced without fossil fuels or carbon dioxide emissions from biomass, water electrolysis and thermochemical (solar or another heat source) water splitting. Starting with a natural gas (CH4) feedstock, the steps are as follows: Remove sulfur compounds from the feedstock, because sulfur deactivates the catalysts used in subsequent steps.
Sulfur removal requires catalytic hydrogenation to convert sulfur compounds in the feedstocks to gaseous hydrogen sulfide (hydrodesulfurization, hydrotreating): H2 + RSH -> RH + H2S Hydrogen sulfide is adsorbed and removed by passing it through beds of zinc oxide where it is converted to solid zinc sulfide: H2S + ZnO -> ZnS + H2O Catalytic steam reforming of the sulfur-free feedstock forms hydrogen plus carbon monoxide: CH4 + H2O -> CO + 3 H2 Catalytic shift conversion converts the carbon monoxide to carbon dioxide and more hydrogen: CO + H2O -> CO2 + H2 Carbon dioxide is removed either by absorption in aqueous ethanolamine solutions or by adsorption in pressure swing adsorbers (PSA) using proprietary solid adsorption media. The final step in producing hydrogen is to use catalytic methanation to remove residual carbon monoxide or carbon dioxide: CO + 3 H2 -> CH4 + H2O CO2 + 4 H2 -> CH4 + 2 H2O Ammonia production The hydrogen is catalytically reacted with nitrogen (derived from process air) to form anhydrous liquid ammonia. It is difficult and expensive, as lower temperatures result in slower reaction kinetics (hence a slower reaction rate) and high pressure requires high-strength pressure vessels that resist hydrogen embrittlement. Diatomic nitrogen is bound together by a triple bond, which makes it relatively inert. Yield and efficiency are low, meaning that the ammonia must be extracted and the gases reprocessed for the reaction to proceed at an acceptable pace. This step is known as the ammonia synthesis loop: 3 H2 + N2 -> 2 NH3 The gases (nitrogen and hydrogen) are passed over four beds of catalyst, with cooling between each pass to maintain a reasonable equilibrium constant. On each pass, only about 15% conversion occurs, but unreacted gases are recycled, and eventually conversion of 97% is achieved. Due to the nature of the (typically multi-promoted magnetite) catalyst used in the ammonia synthesis reaction, only low levels of oxygen-containing (especially CO, CO2 and H2O) compounds can be tolerated in the hydrogen/nitrogen mixture. Relatively pure nitrogen can be obtained by air separation, but additional oxygen removal may be required. Because of relatively low single pass conversion rates (typically less than 20%), a large recycle stream is required. This can lead to the accumulation of inerts in the gas. Nitrogen gas (N2) is unreactive because the atoms are held together by triple bonds. The Haber process relies on catalysts that accelerate the scission of these bonds. Two opposing considerations are relevant: the equilibrium position and the reaction rate. At room temperature, the equilibrium is in favor of ammonia, but the reaction does not proceed at a detectable rate due to its high activation energy. Because the reaction is exothermic, the equilibrium constant decreases with increasing temperature following Le Châtelier's principle. It becomes unity at around . Above this temperature, the equilibrium quickly becomes unfavorable at atmospheric pressure, according to the Van 't Hoff equation. Lowering the temperature is unhelpful because the catalyst requires a temperature of at least 400 °C to be efficient. Increased pressure favors the forward reaction because 4 moles of reactant produce 2 moles of product, and the pressure used () alters the equilibrium concentrations to give a substantial ammonia yield. 
The reason for this is evident in the equilibrium relationship K_eq = [y_NH3² φ_NH3² / (y_N2 φ_N2 · y_H2³ φ_H2³)] × (P/P⁰)⁻², where φ_i is the fugacity coefficient of species i, y_i is the mole fraction of the same species, P is the reactor pressure, and P⁰ is standard pressure. Economically, reactor pressurization is expensive: pipes, valves, and reaction vessels need to be strong enough, and there are safety considerations in operating at 20 MPa. Compressors take considerable energy, as work must be done on the (compressible) gas. Thus, the compromise used gives a single-pass yield of around 15%. While removing the ammonia from the system as it forms would increase the reaction yield, in-situ removal is not used in practice, since the temperature inside the reactor is too high; instead ammonia is removed from the gases leaving the reaction vessel. The hot gases are cooled under high pressure, allowing the ammonia to condense and be removed as a liquid. Unreacted hydrogen and nitrogen gases are returned to the reaction vessel for another round. While most ammonia is removed (typically down to 2–5 mol.%), some ammonia remains in the recycle stream. In academic literature, a more complete separation of ammonia has been proposed by absorption in metal halides, metal-organic frameworks or zeolites. Such a process is called an absorbent-enhanced Haber process or adsorbent-enhanced Haber–Bosch process. Pressure/temperature The steam reforming, shift conversion, carbon dioxide removal, and methanation steps each operate at absolute pressures of about 25 to 35 bar, while the ammonia synthesis loop operates at elevated temperatures and at pressures ranging from 60 to 180 bar depending upon the method used. The resulting ammonia must then be separated from the residual hydrogen and nitrogen. Catalysts The Haber–Bosch process relies on catalysts to accelerate N2 hydrogenation. The catalysts are heterogeneous solids that interact with gaseous reagents. The catalyst typically consists of finely divided iron bound to an iron oxide carrier containing promoters possibly including aluminium oxide, potassium oxide, calcium oxide, potassium hydroxide, molybdenum, and magnesium oxide. Iron-based catalysts The iron catalyst is obtained from finely ground iron powder, which is usually obtained by reduction of high-purity magnetite (Fe3O4). The pulverized iron is oxidized to give magnetite or wüstite (FeO, ferrous oxide) particles of a specific size. The magnetite (or wüstite) particles are then partially reduced, removing some of the oxygen. The resulting catalyst particles consist of a core of magnetite, encased in a shell of wüstite, which in turn is surrounded by an outer shell of metallic iron. The catalyst maintains most of its bulk volume during the reduction, resulting in a highly porous, high-surface-area material, which enhances its catalytic effectiveness. Minor components include calcium and aluminium oxides, which support the iron catalyst and help it maintain its surface area. These oxides of Ca, Al, K, and Si are unreactive to reduction by hydrogen. The production of the catalyst requires a particular melting process in which the raw materials used must be free of catalyst poisons and the promoter aggregates must be evenly distributed in the magnetite melt. Rapid cooling of the magnetite melt, which has an initial temperature of about 3500 °C, produces the desired precursor. Unfortunately, the rapid cooling ultimately forms a catalyst of reduced abrasion resistance. Despite this disadvantage, the method of rapid cooling is often employed.
The reduction of the precursor magnetite to α-iron is carried out directly in the production plant with synthesis gas. The reduction of the magnetite proceeds via the formation of wüstite (FeO) so that particles with a core of magnetite become surrounded by a shell of wüstite. The further reduction of magnetite and wüstite leads to the formation of α-iron, which forms together with the promoters the outer shell. The involved processes are complex and depend on the reduction temperature: At lower temperatures, wüstite disproportionates into an iron phase and a magnetite phase; at higher temperatures, the reduction of the wüstite and magnetite to iron dominates. The α-iron forms primary crystallites with a diameter of about 30 nanometers. These crystallites form a bimodal pore system with pore diameters of about 10 nanometers (produced by the reduction of the magnetite phase) and of 25 to 50 nanometers (produced by the reduction of the wüstite phase). With the exception of cobalt oxide, the promoters are not reduced. During the reduction of the iron oxide with synthesis gas, water vapor is formed. This water vapor must be considered for high catalyst quality as contact with the finely divided iron would lead to premature aging of the catalyst through recrystallization, especially in conjunction with high temperatures. The vapor pressure of the water in the gas mixture produced during catalyst formation is thus kept as low as possible, target values are below 3 gm−3. For this reason, the reduction is carried out at high gas exchange, low pressure, and low temperatures. The exothermic nature of the ammonia formation ensures a gradual increase in temperature. The reduction of fresh, fully oxidized catalyst or precursor to full production capacity takes four to ten days. The wüstite phase is reduced faster and at lower temperatures than the magnetite phase (Fe3O4). After detailed kinetic, microscopic, and X-ray spectroscopic investigations it was shown that wüstite reacts first to metallic iron. This leads to a gradient of iron(II) ions, whereby these diffuse from the magnetite through the wüstite to the particle surface and precipitate there as iron nuclei. A high-activity novel catalyst based on this phenomenon was discovered in the 1980s at the Zhejiang University of Technology and commercialized by 2003. Pre-reduced, stabilized catalysts occupy a significant market share. They are delivered showing the fully developed pore structure, but have been oxidized again on the surface after manufacture and are therefore no longer pyrophoric. The reactivation of such pre-reduced catalysts requires only 30 to 40 hours instead of several days. In addition to the short start-up time, they have other advantages such as higher water resistance and lower weight. Catalysts other than iron Many efforts have been made to improve the Haber–Bosch process. Many metals were tested as catalysts. The requirement for suitability is the dissociative adsorption of nitrogen (i. e. the nitrogen molecule must be split into nitrogen atoms upon adsorption). If the binding of the nitrogen is too strong, the catalyst is blocked and the catalytic ability is reduced (self-poisoning). The elements in the periodic table to the left of the iron group show such strong bonds. Further, the formation of surface nitrides makes, for example, chromium catalysts ineffective. Metals to the right of the iron group, in contrast, adsorb nitrogen too weakly for ammonia synthesis. Haber initially used catalysts based on osmium and uranium. 
Uranium reacts to form its nitride during catalysis, while osmium oxide is rare. According to theoretical and practical studies, improvements over pure iron are limited. The activity of iron catalysts is increased by the inclusion of cobalt. Ruthenium Ruthenium forms highly active catalysts. Allowing milder operating pressures and temperatures, Ru-based materials are referred to as second-generation catalysts. Such catalysts are prepared by the decomposition of triruthenium dodecacarbonyl on graphite. A drawback of activated-carbon-supported ruthenium-based catalysts is the methanation of the support in the presence of hydrogen. Their activity is strongly dependent on the catalyst carrier and the promoters. A wide range of substances can be used as carriers, including carbon, magnesium oxide, aluminium oxide, zeolites, spinels, and boron nitride. Ruthenium-on-activated-carbon catalysts have been used industrially in the KBR Advanced Ammonia Process (KAAP) since 1992. The carbon carrier is partially degraded to methane; however, this can be mitigated by a special treatment of the carbon at 1500 °C, thus prolonging the catalyst lifetime. In addition, the finely dispersed carbon poses a risk of explosion. For these reasons and due to its low acidity, magnesium oxide has proven to be a good choice of carrier. Carriers with acidic properties extract electrons from ruthenium, make it less reactive, and have the undesirable effect of binding ammonia to the surface. Catalyst poisons Catalyst poisons lower catalyst activity. They are usually impurities in the synthesis gas. Permanent poisons cause irreversible loss of catalytic activity, while temporary poisons lower the activity while present. Sulfur compounds, phosphorus compounds, arsenic compounds, and chlorine compounds are permanent poisons. Oxygenic compounds like water, carbon monoxide, carbon dioxide, and oxygen are temporary poisons. Although chemically inert components of the synthesis gas mixture such as noble gases or methane are not strictly poisons, they accumulate through the recycling of the process gases and thus lower the partial pressure of the reactants, which in turn slows conversion. Industrial production Synthesis parameters The reaction is N2 + 3 H2 ⇌ 2 NH3. It is an exothermic equilibrium reaction in which the gas volume is reduced. The equilibrium constant Keq of the reaction decreases with increasing temperature (see table). Since the reaction is exothermic, the equilibrium of the reaction shifts at lower temperatures to the ammonia side. Furthermore, four volumetric units of the raw materials produce two volumetric units of ammonia. According to Le Chatelier's principle, higher pressure favours ammonia. High pressure is necessary to ensure sufficient surface coverage of the catalyst with nitrogen. For this reason, a ratio of nitrogen to hydrogen of 1 to 3, a pressure of 250 to 350 bar, a temperature of 450 to 550 °C and α-iron are optimal. The catalyst ferrite (α-Fe) is produced in the reactor by the reduction of magnetite with hydrogen. The catalyst has its highest efficiency at temperatures of about 400 to 500 °C. Even though the catalyst greatly lowers the activation energy for the cleavage of the triple bond of the nitrogen molecule, high temperatures are still required for an appropriate reaction rate. At the industrially used reaction temperature of 450 to 550 °C an optimum between the decomposition of ammonia into the starting materials and the effectiveness of the catalyst is achieved.
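The interplay of temperature and pressure described above can be made concrete with a small, idealized calculation. This is a minimal sketch, assuming ideal-gas behaviour, a constant reaction enthalpy of roughly −92 kJ/mol and an equilibrium constant of roughly 5.6 × 10⁵ at 298 K (textbook-level values, not taken from this article); it estimates the equilibrium ammonia fraction for a stoichiometric 1:3 feed, which real converters approach but do not reach in a single pass.

```python
# A minimal, idealized sketch of how temperature and pressure set the equilibrium
# ammonia fraction for N2 + 3 H2 <=> 2 NH3 with a stoichiometric 1:3 feed.
# Assumptions (not from the text above): ideal-gas behaviour, a constant reaction
# enthalpy of about -92.2 kJ/mol, and K ~ 5.6e5 at 298 K from standard formation
# data; real plants correct for fugacity and run far from equilibrium.

import math

R = 8.314          # J/(mol K)
DH = -92_200.0     # J/mol, reaction enthalpy (approximate, assumed constant)
K_298 = 5.6e5      # equilibrium constant at 298 K (approximate)

def k_eq(T: float) -> float:
    """Van 't Hoff extrapolation of the equilibrium constant to temperature T (K)."""
    return K_298 * math.exp(-DH / R * (1.0 / T - 1.0 / 298.15))

def nh3_mole_fraction(T: float, P_bar: float) -> float:
    """Equilibrium NH3 mole fraction for a 1:3 N2/H2 feed at pressure P (bar)."""
    target = k_eq(T) * P_bar ** 2          # = y_NH3^2 / (y_N2 * y_H2^3)
    lo, hi = 1e-9, 1.0 - 1e-9              # extent of reaction per mole of N2 fed
    for _ in range(200):                   # bisection on the extent
        xi = 0.5 * (lo + hi)
        total = 4.0 - 2.0 * xi
        ratio = (2*xi/total)**2 / ((1-xi)/total * (3*(1-xi)/total)**3)
        lo, hi = (xi, hi) if ratio < target else (lo, xi)
    xi = 0.5 * (lo + hi)
    return 2.0 * xi / (4.0 - 2.0 * xi)

if __name__ == "__main__":
    for T_c, P in [(450, 200), (450, 10), (550, 200)]:
        y = nh3_mole_fraction(T_c + 273.15, P)
        print(f"{T_c} degC, {P} bar: equilibrium NH3 ~ {100*y:.0f} mol%")
```

Under these assumptions the sketch reproduces the qualitative picture above: raising the pressure at a given temperature increases the attainable ammonia fraction, while raising the temperature lowers it.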
The formed ammonia is continuously removed from the system. The volume fraction of ammonia in the gas mixture is about 20%. The inert components, especially the noble gases such as argon, should not exceed a certain content in order not to reduce the partial pressure of the reactants too much. To remove the inert gas components, part of the gas is removed and the argon is separated in a gas separation plant. The extraction of pure argon from the circulating gas is carried out using the Linde process. Large-scale implementation Modern ammonia plants produce more than 3000 tons per day in one production line. The following diagram shows the set-up of a modern (designed in the early 1960s by Kellogg) "single-train" Haber–Bosch plant: Depending on its origin, the synthesis gas must first be freed from impurities such as hydrogen sulfide or organic sulphur compounds, which act as a catalyst poison. High concentrations of hydrogen sulfide, which occur in synthesis gas from carbonization coke, are removed in a wet cleaning stage such as the sulfosolvan process, while low concentrations are removed by adsorption on activated carbon. Organosulfur compounds are separated by pressure swing adsorption together with carbon dioxide after CO conversion. To produce hydrogen by steam reforming, methane reacts with water vapor using a nickel oxide-alumina catalyst in the primary reformer to form carbon monoxide and hydrogen. The energy required for this, the enthalpy ΔH, is 206 kJ/mol. The methane gas reacts in the primary reformer only partially. To increase the hydrogen yield and keep the content of inert components (i. e. methane) as low as possible, the remaining methane gas is converted in a second step with oxygen to hydrogen and carbon monoxide in the secondary reformer. The secondary reformer is supplied with air as the oxygen source. Also, the required nitrogen for the subsequent ammonia synthesis is added to the gas mixture. In the third step, the carbon monoxide is oxidized to carbon dioxide, which is called CO conversion or water-gas shift reaction. Carbon monoxide and carbon dioxide would form carbamates with ammonia, which would clog (as solids) pipelines and apparatus within a short time. In the following process step, the carbon dioxide must therefore be removed from the gas mixture. In contrast to carbon monoxide, carbon dioxide can easily be removed from the gas mixture by gas scrubbing with triethanolamine. The gas mixture then still contains methane and noble gases such as argon, which, however, behave inertly. The gas mixture is then compressed to operating pressure by turbo compressors. The resulting compression heat is dissipated by heat exchangers; it is used to preheat raw gases. The actual production of ammonia takes place in the ammonia reactor. The first reactors were bursting under high pressure because the atomic hydrogen in the carbonaceous steel partially recombined into methane and produced cracks in the steel. Bosch, therefore, developed tube reactors consisting of a pressure-bearing steel tube in which a low-carbon iron lining tube was inserted and filled with the catalyst. Hydrogen that diffused through the inner steel pipe escaped to the outside via thin holes in the outer steel jacket, the so-called Bosch holes. A disadvantage of the tubular reactors was the relatively high-pressure loss, which had to be applied again by compression. The development of hydrogen-resistant chromium-molybdenum steels made it possible to construct single-walled pipes. 
Modern ammonia reactors are designed as multi-storey reactors with a low-pressure drop, in which the catalysts are distributed as fills over about ten storeys one above the other. The gas mixture flows through them one after the other from top to bottom. Cold gas is injected from the side for cooling. A disadvantage of this reactor type is the incomplete conversion of the cold gas mixture in the last catalyst bed. Alternatively, the reaction mixture between the catalyst layers is cooled using heat exchangers, whereby the hydrogen-nitrogen mixture is preheated to the reaction temperature. Reactors of this type have three catalyst beds. In addition to good temperature control, this reactor type has the advantage of better conversion of the raw material gases compared to reactors with cold gas injection. Uhde has developed and is using an ammonia converter with three radial flow catalyst beds and two internal heat exchangers instead of axial flow catalyst beds. This further reduces the pressure drop in the converter. The reaction product is continuously removed for maximum yield. The gas mixture is cooled to 450 °C in a heat exchanger using water, freshly supplied gases, and other process streams. The ammonia also condenses and is separated in a pressure separator. Unreacted nitrogen and hydrogen are then compressed back to the process by a circulating gas compressor, supplemented with fresh gas, and fed to the reactor. In a subsequent distillation, the product ammonia is purified. Mechanism Elementary steps The mechanism of ammonia synthesis contains the following seven elementary steps: transport of the reactants from the gas phase through the boundary layer to the surface of the catalyst. pore diffusion to the reaction center adsorption of reactants reaction desorption of product transport of the product through the pore system back to the surface transport of the product into the gas phase Transport and diffusion (the first and last two steps) are fast compared to adsorption, reaction, and desorption because of the shell structure of the catalyst. It is known from various investigations that the rate-determining step of the ammonia synthesis is the dissociation of nitrogen. In contrast, exchange reactions between hydrogen and deuterium on the Haber–Bosch catalysts still take place at temperatures of at a measurable rate; the exchange between deuterium and hydrogen on the ammonia molecule also takes place at room temperature. Since the adsorption of both molecules is rapid, it cannot determine the speed of ammonia synthesis. In addition to the reaction conditions, the adsorption of nitrogen on the catalyst surface depends on the microscopic structure of the catalyst surface. Iron has different crystal surfaces, whose reactivity is very different. The Fe(111) and Fe(211) surfaces have by far the highest activity. The explanation for this is that only these surfaces have so-called C7 sites – these are iron atoms with seven closest neighbours. The dissociative adsorption of nitrogen on the surface follows the following scheme, where S* symbolizes an iron atom on the surface of the catalyst: N2 → S*–N2 (γ-species) → S*–N2–S* (α-species) → 2 S*–N (β-species, surface nitride) The adsorption of nitrogen is similar to the chemisorption of carbon monoxide. On a Fe(111) surface, the adsorption of nitrogen first leads to an adsorbed γ-species with an adsorption energy of 24 kJmol−1 and an N-N stretch vibration of 2100 cm−1. 
Since the nitrogen is isoelectronic to carbon monoxide, it adsorbs in an on-end configuration in which the molecule is bound perpendicular to the metal surface at one nitrogen atom. This has been confirmed by photoelectron spectroscopy. Ab-initio-MO calculations have shown that, in addition to the σ binding of the free electron pair of nitrogen to the metal, there is a π binding from the d orbitals of the metal to the π* orbitals of nitrogen, which strengthens the iron-nitrogen bond. The nitrogen in the α state is more strongly bound with 31 kJmol−1. The resulting N–N bond weakening could be experimentally confirmed by a reduction of the wave numbers of the N–N stretching oscillation to 1490 cm−1. Further heating of the Fe(111) area covered by α-N2 leads to both desorption and the emergence of a new band at 450 cm−1. This represents a metal-nitrogen oscillation, the β state. A comparison with vibration spectra of complex compounds allows the conclusion that the N2 molecule is bound "side-on", with an N atom in contact with a C7 site. This structure is called "surface nitride". The surface nitride is very strongly bound to the surface. Hydrogen atoms (Hads), which are very mobile on the catalyst surface, quickly combine with it. Infrared spectroscopically detected surface imides (NHad), surface amides (NH2,ad) and surface ammoniacates (NH3,ad) are formed, the latter decay under NH3 release (desorption). The individual molecules were identified or assigned by X-ray photoelectron spectroscopy (XPS), high-resolution electron energy loss spectroscopy (HREELS) and Ir Spectroscopy. On the basis of these experimental findings, the reaction mechanism is believed to involve the following steps (see also figure): N2 (g) → N2 (adsorbed) N2 (adsorbed) → 2 N (adsorbed) H2 (g) → H2 (adsorbed) H2 (adsorbed) → 2 H (adsorbed) N (adsorbed) + 3 H (adsorbed) → NH3 (adsorbed) NH3 (adsorbed) → NH3 (g) Reaction 5 occurs in three steps, forming NH, NH2, and then NH3. Experimental evidence points to reaction 2 as being slow, rate-determining step. This is not unexpected, since that step breaks the nitrogen triple bond, the strongest of the bonds broken in the process. As with all Haber–Bosch catalysts, nitrogen dissociation is the rate-determining step for ruthenium-activated carbon catalysts. The active center for ruthenium is a so-called B5 site, a 5-fold coordinated position on the Ru(0001) surface where two ruthenium atoms form a step edge with three ruthenium atoms on the Ru(0001) surface. The number of B5 sites depends on the size and shape of the ruthenium particles, the ruthenium precursor and the amount of ruthenium used. The reinforcing effect of the basic carrier used in the ruthenium catalyst is similar to the promoter effect of alkali metals used in the iron catalyst. Energy diagram An energy diagram can be created based on the Enthalpy of Reaction of the individual steps. The energy diagram can be used to compare homogeneous and heterogeneous reactions: Due to the high activation energy of the dissociation of nitrogen, the homogeneous gas phase reaction is not realizable. The catalyst avoids this problem as the energy gain resulting from the binding of nitrogen atoms to the catalyst surface overcompensates for the necessary dissociation energy so that the reaction is finally exothermic. Nevertheless, the dissociative adsorption of nitrogen remains the rate-determining step: not because of the activation energy, but mainly because of the unfavorable pre-exponential factor of the rate constant. 
Although hydrogenation is endothermic, this energy can easily be applied by the reaction temperature (about 700 K). Economic and environmental aspects When first invented, the Haber process competed against another industrial process, the cyanamide process. However, the cyanamide process consumed large amounts of electrical power and was more labor-intensive than the Haber process. As of 2018, the Haber process produces 230 million tonnes of anhydrous ammonia per year. The ammonia is used mainly as a nitrogen fertilizer as ammonia itself, in the form of ammonium nitrate, and as urea. The Haber process consumes 3–5% of the world's natural gas production (around 1–2% of the world's energy supply). In combination with advances in breeding, herbicides, and pesticides, these fertilizers have helped to increase the productivity of agricultural land: The energy-intensity of the process contributes to climate change and other environmental problems such as the leaching of nitrates into groundwater, rivers, ponds, and lakes; expanding dead zones in coastal ocean waters, resulting from recurrent eutrophication; atmospheric deposition of nitrates and ammonia affecting natural ecosystems; higher emissions of nitrous oxide (N2O), now the third most important greenhouse gas following CO2 and CH4. The Haber–Bosch process is one of the largest contributors to a buildup of reactive nitrogen in the biosphere, causing an anthropogenic disruption to the nitrogen cycle. Since nitrogen use efficiency is typically less than 50%, farm runoff from heavy use of fixed industrial nitrogen disrupts biological habitats. Nearly 50% of the nitrogen found in human tissues originated from the Haber–Bosch process. Thus, the Haber process serves as the "detonator of the population explosion", enabling the global population to increase from 1.6 billion in 1900 to 7.7 billion by November 2018. Reverse fuel cell technology converts electric energy, water and nitrogen into ammonia without a separate hydrogen electrolysis process. The use of synthetic nitrogen fertilisers reduces the incentive for farmers to use more sustainable crop rotations which include legumes for their natural nitrogen-fixing ability.
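As a rough cross-check on the scale figures above, the stoichiometry of steam reforming and ammonia synthesis sets a floor on the carbon dioxide tied to methane-based production. The sketch below uses only the two overall reactions and the 230-million-tonne production figure from the text; actual plant emissions are higher because additional fuel is burned for heat and compression.

```python
# Back-of-the-envelope stoichiometry: CH4 + 2 H2O -> CO2 + 4 H2 supplies hydrogen,
# and N2 + 3 H2 -> 2 NH3 consumes it, so each mole of methane feedstock yields
# 8/3 mol NH3 and 1 mol CO2. The 230 Mt/year figure comes from the text; the
# result is a feedstock-only lower bound, not a plant emission factor.

M_CH4, M_CO2, M_NH3, M_N2 = 16.04, 44.01, 17.03, 28.01   # g/mol

nh3_per_ch4 = 4 * (2 / 3)                                  # mol NH3 per mol CH4
co2_per_kg_nh3 = M_CO2 / (nh3_per_ch4 * M_NH3)             # kg CO2 per kg NH3
ch4_per_kg_nh3 = M_CH4 / (nh3_per_ch4 * M_NH3)             # kg CH4 per kg NH3
n_per_kg_nh3 = (M_N2 / 2) / M_NH3                          # kg N per kg NH3

annual_nh3_Mt = 230                                        # 2018 figure from the text
print(f"feedstock CO2 floor : {co2_per_kg_nh3:.2f} kg CO2 per kg NH3")
print(f"methane feedstock   : {ch4_per_kg_nh3:.2f} kg CH4 per kg NH3")
print(f"fixed nitrogen      : {annual_nh3_Mt * n_per_kg_nh3:.0f} Mt N per year")
```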
Physical sciences
Chemical reactions
null
14029
https://en.wikipedia.org/wiki/Histone
Histone
In biology, histones are highly basic proteins abundant in lysine and arginine residues that are found in eukaryotic cell nuclei and in most Archaeal phyla. They act as spools around which DNA winds to create structural units called nucleosomes. Nucleosomes in turn are wrapped into 30-nanometer fibers that form tightly packed chromatin. Histones prevent DNA from becoming tangled and protect it from DNA damage. In addition, histones play important roles in gene regulation and DNA replication. Without histones, unwound DNA in chromosomes would be very long. For example, each human cell has about 1.8 meters of DNA if completely stretched out; however, when wound about histones, this length is reduced to about 90 micrometers (0.09 mm) of 30 nm diameter chromatin fibers. There are five families of histones: H1/H5 (the linker histones) and H2A, H2B, H3, and H4 (the core histones). The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer. The tight wrapping of DNA around histones is, to a large degree, a result of electrostatic attraction between the positively charged histones and the negatively charged phosphate backbone of DNA. Histones may be chemically modified through the action of enzymes to regulate gene transcription. The most common modifications are the methylation of arginine or lysine residues or the acetylation of lysine. Methylation can affect how other proteins such as transcription factors interact with the nucleosomes. Lysine acetylation eliminates a positive charge on lysine, thereby weakening the electrostatic attraction between histone and DNA, resulting in partial unwinding of the DNA and making it more accessible for gene expression. Classes and variants Five major families of histone proteins exist: H1/H5, H2A, H2B, H3, and H4. Histones H2A, H2B, H3 and H4 are known as the core or nucleosomal histones, while histones H1/H5 are known as the linker histones. The core histones all exist as dimers, which are similar in that they all possess the histone fold domain: three alpha helices linked by two loops. It is this helical structure that allows for interaction between distinct dimers, particularly in a head-tail fashion (also called the handshake motif). The resulting four distinct dimers then come together to form one octameric nucleosome core, approximately 63 Angstroms in diameter (a solenoid-like particle). Around 146 base pairs (bp) of DNA wrap around this core particle 1.65 times in a left-handed super-helical turn to give a particle of around 100 Angstroms across. The linker histone H1 binds the nucleosome at the entry and exit sites of the DNA, thus locking the DNA into place and allowing the formation of higher order structure. The most basic such formation is the 10 nm fiber, or "beads on a string" conformation. This involves the wrapping of DNA around nucleosomes, with approximately 50 base pairs of DNA separating each pair of nucleosomes (also referred to as linker DNA). Higher-order structures include the 30 nm fiber (forming an irregular zigzag) and the 100 nm fiber, these being the structures found in normal cells. During mitosis and meiosis, the condensed chromosomes are assembled through interactions between nucleosomes and other regulatory proteins. Histones are subdivided into canonical replication-dependent histones, whose genes are expressed during the S-phase of the cell cycle, and replication-independent histone variants, expressed during the whole cell cycle.
In mammals, genes encoding canonical histones are typically clustered along chromosomes in 4 different highly-conserved loci, lack introns and use a stem loop structure at the 3' end instead of a polyA tail. Genes encoding histone variants are usually not clustered, have introns and their mRNAs are regulated with polyA tails. Complex multicellular organisms typically have a higher number of histone variants providing a variety of different functions. Recent data are accumulating about the roles of diverse histone variants highlighting the functional links between variants and the delicate regulation of organism development. Histone variants proteins from different organisms, their classification and variant specific features can be found in "HistoneDB 2.0 - Variants" database. Several pseudogenes have also been discovered and identified in very close sequences of their respective functional ortholog genes. The following is a list of human histone proteins, genes and pseudogenes: Structure The nucleosome core is formed of two H2A-H2B dimers and a H3-H4 tetramer, forming two nearly symmetrical halves by tertiary structure (C2 symmetry; one macromolecule is the mirror image of the other). The H2A-H2B dimers and H3-H4 tetramer also show pseudodyad symmetry. The 4 'core' histones (H2A, H2B, H3 and H4) are relatively similar in structure and are highly conserved through evolution, all featuring a 'helix turn helix turn helix' motif (DNA-binding protein motif that recognize specific DNA sequence). They also share the feature of long 'tails' on one end of the amino acid structure - this being the location of post-translational modification (see below). Archaeal histone only contains a H3-H4 like dimeric structure made out of a single type of unit. Such dimeric structures can stack into a tall superhelix ("hypernucleosome") onto which DNA coils in a manner similar to nucleosome spools. Only some archaeal histones have tails. The distance between the spools around which eukaryotic cells wind their DNA has been determined to range from 59 to 70 Å. In all, histones make five types of interactions with DNA: Salt bridges and hydrogen bonds between side chains of basic amino acids (especially lysine and arginine) and phosphate oxygens on DNA Helix-dipoles form alpha-helixes in H2B, H3, and H4 cause a net positive charge to accumulate at the point of interaction with negatively charged phosphate groups on DNA Hydrogen bonds between the DNA backbone and the amide group on the main chain of histone proteins Nonpolar interactions between the histone and deoxyribose sugars on DNA Non-specific minor groove insertions of the H3 and H2B N-terminal tails into two minor grooves each on the DNA molecule The highly basic nature of histones, aside from facilitating DNA-histone interactions, contributes to their water solubility. Histones are subject to post translational modification by enzymes primarily on their N-terminal tails, but also in their globular domains. Such modifications include methylation, citrullination, acetylation, phosphorylation, SUMOylation, ubiquitination, and ADP-ribosylation. This affects their function of gene regulation. In general, genes that are active have less bound histone, while inactive genes are highly associated with histones during interphase. It also appears that the structure of histones has been evolutionarily conserved, as any deleterious mutations would be severely maladaptive. All histones have a highly positively charged N-terminus with many lysine and arginine residues. 
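To give a sense of the scale implied by the figures above (about 146 bp wrapped per nucleosome core and roughly 50 bp of linker DNA), here is a small back-of-the-envelope sketch. The 0.34 nm rise per base pair of B-form DNA is a standard value assumed here rather than taken from this article.

```python
# A quick arithmetic sketch using the figures quoted above (146 bp per nucleosome
# core, ~50 bp of linker DNA, ~1.8 m of DNA per human cell). The 0.34 nm rise per
# base pair of B-DNA is an assumption not stated in the text.

BP_RISE_NM = 0.34                 # nm of contour length per base pair (assumed)
CORE_BP, LINKER_BP = 146, 50      # from the text

repeat_bp = CORE_BP + LINKER_BP               # DNA per nucleosome repeat
repeat_nm = repeat_bp * BP_RISE_NM            # ~67 nm of DNA per repeat
total_dna_m = 1.8                             # from the text
nucleosomes = total_dna_m / (repeat_nm * 1e-9)

print(f"DNA per nucleosome repeat: {repeat_bp} bp ~ {repeat_nm:.0f} nm")
print(f"Implied nucleosomes per cell: ~{nucleosomes:.1e}")   # on the order of 10^7
```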
Evolution and species distribution Core histones are found in the nuclei of eukaryotic cells and in most Archaeal phyla, but not in bacteria. The unicellular algae known as dinoflagellates were previously thought to be the only eukaryotes that completely lack histones, but later studies showed that their DNA still encodes histone genes. Unlike the core histones, homologs of the lysine-rich linker histone (H1) proteins are found in bacteria, otherwise known as nucleoprotein HC1/HC2. It has been proposed that core histone proteins are evolutionarily related to the helical part of the extended AAA+ ATPase domain, the C-domain, and to the N-terminal substrate recognition domain of Clp/Hsp100 proteins. Despite the differences in their topology, these three folds share a homologous helix-strand-helix (HSH) motif. It's also proposed that they may have evolved from ribosomal proteins (RPS6/RPS15), both being short and basic proteins. Archaeal histones may well resemble the evolutionary precursors to eukaryotic histones. Histone proteins are among the most highly conserved proteins in eukaryotes, emphasizing their important role in the biology of the nucleus. In contrast mature sperm cells largely use protamines to package their genomic DNA, most likely because this allows them to achieve an even higher packaging ratio. There are some variant forms in some of the major classes. They share amino acid sequence homology and core structural similarity to a specific class of major histones but also have their own feature that is distinct from the major histones. These minor histones usually carry out specific functions of the chromatin metabolism. For example, histone H3-like CENPA is associated with only the centromere region of the chromosome. Histone H2A variant H2A.Z is associated with the promoters of actively transcribed genes and also involved in the prevention of the spread of silent heterochromatin. Furthermore, H2A.Z has roles in chromatin for genome stability. Another H2A variant H2A.X is phosphorylated at S139 in regions around double-strand breaks and marks the region undergoing DNA repair. Histone H3.3 is associated with the body of actively transcribed genes. Function Compacting DNA strands Histones act as spools around which DNA winds. This enables the compaction necessary to fit the large genomes of eukaryotes inside cell nuclei: the compacted molecule is 40,000 times shorter than an unpacked molecule. Chromatin regulation Histones undergo posttranslational modifications that alter their interaction with DNA and nuclear proteins. The H3 and H4 histones have long tails protruding from the nucleosome, which can be covalently modified at several places. Modifications of the tail include methylation, acetylation, phosphorylation, ubiquitination, SUMOylation, citrullination, and ADP-ribosylation. The core of the histones H2A and H2B can also be modified. Combinations of modifications, known as histone marks, are thought to constitute a code, the so-called "histone code". Histone modifications act in diverse biological processes such as gene regulation, DNA repair, chromosome condensation (mitosis) and spermatogenesis (meiosis). The common nomenclature of histone modifications is: The name of the histone (e.g., H3) The single-letter amino acid abbreviation (e.g., K for Lysine) and the amino acid position in the protein The type of modification (Me: methyl, P: phosphate, Ac: acetyl, Ub: ubiquitin) The number of modifications (only Me is known to occur in more than one copy per residue. 
1, 2 or 3 is mono-, di- or tri-methylation) So H3K4me1 denotes the monomethylation of the 4th residue (a lysine) from the start (i.e., the N-terminal) of the H3 protein. Modification A huge catalogue of histone modifications have been described, but a functional understanding of most is still lacking. Collectively, it is thought that histone modifications may underlie a histone code, whereby combinations of histone modifications have specific meanings. However, most functional data concerns individual prominent histone modifications that are biochemically amenable to detailed study. Chemistry Lysine methylation The addition of one, two, or many methyl groups to lysine has little effect on the chemistry of the histone; methylation leaves the charge of the lysine intact and adds a minimal number of atoms so steric interactions are mostly unaffected. However, proteins containing Tudor, chromo or PHD domains, amongst others, can recognise lysine methylation with exquisite sensitivity and differentiate mono, di and tri-methyl lysine, to the extent that, for some lysines (e.g.: H4K20) mono, di and tri-methylation appear to have different meanings. Because of this, lysine methylation tends to be a very informative mark and dominates the known histone modification functions. Glutamine serotonylation Recently it has been shown, that the addition of a serotonin group to the position 5 glutamine of H3, happens in serotonergic cells such as neurons. This is part of the differentiation of the serotonergic cells. This post-translational modification happens in conjunction with the H3K4me3 modification. The serotonylation potentiates the binding of the general transcription factor TFIID to the TATA box. Arginine methylation What was said above of the chemistry of lysine methylation also applies to arginine methylation, and some protein domains—e.g., Tudor domains—can be specific for methyl arginine instead of methyl lysine. Arginine is known to be mono- or di-methylated, and methylation can be symmetric or asymmetric, potentially with different meanings. Arginine citrullination Enzymes called peptidylarginine deiminases (PADs) hydrolyze the imine group of arginines and attach a keto group, so that there is one less positive charge on the amino acid residue. This process has been involved in the activation of gene expression by making the modified histones less tightly bound to DNA and thus making the chromatin more accessible. PADs can also produce the opposite effect by removing or inhibiting mono-methylation of arginine residues on histones and thus antagonizing the positive effect arginine methylation has on transcriptional activity. Lysine acetylation Addition of an acetyl group has a major chemical effect on lysine as it neutralises the positive charge. This reduces electrostatic attraction between the histone and the negatively charged DNA backbone, loosening the chromatin structure; highly acetylated histones form more accessible chromatin and tend to be associated with active transcription. Lysine acetylation appears to be less precise in meaning than methylation, in that histone acetyltransferases tend to act on more than one lysine; presumably this reflects the need to alter multiple lysines to have a significant effect on chromatin structure. The modification includes H3K27ac. 
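The nomenclature above is regular enough to parse mechanically, which is how such marks are often handled in genomics pipelines. Below is a minimal, hypothetical parser covering the marks mentioned in this article (H3K4me1, H3K27ac, H2AK119Ub, and so on); it is an illustration of the naming convention, not a complete grammar for every published modification.

```python
# A small sketch of the histone-mark nomenclature described above (histone,
# one-letter residue code and position, modification type, optional multiplicity),
# e.g. "H3K4me1". A toy parser for the examples in this article only.

import re

PATTERN = re.compile(
    r"^(?P<histone>H2A|H2B|H[134])"          # histone family (H1, H2A, H2B, H3, H4)
    r"(?P<residue>[A-Z])(?P<position>\d+)"    # residue letter and position
    r"(?P<mod>me|ac|ub|ph|p)?(?P<count>\d)?$",
    re.IGNORECASE,
)

MODS = {"me": "methylation", "ac": "acetylation", "ub": "ubiquitination",
        "ph": "phosphorylation", "p": "phosphorylation"}
COUNTS = {"1": "mono-", "2": "di-", "3": "tri-"}

def describe(mark: str) -> str:
    m = PATTERN.match(mark)
    if not m:
        return f"{mark}: not recognised"
    mod = MODS.get((m["mod"] or "").lower(), "unspecified modification")
    prefix = COUNTS.get(m["count"] or "", "")
    return (f"{mark}: {prefix}{mod} of residue {m['residue']}"
            f"{m['position']} on histone {m['histone']}")

for mark in ["H3K4me1", "H3K36me3", "H3K27ac", "H2AK119Ub", "H4K20me3"]:
    print(describe(mark))
```

For example, the parser reports H3K4me1 as mono-methylation of residue K4 on histone H3, which matches the worked reading given in the text.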
Serine/threonine/tyrosine phosphorylation Addition of a negatively charged phosphate group can lead to major changes in protein structure, leading to the well-characterised role of phosphorylation in controlling protein function. It is not clear what structural implications histone phosphorylation has, but histone phosphorylation has clear functions as a post-translational modification, and binding domains such as BRCT have been characterised. Effects on transcription Most well-studied histone modifications are involved in control of transcription. Actively transcribed genes Two histone modifications are particularly associated with active transcription: Trimethylation of H3 lysine 4 (H3K4me3) This trimethylation occurs at the promoter of active genes and is performed by the COMPASS complex. Despite the conservation of this complex and histone modification from yeast to mammals, it is not entirely clear what role this modification plays. However, it is an excellent mark of active promoters, and the level of this histone modification at a gene's promoter is broadly correlated with the transcriptional activity of the gene. The formation of this mark is tied to transcription in a rather convoluted manner: early in transcription of a gene, RNA polymerase II undergoes a switch from 'initiating' to 'elongating', marked by a change in the phosphorylation states of the RNA polymerase II C-terminal domain (CTD). The same enzyme that phosphorylates the CTD also phosphorylates the Rad6 complex, which in turn adds a ubiquitin mark to H2B K123 (K120 in mammals). H2BK123Ub occurs throughout transcribed regions, but this mark is required for COMPASS to trimethylate H3K4 at promoters. Trimethylation of H3 lysine 36 (H3K36me3) This trimethylation occurs in the body of active genes and is deposited by the methyltransferase Set2. This protein associates with elongating RNA polymerase II, and H3K36Me3 is indicative of actively transcribed genes. H3K36Me3 is recognised by the Rpd3 histone deacetylase complex, which removes acetyl modifications from surrounding histones, increasing chromatin compaction and repressing spurious transcription. Increased chromatin compaction prevents transcription factors from accessing DNA, and reduces the likelihood of new transcription events being initiated within the body of the gene. This process therefore helps ensure that transcription is not interrupted. Repressed genes Three histone modifications are particularly associated with repressed genes: Trimethylation of H3 lysine 27 (H3K27me3) This histone modification is deposited by the polycomb complex PRC2. It is a clear marker of gene repression, and is likely bound by other proteins to exert a repressive function. Another polycomb complex, PRC1, can bind H3K27me3 and adds the histone modification H2AK119Ub, which aids chromatin compaction. Based on these data it appears that PRC1 is recruited through the action of PRC2; however, recent studies show that PRC1 is recruited to the same sites in the absence of PRC2. Di- and tri-methylation of H3 lysine 9 (H3K9me2/3) H3K9me2/3 is a well-characterised marker for heterochromatin, and is therefore strongly associated with gene repression. The formation of heterochromatin has been best studied in the yeast Schizosaccharomyces pombe, where it is initiated by recruitment of the RNA-induced transcriptional silencing (RITS) complex to double-stranded RNAs produced from centromeric repeats. RITS recruits the Clr4 histone methyltransferase, which deposits H3K9me2/3. 
This deposition is an example of histone methylation. H3K9Me2/3 serves as a binding site for the recruitment of Swi6 (heterochromatin protein 1 or HP1, another classic heterochromatin marker), which in turn recruits further repressive activities including histone modifiers such as histone deacetylases and histone methyltransferases. Trimethylation of H4 lysine 20 (H4K20me3) This modification is tightly associated with heterochromatin, although its functional importance remains unclear. This mark is placed by the Suv4-20h methyltransferase, which is at least in part recruited by heterochromatin protein 1. Bivalent promoters Analysis of histone modifications in embryonic stem cells (and other stem cells) revealed many gene promoters carrying both H3K4Me3 and H3K27Me3; in other words, these promoters display both activating and repressing marks simultaneously. This peculiar combination of modifications marks genes that are poised for transcription; they are not required in stem cells, but are rapidly required after differentiation into some lineages. Once the cell starts to differentiate, these bivalent promoters are resolved to either active or repressive states depending on the chosen lineage. Other functions DNA damage repair Marking sites of DNA damage is an important function for histone modifications. Without such repair markers, DNA would be progressively destroyed by damage accumulated from sources such as the ultraviolet radiation of the sun. Phosphorylation of H2AX at serine 139 (γH2AX) Phosphorylated H2AX (also known as gamma H2AX) is a marker for DNA double-strand breaks, and forms part of the response to DNA damage. H2AX is phosphorylated early after detection of a DNA double-strand break, and forms a domain extending many kilobases either side of the damage. Gamma H2AX acts as a binding site for the protein MDC1, which in turn recruits key DNA repair proteins (a complex topic that has been reviewed in detail elsewhere), and as such gamma H2AX forms a vital part of the machinery that ensures genome stability. Acetylation of H3 lysine 56 (H3K56Ac) H3K56Ac is required for genome stability. H3K56 is acetylated by the p300/Rtt109 complex, but is rapidly deacetylated around sites of DNA damage. H3K56 acetylation is also required to stabilise stalled replication forks, preventing dangerous replication fork collapses. Although in general mammals make far greater use of histone modifications than microorganisms, a major role of H3K56Ac in DNA replication exists only in fungi, and this has become a target for antibiotic development. Trimethylation of H3 lysine 36 (H3K36me3) H3K36me3 has the ability to recruit the MSH2-MSH6 (hMutSα) complex of the DNA mismatch repair pathway. Consistently, regions of the human genome with high levels of H3K36me3 accumulate fewer somatic mutations due to mismatch repair activity. Chromosome condensation Phosphorylation of H3 at serine 10 (phospho-H3S10) The mitotic kinase aurora B phosphorylates histone H3 at serine 10, triggering a cascade of changes that mediate mitotic chromosome condensation. Condensed chromosomes therefore stain very strongly for this mark, but H3S10 phosphorylation is also present at certain chromosome sites outside mitosis, for example in pericentric heterochromatin of cells during G2. H3S10 phosphorylation has also been linked to DNA damage caused by R-loop formation at highly transcribed sites. 
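As a compact reference, the sketch below collects the marks discussed in the preceding subsections together with the associations given for them in the text. The dictionary is purely a convenience structure for illustration; the groupings mirror the prose above rather than any standard chromatin ontology.

```python
# Summary of the histone marks discussed above and the chromatin states or
# processes the text associates them with (illustrative only).
HISTONE_MARKS = {
    "H3K4me3":   "active promoters (deposited by the COMPASS complex)",
    "H3K36me3":  "bodies of actively transcribed genes (Set2); also recruits MSH2-MSH6 mismatch repair",
    "H3K27me3":  "gene repression (deposited by PRC2)",
    "H2AK119Ub": "chromatin compaction (added by PRC1)",
    "H3K9me2/3": "heterochromatin and gene repression (deposited by Clr4, bound by Swi6/HP1)",
    "H4K20me3":  "heterochromatin (Suv4-20h methyltransferase)",
    "H2AX S139ph (gamma H2AX)": "DNA double-strand breaks; binding site for MDC1",
    "H3K56ac":   "genome stability and stalled replication forks",
    "H3S10ph":   "mitotic chromosome condensation (aurora B kinase)",
}

def describe_mark(mark: str) -> str:
    """Return the association recorded above for a mark, if any."""
    return HISTONE_MARKS.get(mark, "mark not covered in this summary")

if __name__ == "__main__":
    print(describe_mark("H3K4me3"))
```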
Phosphorylation of H2B at serine 10/14 (phospho-H2BS10/14) Phosphorylation of H2B at serine 10 (yeast) or serine 14 (mammals) is also linked to chromatin condensation, but for the very different purpose of mediating chromosome condensation during apoptosis. This mark is not simply a late-acting bystander in apoptosis, as yeast carrying mutations of this residue are resistant to hydrogen peroxide-induced apoptotic cell death. Addiction Epigenetic modifications of histone tails in specific regions of the brain are of central importance in addictions. Once particular epigenetic alterations occur, they appear to be long-lasting "molecular scars" that may account for the persistence of addictions. Cigarette smokers (about 15% of the US population) are usually addicted to nicotine. After 7 days of nicotine treatment of mice, acetylation of both histone H3 and histone H4 was increased at the FosB promoter in the nucleus accumbens of the brain, causing a 61% increase in FosB expression. This would also increase expression of the splice variant Delta FosB. In the nucleus accumbens of the brain, Delta FosB functions as a "sustained molecular switch" and "master control protein" in the development of an addiction. About 7% of the US population is addicted to alcohol. In rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine 9 acetylation in the pronociceptin promoter in the brain amygdala complex. This acetylation is an activating mark for pronociceptin. The nociceptin/nociceptin opioid receptor system is involved in the reinforcing or conditioning effects of alcohol. Methamphetamine addiction occurs in about 0.2% of the US population. Chronic methamphetamine use causes methylation of the lysine in position 4 of histone 3 located at the promoters of the c-fos and the C-C chemokine receptor 2 (ccr2) genes, activating those genes in the nucleus accumbens (NAc). c-fos is well known to be important in addiction. The ccr2 gene is also important in addiction, since mutational inactivation of this gene impairs addiction. Synthesis The first step of chromatin structure duplication is the synthesis of histone proteins: H1, H2A, H2B, H3, H4. These proteins are synthesized during S phase of the cell cycle. There are different mechanisms which contribute to the increase of histone synthesis. Yeast Yeast carry one or two copies of each histone gene, which are not clustered but rather scattered throughout chromosomes. Histone gene transcription is controlled by multiple gene regulatory proteins such as transcription factors which bind to histone promoter regions. In budding yeast, the candidate gene for activation of histone gene expression is SBF. SBF is a transcription factor that is activated in late G1 phase, when it dissociates from its repressor Whi5. This occurs when Whi5 is phosphorylated by Cdc28, a G1/S Cdk. Suppression of histone gene expression outside of S phase is dependent on Hir proteins, which form an inactive chromatin structure at the locus of histone genes, causing transcriptional activators to be blocked. Metazoan In metazoans, the increase in the rate of histone synthesis is due to an increase in the processing of pre-mRNA to its mature form as well as a decrease in mRNA degradation; this results in an increase of active mRNA for translation of histone proteins. The mechanism for mRNA activation has been found to be the removal of a segment of the 3' end of the mRNA strand, and is dependent on association with stem-loop binding protein (SLBP). 
SLBP also stabilizes histone mRNAs during S phase by blocking degradation by the 3'hExo nuclease. SLBP levels are controlled by cell-cycle proteins, causing SLBP to accumulate as cells enter S phase and degrade as cells leave S phase. SLBP are marked for degradation by phosphorylation at two threonine residues by cyclin dependent kinases, possibly cyclin A/ cdk2, at the end of S phase. Metazoans also have multiple copies of histone genes clustered on chromosomes which are localized in structures called Cajal bodies as determined by genome-wide chromosome conformation capture analysis (4C-Seq). Link between cell-cycle control and synthesis Nuclear protein Ataxia-Telangiectasia (NPAT), also known as nuclear protein coactivator of histone transcription, is a transcription factor which activates histone gene transcription on chromosomes 1 and 6 of human cells. NPAT is also a substrate of cyclin E-Cdk2, which is required for the transition between G1 phase and S phase. NPAT activates histone gene expression only after it has been phosphorylated by the G1/S-Cdk cyclin E-Cdk2 in early S phase. This shows an important regulatory link between cell-cycle control and histone synthesis. History Histones were discovered in 1884 by Albrecht Kossel. The word "histone" dates from the late 19th century and is derived from the German word "Histon", a word itself of uncertain origin, perhaps from Ancient Greek ἵστημι (hístēmi, “make stand”) or ἱστός (histós, “loom”). In the early 1960s, before the types of histones were known and before histones were known to be highly conserved across taxonomically diverse organisms, James F. Bonner and his collaborators began a study of these proteins that were known to be tightly associated with the DNA in the nucleus of higher organisms. Bonner and his postdoctoral fellow Ru Chih C. Huang showed that isolated chromatin would not support RNA transcription in the test tube, but if the histones were extracted from the chromatin, RNA could be transcribed from the remaining DNA. Their paper became a citation classic. Paul T'so and James Bonner had called together a World Congress on Histone Chemistry and Biology in 1964, in which it became clear that there was no consensus on the number of kinds of histone and that no one knew how they would compare when isolated from different organisms. Bonner and his collaborators then developed methods to separate each type of histone, purified individual histones, compared amino acid compositions in the same histone from different organisms, and compared amino acid sequences  of the same histone from different organisms in collaboration with Emil Smith from UCLA. For example, they found Histone IV sequence to be highly conserved between peas and calf thymus. However, their work on the biochemical characteristics of individual histones did not reveal how the histones interacted with each other or with DNA to which they were tightly bound. Also in the 1960s, Vincent Allfrey and Alfred Mirsky had suggested, based on their analyses of histones, that acetylation and methylation of histones could provide a transcriptional control mechanism, but did not have available the kind of detailed analysis that later investigators were able to conduct to show how such regulation could be gene-specific. 
Until the early 1990s, histones were dismissed by most as inert packing material for eukaryotic nuclear DNA, a view based in part on the models of Mark Ptashne and others, who believed that transcription was activated by protein-DNA and protein-protein interactions on largely naked DNA templates, as is the case in bacteria. During the 1980s, Yahli Lorch and Roger Kornberg showed that a nucleosome on a core promoter prevents the initiation of transcription in vitro, and Michael Grunstein demonstrated that histones repress transcription in vivo, leading to the idea of the nucleosome as a general gene repressor. Relief from repression is believed to involve both histone modification and the action of chromatin-remodeling complexes. Vincent Allfrey and Alfred Mirsky had earlier proposed a role of histone modification in transcriptional activation, regarded as a molecular manifestation of epigenetics. Michael Grunstein and David Allis found support for this proposal, in the importance of histone acetylation for transcription in yeast and the activity of the transcriptional activator Gcn5 as a histone acetyltransferase. The discovery of the H5 histone appears to date back to the 1970s, and it is now considered an isoform of Histone H1.
Biology and health sciences
Proteins
Biology
14034
https://en.wikipedia.org/wiki/Heroin
Heroin
Heroin, also known as diacetylmorphine and diamorphine among other names, is a morphinan opioid substance synthesized from the dried latex of the opium poppy; it is mainly used as a recreational drug for its euphoric effects. Heroin is used medically in several countries to relieve pain, such as during childbirth or a heart attack, as well as in opioid replacement therapy. Medical-grade diamorphine is used as a pure hydrochloride salt. Various white and brown powders sold illegally around the world as heroin are routinely diluted with cutting agents. Black tar heroin is a variable admixture of morphine derivatives—predominantly 6-MAM (6-monoacetylmorphine), which is the result of crude acetylation during clandestine production of street heroin. Heroin is typically injected, usually into a vein, but it can also be snorted, smoked, or inhaled. In a clinical context, the route of administration is most commonly intravenous injection; it may also be given by intramuscular or subcutaneous injection, as well as orally in the form of tablets. The onset of effects is usually rapid and lasts for a few hours. Common side effects include respiratory depression (decreased breathing), dry mouth, drowsiness, impaired mental function, constipation, and addiction. Use by injection can also result in abscesses, infected heart valves, blood-borne infections, and pneumonia. After a history of long-term use, opioid withdrawal symptoms can begin within hours of the last use. When given by injection into a vein, heroin has two to three times the effect of a similar dose of morphine. It typically appears in the form of a white or brown powder. Treatment of heroin addiction often includes behavioral therapy and medications. Medications can include buprenorphine, methadone, or naltrexone. A heroin overdose may be treated with naloxone. As of 2015, an estimated 17 million people use opiates, of which heroin is the most common, and opioid use resulted in 122,000 deaths; also, as of 2015, the total number of heroin users worldwide is believed to have increased in Africa, the Americas, and Asia since 2000. In the United States, approximately 1.6 percent of people have used heroin at some point. When people die from overdosing on a drug, the drug is usually an opioid and often heroin. Heroin was first made by C. R. Alder Wright in 1874 from morphine, a natural product of the opium poppy. Internationally, heroin is controlled under Schedules I and IV of the Single Convention on Narcotic Drugs, and it is generally illegal to make, possess, or sell without a license. About 448 tons of heroin were made in 2016. In 2015, Afghanistan produced about 66% of the world's opium. Illegal heroin is often mixed with other substances such as sugar, starch, caffeine, quinine, or other opioids like fentanyl. Uses Recreational Bayer's original trade name of heroin is typically used in non-medical settings. It is used as a recreational drug for the euphoria it induces. Anthropologist Michael Agar once described heroin as "the perfect whatever drug." Tolerance develops quickly, and increased doses are needed in order to achieve the same effects. Its popularity with recreational drug users, compared to morphine, reportedly stems from its perceived different effects. Short-term addiction studies by the same researchers demonstrated that tolerance developed at a similar rate to both heroin and morphine. 
When compared to the opioids hydromorphone, fentanyl, oxycodone, and pethidine (meperidine), former addicts showed a strong preference for heroin and morphine, suggesting that heroin and morphine are particularly prone to misuse and to causing dependence. Morphine and heroin were also much more likely to produce euphoria and other positive subjective effects when compared to these other opioids. Medical uses In the United States, heroin is not accepted as medically useful. Under the generic name diamorphine, heroin is prescribed as a strong pain medication in the United Kingdom, where it is administered via oral, subcutaneous, intramuscular, intrathecal, intranasal or intravenous routes. It may be prescribed for the treatment of acute pain, such as in severe physical trauma, myocardial infarction, post-surgical pain and chronic pain, including end-stage terminal illnesses. In other countries it is more common to use morphine or other strong opioids in these situations. In 2004, the National Institute for Health and Clinical Excellence produced guidance on the management of caesarean section, which recommended the use of intrathecal or epidural diamorphine for post-operative pain relief. For women who have had intrathecal opioids, there should be a minimum hourly observation of respiratory rate, sedation and pain scores for at least 12 hours for diamorphine and 24 hours for morphine. Women should be offered diamorphine (0.3–0.4 mg intrathecally) for intra- and postoperative analgesia because it reduces the need for supplemental analgesia after a caesarean section. Epidural diamorphine (2.5–5 mg) is a suitable alternative. Diamorphine continues to be widely used in palliative care in the UK, where it is commonly given by the subcutaneous route, often via a syringe driver if patients cannot easily swallow morphine solution. The advantage of diamorphine over morphine is that diamorphine is more fat soluble and therefore more potent by injection, so smaller doses of it are needed for the same effect on pain. Both of these factors are advantageous if giving high doses of opioids via the subcutaneous route, which is often necessary for palliative care. It is also used in the palliative management of bone fractures and other trauma, especially in children. In the trauma context, it is primarily given by nose in hospital, although a prepared nasal spray is available. It has traditionally been made by the attending physician, generally from the same "dry" ampoules as used for injection. In children, Ayendi nasal spray is available at 720 micrograms and 1600 micrograms per 50-microlitre actuation of the spray, which may be preferable as a non-invasive alternative in pediatric care, avoiding the fear of injection in children. Maintenance therapy A number of European countries prescribe heroin for the treatment of heroin addiction. The initial Swiss HAT (heroin-assisted treatment) trial ("PROVE" study) was conducted as a prospective cohort study with some 1,000 participants in 18 treatment centers between 1994 and 1996; by the end of 2004, 1,200 patients were enrolled in HAT in 23 treatment centers across Switzerland. Diamorphine may be used as a maintenance drug to assist the treatment of opiate addiction, normally in long-term chronic intravenous (IV) heroin users. It is only prescribed following exhaustive efforts at treatment via other means. 
It is sometimes thought that heroin users can walk into a clinic and walk out with a prescription, but the process takes many weeks before a prescription for diamorphine is issued. Though this is somewhat controversial among proponents of a zero-tolerance drug policy, it has proven superior to methadone in improving the social and health situations of addicts. The UK Department of Health's Rolleston Committee Report in 1926 established the British approach to diamorphine prescription to users, which was maintained for the next 40 years: dealers were prosecuted, but doctors could prescribe diamorphine to users when withdrawing. In 1964, the Brain Committee recommended that only selected approved doctors working at approved specialized centres be allowed to prescribe diamorphine and cocaine to users. The law was made more restrictive in 1968. Beginning in the 1970s, the emphasis shifted to abstinence and the use of methadone; currently, only a small number of users in the UK are prescribed diamorphine. In 1994, Switzerland began a trial diamorphine maintenance program for users that had failed multiple withdrawal programs. The aim of this program was to maintain the health of the user by avoiding medical problems stemming from the illicit use of diamorphine. The first trial in 1994 involved 340 users, although enrollment was later expanded to 1000, based on the apparent success of the program. The trials proved diamorphine maintenance to be superior to other forms of treatment in improving the social and health situation for this group of patients. It has also been shown to save money, despite high treatment expenses, as it significantly reduces costs incurred by trials, incarceration, health interventions and delinquency. Patients appear twice daily at a treatment center, where they inject their dose of diamorphine under the supervision of medical staff. They are required to contribute about 450 Swiss francs per month to the treatment costs. A national referendum in November 2008 showed 68% of voters supported the plan, introducing diamorphine prescription into federal law. The previous trials were based on time-limited executive ordinances. The success of the Swiss trials led German, Dutch, and Canadian cities to try out their own diamorphine prescription programs. Some Australian cities (such as Sydney) have instituted legal diamorphine supervised injecting centers, in line with other wider harm minimization programs. Since January 2009, Denmark has prescribed diamorphine to a few addicts who have tried methadone and buprenorphine without success. Beginning in February 2010, addicts in Copenhagen and Odense became eligible to receive free diamorphine. Later in 2010, other cities including Århus and Esbjerg joined the scheme. It was estimated that around 230 addicts would be able to receive free diamorphine. However, Danish addicts would only be able to inject heroin according to the policy set by Danish National Board of Health. Of the estimated 1500 drug users who did not benefit from the then-current oral substitution treatment, approximately 900 would not be in the target group for treatment with injectable diamorphine, either because of "massive multiple drug abuse of non-opioids" or "not wanting treatment with injectable diamorphine". In July 2009, the German Bundestag passed a law allowing diamorphine prescription as a standard treatment for addicts; a large-scale trial of diamorphine prescription had been authorized in the country in 2002. 
On 26 August 2016, Health Canada issued regulations amending prior regulations it had issued under the Controlled Drugs and Substances Act (the "New Classes of Practitioners Regulations", the "Narcotic Control Regulations", and the "Food and Drug Regulations") to allow doctors to prescribe diamorphine to people with a severe opioid addiction who have not responded to other treatments. The prescription heroin can be accessed by doctors through Health Canada's Special Access Programme (SAP) for "emergency access to drugs for patients with serious or life-threatening conditions when conventional treatments have failed, are unsuitable, or are unavailable." Routes of administration The onset of heroin's effects depends upon the route of administration. Smoking is the fastest route of drug administration, although intravenous injection results in a quicker rise in blood concentration. These are followed by suppository (anal or vaginal insertion), insufflation (snorting), and ingestion (swallowing). A 2002 study suggests that a fast onset of action increases the reinforcing effects of addictive drugs. Ingestion does not produce a rush as a forerunner to the high experienced with the use of heroin, which is most pronounced with intravenous use. While the onset of the rush induced by injection can occur in as little as a few seconds, the oral route of administration requires approximately half an hour before the high sets in. Thus, the higher the dosage of heroin used and the faster the route of administration, the higher the potential risk of psychological dependence and addiction. Large doses of heroin can cause fatal respiratory depression, and the drug has been used for suicide or as a murder weapon. The serial killer Harold Shipman used diamorphine on his victims, and the subsequent Shipman Inquiry led to a tightening of the regulations surrounding the storage, prescribing and destruction of controlled drugs in the UK. Because significant tolerance to respiratory depression develops quickly with continued use and is lost just as quickly during withdrawal, it is often difficult to determine whether a lethal heroin overdose was accidental, suicide or homicide. Examples include the overdose deaths of Sid Vicious, Janis Joplin, Tim Buckley, Hillel Slovak, Layne Staley, Bradley Nowell, Ted Binion, and River Phoenix. By mouth Use of heroin by mouth is less common than other methods of administration, mainly because there is little to no "rush", and the effects are less potent. When ingested, heroin is entirely converted to morphine by first-pass metabolism, during which it is deacetylated. Heroin's oral bioavailability is both dose-dependent (as is morphine's) and significantly higher than that of orally administered morphine, reaching up to 64.2% for high doses and 45.6% for low doses; opiate-naive users showed far less absorption of the drug at low doses, having bioavailabilities of only up to 22.9%. The maximum plasma concentration of morphine following oral administration of heroin was around twice as much as that of oral morphine. Injection Injection, also known as "slamming", "banging", "shooting up", "digging" or "mainlining", is a popular method which carries relatively greater risks than other methods of administration. Heroin base (commonly found in Europe), when prepared for injection, will only dissolve in water when mixed with an acid (most commonly citric acid powder or lemon juice) and heated. 
Heroin in the east-coast United States is most commonly found in the hydrochloride salt form, requiring just water (and no heat) to dissolve. Users tend to initially inject in the easily accessible arm veins, but as these veins collapse over time, users resort to more dangerous areas of the body, such as the femoral vein in the groin. Some medical professionals have expressed concern over this route of administration, as they suspect that it can lead to deep vein thrombosis. Intravenous users can use a variable single dose range using a hypodermic needle. The dose of heroin used for recreational purposes is dependent on the frequency and level of use. As with the injection of any drug, if a group of users share a common needle without sterilization procedures, blood-borne diseases, such as HIV/AIDS or hepatitis, can be transmitted. The use of a common dispenser for water for the use in the preparation of the injection, as well as the sharing of spoons and filters can also cause the spread of blood-borne diseases. Many countries now supply small sterile spoons and filters for single use in order to prevent the spread of disease. Smoking Smoking heroin refers to vaporizing it to inhale the resulting fumes, rather than burning and inhaling the smoke. It is commonly smoked in glass pipes made from glassblown Pyrex tubes and light bulbs. Heroin may be smoked from aluminium foil that is heated by a flame underneath it, with the resulting smoke inhaled through a tube of rolled up foil, a method also known as "chasing the dragon". Insufflation Another popular route to intake heroin is insufflation (snorting), where a user crushes the heroin into a fine powder and then gently inhales it (sometimes with a straw or a rolled-up banknote, as with cocaine) into the nose, where heroin is absorbed through the soft tissue in the mucous membrane of the sinus cavity and straight into the bloodstream. This method of administration redirects first-pass metabolism, with a quicker onset and higher bioavailability than oral administration, though the duration of action is shortened. This method is sometimes preferred by users who do not want to prepare and administer heroin for injection or smoking but still want to experience a fast onset. Snorting heroin becomes an often unwanted route, once a user begins to inject the drug. The user may still get high on the drug from snorting, and experience a nod, but will not get a rush. A "rush" is caused by a large amount of heroin entering the body at once. When the drug is taken in through the nose, the user does not get the rush because the drug is absorbed slowly rather than instantly. Heroin for pain has been mixed with sterile water on site by the attending physician, and administered using a syringe with a nebulizer tip. Heroin may be used for fractures, burns, finger-tip injuries, suturing, and wound re-dressing, but is inappropriate in head injuries. Suppository Little research has been focused on the suppository (anal insertion) or pessary (vaginal insertion) methods of administration, also known as "plugging". These methods of administration are commonly carried out using an oral syringe. Heroin can be dissolved and withdrawn into an oral syringe which may then be lubricated and inserted into the anus or vagina before the plunger is pushed. The rectum or the vaginal canal is where the majority of the drug would likely be taken up, through the membranes lining their walls. Adverse effects Heroin is classified as a hard drug in terms of drug harmfulness. 
Like most opioids, unadulterated heroin may lead to adverse effects. The purity of street heroin varies greatly, leading to overdoses when the purity is higher than expected. Short-term effects Users report an intense rush, an acute transcendent state of euphoria, which occurs while diamorphine is being metabolized into 6-monoacetylmorphine (6-MAM) and morphine in the brain. Some believe that heroin produces more euphoria than other opioids; one possible explanation is the presence of 6-monoacetylmorphine, a metabolite unique to heroin – although a more likely explanation is the rapidity of onset. While other opioids of recreational use produce only morphine, heroin also leaves 6-MAM, also a psycho-active metabolite. However, this perception is not supported by the results of clinical studies comparing the physiological and subjective effects of injected heroin and morphine in individuals formerly addicted to opioids; these subjects showed no preference for one drug over the other. Equipotent injected doses had comparable action courses, with no difference in subjects' self-rated feelings of euphoria, ambition, nervousness, relaxation, drowsiness, or sleepiness. The rush is usually accompanied by a warm flushing of the skin, dry mouth, and a heavy feeling in the extremities. Nausea, vomiting, and severe itching may also occur. After the initial effects, users usually will be drowsy for several hours; mental function is clouded; heart function slows, and breathing is also severely slowed, sometimes enough to be life-threatening. Slowed breathing can also lead to coma and permanent brain damage. Heroin use has also been associated with myocardial infarction. Long-term effects Repeated heroin use changes the physical structure and physiology of the brain, creating long-term imbalances in neuronal and hormonal systems that are not easily reversed. Studies have shown some deterioration of the brain's white matter due to heroin use, which may affect decision-making abilities, the ability to regulate behavior, and responses to stressful situations. Heroin also produces profound degrees of tolerance and physical dependence. Tolerance occurs when more and more of the drug is required to achieve the same effects. With physical dependence, the body adapts to the presence of the drug, and withdrawal symptoms occur if use is reduced abruptly. Injection Intravenous use of heroin (and any other substance) with needles and syringes or other related equipment may lead to: Contracting blood-borne pathogens such as HIV and hepatitis via the sharing of needles Contracting bacterial or fungal endocarditis and possibly venous sclerosis Abscesses Poisoning from contaminants added to "cut" or dilute heroin Decreased kidney function (nephropathy), although it is not currently known if this is because of adulterants or infectious diseases Withdrawal The withdrawal syndrome from heroin may begin within as little as two hours of discontinuation of the drug; however, this time frame can fluctuate with the degree of tolerance as well as the amount of the last consumed dose, and more typically begins within 6–24 hours after cessation. 
Symptoms may include sweating, malaise, anxiety, depression, akathisia, priapism, extra sensitivity of the genitals in females, general feeling of heaviness, excessive yawning or sneezing, rhinorrhea, insomnia, cold sweats, chills, severe muscle and bone aches, nausea, vomiting, diarrhea, cramps, watery eyes, fever, cramp-like pains, and involuntary spasms in the limbs (thought to be an origin of the term "kicking the habit"). Overdose Heroin overdose is usually treated with the opioid antagonist naloxone. This reverses the effects of heroin and causes an immediate return of consciousness but may result in withdrawal symptoms. The half-life of naloxone is shorter than some opioids, such that it may need to be given multiple times until the opioid has been metabolized by the body. Between 2012 and 2015, heroin was the leading cause of drug-related deaths in the United States. Since then, fentanyl has been a more common cause of drug-related deaths. Depending on drug interactions and numerous other factors, death from overdose can take anywhere from several minutes to several hours. Death usually occurs due to lack of oxygen resulting from the lack of breathing caused by the opioid. Heroin overdoses can occur because of an unexpected increase in the dose or purity or because of diminished opioid tolerance. However, many fatalities reported as overdoses are probably caused by interactions with other depressant drugs such as alcohol or benzodiazepines. Since heroin can cause nausea and vomiting, a significant number of deaths attributed to heroin overdose are caused by aspiration of vomit by an unconscious person. Some sources quote the median lethal dose (for an average 75 kg opiate-naive individual) as being between 75 and 600 mg. Illicit heroin is of widely varying and unpredictable purity. This means that the user may prepare what they consider to be a moderate dose while actually taking far more than intended. Also, tolerance typically decreases after a period of abstinence. If this occurs and the user takes a dose comparable to their previous use, the user may experience drug effects that are much greater than expected, potentially resulting in an overdose. It has been speculated that an unknown portion of heroin-related deaths are the result of an overdose or allergic reaction to quinine, which may sometimes be used as a cutting agent. Pharmacology When taken orally, heroin undergoes extensive first-pass metabolism via deacetylation, making it a prodrug for the systemic delivery of morphine. When the drug is injected, however, it avoids this first-pass effect, very rapidly crossing the blood–brain barrier because of the presence of the acetyl groups, which render it much more fat soluble than morphine itself. Once in the brain, it then is deacetylated variously into the inactive 3-monoacetylmorphine and the active 6-monoacetylmorphine (6-MAM), and then to morphine, which bind to μ-opioid receptors, resulting in the drug's euphoric, analgesic (pain relief), and anxiolytic (anti-anxiety) effects; heroin itself exhibits relatively low affinity for the μ receptor. Analgesia follows from the activation of the μ receptor G-protein coupled receptor, which indirectly hyperpolarizes the neuron, reducing the release of nociceptive neurotransmitters, and hence, causes analgesia and increased pain tolerance. 
Unlike hydromorphone and oxymorphone, however, when administered intravenously heroin creates a larger histamine release, similar to morphine, resulting in a greater subjective "body high" for some, but also in instances of pruritus (itching) when users first start injecting. Normally, GABA, which is released from inhibitory neurones, inhibits the release of dopamine. Opiates, like heroin and morphine, decrease the inhibitory activity of such neurones. This causes increased release of dopamine in the brain, which is the reason for the euphoric and rewarding effects of heroin. Both morphine and 6-MAM are μ-opioid agonists that bind to receptors present throughout the brain, spinal cord, and gut of all mammals. The μ-opioid receptor also binds endogenous opioid peptides such as β-endorphin, leu-enkephalin, and met-enkephalin. Repeated use of heroin results in a number of physiological changes, including an increase in the production of μ-opioid receptors (upregulation). These physiological alterations lead to tolerance and dependence, so that stopping heroin use results in uncomfortable symptoms including pain, anxiety, muscle spasms, and insomnia, collectively called the opioid withdrawal syndrome. Depending on usage, it has an onset of 4–24 hours after the last dose of heroin. Morphine also binds to δ- and κ-opioid receptors. There is also evidence that 6-MAM binds to a subtype of μ-opioid receptors that are also activated by the morphine metabolite morphine-6β-glucuronide but not morphine itself. A third subtype, the mu-3 receptor, may be common to other six-position monoesters of morphine. The contribution of these receptors to the overall pharmacology of heroin remains unknown. A subclass of morphine derivatives, namely the 3,6 esters of morphine, with similar effects and uses, includes the clinically used strong analgesics nicomorphine (Vilan) and dipropanoylmorphine; there is also the latter's dihydromorphine analogue, diacetyldihydromorphine (Paralaudin). Two other 3,6 diesters of morphine invented in 1874–75 along with diamorphine, dibenzoylmorphine and acetylpropionylmorphine, were made as substitutes after it was outlawed in 1925 and were therefore sold as the first "designer drugs" until they were outlawed by the League of Nations in 1930. Chemistry Diamorphine is produced from acetylation of morphine derived from natural opium sources, generally using acetic anhydride. The major metabolites of diamorphine, 6-MAM, morphine, morphine-3-glucuronide, and morphine-6-glucuronide, may be quantitated in blood, plasma or urine to monitor for use, confirm a diagnosis of poisoning, or assist in a medicolegal death investigation. Most commercial opiate screening tests cross-react appreciably with these metabolites, as well as with other biotransformation products likely to be present following usage of street-grade diamorphine, such as 6-monoacetylcodeine and codeine. However, chromatographic techniques can easily distinguish and measure each of these substances. When interpreting the results of a test, it is important to consider the diamorphine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate an opiate-naive individual, and the chronic user often has high baseline values of these metabolites in their system. 
Furthermore, some testing procedures employ a hydrolysis step before quantitation that converts many of the metabolic products to morphine, yielding a result that may be two times larger than with a method that examines each product individually. History The opium poppy was cultivated in lower Mesopotamia as long ago as 3400 BC. The chemical analysis of opium in the 19th century revealed that most of its activity could be ascribed to the alkaloids codeine and morphine. Diamorphine was first synthesized in 1874 by C. R. Alder Wright, an English chemist working at St. Mary's Hospital Medical School in London who had been experimenting with combining morphine with various acids. He boiled anhydrous morphine alkaloid with acetic anhydride for several hours and produced a more potent, acetylated form of morphine, which is now called diacetylmorphine or morphine diacetate. He sent the compound to F. M. Pierce of Owens College in Manchester for analysis, and Pierce reported his findings back to Wright. Wright's invention did not lead to any further developments, and diamorphine became popular only after it was independently re-synthesized 23 years later by chemist Felix Hoffmann. Hoffmann was working at Bayer pharmaceutical company in Elberfeld, Germany, and his supervisor Heinrich Dreser instructed him to acetylate morphine with the objective of producing codeine, a constituent of the opium poppy that is pharmacologically similar to morphine but less potent and less addictive. Instead, the experiment produced an acetylated form of morphine one and a half to two times more potent than morphine itself. Hoffmann synthesized heroin on August 21, 1897, just eleven days after he had synthesized aspirin. The head of Bayer's research department reputedly coined the drug's new name of "heroin", based on the German heroisch, which means "heroic, strong" (from the ancient Greek word "heros, ήρως"). Bayer's scientists were not the first to make heroin, but they developed their own methods of producing it, and Bayer led the commercialization of heroin. Bayer marketed diacetylmorphine as an over-the-counter drug under the trademark name Heroin. It was developed chiefly as a substitute for morphine in cough suppressants, one intended not to have morphine's addictive side-effects. Morphine at the time was a popular recreational drug, and Bayer wished to find a similar but non-addictive substitute to market. However, contrary to Bayer's advertising as a "non-addictive morphine substitute", heroin would soon have one of the highest rates of addiction among its users. From 1898 through to 1910, diamorphine was marketed under the trademark name Heroin as a non-addictive morphine substitute and cough suppressant. In the 11th edition of Encyclopædia Britannica (1910), the article on morphine states: "In the cough of phthisis minute doses [of morphine] are of service, but in this particular disease morphine is frequently better replaced by codeine or by heroin, which checks irritable coughs without the narcotism following upon the administration of morphine." In the US, the Harrison Narcotics Tax Act was passed in 1914 to control the sale and distribution of diacetylmorphine and other opioids, which allowed the drug to be prescribed and sold for medical purposes. In 1924, the United States Congress banned its sale, importation, or manufacture. It is now a Schedule I substance, which makes it illegal for non-medical use in signatory nations of the Single Convention on Narcotic Drugs treaty, including the United States. 
The Health Committee of the League of Nations banned diacetylmorphine in 1925, although it took more than three years for this to be implemented. In the meantime, the first designer drugs, viz. 3,6 diesters and 6 monoesters of morphine and acetylated analogues of closely related drugs like hydromorphone and dihydromorphine, were produced in massive quantities to fill the worldwide demand for diacetylmorphine—this continued until 1930 when the Committee banned diacetylmorphine analogues with no therapeutic advantage over drugs already in use, the first major legislation of this type. Bayer lost some of its trademark rights to heroin (as well as aspirin) under the 1919 Treaty of Versailles following the German defeat in World War I. Use of heroin by jazz musicians in particular was prevalent in the mid-twentieth century, including Billie Holiday, saxophonists Charlie Parker and Art Pepper, trumpeter and vocalist Chet Baker, guitarist Joe Pass and piano player/singer Ray Charles; a "staggering number of jazz musicians were addicts". It was also a problem with many rock musicians, particularly from the late 1960s through the 1990s. Pete Doherty is also a self-confessed user of heroin. Nirvana lead singer Kurt Cobain's heroin addiction was well documented. Pantera frontman Phil Anselmo turned to heroin while touring during the 1990s to cope with his back pain. James Taylor, Taylor Hawkins, Jimmy Page, John Lennon, Eric Clapton, Johnny Winter, Keith Richards, Shaun Ryder, Shane MacGowan and Janis Joplin also used heroin. Many musicians have made songs referencing their heroin usage. Society and culture Names "Diamorphine" is the Recommended International Nonproprietary Name and British Approved Name. Other synonyms for heroin include: diacetylmorphine, and morphine diacetate. Heroin is also known by many street names including dope, H, smack, junk, horse, skag, brown, and unga, among others. Legal status Asia In Hong Kong, diamorphine is regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. It is available by prescription. Anyone supplying diamorphine without a valid prescription can be fined $5,000,000 (HKD) and imprisoned for life. The penalty for trafficking or manufacturing diamorphine is a $5,000,000 (HKD) fine and life imprisonment. Possession of diamorphine without a license from the Department of Health is illegal with a $1,000,000 (HKD) fine and 7 years of jail time. Europe In the Netherlands, diamorphine is a List I drug of the Opium Law. It is available for prescription under tight regulation exclusively to long-term addicts for whom methadone maintenance treatment has failed. It cannot be used to treat severe pain or other illnesses. In the United Kingdom, diamorphine is available by prescription, though it is a restricted Class A drug. According to the 50th edition of the British National Formulary (BNF), diamorphine hydrochloride may be used in the treatment of acute pain, myocardial infarction, acute pulmonary oedema, and chronic pain. The treatment of chronic non-malignant pain must be supervised by a specialist. The BNF notes that all opioid analgesics cause dependence and tolerance but that this is "no deterrent in the control of pain in terminal illness". When used in the palliative care of cancer patients, diamorphine is often injected using a syringe driver. In Switzerland, heroin is produced in injectable or tablet form under the name Diaphin by a private company under contract to the Swiss government. 
Swiss-produced heroin has been imported into Canada with government approval. Australia In Australia, diamorphine is listed as a schedule 9 prohibited substance under the Poisons Standard (October 2015). The state of Western Australia, in its Poisons Act 1964 (Reprint 6: amendments as at 10 Sep 2004), described a schedule 9 drug as: "Poisons that are drugs of abuse, the manufacture, possession, sale or use of which should be prohibited by law except for amounts which may be necessary for educational, experimental or research purposes conducted with the approval of the Governor." North America In Canada, diamorphine is a controlled substance under Schedule I of the Controlled Drugs and Substances Act (CDSA). Any person who seeks or obtains diamorphine from a practitioner without disclosing that they obtained another prescription for it within the preceding 30 days is guilty of an indictable offense and subject to imprisonment for a term not exceeding seven years. Possession of diamorphine for the purpose of trafficking is an indictable offense and subject to imprisonment for life. In the United States, diamorphine is a Schedule I drug according to the Controlled Substances Act of 1970, making it illegal to possess without a DEA license. Possession of more than 100 grams of diamorphine or a mixture containing diamorphine is punishable with a minimum mandatory sentence of 5 years of imprisonment in a federal prison. In 2021, the US state of Oregon became the first state to decriminalize the use of heroin after voters passed Ballot Measure 110 in 2020. This measure allows people with small amounts to avoid arrest. Turkey Turkey maintains strict laws against the use, possession or trafficking of illegal drugs. If convicted of these offences, one could receive a heavy fine or a prison sentence of 4 to 24 years. Misuse of prescription medication Misused prescription medicines, such as opioids, can lead to heroin use and dependence. The number of deaths from illegal opioid overdoses has followed the increasing number of deaths caused by prescription opioid overdoses. Prescription opioids are relatively easy to obtain. This may ultimately lead to heroin injection because heroin is cheaper than prescribed pills. Economics Production Diamorphine is produced from acetylation of morphine derived from natural opium sources. One such method of heroin production involves isolation of the water-soluble components of raw opium, including morphine, in a strongly basic aqueous solution, followed by recrystallization of the morphine base by addition of ammonium chloride. The solid morphine base is then filtered out. The morphine base is then reacted with acetic anhydride, which forms heroin. This highly impure brown heroin base may then undergo further purification steps, which produce a white-colored product; the final products have a different appearance depending on purity and have different names. Heroin purity has been classified into four grades. No.4 is the purest form – white powder (salt) that is easily dissolved and injected. No.3 is "brown sugar" for smoking (base). No.1 and No.2 are unprocessed raw heroin (salt or base). Trafficking Trafficking is heavy worldwide, with the biggest producer being Afghanistan. According to a U.N.-sponsored survey, in 2004, Afghanistan accounted for production of 87 percent of the world's diamorphine. Afghan opium kills around 100,000 people annually. In 2003, The Independent reported on the scale of the Afghan trade; opium production in that country has increased rapidly since, reaching an all-time high in 2006. 
War in Afghanistan once again appeared as a facilitator of the trade. Some 3.3 million Afghans are involved in producing opium. At present, opium poppies are mostly grown in Afghanistan and in Southeast Asia, especially in the region known as the Golden Triangle straddling Burma, Thailand, Vietnam, Laos and Yunnan province in China. There is also cultivation of opium poppies in Pakistan, Mexico and Colombia. According to the DEA, the majority of the heroin consumed in the United States comes from Mexico (50%) and Colombia (43–45%) via Mexican criminal cartels such as the Sinaloa Cartel. However, these statistics may be significantly unreliable: the DEA's 50/50 split between Colombia and Mexico is contradicted by the number of hectares cultivated in each country, and in 2014 the DEA claimed most of the heroin in the US came from Colombia. The Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs such as heroin into the United States and trafficking them throughout the country. According to the Royal Canadian Mounted Police, 90% of the heroin seized in Canada (where the origin was known) came from Afghanistan. Pakistan is the destination and transit point for 40 percent of the opiates produced in Afghanistan; other destinations of Afghan opiates are Russia, Europe and Iran. A conviction for trafficking heroin carries the death penalty in most Southeast Asian, some East Asian and Middle Eastern countries (see Use of death penalty worldwide for details), among which Malaysia, Singapore and Thailand are the strictest. The penalty applies even to citizens of countries where the penalty is not in place, sometimes causing controversy when foreign visitors are arrested for trafficking, for example, the arrest of nine Australians in Bali, the death sentence given to Nola Blake in Thailand in 1987, or the hanging of the Australian citizen Van Tuong Nguyen in Singapore. Trafficking history The origins of the present international illegal heroin trade can be traced back to laws passed in many countries in the early 1900s that closely regulated the production and sale of opium and its derivatives, including heroin. At first, heroin flowed from countries where it was still legal into countries where it was no longer legal. By the mid-1920s, heroin production had been made illegal in many parts of the world. An illegal trade developed at that time between heroin labs in China (mostly in Shanghai and Tianjin) and other nations. The weakness of the government in China and conditions of civil war enabled heroin production to take root there. Chinese triad gangs eventually came to play a major role in the illicit heroin trade. The French Connection route started in the 1930s. Heroin trafficking was virtually eliminated in the US during World War II because of temporary trade disruptions caused by the war. Japan's war with China had cut the normal distribution routes for heroin, and the war had generally disrupted the movement of opium. After World War II, the Mafia took advantage of the weakness of the postwar Italian government and set up heroin labs in Sicily, which was located along the historic route opium took westward into Europe and the United States. Large-scale international heroin production effectively ended in China with the victory of the communists in the civil war in the late 1940s. The elimination of Chinese production happened at the same time that Sicily's role in the trade developed. 
Although it remained legal in some countries until after World War II, health risks, addiction, and widespread recreational use led most western countries to declare heroin a controlled substance by the latter half of the 20th century. In the late 1960s and early 1970s, the CIA supported anti-Communist Chinese Nationalists settled near the Sino-Burmese border and Hmong tribesmen in Laos. This helped the development of the Golden Triangle opium production region, which supplied about one-third of heroin consumed in the US after the 1973 American withdrawal from Vietnam. In 1999, Burma, the heartland of the Golden Triangle, was the second-largest producer of heroin, after Afghanistan. The Soviet-Afghan war led to increased production in the Pakistani-Afghan border regions, as US-backed mujaheddin militants raised money for arms from selling opium, contributing heavily to the modern Golden Crescent creation. By 1980, 60 percent of the heroin sold in the US originated in Afghanistan. It increased international production of heroin at lower prices in the 1980s. The trade shifted away from Sicily in the late 1970s as various criminal organizations violently fought with each other over the trade. The fighting also led to a stepped-up government law enforcement presence in Sicily. Following the discovery at a Jordanian airport of a toner cartridge that had been modified into an improvised explosive device, the resultant increased level of airfreight scrutiny led to a major shortage (drought) of heroin from October 2010 until April 2011. This was reported in most of mainland Europe and the UK which led to a price increase of approximately 30 percent in the cost of street heroin and increased demand for diverted methadone. The number of addicts seeking treatment also increased significantly during this period. Other heroin droughts (shortages) have been attributed to cartels restricting supply in order to force a price increase and also to a fungus that attacked the opium crop of 2009. Many people thought that the American government had introduced pathogens into the Afghanistan atmosphere in order to destroy the opium crop and thus starve insurgents of income. On 13 March 2012, Haji Bagcho, with ties to the Taliban, was convicted by a US District Court of conspiracy, distribution of heroin for importation into the United States and narco-terrorism. Based on heroin production statistics compiled by the United Nations Office on Drugs and Crime, in 2006, Bagcho's activities accounted for approximately 20 percent of the world's total production for that year. Street price The European Monitoring Centre for Drugs and Drug Addiction reports that the retail price of brown heroin varies from €14.5 per gram in Turkey to €110 per gram in Sweden, with most European countries reporting typical prices of €35–40 per gram. The price of white heroin is reported only by a few European countries and ranged between €27 and €110 per gram. The United Nations Office on Drugs and Crime claims in its 2008 World Drug Report that typical US retail prices are US$172 per gram. Research Researchers are attempting to reproduce the biosynthetic pathway that produces morphine in genetically engineered yeast. In June 2015 the S-reticuline could be produced from sugar and R-reticuline could be converted to morphine, but the intermediate reaction could not be performed.
Biology and health sciences
Recreational drugs
Health
14052
https://en.wikipedia.org/wiki/Hyperbola
Hyperbola
In mathematics, a hyperbola is a type of smooth curve lying in a plane, defined by its geometric properties or by equations for which it is the solution set. A hyperbola has two pieces, called connected components or branches, that are mirror images of each other and resemble two infinite bows. The hyperbola is one of the three kinds of conic section, formed by the intersection of a plane and a double cone. (The other conic sections are the parabola and the ellipse. A circle is a special case of an ellipse.) If the plane intersects both halves of the double cone but does not pass through the apex of the cones, then the conic is a hyperbola. Besides being a conic section, a hyperbola can arise as the locus of points whose difference of distances to two fixed foci is constant, as a curve for each point of which the rays to two fixed foci are reflections across the tangent line at that point, or as the solution of certain bivariate quadratic equations such as the reciprocal relationship In practical applications, a hyperbola can arise as the path followed by the shadow of the tip of a sundial's gnomon, the shape of an open orbit such as that of a celestial object exceeding the escape velocity of the nearest gravitational body, or the scattering trajectory of a subatomic particle, among others. Each branch of the hyperbola has two arms which become straighter (lower curvature) further out from the center of the hyperbola. Diagonally opposite arms, one from each branch, tend in the limit to a common line, called the asymptote of those two arms. So there are two asymptotes, whose intersection is at the center of symmetry of the hyperbola, which can be thought of as the mirror point about which each branch reflects to form the other branch. In the case of the curve the asymptotes are the two coordinate axes. Hyperbolas share many of the ellipses' analytical properties such as eccentricity, focus, and directrix. Typically the correspondence can be made with nothing more than a change of sign in some term. Many other mathematical objects have their origin in the hyperbola, such as hyperbolic paraboloids (saddle surfaces), hyperboloids ("wastebaskets"), hyperbolic geometry (Lobachevsky's celebrated non-Euclidean geometry), hyperbolic functions (sinh, cosh, tanh, etc.), and gyrovector spaces (a geometry proposed for use in both relativity and quantum mechanics which is not Euclidean). Etymology and history The word "hyperbola" derives from the Greek , meaning "over-thrown" or "excessive", from which the English term hyperbole also derives. Hyperbolae were discovered by Menaechmus in his investigations of the problem of doubling the cube, but were then called sections of obtuse cones. The term hyperbola is believed to have been coined by Apollonius of Perga () in his definitive work on the conic sections, the Conics. The names of the other two general conic sections, the ellipse and the parabola, derive from the corresponding Greek words for "deficient" and "applied"; all three names are borrowed from earlier Pythagorean terminology which referred to a comparison of the side of rectangles of fixed area with a given line segment. The rectangle could be "applied" to the segment (meaning, have an equal length), be shorter than the segment or exceed the segment. Definitions As locus of points A hyperbola can be defined geometrically as a set of points (locus of points) in the Euclidean plane: The midpoint of the line segment joining the foci is called the center of the hyperbola. 
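As a concrete illustration of the constant-difference property just described, the following minimal Python sketch checks numerically that points of the standard hyperbola x^2/a^2 - y^2/b^2 = 1 have focal distances whose difference is the constant 2a, with foci at (+-c, 0) and c = sqrt(a^2 + b^2). The particular semi-axes a = 3 and b = 2 are arbitrary choices for the example, not values taken from the text.

```python
import math

# Numerical check (sketch): for points on x^2/a^2 - y^2/b^2 = 1, the absolute
# difference of the distances to the two foci (+-c, 0), with c = sqrt(a^2 + b^2),
# is the constant 2a.  The semi-axes below are arbitrary example values.
a, b = 3.0, 2.0
c = math.hypot(a, b)

for t in (-1.5, 0.0, 0.4, 2.0):
    x, y = a * math.cosh(t), b * math.sinh(t)   # a point on the right branch
    d1 = math.dist((x, y), ( c, 0.0))           # distance to the right focus
    d2 = math.dist((x, y), (-c, 0.0))           # distance to the left focus
    assert abs(abs(d1 - d2) - 2 * a) < 1e-9

print("difference of focal distances is the constant 2a =", 2 * a)
```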
The line through the foci is called the major axis. It contains the vertices , which have distance to the center. The distance of the foci to the center is called the focal distance or linear eccentricity. The quotient is the eccentricity . The equation can be viewed in a different way (see diagram): If is the circle with midpoint and radius , then the distance of a point of the right branch to the circle equals the distance to the focus : is called the circular directrix (related to focus ) of the hyperbola. In order to get the left branch of the hyperbola, one has to use the circular directrix related to . This property should not be confused with the definition of a hyperbola with help of a directrix (line) below. Hyperbola with equation If the xy-coordinate system is rotated about the origin by the angle and new coordinates are assigned, then . The rectangular hyperbola (whose semi-axes are equal) has the new equation . Solving for yields Thus, in an xy-coordinate system the graph of a function with equation is a rectangular hyperbola entirely in the first and third quadrants with the coordinate axes as asymptotes, the line as major axis , the center and the semi-axis the vertices the semi-latus rectum and radius of curvature at the vertices the linear eccentricity and the eccentricity the tangent at point A rotation of the original hyperbola by results in a rectangular hyperbola entirely in the second and fourth quadrants, with the same asymptotes, center, semi-latus rectum, radius of curvature at the vertices, linear eccentricity, and eccentricity as for the case of rotation, with equation the semi-axes the line as major axis, the vertices Shifting the hyperbola with equation so that the new center is yields the new equation and the new asymptotes are and . The shape parameters remain unchanged. By the directrix property The two lines at distance from the center and parallel to the minor axis are called directrices of the hyperbola (see diagram). For an arbitrary point of the hyperbola the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity: The proof for the pair follows from the fact that and satisfy the equation The second case is proven analogously. The inverse statement is also true and can be used to define a hyperbola (in a manner similar to the definition of a parabola): For any point (focus), any line (directrix) not through and any real number with the set of points (locus of points), for which the quotient of the distances to the point and to the line is is a hyperbola. (The choice yields a parabola and if an ellipse.) Proof Let and assume is a point on the curve. The directrix has equation . With , the relation produces the equations and The substitution yields This is the equation of an ellipse () or a parabola () or a hyperbola (). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram). If , introduce new parameters so that , and then the equation above becomes which is the equation of a hyperbola with center , the x-axis as major axis and the major/minor semi axis . Construction of a directrix Because of point of directrix (see diagram) and focus are inverse with respect to the circle inversion at circle (in diagram green). Hence point can be constructed using the theorem of Thales (not shown in the diagram). The directrix is the perpendicular to line through point . 
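Several of the quantities introduced in this section, namely the linear eccentricity, the eccentricity, the semi-latus rectum, the asymptote slopes and the distance of the directrices from the center, follow directly from the semi-axes a and b. The sketch below collects the standard formulas; the sample values a = 3, b = 4 are arbitrary and chosen only so that the output is easy to check by hand (c = 5, e = 5/3).

```python
import math

def hyperbola_parameters(a: float, b: float) -> dict:
    """Standard shape parameters of the hyperbola x^2/a^2 - y^2/b^2 = 1."""
    c = math.hypot(a, b)                 # linear eccentricity (center-to-focus distance)
    return {
        "linear_eccentricity": c,
        "eccentricity": c / a,           # e = c/a > 1
        "semi_latus_rectum": b**2 / a,   # half-chord through a focus, perpendicular to the major axis
        "asymptote_slopes": (b / a, -b / a),
        "directrix_distance": a**2 / c,  # distance of each directrix from the center
        "vertices": ((a, 0.0), (-a, 0.0)),
        "foci": ((c, 0.0), (-c, 0.0)),
    }

print(hyperbola_parameters(3.0, 4.0))    # c = 5, e = 5/3, semi-latus rectum = 16/3
```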
Alternative construction of : Calculation shows, that point is the intersection of the asymptote with its perpendicular through (see diagram). As plane section of a cone The intersection of an upright double cone by a plane not through the vertex with slope greater than the slope of the lines on the cone is a hyperbola (see diagram: red curve). In order to prove the defining property of a hyperbola (see above) one uses two Dandelin spheres , which are spheres that touch the cone along circles and the intersecting (hyperbola) plane at points and It turns out: are the foci of the hyperbola. Let be an arbitrary point of the intersection curve. The generatrix of the cone containing intersects circle at point and circle at a point . The line segments and are tangential to the sphere and, hence, are of equal length. The line segments and are tangential to the sphere and, hence, are of equal length. The result is: is independent of the hyperbola point because no matter where point is, have to be on circles and line segment has to cross the apex. Therefore, as point moves along the red curve (hyperbola), line segment simply rotates about apex without changing its length. Pin and string construction The definition of a hyperbola by its foci and its circular directrices (see above) can be used for drawing an arc of it with help of pins, a string and a ruler: Choose the foci and one of the circular directrices, for example (circle with radius ) A ruler is fixed at point free to rotate around . Point is marked at distance . A string gets its one end pinned at point on the ruler and its length is made . The free end of the string is pinned to point . Take a pen and hold the string tight to the edge of the ruler. Rotating the ruler around prompts the pen to draw an arc of the right branch of the hyperbola, because of (see the definition of a hyperbola by circular directrices). Steiner generation of a hyperbola The following method to construct single points of a hyperbola relies on the Steiner generation of a non degenerate conic section: For the generation of points of the hyperbola one uses the pencils at the vertices . Let be a point of the hyperbola and . The line segment is divided into n equally-spaced segments and this division is projected parallel with the diagonal as direction onto the line segment (see diagram). The parallel projection is part of the projective mapping between the pencils at and needed. The intersection points of any two related lines and are points of the uniquely defined hyperbola. Remarks: The subdivision could be extended beyond the points and in order to get more points, but the determination of the intersection points would become more inaccurate. A better idea is extending the points already constructed by symmetry (see animation). The Steiner generation exists for ellipses and parabolas, too. The Steiner generation is sometimes called a parallelogram method because one can use other points rather than the vertices, which starts with a parallelogram instead of a rectangle. Inscribed angles for hyperbolas and the 3-point-form A hyperbola with equation is uniquely determined by three points with different x- and y-coordinates. 
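For the three-point determination just mentioned, a purely numerical illustration is possible without the inscribed angle theorem. Assuming the hyperbola is written in the axis-parallel rectangular form y = c + a/(x - b) used in this context, the three unknown parameters can be recovered from three sample points with a generic nonlinear solver. The points below are arbitrary, and scipy is assumed to be available.

```python
from scipy.optimize import fsolve

# Sketch: fit y = c + a/(x - b) through three points with pairwise different
# x- and y-coordinates.  The sample points are invented for the example.
pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 2.0)]

def equations(params):
    a, b, c = params
    return [c + a / (x - b) - y for x, y in pts]

a, b, c = fsolve(equations, x0=[1.0, 0.0, 0.0])
print(f"y = {c:.4f} + {a:.4f}/(x - {b:.4f})")

# every sample point must satisfy the fitted equation
for x, y in pts:
    assert abs(c + a / (x - b) - y) < 1e-8
```

With these sample points the solver recovers y = 1 + 4/x, which can be confirmed by substitution.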
A simple way to determine the shape parameters uses the inscribed angle theorem for hyperbolas: Analogous to the inscribed angle theorem for circles one gets the A consequence of the inscribed angle theorem for hyperbolas is the As an affine image of the unit hyperbola Another definition of a hyperbola uses affine transformations: Parametric representation An affine transformation of the Euclidean plane has the form , where is a regular matrix (its determinant is not 0) and is an arbitrary vector. If are the column vectors of the matrix , the unit hyperbola is mapped onto the hyperbola is the center, a point of the hyperbola and a tangent vector at this point. Vertices In general the vectors are not perpendicular. That means, in general are not the vertices of the hyperbola. But point into the directions of the asymptotes. The tangent vector at point is Because at a vertex the tangent is perpendicular to the major axis of the hyperbola one gets the parameter of a vertex from the equation and hence from which yields The formulae and were used. The two vertices of the hyperbola are Implicit representation Solving the parametric representation for by Cramer's rule and using , one gets the implicit representation Hyperbola in space The definition of a hyperbola in this section gives a parametric representation of an arbitrary hyperbola, even in space, if one allows to be vectors in space. As an affine image of the hyperbola Because the unit hyperbola is affinely equivalent to the hyperbola , an arbitrary hyperbola can be considered as the affine image (see previous section) of the hyperbola is the center of the hyperbola, the vectors have the directions of the asymptotes and is a point of the hyperbola. The tangent vector is At a vertex the tangent is perpendicular to the major axis. Hence and the parameter of a vertex is is equivalent to and are the vertices of the hyperbola. The following properties of a hyperbola are easily proven using the representation of a hyperbola introduced in this section. Tangent construction The tangent vector can be rewritten by factorization: This means that This property provides a way to construct the tangent at a point on the hyperbola. This property of a hyperbola is an affine version of the 3-point-degeneration of Pascal's theorem. Area of the grey parallelogram The area of the grey parallelogram in the above diagram is and hence independent of point . The last equation follows from a calculation for the case, where is a vertex and the hyperbola in its canonical form Point construction For a hyperbola with parametric representation (for simplicity the center is the origin) the following is true: The simple proof is a consequence of the equation . This property provides a possibility to construct points of a hyperbola if the asymptotes and one point are given. This property of a hyperbola is an affine version of the 4-point-degeneration of Pascal's theorem. Tangent–asymptotes triangle For simplicity the center of the hyperbola may be the origin and the vectors have equal length. If the last assumption is not fulfilled one can first apply a parameter transformation (see above) in order to make the assumption true. Hence are the vertices, span the minor axis and one gets and . For the intersection points of the tangent at point with the asymptotes one gets the points The area of the triangle can be calculated by a 2 × 2 determinant: (see rules for determinants). is the area of the rhombus generated by . 
The area of a rhombus is equal to one half of the product of its diagonals. The diagonals are the semi-axes of the hyperbola. Hence: Reciprocation of a circle The reciprocation of a circle B in a circle C always yields a conic section such as a hyperbola. The process of "reciprocation in a circle C" consists of replacing every line and point in a geometrical figure with their corresponding pole and polar, respectively. The pole of a line is the inversion of its closest point to the circle C, whereas the polar of a point is the converse, namely, a line whose closest point to C is the inversion of the point. The eccentricity of the conic section obtained by reciprocation is the ratio of the distances between the two circles' centers to the radius r of reciprocation circle C. If B and C represent the points at the centers of the corresponding circles, then Since the eccentricity of a hyperbola is always greater than one, the center B must lie outside of the reciprocating circle C. This definition implies that the hyperbola is both the locus of the poles of the tangent lines to the circle B, as well as the envelope of the polar lines of the points on B. Conversely, the circle B is the envelope of polars of points on the hyperbola, and the locus of poles of tangent lines to the hyperbola. Two tangent lines to B have no (finite) poles because they pass through the center C of the reciprocation circle C; the polars of the corresponding tangent points on B are the asymptotes of the hyperbola. The two branches of the hyperbola correspond to the two parts of the circle B that are separated by these tangent points. Quadratic equation A hyperbola can also be defined as a second-degree equation in the Cartesian coordinates in the plane, provided that the constants and satisfy the determinant condition This determinant is conventionally called the discriminant of the conic section. A special case of a hyperbola—the degenerate hyperbola consisting of two intersecting lines—occurs when another determinant is zero: This determinant is sometimes called the discriminant of the conic section. The general equation's coefficients can be obtained from known semi-major axis semi-minor axis center coordinates , and rotation angle (the angle from the positive horizontal axis to the hyperbola's major axis) using the formulae: These expressions can be derived from the canonical equation by a translation and rotation of the coordinates Given the above general parametrization of the hyperbola in Cartesian coordinates, the eccentricity can be found using the formula in Conic section#Eccentricity in terms of coefficients. 
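The discriminant test described above can be checked mechanically. Writing the general second-degree equation in the common textbook form A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0 (this coefficient naming is the sketch's own convention, not necessarily the one used elsewhere in the article), the sign of B^2 - 4*A*C decides the type of conic:

```python
def classify_conic(A, B, C, D, E, F):
    """Classify A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0 by its discriminant."""
    disc = B**2 - 4 * A * C
    if disc > 0:
        return "hyperbola (possibly degenerate: two intersecting lines)"
    if disc == 0:
        return "parabola (possibly degenerate)"
    return "ellipse (a circle if A == C and B == 0)"

print(classify_conic(1, 0, -1, 0, 0, -1))   # x^2 - y^2 = 1  -> hyperbola
print(classify_conic(0, 1, 0, 0, 0, -1))    # x*y = 1        -> hyperbola
```

The second call shows that the rotated, rectangular case xy = 1 is caught by the same test, since its discriminant is positive even though it has no x^2 or y^2 term.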
The center of the hyperbola may be determined from the formulae In terms of new coordinates, and the defining equation of the hyperbola can be written The principal axes of the hyperbola make an angle with the positive -axis that is given by Rotating the coordinate axes so that the -axis is aligned with the transverse axis brings the equation into its canonical form The major and minor semiaxes and are defined by the equations where and are the roots of the quadratic equation For comparison, the corresponding equation for a degenerate hyperbola (consisting of two intersecting lines) is The tangent line to a given point on the hyperbola is defined by the equation where and are defined by The normal line to the hyperbola at the same point is given by the equation The normal line is perpendicular to the tangent line, and both pass through the same point From the equation the left focus is and the right focus is where is the eccentricity. Denote the distances from a point to the left and right foci as and For a point on the right branch, and for a point on the left branch, This can be proved as follows: If is a point on the hyperbola the distance to the left focal point is To the right focal point the distance is If is a point on the right branch of the hyperbola then and Subtracting these equations one gets If is a point on the left branch of the hyperbola then and Subtracting these equations one gets In Cartesian coordinates Equation If Cartesian coordinates are introduced such that the origin is the center of the hyperbola and the x-axis is the major axis, then the hyperbola is called east-west-opening and the foci are the points , the vertices are . For an arbitrary point the distance to the focus is and to the second focus . Hence the point is on the hyperbola if the following condition is fulfilled Remove the square roots by suitable squarings and use the relation to obtain the equation of the hyperbola: This equation is called the canonical form of a hyperbola, because any hyperbola, regardless of its orientation relative to the Cartesian axes and regardless of the location of its center, can be transformed to this form by a change of variables, giving a hyperbola that is congruent to the original (see below). The axes of symmetry or principal axes are the transverse axis (containing the segment of length 2a with endpoints at the vertices) and the conjugate axis (containing the segment of length 2b perpendicular to the transverse axis and with midpoint at the hyperbola's center). As opposed to an ellipse, a hyperbola has only two vertices: . The two points on the conjugate axes are not on the hyperbola. It follows from the equation that the hyperbola is symmetric with respect to both of the coordinate axes and hence symmetric with respect to the origin. Eccentricity For a hyperbola in the above canonical form, the eccentricity is given by Two hyperbolas are geometrically similar to each other – meaning that they have the same shape, so that one can be transformed into the other by rigid left and right movements, rotation, taking a mirror image, and scaling (magnification) – if and only if they have the same eccentricity. Asymptotes Solving the equation (above) of the hyperbola for yields It follows from this that the hyperbola approaches the two lines for large values of . 
These two lines intersect at the center (origin) and are called asymptotes of the hyperbola With the help of the second figure one can see that The perpendicular distance from a focus to either asymptote is (the semi-minor axis). From the Hesse normal form of the asymptotes and the equation of the hyperbola one gets: The product of the distances from a point on the hyperbola to both the asymptotes is the constant which can also be written in terms of the eccentricity e as From the equation of the hyperbola (above) one can derive: The product of the slopes of lines from a point P to the two vertices is the constant In addition, from (2) above it can be shown that The product of the distances from a point on the hyperbola to the asymptotes along lines parallel to the asymptotes is the constant Semi-latus rectum The length of the chord through one of the foci, perpendicular to the major axis of the hyperbola, is called the latus rectum. One half of it is the semi-latus rectum . A calculation shows The semi-latus rectum may also be viewed as the radius of curvature at the vertices. Tangent The simplest way to determine the equation of the tangent at a point is to implicitly differentiate the equation of the hyperbola. Denoting dy/dx as y′, this produces With respect to , the equation of the tangent at point is A particular tangent line distinguishes the hyperbola from the other conic sections. Let f be the distance from the vertex V (on both the hyperbola and its axis through the two foci) to the nearer focus. Then the distance, along a line perpendicular to that axis, from that focus to a point P on the hyperbola is greater than 2f. The tangent to the hyperbola at P intersects that axis at point Q at an angle ∠PQV of greater than 45°. Rectangular hyperbola In the case the hyperbola is called rectangular (or equilateral), because its asymptotes intersect at right angles. For this case, the linear eccentricity is , the eccentricity and the semi-latus rectum . The graph of the equation is a rectangular hyperbola. Parametric representation with hyperbolic sine/cosine Using the hyperbolic sine and cosine functions , a parametric representation of the hyperbola can be obtained, which is similar to the parametric representation of an ellipse: which satisfies the Cartesian equation because Further parametric representations are given in the section Parametric equations below. Conjugate hyperbola For the hyperbola , change the sign on the right to obtain the equation of the conjugate hyperbola: (which can also be written as ). A hyperbola and its conjugate may have diameters which are conjugate. In the theory of special relativity, such diameters may represent axes of time and space, where one hyperbola represents events at a given spatial distance from the center, and the other represents events at a corresponding temporal distance from the center. and also specify conjugate hyperbolas. In polar coordinates Origin at the focus The polar coordinates used most commonly for the hyperbola are defined relative to the Cartesian coordinate system that has its origin in a focus and its x-axis pointing toward the origin of the "canonical coordinate system" as illustrated in the first diagram. In this case the angle is called true anomaly. 
Relative to this coordinate system one has that and Origin at the center With polar coordinates relative to the "canonical coordinate system" (see second diagram) one has that For the right branch of the hyperbola the range of is Eccentricity When using polar coordinates, the eccentricity of the hyperbola can be expressed as where is the limit of the angular coordinate. As approaches this limit, r approaches infinity and the denominator in either of the equations noted above approaches zero, hence: Parametric equations A hyperbola with equation can be described by several parametric equations: Through hyperbolic trigonometric functions As a rational representation Through circular trigonometric functions With the tangent slope as parameter: A parametric representation, which uses the slope of the tangent at a point of the hyperbola can be obtained analogously to the ellipse case: Replace in the ellipse case by and use formulae for the hyperbolic functions. One gets Here, is the upper, and the lower half of the hyperbola. The points with vertical tangents (vertices ) are not covered by the representation. The equation of the tangent at point is This description of the tangents of a hyperbola is an essential tool for the determination of the orthoptic of a hyperbola. Hyperbolic functions Just as the trigonometric functions are defined in terms of the unit circle, so also the hyperbolic functions are defined in terms of the unit hyperbola, as shown in this diagram. In a unit circle, the angle (in radians) is equal to twice the area of the circular sector which that angle subtends. The analogous hyperbolic angle is likewise defined as twice the area of a hyperbolic sector. Let be twice the area between the axis and a ray through the origin intersecting the unit hyperbola, and define as the coordinates of the intersection point. Then the area of the hyperbolic sector is the area of the triangle minus the curved region past the vertex at : which simplifies to the area hyperbolic cosine Solving for yields the exponential form of the hyperbolic cosine: From one gets and its inverse the area hyperbolic sine: Other hyperbolic functions are defined according to the hyperbolic cosine and hyperbolic sine, so for example Properties Reflection property The tangent at a point bisects the angle between the lines This is called the optical property or reflection property of a hyperbola. Proof Let be the point on the line with the distance to the focus (see diagram, is the semi major axis of the hyperbola). Line is the bisector of the angle between the lines . In order to prove that is the tangent line at point , one checks that any point on line which is different from cannot be on the hyperbola. Hence has only point in common with the hyperbola and is, therefore, the tangent at point . From the diagram and the triangle inequality one recognizes that holds, which means: . But if is a point of the hyperbola, the difference should be . Midpoints of parallel chords The midpoints of parallel chords of a hyperbola lie on a line through the center (see diagram). The points of any chord may lie on different branches of the hyperbola. The proof of the property on midpoints is best done for the hyperbola . 
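The sector-area definition of the hyperbolic angle given above can be verified numerically: for the unit hyperbola, twice the area of the sector bounded by the x-axis, the ray to (cosh t, sinh t) and the arc between them should reproduce t itself. A minimal sketch, computing the sector as the triangle minus the area under the arc past the vertex (1, 0), with an arbitrary test value t = 1.3 and scipy assumed available:

```python
import math
from scipy.integrate import quad

# Sketch: for the unit hyperbola x^2 - y^2 = 1, the hyperbolic angle t equals
# twice the area of the hyperbolic sector up to the point (cosh t, sinh t).
t = 1.3
x, y = math.cosh(t), math.sinh(t)

# area between the curve and the x-axis from the vertex (1, 0) out to x
under_curve, _ = quad(lambda u: math.sqrt(u * u - 1.0), 1.0, x)

sector = 0.5 * x * y - under_curve     # triangle area minus the region under the arc
print(2 * sector, t)                   # both values are approximately 1.3
```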
Because any hyperbola is an affine image of the hyperbola (see section below) and an affine transformation preserves parallelism and midpoints of line segments, the property is true for all hyperbolas: For two points of the hyperbola the midpoint of the chord is the slope of the chord is For parallel chords the slope is constant and the midpoints of the parallel chords lie on the line Consequence: for any pair of points of a chord there exists a skew reflection with an axis (set of fixed points) passing through the center of the hyperbola, which exchanges the points and leaves the hyperbola (as a whole) fixed. A skew reflection is a generalization of an ordinary reflection across a line , where all point-image pairs are on a line perpendicular to . Because a skew reflection leaves the hyperbola fixed, the pair of asymptotes is fixed, too. Hence the midpoint of a chord divides the related line segment between the asymptotes into halves, too. This means that . This property can be used for the construction of further points of the hyperbola if a point and the asymptotes are given. If the chord degenerates into a tangent, then the touching point divides the line segment between the asymptotes in two halves. Orthogonal tangents – orthoptic For a hyperbola the intersection points of orthogonal tangents lie on the circle . This circle is called the orthoptic of the given hyperbola. The tangents may belong to points on different branches of the hyperbola. In case of there are no pairs of orthogonal tangents. Pole-polar relation for a hyperbola Any hyperbola can be described in a suitable coordinate system by an equation . The equation of the tangent at a point of the hyperbola is If one allows point to be an arbitrary point different from the origin, then point is mapped onto the line , not through the center of the hyperbola. This relation between points and lines is a bijection. The inverse function maps line onto the point and line onto the point Such a relation between points and lines generated by a conic is called pole-polar relation or just polarity. The pole is the point, the polar the line. See Pole and polar. By calculation one checks the following properties of the pole-polar relation of the hyperbola: For a point (pole) on the hyperbola the polar is the tangent at this point (see diagram: ). For a pole outside the hyperbola the intersection points of its polar with the hyperbola are the tangency points of the two tangents passing (see diagram: ). For a point within the hyperbola the polar has no point with the hyperbola in common. (see diagram: ). Remarks: The intersection point of two polars (for example: ) is the pole of the line through their poles (here: ). The foci and respectively and the directrices and respectively belong to pairs of pole and polar. Pole-polar relations exist for ellipses and parabolas, too. Other properties The following are concurrent: (1) a circle passing through the hyperbola's foci and centered at the hyperbola's center; (2) either of the lines that are tangent to the hyperbola at the vertices; and (3) either of the asymptotes of the hyperbola. The following are also concurrent: (1) the circle that is centered at the hyperbola's center and that passes through the hyperbola's vertices; (2) either directrix; and (3) either of the asymptotes. Since both the transverse axis and the conjugate axis are axes of symmetry, the symmetry group of a hyperbola is the Klein four-group. 
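The midpoints-of-parallel-chords property stated above is easy to confirm numerically for x^2/a^2 - y^2/b^2 = 1: the midpoints of all chords of a fixed slope m satisfy y = (b^2/(a^2 m)) x, a line through the center. In the sketch below a, b, m and the chord offsets k are arbitrary example values chosen so that each line really meets the hyperbola in two points (here on different branches, which the property allows).

```python
import math

a, b, m = 2.0, 1.0, 0.3

def chord_midpoint(k):
    """Midpoint of the chord cut out of x^2/a^2 - y^2/b^2 = 1 by y = m*x + k."""
    # substituting the line into the hyperbola gives A*x^2 + B*x + C = 0
    A = 1 / a**2 - m**2 / b**2
    B = -2 * m * k / b**2
    C = -k**2 / b**2 - 1
    root = math.sqrt(B * B - 4 * A * C)          # positive for these sample values
    x1, x2 = (-B + root) / (2 * A), (-B - root) / (2 * A)
    xm = (x1 + x2) / 2
    return xm, m * xm + k

for k in (0.5, 1.0, 2.0, -1.5):
    xm, ym = chord_midpoint(k)
    assert abs(ym / xm - b**2 / (a**2 * m)) < 1e-9

print("midpoints lie on the diameter y =", b**2 / (a**2 * m), "* x")
```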
The rectangular hyperbolas xy = constant admit group actions by squeeze mappings which have the hyperbolas as invariant sets. Arc length The arc length of a hyperbola does not have an elementary expression. The upper half of a hyperbola can be parameterized as Then the integral giving the arc length from to can be computed as: After using the substitution , this can also be represented using the incomplete elliptic integral of the second kind with parameter : Using only real numbers, this becomes where is the incomplete elliptic integral of the first kind with parameter and is the Gudermannian function. Derived curves Several other curves can be derived from the hyperbola by inversion, the so-called inverse curves of the hyperbola. If the center of inversion is chosen as the hyperbola's own center, the inverse curve is the lemniscate of Bernoulli; the lemniscate is also the envelope of circles centered on a rectangular hyperbola and passing through the origin. If the center of inversion is chosen at a focus or a vertex of the hyperbola, the resulting inverse curves are a limaçon or a strophoid, respectively. Elliptic coordinates A family of confocal hyperbolas is the basis of the system of elliptic coordinates in two dimensions. These hyperbolas are described by the equation where the foci are located at a distance c from the origin on the x-axis, and where θ is the angle of the asymptotes with the x-axis. Every hyperbola in this family is orthogonal to every ellipse that shares the same foci. This orthogonality may be shown by a conformal map of the Cartesian coordinate system w = z + 1/z, where z= x + iy are the original Cartesian coordinates, and w=u + iv are those after the transformation. Other orthogonal two-dimensional coordinate systems involving hyperbolas may be obtained by other conformal mappings. For example, the mapping w = z2 transforms the Cartesian coordinate system into two families of orthogonal hyperbolas. Conic section analysis of the hyperbolic appearance of circles Besides providing a uniform description of circles, ellipses, parabolas, and hyperbolas, conic sections can also be understood as a natural model of the geometry of perspective in the case where the scene being viewed consists of circles, or more generally an ellipse. The viewer is typically a camera or the human eye and the image of the scene a central projection onto an image plane, that is, all projection rays pass a fixed point O, the center. The lens plane is a plane parallel to the image plane at the lens O. The image of a circle c is (Special positions where the circle plane contains point O are omitted.) These results can be understood if one recognizes that the projection process can be seen in two steps: 1) circle c and point O generate a cone which is 2) cut by the image plane, in order to generate the image. One sees a hyperbola whenever catching sight of a portion of a circle cut by one's lens plane. The inability to see very much of the arms of the visible branch, combined with the complete absence of the second branch, makes it virtually impossible for the human visual system to recognize the connection with hyperbolas. Applications Sundials Hyperbolas may be seen in many sundials. On any given day, the sun revolves in a circle on the celestial sphere, and its rays striking the point on a sundial traces out a cone of light. The intersection of this cone with the horizontal plane of the ground forms a conic section. 
At most populated latitudes and at most times of the year, this conic section is a hyperbola. In practical terms, the shadow of the tip of a pole traces out a hyperbola on the ground over the course of a day (this path is called the declination line). The shape of this hyperbola varies with the geographical latitude and with the time of the year, since those factors affect the cone of the sun's rays relative to the horizon. The collection of such hyperbolas for a whole year at a given location was called a pelekinon by the Greeks, since it resembles a double-bladed axe. Multilateration A hyperbola is the basis for solving multilateration problems, the task of locating a point from the differences in its distances to given points — or, equivalently, the difference in arrival times of synchronized signals between the point and the given points. Such problems are important in navigation, particularly on water; a ship can locate its position from the difference in arrival times of signals from a LORAN or GPS transmitters. Conversely, a homing beacon or any transmitter can be located by comparing the arrival times of its signals at two separate receiving stations; such techniques may be used to track objects and people. In particular, the set of possible positions of a point that has a distance difference of 2a from two given points is a hyperbola of vertex separation 2a whose foci are the two given points. Path followed by a particle The path followed by any particle in the classical Kepler problem is a conic section. In particular, if the total energy E of the particle is greater than zero (that is, if the particle is unbound), the path of such a particle is a hyperbola. This property is useful in studying atomic and sub-atomic forces by scattering high-energy particles; for example, the Rutherford experiment demonstrated the existence of an atomic nucleus by examining the scattering of alpha particles from gold atoms. If the short-range nuclear interactions are ignored, the atomic nucleus and the alpha particle interact only by a repulsive Coulomb force, which satisfies the inverse square law requirement for a Kepler problem. Korteweg–de Vries equation The hyperbolic trig function appears as one solution to the Korteweg–de Vries equation which describes the motion of a soliton wave in a canal. Angle trisection As shown first by Apollonius of Perga, a hyperbola can be used to trisect any angle, a well studied problem of geometry. Given an angle, first draw a circle centered at its vertex O, which intersects the sides of the angle at points A and B. Next draw the line segment with endpoints A and B and its perpendicular bisector . Construct a hyperbola of eccentricity e=2 with as directrix and B as a focus. Let P be the intersection (upper) of the hyperbola with the circle. Angle POB trisects angle AOB. To prove this, reflect the line segment OP about the line obtaining the point P' as the image of P. Segment AP' has the same length as segment BP due to the reflection, while segment PP' has the same length as segment BP due to the eccentricity of the hyperbola. As OA, OP', OP and OB are all radii of the same circle (and so, have the same length), the triangles OAP', OPP' and OPB are all congruent. Therefore, the angle has been trisected, since 3×POB = AOB. 
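The multilateration idea described above can be sketched numerically: given the differences of distances from an unknown point to several known stations, the point lies on the intersection of hyperbolas whose foci are pairs of stations, and a generic least-squares solver can recover it. Everything in the example below (station layout, true position, initial guess) is invented for illustration, and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2-D multilateration sketch: recover a position from distance differences
# to known stations (the measured quantity in practice would be arrival-time
# differences multiplied by the signal speed).
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [9.0, 7.0]])
true_pos = np.array([4.0, 3.0])

d = np.linalg.norm(stations - true_pos, axis=1)
deltas = d[1:] - d[0]                       # distance differences w.r.t. station 0

def residuals(p):
    r = np.linalg.norm(stations - p, axis=1)
    return (r[1:] - r[0]) - deltas

estimate = least_squares(residuals, x0=np.array([2.0, 2.0])).x
print(estimate)                             # should recover approximately (4, 3)
```

With noisy measurements the problem is over-determined, which is exactly why a least-squares formulation, rather than an exact intersection of two hyperbolas, is the natural choice in practice.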
Efficient portfolio frontier In portfolio theory, the locus of mean-variance efficient portfolios (called the efficient frontier) is the upper half of the east-opening branch of a hyperbola drawn with the portfolio return's standard deviation plotted horizontally and its expected value plotted vertically; according to this theory, all rational investors would choose a portfolio characterized by some point on this locus. Biochemistry In biochemistry and pharmacology, the Hill equation and Hill-Langmuir equation respectively describe biological responses and the formation of protein–ligand complexes as functions of ligand concentration. They are both rectangular hyperbolae. Hyperbolas as plane sections of quadrics Hyperbolas appear as plane sections of the following quadrics: Elliptic cone Hyperbolic cylinder Hyperbolic paraboloid Hyperboloid of one sheet Hyperboloid of two sheets
Mathematics
Geometry
null
14073
https://en.wikipedia.org/wiki/Hydropower
Hydropower
Hydropower (from Ancient Greek -, "water"), also known as water power or water energy, is the use of falling or fast-running water to produce electricity or to power machines. This is achieved by converting the gravitational potential or kinetic energy of a water source to produce power. Hydropower is a method of sustainable energy production. Hydropower is now used principally for hydroelectric power generation, and is also applied as one half of an energy storage system known as pumped-storage hydroelectricity. Hydropower is an attractive alternative to fossil fuels as it does not directly produce carbon dioxide or other atmospheric pollutants and it provides a relatively consistent source of power. Nonetheless, it has economic, sociological, and environmental downsides and requires a sufficiently energetic source of water, such as a river or elevated lake. International institutions such as the World Bank view hydropower as a low-carbon means for economic development. Since ancient times, hydropower from watermills has been used as a renewable energy source for irrigation and the operation of mechanical devices, such as gristmills, sawmills, textile mills, trip hammers, dock cranes, domestic lifts, and ore mills. A trompe, which produces compressed air from falling water, is sometimes used to power other machinery at a distance. Calculating the amount of available power A hydropower resource can be evaluated by its available power. Power is a function of the hydraulic head and volumetric flow rate. The head is the energy per unit weight (or unit mass) of water. The static head is proportional to the difference in height through which the water falls. Dynamic head is related to the velocity of moving water. Each unit of water can do an amount of work equal to its weight times the head. The power available from falling water can be calculated from the flow rate and density of water, the height of fall, and the local acceleration due to gravity: where (work flow rate out) is the useful power output (SI unit: watts) ("eta") is the efficiency of the turbine (dimensionless) is the mass flow rate (SI unit: kilograms per second) ("rho") is the density of water (SI unit: kilograms per cubic metre) is the volumetric flow rate (SI unit: cubic metres per second) is the acceleration due to gravity (SI unit: metres per second per second) ("Delta h") is the difference in height between the outlet and inlet (SI unit: metres) To illustrate, the power output of a turbine that is 85% efficient, with a flow rate of 80 cubic metres per second (2800 cubic feet per second) and a head of , is 97 megawatts: Operators of hydroelectric stations compare the total electrical energy produced with the theoretical potential energy of the water passing through the turbine to calculate efficiency. Procedures and definitions for calculation of efficiency are given in test codes such as ASME PTC 18 and IEC 60041. Field testing of turbines is used to validate the manufacturer's efficiency guarantee. Detailed calculation of the efficiency of a hydropower turbine accounts for the head lost due to flow friction in the power canal or penstock, rise in tailwater level due to flow, the location of the station and effect of varying gravity, the air temperature and barometric pressure, the density of the water at ambient temperature, and the relative altitudes of the forebay and tailbay. For precise calculations, errors due to rounding and the number of significant digits of constants must be considered. 
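The worked example above can be reproduced with a few lines of arithmetic. The head is not stated explicitly in the text, so a value of about 145 m is assumed here simply because it is consistent with the quoted output of roughly 97 MW:

```python
# Minimal sketch of P = eta * rho * Q * g * delta_h for the example in the text:
# an 85 %-efficient turbine passing 80 cubic metres per second.
eta     = 0.85      # turbine efficiency (dimensionless)
rho     = 1000.0    # density of water, kg/m^3
Q       = 80.0      # volumetric flow rate, m^3/s
g       = 9.81      # acceleration due to gravity, m/s^2
delta_h = 145.0     # head in metres (assumed; reproduces the quoted ~97 MW)

P = eta * rho * Q * g * delta_h
print(f"P = {P/1e6:.1f} MW")    # approximately 97 MW
```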
Some hydropower systems such as water wheels can draw power from the flow of a body of water without necessarily changing its height. In this case, the available power is the kinetic energy of the flowing water. Over-shot water wheels can efficiently capture both types of energy. The flow in a stream can vary widely from season to season. The development of a hydropower site requires analysis of flow records, sometimes spanning decades, to assess the reliable annual energy supply. Dams and reservoirs provide a more dependable source of power by smoothing seasonal changes in water flow. However, reservoirs have a significant environmental impact, as does alteration of naturally occurring streamflow. Dam design must account for the worst-case, "probable maximum flood" that can be expected at the site; a spillway is often included to route flood flows around the dam. A computer model of the hydraulic basin and rainfall and snowfall records are used to predict the maximum flood. Disadvantages and limitations Some disadvantages of hydropower have been identified. Dam failures can have catastrophic effects, including loss of life, damage to property and pollution of land. Dams and reservoirs can have major negative impacts on river ecosystems, such as preventing some animals from traveling upstream, cooling and de-oxygenating water released downstream, and loss of nutrients due to settling of particulates. River sediment builds river deltas, and dams prevent that sediment from restoring what is lost to erosion. Furthermore, studies have found that the construction of dams and reservoirs can result in habitat loss for some aquatic species. Large and deep dam and reservoir plants cover large areas of land, which causes greenhouse gas emissions from rotting underwater vegetation. Furthermore, although at lower levels than other renewable energy sources, hydropower has been found to produce methane equivalent to almost a billion tonnes of CO2 greenhouse gas a year. This occurs when organic matter accumulates at the bottom of the reservoir; the deoxygenation of the water then triggers anaerobic digestion. People who live near a hydro plant site are displaced during construction or when reservoir banks become unstable. Another potential disadvantage is that cultural or religious sites may block construction. Applications Mechanical power Watermills Compressed air A plentiful head of water can be made to generate compressed air directly without moving parts. In these designs, a falling column of water is deliberately mixed with air bubbles generated through turbulence or a venturi pressure reducer at the high-level intake. The mixture falls down a shaft into a subterranean, high-roofed chamber where the now-compressed air separates from the water and becomes trapped. The height of the falling water column maintains compression of the air in the top of the chamber, while an outlet, submerged below the water level in the chamber, allows water to flow back to the surface at a lower level than the intake. A separate outlet in the roof of the chamber supplies the compressed air. A facility on this principle was built on the Montreal River at Ragged Shutes near Cobalt, Ontario, in 1910 and supplied 5,000 horsepower to nearby mines. Electricity Hydroelectricity is the biggest hydropower application. Hydroelectricity generates about 15% of global electricity and provides at least 50% of the total electricity supply for more than 35 countries.
In 2021, global installed hydropower electrical capacity reached almost 1400 GW, the highest among all renewable energy technologies. Hydroelectricity generation starts with converting either the potential energy of water that is present due to the site's elevation or the kinetic energy of moving water into electrical energy. Hydroelectric power plants vary in terms of the way they harvest energy. One type involves a dam and a reservoir. The water in the reservoir is available on demand to be used to generate electricity by passing through channels that connect the dam to the reservoir. The water spins a turbine, which is connected to the generator that produces electricity. The other type is called a run-of-river plant. In this case, a barrage is built to control the flow of water in the absence of a reservoir. A run-of-river power plant needs continuous water flow and therefore has less ability to provide power on demand. The kinetic energy of flowing water is the main source of energy. Both designs have limitations. For example, dam construction can result in discomfort to nearby residents. The dam and reservoir occupy a relatively large amount of space, which may be opposed by nearby communities. Moreover, reservoirs can potentially have major environmental consequences such as harming downstream habitats. On the other hand, the limitation of the run-of-river project is the decreased efficiency of electricity generation because the process depends on the speed of the seasonal river flow. This means that the rainy season increases electricity generation compared to the dry season. The size of hydroelectric plants can vary from small plants, called micro hydro, to large plants that supply power to a whole country. As of 2019, the five largest power stations in the world are conventional hydroelectric power stations with dams. Hydroelectricity can also be used to store energy in the form of potential energy between two reservoirs at different heights with pumped storage. Water is pumped uphill into reservoirs during periods of low demand to be released for generation when demand is high or system generation is low. Other forms of electricity generation with hydropower include tidal stream generators, which use the energy of tidal flows in oceans, rivers, and human-made canal systems to generate electricity. Rain power Rain has been referred to as "one of the last unexploited energy sources in nature. When it rains, billions of litres of water can fall, which have an enormous electric potential if used in the right way." Research is being done into different methods of generating power from rain, such as by using the energy in the impact of raindrops. This is in its very early stages, with new and emerging technologies being tested, prototyped and created. Such power has been called rain power. One method that has been attempted uses hybrid solar panels, called "all-weather solar panels", that can generate electricity from both the sun and the rain. According to the zoologist and science and technology educator Luis Villazon, "A 2008 French study estimated that you could use piezoelectric devices, which generate power when they move, to extract 12 milliwatts from a raindrop. Over a year, this would amount to less than 0.001 kWh per square metre – enough to power a remote sensor." Villazon suggested a better application would be to collect the water from fallen rain and use it to drive a turbine, with an estimated generation of 3 kWh of energy per year for a 185 m2 roof.
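Villazon's rooftop figure can be sanity-checked with the same potential-energy arithmetic used for conventional hydropower. Neither the annual rainfall nor the height of the fall is given in the text, so the sketch below assumes 1 m of rain per year and a 6 m drop with no conversion losses, purely to show the order of magnitude involved:

```python
# Rough plausibility check (sketch) of the rooftop estimate quoted above.
rho, g = 1000.0, 9.81          # water density (kg/m^3) and gravity (m/s^2)
roof_area = 185.0              # m^2, from the text
annual_rain_depth = 1.0        # m of rain per year (assumed)
head = 6.0                     # m the collected water falls (assumed)

energy_joules = rho * roof_area * annual_rain_depth * g * head
print(f"{energy_joules / 3.6e6:.1f} kWh per year")   # roughly 3 kWh
```

Under those assumptions the collected rain carries roughly 3 kWh of potential energy per year, in line with the estimate quoted above.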
A microturbine-based system created by three students from the Technological University of Mexico has been used to generate electricity. The Pluvia system "uses the stream of rainwater runoff from houses' rooftop rain gutters to spin a microturbine in a cylindrical housing. Electricity generated by that turbine is used to charge 12-volt batteries." The term rain power has also been applied to hydropower systems which include the process of capturing the rain. History Ancient history Evidence suggests that the fundamentals of hydropower date to ancient Greek civilization. Other evidence indicates that the waterwheel independently emerged in China around the same period. Evidence of water wheels and watermills dates to the ancient Near East in the 4th century BC. Moreover, evidence points to the use of hydropower for irrigation machines in ancient civilizations such as Sumer and Babylonia. Studies suggest that the water wheel was the initial form of water power and that it was driven by either humans or animals. In the Roman Empire, water-powered mills were described by Vitruvius by the first century BC. The Barbegal mill, located in modern-day France, had 16 water wheels processing up to 28 tons of grain per day. Roman waterwheels were also used for sawing marble, as at the Hierapolis sawmill of the late 3rd century AD. Such sawmills had a waterwheel that drove two crank-and-connecting rods to power two saws. The same mechanism also appears in two 6th-century Eastern Roman sawmills excavated at Ephesus and Gerasa respectively. The crank and connecting rod mechanism of these Roman watermills converted the rotary motion of the waterwheel into the linear movement of the saw blades. Water-powered trip hammers and bellows in China, during the Han dynasty (202 BC – 220 AD), were initially thought to be powered by water scoops. However, some historians suggested that they were powered by waterwheels, because it was theorized that water scoops would not have had the motive force to operate their blast furnace bellows. Many texts describe the Hun waterwheel; some of the earliest ones are the Jijiupian dictionary of 40 BC, Yang Xiong's text known as the Fangyan of 15 BC, as well as Xin Lun, written by Huan Tan about 20 AD. It was also during this time that the engineer Du Shi (c. AD 31) applied the power of waterwheels to piston-bellows in forging cast iron. Ancient Indian texts dating back to the 4th century BC refer to the term cakkavattaka (turning wheel), which commentaries explain as arahatta-ghati-yanta (machine with wheel-pots attached); however, whether this was water-powered or hand-powered is disputed by scholars. According to Greek sources, India received Roman water mills and baths in the early 4th century AD. Dams, spillways, reservoirs, channels, and water balance would develop in India during the Mauryan, Gupta and Chola empires. Another example of the early use of hydropower is seen in hushing, a historic method of mining that uses a flood or torrent of water to reveal mineral veins. The method was first used at the Dolaucothi Gold Mines in Wales from 75 AD onwards. This method was further developed in Spain in mines such as Las Médulas. Hushing was also widely used in Britain in the medieval and later periods to extract lead and tin ores. It later evolved into hydraulic mining when used during the California Gold Rush in the 19th century. The Islamic Empire spanned a large region, mainly in Asia and Africa, along with other surrounding areas.
During the Islamic Golden Age and the Arab Agricultural Revolution (8th–13th centuries), hydropower was widely used and developed. Early uses of tidal power emerged along with large hydraulic factory complexes. A wide range of water-powered industrial mills were used in the region, including fulling mills, gristmills, paper mills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic Empire had these industrial mills in operation, from Al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also used water turbines while employing gears in watermills and water-raising machines. They also pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines. Islamic irrigation techniques, including Persian wheels, would be introduced to India and combined with local methods during the Delhi Sultanate and the Mughal Empire. Furthermore, in his book The Book of Knowledge of Ingenious Mechanical Devices, the Muslim mechanical engineer Al-Jazari (1136–1206) described designs for 50 devices. Many of these devices were water-powered, including clocks, a device to serve wine, and five devices to lift water from rivers or pools, three of which were animal-powered and one of which could be powered by animal or water. Moreover, they included an endless belt with jugs attached, a cow-powered shadoof (a crane-like irrigation tool), and a reciprocating device with hinged valves. 19th century In the 19th century, the French engineer Benoît Fourneyron developed the first hydropower turbine. This device was implemented in the commercial plant of Niagara Falls in 1895 and it is still operating. In the late 19th century, the English engineer William Armstrong built and operated the first private electrical power station, located at his house, Cragside, in Northumberland, England. In 1753, the French engineer Bernard Forest de Bélidor had published his book Architecture Hydraulique, which described vertical-axis and horizontal-axis hydraulic machines. The growing demand of the Industrial Revolution would drive development as well. At the beginning of the Industrial Revolution in Britain, water was the main power source for new inventions such as Richard Arkwright's water frame. Although water power gave way to steam power in many of the larger mills and factories, it was still used during the 18th and 19th centuries for many smaller operations, such as driving the bellows in small blast furnaces (e.g. the Dyfi Furnace) and gristmills, such as those built at Saint Anthony Falls, which uses the drop in the Mississippi River. Technological advances moved the open water wheel into an enclosed turbine or water motor. In 1848, the British-American engineer James B. Francis, head engineer of Lowell's Locks and Canals company, improved on these designs to create a turbine with 90% efficiency. He applied scientific principles and testing methods to the problem of turbine design. His mathematical and graphical calculation methods allowed the confident design of high-efficiency turbines to exactly match a site's specific flow conditions. The Francis reaction turbine is still in use. In the 1870s, deriving from uses in the California mining industry, Lester Allan Pelton developed the high-efficiency Pelton wheel impulse turbine, which used hydropower from the high-head streams characteristic of the Sierra Nevada.
20th century The modern history of hydropower begins in the 1900s, with large dams built not simply to power neighboring mills or factories but to provide extensive electricity for increasingly distant groups of people. Competition drove much of the global hydroelectric craze: Europe competed amongst itself to electrify first, and the United States' hydroelectric plants in Niagara Falls and the Sierra Nevada inspired bigger and bolder creations across the globe. American and Soviet financiers and hydropower experts also spread the gospel of dams and hydroelectricity across the globe during the Cold War, contributing to projects such as the Three Gorges Dam and the Aswan High Dam. Feeding the desire for large-scale electrification with water inherently required large dams across powerful rivers, which impacted public and private interests downstream and in flood zones. Inevitably, smaller communities and marginalized groups suffered. They were unable to successfully resist companies flooding them out of their homes or blocking traditional salmon passages. The stagnant water created by hydroelectric dams provides a breeding ground for pests and pathogens, leading to local epidemics. However, in some cases, a mutual need for hydropower could lead to cooperation between otherwise adversarial nations. Hydropower technology and attitudes began to shift in the second half of the 20th century. While countries had largely abandoned their small hydropower systems by the 1930s, smaller hydropower plants began to make a comeback in the 1970s, boosted by government subsidies and a push for more independent energy producers. Some politicians who once advocated for large hydropower projects in the first half of the 20th century began to speak out against them, and citizen groups organizing against dam projects increased. By the 1980s and 90s, the international anti-dam movement had made finding government or private investors for new large hydropower projects incredibly difficult and had given rise to NGOs devoted to fighting dams. Additionally, while the cost of other energy sources fell, the cost of building new hydroelectric dams increased 4% annually between 1965 and 1990, due both to the increasing costs of construction and to the decrease in high-quality building sites. In the 1990s, only 18% of the world's electricity came from hydropower. Tidal power production also emerged in the 1960s as a burgeoning alternative hydropower system, though it still has not taken hold as a strong energy contender. United States Especially at the start of the American hydropower experiment, engineers and politicians began major hydroelectricity projects to solve a problem of 'wasted potential' rather than to power a population that needed the electricity. When the Niagara Falls Power Company began looking into damming Niagara in the 1890s, the first major hydroelectric project in the United States, they struggled to transport electricity from the falls far enough away to actually reach enough people and justify installation. The project succeeded in large part due to Nikola Tesla's invention of the alternating current motor. On the other side of the country, San Francisco engineers, the Sierra Club, and the federal government fought over acceptable use of the Hetch Hetchy Valley. Despite ostensible protection within a national park, city engineers successfully won the rights to both water and power in the Hetch Hetchy Valley in 1913.
After their victory, they delivered Hetch Hetchy hydropower and water to San Francisco a decade later and at twice the promised cost, selling power to PG&E, which resold it to San Francisco residents at a profit. The American West, with its mountain rivers and lack of coal, turned to hydropower early and often, especially along the Columbia River and its tributaries. The Bureau of Reclamation began building the Hoover Dam in 1931, symbolically linking it to the job creation and economic growth priorities of the New Deal. The federal government quickly followed Hoover with the Shasta Dam and Grand Coulee Dam. Power demand in Oregon did not justify damming the Columbia until WWI revealed the weaknesses of a coal-based energy economy. The federal government then began prioritizing interconnected power—and lots of it. Electricity from all three dams poured into war production during WWII. After the war, the Grand Coulee Dam and accompanying hydroelectric projects electrified almost all of the rural Columbia Basin, but failed to improve the lives of those living and farming there the way its boosters had promised and also damaged the river ecosystem and migrating salmon populations. In the 1940s as well, the federal government took advantage of the sheer amount of unused power and flowing water from the Grand Coulee to build a nuclear site on the banks of the Columbia. The nuclear site leaked radioactive matter into the river, contaminating the entire area. Post-WWII Americans, especially engineers from the Tennessee Valley Authority, refocused from simply building domestic dams to promoting hydropower abroad. While domestic dam building continued well into the 1970s, with the Reclamation Bureau and Army Corps of Engineers building more than 150 new dams across the American West, organized opposition to hydroelectric dams sprang up in the 1950s and 60s based on environmental concerns. Environmental movements successfully shut down proposed hydropower dams in Dinosaur National Monument and the Grand Canyon, and gained more hydropower-fighting tools with 1970s environmental legislation. As nuclear and fossil fuels grew in the 70s and 80s and environmental activists pushed for river restoration, hydropower gradually faded in American importance. Africa Foreign powers and IGOs have frequently used hydropower projects in Africa as a tool to interfere in the economic development of African countries, such as the World Bank with the Kariba and Akosombo Dams, and the Soviet Union with the Aswan Dam. The Nile River especially has borne the consequences of countries both along the Nile and distant foreign actors using the river to expand their economic power or national force. After the British occupation of Egypt in 1882, the British worked with Egypt to construct the first Aswan Dam, which they heightened in 1912 and 1934 to try to hold back the Nile floods. Egyptian engineer Adriano Daninos developed a plan for the Aswan High Dam, inspired by the Tennessee Valley Authority's multipurpose dam. When Gamal Abdel Nasser took power in the 1950s, his government decided to undertake the High Dam project, publicizing it as an economic development project. After American refusal to help fund the dam, and anti-British sentiment in Egypt and British interests in neighboring Sudan combined to make the United Kingdom pull out as well, the Soviet Union funded the Aswan High Dam. Between 1977 and 1990 the dam's turbines generated one third of Egypt's electricity. 
The building of the Aswan Dam triggered a dispute between Sudan and Egypt over the sharing of the Nile, especially since the dam flooded part of Sudan and decreased the volume of water available to it. Ethiopia, also located on the Nile, took advantage of the Cold War tensions to request assistance from the United States for its own irrigation and hydropower investments in the 1960s. Although progress stalled due to the coup d'état of 1974 and the ensuing 17-year Ethiopian Civil War, Ethiopia began construction on the Grand Ethiopian Renaissance Dam in 2011. Beyond the Nile, hydroelectric projects cover the rivers and lakes of Africa. The Inga powerplant on the Congo River had been discussed since Belgian colonization in the late 19th century, and was successfully built after independence. Mobutu's government failed to regularly maintain the plants, and their capacity declined until the 1995 formation of the Southern African Power Pool created a multi-national power grid and plant maintenance program. States with an abundance of hydropower, such as the Democratic Republic of the Congo and Ghana, frequently sell excess power to neighboring countries. Foreign actors such as Chinese hydropower companies have proposed a significant number of new hydropower projects in Africa, and have already funded and consulted on many others in countries like Mozambique and Ghana. Small hydropower also played an important role in early 20th century electrification across Africa. In South Africa, small turbines powered gold mines and the first electric railway in the 1890s, and Zimbabwean farmers installed small hydropower stations in the 1930s. While interest faded as national grids improved in the second half of the century, 21st century national governments in countries including South Africa and Mozambique, as well as NGOs serving countries like Zimbabwe, have begun re-exploring small-scale hydropower to diversify power sources and improve rural electrification. Europe In the early 20th century, two major factors motivated the expansion of hydropower in Europe: in the northern countries of Norway and Sweden, high rainfall and mountains proved exceptional resources for abundant hydropower, and in the south, coal shortages pushed governments and utility companies to seek alternative power sources. Early on, Switzerland dammed the Alpine rivers and the Swiss Rhine, creating, along with Italy and Scandinavia, a Southern Europe hydropower race. In Italy's Po Valley, the main 20th-century transition was not the creation of hydropower but the transition from mechanical to electrical hydropower. Some 12,000 watermills churned in the Po watershed in the 1890s, but the first commercial hydroelectric plant, completed in 1898, signaled the end of the mechanical reign. These new large plants moved power away from rural mountainous areas to urban centers in the lower plain. Italy prioritized early near-nationwide electrification, almost entirely from hydropower, which powered its rise as a dominant European and imperial force. However, Italy failed to reach any conclusive standard for determining water rights before WWI. Modern German hydropower dam construction was built on a history of small dams powering mines and mills in the 15th century. Some parts of the German industry relied more on waterwheels than steam until the 1870s. The German government did not set out to build large dams such as the prewar Urft, Mohne, and Eder dams to expand hydropower: it mostly wanted to reduce flooding and improve navigation. 
However, hydropower quickly emerged as a bonus for all these dams, especially in the coal-poor south. Bavaria even achieved a statewide power grid by damming the Walchensee in 1924, inspired in part by loss of coal reserves after WWI. Hydropower became a symbol of regional pride and distaste for northern 'coal barons', although the north also held strong enthusiasm for hydropower. Dam building rapidly increased after WWII, aiming to increase hydropower. However, conflict accompanied the dam building and spread of hydropower: agrarian interests suffered from decreased irrigation, small mills lost water flow, and different interest groups fought over where dams should be located, controlling who benefited and whose homes they drowned.
Technology
Energy
null
14110
https://en.wikipedia.org/wiki/Holomorphic%20function
Holomorphic function
In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighbourhood of each point in a domain in complex coordinate space . The existence of a complex derivative in a neighbourhood is a very strong condition: It implies that a holomorphic function is infinitely differentiable and locally equal to its own Taylor series (is analytic). Holomorphic functions are the central objects of study in complex analysis. Though the term analytic function is often used interchangeably with "holomorphic function", the word "analytic" is defined in a broader sense to denote any function (real, complex, or of more general type) that can be written as a convergent power series in a neighbourhood of each point in its domain. That all holomorphic functions are complex analytic functions, and vice versa, is a major theorem in complex analysis. Holomorphic functions are also sometimes referred to as regular functions. A holomorphic function whose domain is the whole complex plane is called an entire function. The phrase "holomorphic at a point " means not just differentiable at , but differentiable everywhere within some neighbourhood of the point in the complex plane. Definition Given a complex-valued function of a single complex variable, the derivative of at a point in its domain is defined as the limit This is the same definition as for the derivative of a real function, except that all quantities are complex. In particular, the limit is taken as the complex number tends to , and this means that the same value is obtained for any sequence of complex values for that tends to . If the limit exists, is said to be complex differentiable at . This concept of complex differentiability shares several properties with real differentiability: It is linear and obeys the product rule, quotient rule, and chain rule. A function is holomorphic on an open set if it is complex differentiable at every point of . A function is holomorphic at a point if it is holomorphic on some neighbourhood of . A function is holomorphic on some non-open set if it is holomorphic at every point of . A function may be complex differentiable at a point but not holomorphic at this point. For example, the function is complex differentiable at , but is not complex differentiable anywhere else, and in particular not at any nearby point (see the Cauchy–Riemann equations, below). So, it is not holomorphic at . The relationship between real differentiability and complex differentiability is the following: If a complex function is holomorphic, then and have first partial derivatives with respect to and , and satisfy the Cauchy–Riemann equations: or, equivalently, the Wirtinger derivative of with respect to , the complex conjugate of , is zero: which is to say that, roughly, is functionally independent from , the complex conjugate of . If continuity is not given, the converse is not necessarily true. A simple converse is that if and have continuous first partial derivatives and satisfy the Cauchy–Riemann equations, then is holomorphic. A more satisfying converse, which is much harder to prove, is the Looman–Menchoff theorem: if is continuous, and have first partial derivatives (but not necessarily continuous), and they satisfy the Cauchy–Riemann equations, then is holomorphic. 
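For reference, the standard form of the definition and of the Cauchy–Riemann equations discussed above is written out below, using the conventional symbols f for the function, z₀ for the point, and u, v for the real and imaginary parts of f(x + iy); these symbol names are supplied here for readability rather than taken from the text above.

f'(z_0) = \lim_{z \to z_0} \frac{f(z) - f(z_0)}{z - z_0}
\qquad \text{(complex derivative of } f \text{ at } z_0\text{)}

\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}
\qquad \text{(Cauchy–Riemann equations)}

\frac{\partial f}{\partial \bar{z}} = 0
\qquad \text{(equivalent Wirtinger form)}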
Terminology The term holomorphic was introduced in 1875 by Charles Briot and Jean-Claude Bouquet, two of Augustin-Louis Cauchy's students, and derives from the Greek ὅλος (hólos) meaning "whole", and μορφή (morphḗ) meaning "form" or "appearance" or "type", in contrast to the term meromorphic derived from μέρος (méros) meaning "part". A holomorphic function resembles an entire function ("whole") in a domain of the complex plane while a meromorphic function (defined to mean holomorphic except at certain isolated poles), resembles a rational fraction ("part") of entire functions in a domain of the complex plane. Cauchy had instead used the term synectic. Today, the term "holomorphic function" is sometimes preferred to "analytic function". An important result in complex analysis is that every holomorphic function is complex analytic, a fact that does not follow obviously from the definitions. The term "analytic" is however also in wide use. Properties Because complex differentiation is linear and obeys the product, quotient, and chain rules, the sums, products and compositions of holomorphic functions are holomorphic, and the quotient of two holomorphic functions is holomorphic wherever the denominator is not zero. That is, if functions and are holomorphic in a domain , then so are , , , and . Furthermore, is holomorphic if has no zeros in ; otherwise it is meromorphic. If one identifies with the real plane , then the holomorphic functions coincide with those functions of two real variables with continuous first derivatives which solve the Cauchy–Riemann equations, a set of two partial differential equations. Every holomorphic function can be separated into its real and imaginary parts , and each of these is a harmonic function on (each satisfies Laplace's equation ), with the harmonic conjugate of . Conversely, every harmonic function on a simply connected domain is the real part of a holomorphic function: If is the harmonic conjugate of , unique up to a constant, then is holomorphic. Cauchy's integral theorem implies that the contour integral of every holomorphic function along a loop vanishes: Here is a rectifiable path in a simply connected complex domain whose start point is equal to its end point, and is a holomorphic function. Cauchy's integral formula states that every function holomorphic inside a disk is completely determined by its values on the disk's boundary. Furthermore: Suppose is a complex domain, is a holomorphic function and the closed disk is completely contained in . Let be the circle forming the boundary of . Then for every in the interior of : where the contour integral is taken counter-clockwise. The derivative can be written as a contour integral using Cauchy's differentiation formula: for any simple loop positively winding once around , and for infinitesimal positive loops around . In regions where the first derivative is not zero, holomorphic functions are conformal: they preserve angles and the shape (but not size) of small figures. Every holomorphic function is analytic. That is, a holomorphic function has derivatives of every order at each point in its domain, and it coincides with its own Taylor series at in a neighbourhood of . In fact, coincides with its Taylor series at in any disk centred at that point and lying within the domain of the function. From an algebraic point of view, the set of holomorphic functions on an open set is a commutative ring and a complex vector space. 
Additionally, the set of holomorphic functions in an open set is an integral domain if and only if the open set is connected. In fact, it is a locally convex topological vector space, with the seminorms being the suprema on compact subsets. From a geometric perspective, a function is holomorphic at if and only if its exterior derivative in a neighbourhood of is equal to for some continuous function . It follows from that is also proportional to , implying that the derivative is itself holomorphic and thus that is infinitely differentiable. Similarly, implies that any function that is holomorphic on the simply connected region is also integrable on . (For a path from to lying entirely in , define ; in light of the Jordan curve theorem and the generalized Stokes' theorem, is independent of the particular choice of path , and thus is a well-defined function on having or . Examples All polynomial functions in with complex coefficients are entire functions (holomorphic in the whole complex plane ), and so are the exponential function and the trigonometric functions and (cf. Euler's formula). The principal branch of the complex logarithm function is holomorphic on the domain . The square root function can be defined as and is therefore holomorphic wherever the logarithm is. The reciprocal function is holomorphic on . (The reciprocal function, and any other rational function, is meromorphic on .) As a consequence of the Cauchy–Riemann equations, any real-valued holomorphic function must be constant. Therefore, the absolute value the argument , the real part and the imaginary part are not holomorphic. Another typical example of a continuous function which is not holomorphic is the complex conjugate (The complex conjugate is antiholomorphic.) Several variables The definition of a holomorphic function generalizes to several complex variables in a straightforward way. A function in complex variables is analytic at a point if there exists a neighbourhood of in which is equal to a convergent power series in complex variables; the function is holomorphic in an open subset of if it is analytic at each point in . Osgood's lemma shows (using the multivariate Cauchy integral formula) that, for a continuous function , this is equivalent to being holomorphic in each variable separately (meaning that if any coordinates are fixed, then the restriction of is a holomorphic function of the remaining coordinate). The much deeper Hartogs' theorem proves that the continuity assumption is unnecessary: is holomorphic if and only if it is holomorphic in each variable separately. More generally, a function of several complex variables that is square integrable over every compact subset of its domain is analytic if and only if it satisfies the Cauchy–Riemann equations in the sense of distributions. Functions of several complex variables are in some basic ways more complicated than functions of a single complex variable. For example, the region of convergence of a power series is not necessarily an open ball; these regions are logarithmically-convex Reinhardt domains, the simplest example of which is a polydisk. However, they also come with some fundamental restrictions. Unlike functions of a single complex variable, the possible domains on which there are holomorphic functions that cannot be extended to larger domains are highly limited. Such a set is called a domain of holomorphy. A complex differential -form is holomorphic if and only if its antiholomorphic Dolbeault derivative is zero: . 
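Returning to the single-variable examples above, a concrete check of the Cauchy–Riemann criterion can be carried out for the entire function z ↦ z² and for the complex conjugate z ↦ z̄; these particular functions are chosen here only for illustration, writing z = x + iy and f = u + iv as usual.

f(z) = z^2 = (x^2 - y^2) + i\,(2xy):\quad
u_x = 2x = v_y,\quad u_y = -2y = -v_x
\quad\Rightarrow\quad \text{holomorphic on all of } \mathbb{C}

f(z) = \bar{z} = x - i\,y:\quad
u_x = 1 \neq -1 = v_y
\quad\Rightarrow\quad \text{nowhere holomorphic}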
Extension to functional analysis The concept of a holomorphic function can be extended to the infinite-dimensional spaces of functional analysis. For instance, the Fréchet or Gateaux derivative can be used to define a notion of a holomorphic function on a Banach space over the field of complex numbers.
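For reference, the contour-integral statements discussed in the Properties section above can be written out as follows; the symbols used (f holomorphic on a simply connected domain Ω, γ a closed rectifiable path in Ω, C the positively oriented boundary circle of a closed disk contained in Ω, and a a point inside C) are the conventional ones and are supplied here rather than taken from the text.

\oint_{\gamma} f(z)\, dz = 0
\qquad \text{(Cauchy's integral theorem)}

f(a) = \frac{1}{2\pi i} \oint_{C} \frac{f(z)}{z - a}\, dz
\qquad \text{(Cauchy's integral formula)}

f^{(n)}(a) = \frac{n!}{2\pi i} \oint_{C} \frac{f(z)}{(z - a)^{n+1}}\, dz
\qquad \text{(Cauchy's differentiation formula)}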
Mathematics
Calculus and analysis
null
14121
https://en.wikipedia.org/wiki/Hertz
Hertz
The hertz (symbol: Hz) is the unit of frequency in the International System of Units (SI), often described as being equivalent to one event (or cycle) per second. The hertz is an SI derived unit whose formal expression in terms of SI base units is s−1, meaning that one hertz is one per second or the reciprocal of one second. It is used only in the case of periodic events. It is named after Heinrich Rudolf Hertz (1857–1894), the first person to provide conclusive proof of the existence of electromagnetic waves. For high frequencies, the unit is commonly expressed in multiples: kilohertz (kHz), megahertz (MHz), gigahertz (GHz), terahertz (THz). Some of the unit's most common uses are in the description of periodic waveforms and musical tones, particularly those used in radio- and audio-related applications. It is also used to describe the clock speeds at which computers and other electronics are driven. The units are sometimes also used as a representation of the energy of a photon, via the Planck relation E = hν, where E is the photon's energy, ν is its frequency, and h is the Planck constant. Definition The hertz is defined as one per second for periodic events. The International Committee for Weights and Measures defined the second as "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" and then adds: "It follows that the hyperfine splitting in the ground state of the caesium 133 atom is exactly 9 192 631 770 hertz." The dimension of the unit hertz is 1/time (T−1). Expressed in base SI units, the unit is the reciprocal second (1/s). In English, "hertz" is also used as the plural form. As an SI unit, Hz can be prefixed; commonly used multiples are kHz (kilohertz, 10³ Hz), MHz (megahertz, 10⁶ Hz), GHz (gigahertz, 10⁹ Hz) and THz (terahertz, 10¹² Hz). One hertz (i.e. one per second) simply means "one periodic event occurs per second" (where the event being counted may be a complete cycle); 100 Hz means "one hundred periodic events occur per second", and so on. The unit may be applied to any periodic event—for example, a clock might be said to tick at , or a human heart might be said to beat at . The occurrence rate of aperiodic or stochastic events is expressed in reciprocal second or inverse second (1/s or s−1) in general or, in the specific case of radioactivity, in becquerels. Whereas 1 Hz (one per second) specifically refers to one cycle (or periodic event) per second, 1 Bq (also one per second) specifically refers to one radionuclide event per second on average. Even though frequency, angular velocity, angular frequency and radioactivity all have the dimension T−1, of these only frequency is expressed using the unit hertz. Thus a disc rotating at 60 revolutions per minute (rpm) is said to have an angular velocity of 2π rad/s and a frequency of rotation of 1 Hz. The correspondence between a frequency f with the unit hertz and an angular velocity ω with the unit radians per second is ω = 2πf and f = ω/(2π). History The hertz is named after the German physicist Heinrich Hertz (1857–1894), who made important scientific contributions to the study of electromagnetism. The name was established by the International Electrotechnical Commission (IEC) in 1935. 
It was adopted by the General Conference on Weights and Measures (CGPM) (Conférence générale des poids et mesures) in 1960, replacing the previous name for the unit, "cycles per second" (cps), along with its related multiples, primarily "kilocycles per second" (kc/s) and "megacycles per second" (Mc/s), and occasionally "kilomegacycles per second" (kMc/s). The term "cycles per second" was largely replaced by "hertz" by the 1970s. In some usage, the "per second" was omitted, so that "megacycles" (Mc) was used as an abbreviation of "megacycles per second" (that is, megahertz (MHz)). Applications Sound and vibration Sound is a traveling longitudinal wave, which is an oscillation of pressure. Humans perceive the frequency of a sound as its pitch. Each musical note corresponds to a particular frequency. An infant's ear is able to perceive frequencies ranging from to ; the average adult human can hear sounds between and . The range of ultrasound, infrasound and other physical vibrations such as molecular and atomic vibrations extends from a few femtohertz into the terahertz range and beyond. Electromagnetic radiation Electromagnetic radiation is often described by its frequency—the number of oscillations of the perpendicular electric and magnetic fields per second—expressed in hertz. Radio frequency radiation is usually measured in kilohertz (kHz), megahertz (MHz), or gigahertz (GHz), with the latter known as microwaves. Light is electromagnetic radiation that is even higher in frequency, and has frequencies in the range of tens of terahertz (THz, infrared) to a few petahertz (PHz, ultraviolet), with the visible spectrum being 400–790 THz. Electromagnetic radiation with frequencies in the low terahertz range (intermediate between those of the highest normally usable radio frequencies and long-wave infrared light) is often called terahertz radiation. Even higher frequencies exist, such as those of X-rays and gamma rays, which can be measured in exahertz (EHz). For historical reasons, the frequencies of light and higher frequency electromagnetic radiation are more commonly specified in terms of their wavelengths or photon energies: for a more detailed treatment of this and the above frequency ranges, see Electromagnetic spectrum. Gravitational waves Gravitational waves are also described in hertz. Current observations are conducted in the 30–7000 Hz range by laser interferometers like LIGO, and the nanohertz (1–1000 nHz) range by pulsar timing arrays. Future space-based detectors are planned to fill in the gap, with LISA operating from 0.1–10 mHz (with some sensitivity from 10 μHz to 100 mHz), and DECIGO in the 0.1–10 Hz range. Computers In computers, most central processing units (CPU) are labeled in terms of their clock rate expressed in megahertz (MHz) or gigahertz (GHz). This specification refers to the frequency of the CPU's master clock signal. This signal is nominally a square wave, which is an electrical voltage that switches between low and high logic levels at regular intervals. As the hertz has become the primary unit of measurement accepted by the general populace to determine the performance of a CPU, many experts have criticized this approach, which they claim is an easily manipulable benchmark. Some processors use multiple clock cycles to perform a single operation, while others can perform multiple operations in a single cycle. For personal computers, CPU clock speeds have ranged from approximately in the late 1970s (Atari, Commodore, Apple computers) to up to in IBM Power microprocessors. 
Various computer buses, such as the front-side bus connecting the CPU and northbridge, also operate at various frequencies in the megahertz range. SI multiples Higher frequencies than the International System of Units provides prefixes for are believed to occur naturally in the frequencies of the quantum-mechanical vibrations of massive particles, although these are not directly observable and must be inferred through other phenomena. By convention, these are typically not expressed in hertz, but in terms of the equivalent energy, which is proportional to the frequency by the factor of the Planck constant. Unicode The CJK Compatibility block in Unicode contains characters for common SI units for frequency. These are intended for compatibility with East Asian character encodings, and not for use in new documents (which would be expected to use Latin letters, e.g. "MHz"). These characters are ㎐ (Hz), ㎑ (kHz), ㎒ (MHz), ㎓ (GHz) and ㎔ (THz).
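A few worked conversions using the relations quoted above may help fix the magnitudes; the input values are illustrative and the physical constants are rounded.

f = \frac{60\ \text{revolutions}}{60\ \text{s}} = 1\ \text{Hz},
\qquad
\omega = 2\pi f \approx 6.28\ \text{rad/s}

\lambda = \frac{c}{f} = \frac{3.00\times 10^{8}\ \text{m/s}}{5\times 10^{14}\ \text{Hz}} = 600\ \text{nm}
\qquad \text{(visible light, within the 400–790 THz band)}

E = h f \approx (6.63\times 10^{-34}\ \text{J·s})(5\times 10^{14}\ \text{Hz})
\approx 3.3\times 10^{-19}\ \text{J} \approx 2.1\ \text{eV}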
Physical sciences
Electromagnetism
null
14133
https://en.wikipedia.org/wiki/Hydroponics
Hydroponics
Hydroponics is a type of horticulture and a subset of hydroculture which involves growing plants, usually crops or medicinal plants, without soil, by using water-based mineral nutrient solutions in an artificial environment. Terrestrial or aquatic plants may grow freely with their roots exposed to the nutritious liquid or the roots may be mechanically supported by an inert medium such as perlite, gravel, or other substrates. Despite inert media, roots can cause changes of the rhizosphere pH and root exudates can affect rhizosphere biology and physiological balance of the nutrient solution when secondary metabolites are produced in plants. Transgenic plants grown hydroponically allow the release of pharmaceutical proteins as part of the root exudate into the hydroponic medium. The nutrients used in hydroponic systems can come from many different organic or inorganic sources, including fish excrement, duck manure, purchased chemical fertilizers, or artificial standard or hybrid nutrient solutions. In contrast to field cultivation, plants are commonly grown hydroponically in a greenhouse or contained environment on inert media, adapted to the controlled-environment agriculture (CEA) process. Plants commonly grown hydroponically include tomatoes, peppers, cucumbers, strawberries, lettuces, and cannabis, usually for commercial use, as well as Arabidopsis thaliana, which serves as a model organism in plant science and genetics. Hydroponics offers many advantages, notably a decrease in water usage in agriculture. To grow of tomatoes using intensive farming methods requires of water; using hydroponics, ; and only using aeroponics. Hydroponic cultures lead to highest biomass and protein production compared to other growth substrates, of plants cultivated in the same environmental conditions and supplied with equal amounts of nutrients. Hydroponics is not only used on earth, but has also proven itself in plant production experiments in earth orbit. History The earliest published work on growing terrestrial plants without soil was the 1627 book Sylva Sylvarum or 'A Natural History' by Francis Bacon, printed a year after his death. As a result of his work, water culture became a popular research technique. In 1699, John Woodward published his water culture experiments with spearmint. He found that plants in less-pure water sources grew better than plants in distilled water. By 1842, a list of nine elements believed to be essential for plant growth had been compiled, and the discoveries of German botanists Julius von Sachs and Wilhelm Knop, in the years 1859–1875, resulted in a development of the technique of soilless cultivation. To quote von Sachs directly: "In the year 1860, I published the results of experiments which demonstrated that land plants are capable of absorbing their nutritive matters out of watery solutions, without the aid of soil, and that it is possible in this way not only to maintain plants alive and growing for a long time, as had long been known, but also to bring about a vigorous increase of their organic substance, and even the production of seed capable of germination." Growth of terrestrial plants without soil in mineral nutrient solutions was later called "solution culture" in reference to "soil culture". It quickly became a standard research and teaching technique in the 19th and 20th centuries and is still widely used in plant nutrition science. 
Around the 1930s plant nutritionists investigated diseases of certain plants, and thereby, observed symptoms related to existing soil conditions such as salinity. In this context, water culture experiments were undertaken with the hope of delivering similar symptoms under controlled laboratory conditions. This approach forced by Dennis Robert Hoagland led to innovative model systems (e.g., green algae Nitella) and standardized nutrient recipes playing an increasingly important role in modern plant physiology. In 1929, William Frederick Gericke of the University of California at Berkeley began publicly promoting that the principles of solution culture be used for agricultural crop production. He first termed this cultivation method "aquiculture" created in analogy to "agriculture" but later found that the cognate term aquaculture was already applied to culture of aquatic organisms. Gericke created a sensation by growing tomato vines high in his backyard in mineral nutrient solutions rather than soil. He then introduced the term Hydroponics, water culture, in 1937, proposed to him by W. A. Setchell, a phycologist with an extensive education in the classics. Hydroponics is derived from neologism υδρωπονικά (derived from Greek ύδωρ=water and πονέω=cultivate), constructed in analogy to γεωπονικά (derived from Greek γαία=earth and πονέω=cultivate), geoponica, that which concerns agriculture, replacing, γεω-, earth, with ὑδρο-, water. Despite initial successes, however, Gericke realized that the time was not yet ripe for the general technical application and commercial use of hydroponics for producing crops. He also wanted to make sure all aspects of hydroponic cultivation were researched and tested before making any of the specifics available to the public. Reports of Gericke's work and his claims that hydroponics would revolutionize plant agriculture prompted a huge number of requests for further information. Gericke had been denied use of the university's greenhouses for his experiments due to the administration's skepticism, and when the university tried to compel him to release his preliminary nutrient recipes developed at home, he requested greenhouse space and time to improve them using appropriate research facilities. While he was eventually provided greenhouse space, the university assigned Hoagland and Arnon to re-evaluate Gericke's claims and show his formula held no benefit over soil grown plant yields, a view held by Hoagland. Because of these irreconcilable conflicts, Gericke left his academic position in 1937 in a climate that was politically unfavorable and continued his research independently in his greenhouse. In 1940, Gericke, whose work is considered to be the basis for all forms of hydroponic growing, published the book, Complete Guide to Soilless Gardening. Therein, for the first time, he published his basic formulas involving the macro- and micronutrient salts for hydroponically-grown plants. As a result of research of Gericke's claims by order of the Director of the California Agricultural Experiment Station of the University of California, Claude Hutchison, Dennis Hoagland and Daniel Arnon wrote a classic 1938 agricultural bulletin, The Water Culture Method for Growing Plants Without Soil, one of the most important works on solution culture ever, which made the claim that hydroponic crop yields were no better than crop yields obtained with good-quality soils. 
Ultimately, crop yields would be limited by factors other than mineral nutrients, especially light and aeration of the culture medium. However, in the introduction to his landmark book on soilless cultivation, published two years later, Gericke pointed out that the results published by Hoagland and Arnon in comparing the yields of experimental plants in sand, soil and solution cultures, were based on several systemic errors ("...these experimenters have made the mistake of limiting the productive capacity of hydroponics to that of soil. Comparison can be only by growing as great a number of plants in each case as the fertility of the culture medium can support"). For example, the Hoagland and Arnon study did not adequately appreciate that hydroponics has other key benefits compared to soil culture including the fact that the roots of the plant have constant access to oxygen and that the plants have access to as much or as little water and nutrients as they need. This is important as one of the most common errors when cultivating plants is over- and underwatering; hydroponics prevents this from occurring as large amounts of water, which may drown root systems in soil, can be made available to the plant in hydroponics, and any water not used, is drained away, recirculated, or actively aerated, eliminating anoxic conditions in the root area. In soil, a grower needs to be very experienced to know exactly how much water to feed the plant. Too much and the plant will be unable to access oxygen because air in the soil pores is displaced, which can lead to root rot; too little and the plant will undergo water stress or lose the ability to absorb nutrients, which are typically moved into the roots while dissolved, leading to nutrient deficiency symptoms such as chlorosis or fertilizer burn. Eventually, Gericke's advanced ideas led to the implementation of hydroponics into commercial agriculture while Hoagland's views and helpful support by the University prompted Hoagland and his associates to develop several new formulas (recipes) for mineral nutrient solutions, universally known as Hoagland solution. One of the earliest successes of hydroponics occurred on Wake Island, a rocky atoll in the Pacific Ocean used as a refueling stop for Pan American Airlines. Hydroponics was used there in the 1930s to grow vegetables for the passengers. Hydroponics was a necessity on Wake Island because there was no soil, and it was prohibitively expensive to airlift in fresh vegetables. From 1943 to 1946, Daniel I. Arnon served as a major in the United States Army and used his prior expertise with plant nutrition to feed troops stationed on barren Ponape Island in the western Pacific by growing crops in gravel and nutrient-rich water because there was no arable land available. In the 1960s, Allen Cooper of England developed the nutrient film technique. The Land Pavilion at Walt Disney World's EPCOT Center opened in 1982 and prominently features a variety of hydroponic techniques. In recent decades, NASA has done extensive hydroponic research for its Controlled Ecological Life Support System (CELSS). Hydroponics research mimicking a Martian environment uses LED lighting to grow in a different color spectrum with much less heat. Ray Wheeler, a plant physiologist at Kennedy Space Center's Space Life Science Lab, believes that hydroponics will create advances within space travel, as a bioregenerative life support system. 
As of 2017, Canada had hundreds of acres of large-scale commercial hydroponic greenhouses, producing tomatoes, peppers and cucumbers. Due to technological advancements within the industry and numerous economic factors, the global hydroponics market is forecast to grow from US$226.45 million in 2016 to US$724.87 million by 2023. Techniques There are two main variations for each medium: sub-irrigation and top irrigation. For all techniques, most hydroponic reservoirs are now built of plastic, but other materials have been used, including concrete, glass, metal, vegetable solids, and wood. The containers should exclude light to prevent algae and fungal growth in the hydroponic medium. Static solution culture In static solution culture, plants are grown in containers of nutrient solution, such as glass Mason jars (typically, in-home applications), pots, buckets, tubs, or tanks. The solution is usually gently aerated but may be un-aerated. If un-aerated, the solution level is kept low enough that enough roots are above the solution so they get adequate oxygen. A hole is cut (or drilled) in the top of the reservoir for each plant; if it is a jar or tub, it may be its lid, but otherwise, cardboard, foil, paper, wood or metal may be put on top. A single reservoir can be dedicated to a single plant, or to various plants. Reservoir size can be increased as plant size increases. A home-made system can be constructed from food containers or glass canning jars with aeration provided by an aquarium pump, aquarium airline tubing, aquarium valves or even a biofilm of green algae on the glass, through photosynthesis. Clear containers can also be covered with aluminium foil, butcher paper, black plastic, or other material to eliminate the effects of negative phototropism. The nutrient solution is changed either on a schedule, such as once per week, or when the concentration drops below a certain level as determined with an electrical conductivity meter. Whenever the solution is depleted below a certain level, either water or fresh nutrient solution is added. A Mariotte's bottle, or a float valve, can be used to automatically maintain the solution level. In raft solution culture, plants are placed in a sheet of buoyant plastic that is floated on the surface of the nutrient solution. That way, the solution level never drops below the roots. Continuous-flow solution culture In continuous-flow solution culture, the nutrient solution constantly flows past the roots. It is much easier to automate than the static solution culture because sampling and adjustments to the temperature, pH, and nutrient concentrations can be made in a large storage tank that has potential to serve thousands of plants. A popular variation is the nutrient film technique or NFT, whereby a very shallow stream of water containing all the dissolved nutrients required for plant growth is recirculated in a thin layer past a bare root mat of plants in a watertight channel, with an upper surface exposed to air. As a consequence, an abundant supply of oxygen is provided to the roots of the plants. A properly designed NFT system is based on using the right channel slope, the right flow rate, and the right channel length. The main advantage of the NFT system over other forms of hydroponics is that the plant roots are exposed to adequate supplies of water, oxygen, and nutrients. 
In all other forms of production, there is a conflict between the supply of these requirements, since excessive or deficient amounts of one result in an imbalance of one or both of the others. NFT, because of its design, provides a system where all three requirements for healthy plant growth can be met at the same time, provided that the simple concept of NFT is always remembered and practised. The result of these advantages is that higher yields of high-quality produce are obtained over an extended period of cropping. A downside of NFT is that it has very little buffering against interruptions in the flow (e.g., power outages). But, overall, it is probably one of the more productive techniques. The same design characteristics apply to all conventional NFT systems. While slopes along channels of 1:100 have been recommended, in practice it is difficult to build a base for channels that is sufficiently true to enable nutrient films to flow without ponding in locally depressed areas. As a consequence, it is recommended that slopes of 1:30 to 1:40 are used. This allows for minor irregularities in the surface, but, even with these slopes, ponding and waterlogging may occur. The slope may be provided by the floor, or benches or racks may hold the channels and provide the required slope. Both methods are used and depend on local requirements, often determined by the site and crop requirements. As a general guide, flow rates for each gully should be one liter per minute. At planting, rates may be half this, and the upper limit of 2 L/min appears to be about the maximum. Flow rates beyond these extremes are often associated with nutritional problems. Depressed growth rates of many crops have been observed when channels exceed 12 meters in length. On rapidly growing crops, tests have indicated that, while oxygen levels remain adequate, nitrogen may be depleted over the length of the gully. As a consequence, channel length should not exceed 10–15 meters. In situations where this is not possible, the reductions in growth can be eliminated by placing another nutrient feed halfway along the gully and halving the flow rates through each outlet. Aeroponics Aeroponics is a system wherein roots are continuously or discontinuously kept in an environment saturated with fine drops (a mist or aerosol) of nutrient solution. The method requires no substrate and entails growing plants with their roots suspended in a deep air or growth chamber with the roots periodically wetted with a fine mist of atomized nutrients. Excellent aeration is the main advantage of aeroponics. Aeroponic techniques have proven to be commercially successful for propagation, seed germination, seed potato production, tomato production, leaf crops, and micro-greens. Since inventor Richard Stoner commercialized aeroponic technology in 1983, aeroponics has been implemented as an alternative to water-intensive hydroponic systems worldwide. A major limitation of hydroponics is the fact that of water can only hold of air, no matter whether aerators are utilized or not. Another distinct advantage of aeroponics over hydroponics is that any species of plants can be grown in a true aeroponic system because the microenvironment of an aeroponic system can be finely controlled. Another limitation of hydroponics is that certain species of plants can only survive for so long in water before they become waterlogged. 
In contrast, suspended aeroponic plants receive 100% of the available oxygen and carbon dioxide to their root zone, stems, and leaves, thus accelerating biomass growth and reducing rooting times. NASA research has shown that aeroponically grown plants have an 80% increase in dry weight biomass (essential minerals) compared to hydroponically grown plants. Aeroponics also uses 65% less water than hydroponics. NASA concluded that aeroponically grown plants require ¼ the nutrient input compared to hydroponics. Unlike hydroponically grown plants, aeroponically grown plants will not suffer transplant shock when transplanted to soil, and the technique offers growers the ability to reduce the spread of disease and pathogens. Aeroponics is also widely used in laboratory studies of plant physiology and plant pathology. Aeroponic techniques have been given special attention by NASA since a mist is easier to handle than a liquid in a zero-gravity environment. Fogponics Fogponics is a derivation of aeroponics wherein the nutrient solution is aerosolized by a diaphragm vibrating at ultrasonic frequencies. Solution droplets produced by this method tend to be 5–10 μm in diameter, smaller than those produced by forcing a nutrient solution through pressurized nozzles, as in aeroponics. The smaller size of the droplets allows them to diffuse through the air more easily, and deliver nutrients to the roots without limiting their access to oxygen. Passive sub-irrigation Passive sub-irrigation, also known as passive hydroponics, semi-hydroponics, or hydroculture, is a method wherein plants are grown in an inert porous medium that moves water and fertilizer to the roots by capillary action from a separate reservoir as necessary, reducing labor and providing a constant supply of water to the roots. In the simplest method, the pot sits in a shallow solution of fertilizer and water or on a capillary mat saturated with nutrient solution. The various hydroponic media available, such as expanded clay and coconut husk, contain more air space than more traditional potting mixes, delivering increased oxygen to the roots, which is important in epiphytic plants such as orchids and bromeliads, whose roots are exposed to the air in nature. An additional advantage of passive hydroponics is the reduction of root rot. Ebb and flow (flood and drain) sub-irrigation In its simplest form, nutrient-enriched water is pumped into containers with plants in a growing medium such as expanded clay aggregate. At regular intervals, a simple timer causes a pump to fill the containers with nutrient solution, after which the solution drains back down into the reservoir. This keeps the medium regularly flushed with nutrients and air. Run-to-waste In a run-to-waste system, nutrient and water solution is periodically applied to the medium surface. The method was invented in Bengal in 1946; for this reason it is sometimes referred to as "The Bengal System". This method can be set up in various configurations. In its simplest form, a nutrient-and-water solution is manually applied one or more times per day to a container of inert growing media, such as rockwool, perlite, vermiculite, coco fibre, or sand. In a slightly more complex system, it is automated with a delivery pump, a timer and irrigation tubing to deliver nutrient solution with a delivery frequency that is governed by the key parameters of plant size, plant growing stage, climate, substrate, and substrate conductivity, pH, and water content.
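A minimal sketch of the timer-and-sensor logic behind such an automated run-to-waste setup might look like the following; the dosing interval, moisture threshold, and dose length are hypothetical values, and the sensor and pump callables stand in for whatever hardware a particular grower uses.

# Illustrative sketch only: a timer-plus-sensor dosing rule for a small
# run-to-waste system. All thresholds and interfaces here are hypothetical.
import time

IRRIGATION_INTERVAL_S = 3 * 60 * 60   # dose at least every 3 hours (assumed)
MIN_WATER_CONTENT = 0.55              # dose sooner if substrate drops below 55% (assumed)
DOSE_SECONDS = 90                     # pump run time per dosing event (assumed)

def should_irrigate(seconds_since_last_dose, substrate_water_content):
    """Dose on a fixed schedule, or earlier if the substrate dries out."""
    if seconds_since_last_dose >= IRRIGATION_INTERVAL_S:
        return True
    return substrate_water_content < MIN_WATER_CONTENT

def irrigation_loop(read_water_content, run_pump):
    """read_water_content() -> float and run_pump(seconds) are supplied by the grower's hardware."""
    last_dose = time.monotonic()
    while True:
        if should_irrigate(time.monotonic() - last_dose, read_water_content()):
            run_pump(DOSE_SECONDS)
            last_dose = time.monotonic()
        time.sleep(60)  # re-check the substrate once a minute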
In a commercial setting, watering frequency is multi-factorial and governed by computers or PLCs. Commercial hydroponics production of large plants like tomatoes, cucumber, and peppers uses one form or another of run-to-waste hydroponics. Deep water culture The hydroponic method of plant production by means of suspending the plant roots in a solution of nutrient-rich, oxygenated water. Traditional methods favor the use of plastic buckets and large containers with the plant contained in a net pot suspended from the centre of the lid and the roots suspended in the nutrient solution. The solution is oxygen saturated by an air pump combined with porous stones. With this method, the plants grow much faster because of the high amount of oxygen that the roots receive. The Kratky Method is similar to deep water culture, but uses a non-circulating water reservoir. Top-fed deep water culture Top-fed deep water culture is a technique involving delivering highly oxygenated nutrient solution direct to the root zone of plants. While deep water culture involves the plant roots hanging down into a reservoir of nutrient solution, in top-fed deep water culture the solution is pumped from the reservoir up to the roots (top feeding). The water is released over the plant's roots and then runs back into the reservoir below in a constantly recirculating system. As with deep water culture, there is an airstone in the reservoir that pumps air into the water via a hose from outside the reservoir. The airstone helps add oxygen to the water. Both the airstone and the water pump run 24 hours a day. The biggest advantage of top-fed deep water culture over standard deep water culture is increased growth during the first few weeks. With deep water culture, there is a time when the roots have not reached the water yet. With top-fed deep water culture, the roots get easy access to water from the beginning and will grow to the reservoir below much more quickly than with a deep water culture system. Once the roots have reached the reservoir below, there is not a huge advantage with top-fed deep water culture over standard deep water culture. However, due to the quicker growth in the beginning, grow time can be reduced by a few weeks. Advantages Space optimization: Vertical farming and advanced control technologies maximize the use of limited spaces. Resource management: Reduced water and fertilizer consumption through the recycling of nutrient solutions. Protection for sensitive species: Controlled conditions shield plants from climatic extremes, pests, and diseases. Hydrozones lie at the intersection of urban agriculture innovations, environmental concerns, and biodiversity conservation efforts. Notable examples include specialized botanical gardens, cultivation facilities for threatened endemic species, and domestic spaces for advanced horticulture enthusiasts. Rotary A rotary hydroponic garden is a style of commercial hydroponics created within a circular frame which rotates continuously during the entire growth cycle of whatever plant is being grown. While system specifics vary, systems typically rotate once per hour, giving a plant 24 full turns within the circle each 24-hour period. Within the center of each rotary hydroponic garden can be a high intensity grow light, designed to simulate sunlight, often with the assistance of a mechanized timer. Each day, as the plants rotate, they are periodically watered with a hydroponic growth solution to provide all nutrients necessary for robust growth. 
Due to the plants' continuous fight against gravity, plants typically mature much more quickly than when grown in soil or other traditional hydroponic growing systems. Because rotary hydroponic systems have a small size, they allow for more plant material to be grown per area of floor space than other traditional hydroponic systems. Rotary hydroponic systems should be avoided in most circumstances, mainly because of their experimental nature and their high costs for finding, buying, operating, and maintaining them. Substrates (growing support materials) Different media are appropriate for different growing techniques. Rock wool Rock wool (mineral wool) is the most widely used medium in hydroponics. Rock wool is an inert substrate suitable for both run-to-waste and recirculating systems. Rock wool is made from molten rock, basalt or 'slag' that is spun into bundles of single filament fibres, and bonded into a medium capable of capillary action, and is, in effect, protected from most common microbiological degradation. Rock wool is typically used only for the seedling stage, or with newly cut clones, but can remain with the plant base for its lifetime. Rock wool has many advantages and some disadvantages, the latter being the possible skin irritancy (mechanical) whilst handling (1:1000). Flushing with cold water usually brings relief. Advantages include its proven efficiency and effectiveness as a commercial hydroponic substrate. Most of the rock wool sold to date is a non-hazardous, non-carcinogenic material, falling under Note Q of the European Union Classification Packaging and Labeling Regulation (CLP). Mineral wool products can be engineered to hold large quantities of water and air that aid root growth and nutrient uptake in hydroponics; their fibrous nature also provides a good mechanical structure to hold the plant stable. The naturally high pH of mineral wool makes it initially unsuitable for plant growth and requires "conditioning" to produce a wool with an appropriate, stable pH. Expanded clay aggregate Baked clay pellets are suitable for hydroponic systems in which all nutrients are carefully controlled in water solution. The clay pellets are inert, pH-neutral, and do not contain any nutrient value. The clay is formed into round pellets and fired in rotary kilns at . This causes the clay to expand, like popcorn, and become porous. It is light in weight, and does not compact over time. The shape of an individual pellet can be irregular or uniform depending on brand and manufacturing process. The manufacturers consider expanded clay to be an ecologically sustainable and re-usable growing medium because of its ability to be cleaned and sterilized, typically by washing in solutions of white vinegar, chlorine bleach, or hydrogen peroxide (), and rinsing completely. Another view is that clay pebbles are best not re-used even when they are cleaned, due to root growth that may enter the medium. Breaking open a clay pebble after use can reveal this growth. Growstones Growstones, made from glass waste, have both more air and water retention space than perlite and peat. This aggregate holds more water than parboiled rice hulls. Growstones by volume consist of 0.5 to 5% calcium carbonate – for a standard 5.1 kg bag of Growstones that corresponds to 25.8 to 258 grams of calcium carbonate. The remainder is soda-lime glass. Coconut Coir Coconut coir, also known as coir peat, is a natural byproduct derived from coconut processing. 
The outer husk of a coconut consists of fibers which are commonly used to make a myriad of items ranging from floor mats to brushes. After the long fibers are used for those applications, the dust and short fibers are merged to create coir. Coconuts absorb high levels of nutrients throughout their life cycle, so the coir must undergo a maturation process before it becomes a viable growth medium. This process removes salt, tannins and phenolic compounds through substantial water washing. Contaminated water is a byproduct of this process, as three hundred to six hundred liters of water per one cubic meter of coir are needed. Additionally, this maturation can take up to six months and one study concluded the working conditions during the maturation process are dangerous and would be illegal in North America and Europe. Despite requiring attention, posing health risks and environmental impacts, coconut coir has impressive material properties. When exposed to water, the brown, dry, chunky and fibrous material expands nearly three or four times its original size. This characteristic combined with coconut coir's water retention capacity and resistance to pests and diseases make it an effective growth medium. Used as an alternative to rock wool, coconut coir offers optimized growing conditions. Rice husks Parboiled rice husks (PBH) are an agricultural byproduct that would otherwise have little use. They decay over time, and allow drainage, and even retain less water than growstones. A study showed that rice husks did not affect the effects of plant growth regulators. Perlite Perlite is a volcanic rock that has been superheated into very lightweight expanded glass pebbles. It is used loose or in plastic sleeves immersed in the water. It is also used in potting soil mixes to decrease soil density. It does contain a high amount of fluorine which could be harmful to some plants. Perlite has similar properties and uses to vermiculite but, in general, holds more air and less water and is buoyant. Vermiculite Like perlite, vermiculite is a mineral that has been superheated until it has expanded into light pebbles. Vermiculite holds more water than perlite and has a natural "wicking" property that can draw water and nutrients in a passive hydroponic system. If too much water and not enough air surrounds the plants roots, it is possible to gradually lower the medium's water-retention capability by mixing in increasing quantities of perlite. Pumice Like perlite, pumice is a lightweight, mined volcanic rock that finds application in hydroponics. Sand Sand is cheap and easily available. However, it is heavy, does not hold water very well, and it must be sterilized between uses. Gravel The same type that is used in aquariums, though any small gravel can be used, provided it is washed first. Indeed, plants growing in a typical traditional gravel filter bed, with water circulated using electric powerhead pumps, are in effect being grown using gravel hydroponics, also termed "nutriculture". Gravel is inexpensive, easy to keep clean, drains well and will not become waterlogged. However, it is also heavy, and, if the system does not provide continuous water, the plant roots may dry out. Wood fiber Wood fibre, produced from steam friction of wood, is an efficient organic substrate for hydroponics. It has the advantage that it keeps its structure for a very long time. Wood wool (i.e. wood slivers) have been used since the earliest days of the hydroponics research. 
However, more recent research suggests that wood fibre may have detrimental effects on "plant growth regulators". Sheep wool Wool from shearing sheep is a little-used yet promising renewable growing medium. In a study comparing wool with peat slabs, coconut fibre slabs, perlite and rockwool slabs to grow cucumber plants, sheep wool had a greater air capacity of 70%, which decreased with use to a comparable 43%, and a water capacity that increased from 23% to 44% with use. Using sheep wool resulted in the greatest yield out of the tested substrates, while application of a biostimulator consisting of humic acid, lactic acid and Bacillus subtilis improved yields in all substrates. Brick shards Brick shards have similar properties to gravel. They have the added disadvantages of possibly altering the pH and requiring extra cleaning before reuse. Polystyrene packing peanuts Polystyrene packing peanuts are inexpensive, readily available, and have excellent drainage. However, they can be too lightweight for some uses. They are used mainly in closed-tube systems. Note that non-biodegradable polystyrene peanuts must be used; biodegradable packing peanuts will decompose into a sludge. Plants may absorb styrene and pass it to their consumers; this is a possible health risk. Nutrient solutions Inorganic hydroponic solutions The formulation of hydroponic solutions is an application of plant nutrition, with nutrient deficiency symptoms mirroring those found in traditional soil-based agriculture. However, the underlying chemistry of hydroponic solutions can differ from soil chemistry in many significant ways. Important differences include: Unlike soil, hydroponic nutrient solutions do not have cation-exchange capacity (CEC) from clay particles or organic matter. The absence of CEC and soil pores means the pH, oxygen saturation, and nutrient concentrations can change much more rapidly in hydroponic setups than is possible in soil. Selective absorption of nutrients by plants often imbalances the amount of counterions in solution. This imbalance can rapidly affect solution pH and the ability of plants to absorb nutrients of similar ionic charge (see the article on membrane potential). For instance, nitrate anions are often consumed rapidly by plants to form proteins, leaving an excess of cations in solution. This cation imbalance can lead to deficiency symptoms in other cation-based nutrients (e.g. Mg²⁺) even when an ideal quantity of those nutrients is dissolved in the solution. Depending on the pH or on the presence of water contaminants, nutrients such as iron can precipitate from the solution and become unavailable to plants. Routine pH adjustments, buffering of the solution, or the use of chelating agents are therefore often necessary. Unlike soil types, which can vary greatly in their composition, hydroponic solutions are often standardized and require routine maintenance for plant cultivation. Under controlled laboratory conditions, hydroponic solutions are periodically pH-adjusted to near neutral (pH 6.0) and aerated with oxygen. Also, water levels must be refilled to account for transpiration losses, and nutrient solutions require re-fortification to correct the nutrient imbalances that occur as plants grow and deplete nutrient reserves. Sometimes the regular measurement of nitrate ions is used as a key parameter to estimate the remaining proportions and concentrations of other essential nutrient ions and so restore a balanced solution.
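The nitrate-based estimate just described can be made concrete with a small calculation. The following Python sketch is illustrative only and is not part of the source text: the target concentrations are loosely Hoagland-like figures chosen for the example, the function name is hypothetical, and the proportional-depletion assumption it encodes is exactly the approximation that nitrate-based monitoring relies on.

```python
# Illustrative sketch (not from the source): estimating top-up doses from a
# nitrate reading, assuming other ions deplete roughly in proportion to nitrate.
# Target values below are hypothetical, loosely Hoagland-like numbers (ppm).

TARGET_PPM = {"NO3-N": 210, "K": 235, "Ca": 200, "Mg": 48, "P": 31}

def estimate_refortification(measured_no3_n_ppm, target_ppm=TARGET_PPM):
    """Estimate how many ppm of each ion to add back to the reservoir.

    Assumes all ions have been depleted by the same fraction as nitrate-N,
    which is only a rough approximation (plants take up ions selectively).
    """
    fraction_remaining = measured_no3_n_ppm / target_ppm["NO3-N"]
    estimates = {}
    for ion, target in target_ppm.items():
        estimated_current = target * fraction_remaining
        estimates[ion] = round(target - estimated_current, 1)  # ppm to add back
    return estimates

if __name__ == "__main__":
    # Example: nitrate-N has fallen from 210 ppm to a measured 150 ppm.
    print(estimate_refortification(150))
    # -> {'NO3-N': 60.0, 'K': 67.1, 'Ca': 57.1, 'Mg': 13.7, 'P': 8.9}
```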
Well-known examples of standardized, balanced nutrient solutions include the Hoagland solution, the Long Ashton nutrient solution, and the Knop solution. As in conventional agriculture, nutrients should be adjusted to satisfy Liebig's law of the minimum for each specific plant variety. Nevertheless, generally acceptable concentrations for nutrient solutions exist, with minimum and maximum concentration ranges for most plants being somewhat similar. Most nutrient solutions are mixed to have concentrations between 1,000 and 2,500 ppm. Acceptable concentrations for the individual nutrient ions, which comprise that total ppm figure, are summarized in the following table. For essential nutrients, concentrations below these ranges often lead to nutrient deficiencies, while exceeding these ranges can lead to nutrient toxicity. Optimum nutrition concentrations for plant varieties are found empirically by experience or by plant tissue tests. Organic hydroponic solutions Organic fertilizers can be used to supplement or entirely replace the inorganic compounds used in conventional hydroponic solutions. However, using organic fertilizers introduces a number of challenges that are not easily resolved. Examples include: organic fertilizers are highly variable in their nutritional compositions in terms of minerals and different organic and inorganic species, and even similar materials can differ significantly based on their source (e.g. the quality of manure varies based on an animal's diet); organic fertilizers are often sourced from animal byproducts, making disease transmission a serious concern for plants grown for human consumption or animal forage; organic fertilizers are often particulate and can clog substrates or other growing equipment, so sieving or milling the organic materials to fine dusts is often necessary; biochemical degradation and conversion processes are required before the mineral ingredients of organic materials become available to plants; some organic materials (particularly manures and offal) can further degrade to emit foul odors under anaerobic conditions; many organic molecules (e.g. sugars) demand additional oxygen during aerobic degradation, oxygen that is essential for cellular respiration in the plant roots; and organic compounds (e.g. sugars, vitamins, among others) are not necessary for normal plant nutrition. Nevertheless, if precautions are taken, organic fertilizers can be used successfully in hydroponics. Organically sourced macronutrients Examples of suitable materials, with their average nutritional contents tabulated in terms of percent dried mass, are listed in the following table. Organically sourced micronutrients Micronutrients can be sourced from organic fertilizers as well. For example, composted pine bark is high in manganese and is sometimes used to fulfill that mineral requirement in hydroponic solutions. To satisfy requirements for National Organic Programs, pulverized, unrefined minerals (e.g. gypsum, calcite, and glauconite) can also be added to satisfy a plant's nutritional needs. Additives Compounds can be added in both organic and conventional hydroponic systems to improve nutrition acquisition and uptake by the plant. Chelating agents and humic acid have been shown to increase nutrient uptake. Additionally, plant growth-promoting rhizobacteria (PGPR), which are regularly utilized in field and greenhouse agriculture, have been shown to benefit plant growth, development, and nutrient acquisition in hydroponics. Some PGPR are known to increase nitrogen fixation.
While nitrogen is generally abundant in hydroponic systems with properly maintained fertilizer regimens, Azospirillum and Azotobacter genera can help maintain mobilized forms of nitrogen in systems with higher microbial growth in the rhizosphere. Traditional fertilizer methods often lead to high accumulated concentrations of nitrate within plant tissue at harvest. Rhodopseudomonas palustris has been shown to increase nitrogen use efficiency, increase yield, and decrease nitrate concentration by 88% at harvest compared to traditional hydroponic fertilizer methods in leafy greens. Many Bacillus spp., Pseudomonas spp. and Streptomyces spp. convert forms of phosphorus in the soil that are unavailable to the plant into soluble anions by decreasing soil pH, releasing phosphorus bound in chelated form that is available in a wider pH range, and mineralizing organic phosphorus. Some studies have found that Bacillus inoculants allow hydroponic leaf lettuce to overcome high salt stress that would otherwise reduce growth. This can be especially beneficial in regions with high electrical conductivity or salt content in their water source, and could potentially avoid costly reverse osmosis filtration systems while maintaining high crop yield. Tools Common equipment Managing nutrient concentrations, oxygen saturation, and pH values within acceptable ranges is essential for successful hydroponic horticulture. Common tools used to manage hydroponic solutions include: electrical conductivity meters, which estimate nutrient ppm by measuring how well a solution transmits an electric current; pH meters, which use an electric current to determine the concentration of hydrogen ions in solution; oxygen electrodes, electrochemical sensors for determining the oxygen concentration in solution; litmus paper, disposable pH indicator strips that determine hydrogen ion concentrations by a color-changing chemical reaction; and graduated cylinders or measuring spoons to measure out premixed, commercial hydroponic solutions. Equipment Chemical equipment can also be used to perform accurate chemical analyses of nutrient solutions. Examples include: balances for accurately measuring materials; laboratory glassware, such as burettes and pipettes, for performing titrations; colorimeters for solution tests which apply the Beer–Lambert law; spectrophotometers to measure the concentrations of the key parameter nitrate and other nutrients, such as phosphate, sulfate or iron; and containers for growing and storing the plants. Using chemical equipment for hydroponic solutions can be beneficial to growers of any background because nutrient solutions are often reusable. Because nutrient solutions are virtually never completely depleted, and should never be, due to the unacceptably low osmotic pressure that would result, re-fortification of old solutions with new nutrients can save growers money and can control point-source pollution, a common cause of the eutrophication of nearby lakes and streams. Software Although pre-mixed concentrated nutrient solutions are generally purchased from commercial nutrient manufacturers by hydroponic hobbyists and small commercial growers, several tools exist to help anyone prepare their own solutions without extensive knowledge about chemistry. The free and open-source tools HydroBuddy and HydroCal have been created by professional chemists to help any hydroponics grower prepare their own nutrient solutions.
The first program is available for Windows, Mac, and Linux, while the second can be used through a simple JavaScript interface. Both programs allow for basic nutrient solution preparation, although HydroBuddy provides added functionality to use and save custom substances, save formulations and predict electrical conductivity values. Mixing solutions Mixing hydroponic solutions from individual salts is often impractical for hobbyists or small-scale commercial growers, because commercial products are available at reasonable prices (an illustrative example of this kind of salt calculation is sketched at the end of this section). However, even when buying commercial products, multi-component fertilizers are popular. Often these products are bought as three-part formulas which emphasize certain nutritional roles. For example, solutions for vegetative growth (i.e. high in nitrogen), flowering (i.e. high in potassium and phosphorus), and micronutrient solutions (i.e. with trace minerals) are popular. The timing and application of these multi-part fertilizers should coincide with a plant's growth stage. For example, at the end of an annual plant's life cycle, a plant should be restricted from high-nitrogen fertilizers; in most plants, nitrogen restriction inhibits vegetative growth and helps induce flowering. Additional improvements Growrooms With pest problems reduced and nutrients constantly fed to the roots, productivity in hydroponics is high; however, growers can further increase yield by manipulating a plant's environment through sophisticated growrooms. CO₂ enrichment To increase yield further, some sealed greenhouses inject CO₂ into their environment to help improve growth and plant fertility.
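To illustrate the kind of arithmetic that nutrient calculators such as HydroBuddy perform when solutions are mixed from individual salts, the sketch below converts a mass of a single fertilizer salt into the elemental ppm it contributes to a reservoir. It is a minimal, hypothetical example rather than the tools' actual code; the molar masses are standard values, and calcium nitrate tetrahydrate is used only as a common example salt.

```python
# Minimal sketch of a fertilizer-salt calculation (illustrative, not from the source).
# Converts grams of a salt dissolved in a reservoir into elemental ppm (mg/L).

# Fraction of the salt's mass contributed by each element of interest.
# Calcium nitrate tetrahydrate, Ca(NO3)2·4H2O, molar mass ≈ 236.15 g/mol:
#   Ca: 40.08 / 236.15 ≈ 0.170      N: 2 × 14.007 / 236.15 ≈ 0.119
CALCIUM_NITRATE = {"Ca": 40.08 / 236.15, "N": 2 * 14.007 / 236.15}

def ppm_from_salt(grams_salt, litres_water, mass_fractions):
    """Return the ppm (mg/L) of each element contributed by the dissolved salt."""
    mg_salt = grams_salt * 1000.0
    return {element: round(fraction * mg_salt / litres_water, 1)
            for element, fraction in mass_fractions.items()}

if __name__ == "__main__":
    # Example: 100 g of calcium nitrate tetrahydrate in a 100 L reservoir.
    print(ppm_from_salt(100, 100, CALCIUM_NITRATE))
    # -> roughly {'Ca': 169.7, 'N': 118.6}
```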
Hydrophobe
In chemistry, hydrophobicity is the chemical property of a molecule (called a hydrophobe) that is seemingly repelled from a mass of water. In contrast, hydrophiles are attracted to water. Hydrophobic molecules tend to be nonpolar and, thus, prefer other neutral molecules and nonpolar solvents. Because water molecules are polar, hydrophobes do not dissolve well among them. Hydrophobic molecules in water often cluster together, forming micelles. Water on hydrophobic surfaces will exhibit a high contact angle. Examples of hydrophobic molecules include the alkanes, oils, fats, and greasy substances in general. Hydrophobic materials are used for oil removal from water, the management of oil spills, and chemical separation processes to remove non-polar substances from polar compounds. The term hydrophobic, which comes from the Ancient Greek for "having a fear of water", is often used interchangeably with lipophilic, "fat-loving". However, the two terms are not synonymous. While hydrophobic substances are usually lipophilic, there are exceptions, such as the silicones and fluorocarbons. Chemical background The hydrophobic interaction is mostly an entropic effect originating from the disruption of the highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute, causing the water to form a clathrate-like structure around the non-polar molecules. This structure is more highly ordered than free water molecules, because the water molecules arrange themselves to interact as much as possible with one another, and so it corresponds to a lower-entropy state; as a result, non-polar molecules clump together to reduce the surface area exposed to water and thereby minimize this ordering, maximizing the entropy of the system. Thus, the two immiscible phases (hydrophilic vs. hydrophobic) will change so that their corresponding interfacial area will be minimal. This effect can be visualized in the phenomenon called phase separation. Superhydrophobicity Superhydrophobic surfaces, such as the leaves of the lotus plant, are those that are extremely difficult to wet. The contact angle of a water droplet on such a surface exceeds 150°. This is referred to as the lotus effect, and is primarily a physical property related to interfacial tension, rather than a chemical property. Theory In 1805, Thomas Young defined the contact angle θ by analyzing the forces acting on a fluid droplet resting on a solid surface surrounded by a gas: γSG = γSL + γLG cos θ, where γSG is the interfacial tension between the solid and gas, γSL is the interfacial tension between the solid and liquid, and γLG is the interfacial tension between the liquid and gas. θ can be measured using a contact angle goniometer. Wenzel determined that when the liquid is in intimate contact with a microstructured surface, θ will change to θW*, given by cos θW* = r cos θ, where r is the ratio of the actual surface area to the projected area. Wenzel's equation shows that microstructuring a surface amplifies the natural tendency of the surface. A hydrophobic surface (one that has an original contact angle greater than 90°) becomes more hydrophobic when microstructured – its new contact angle becomes greater than the original. However, a hydrophilic surface (one that has an original contact angle less than 90°) becomes more hydrophilic when microstructured – its new contact angle becomes less than the original. Cassie and Baxter found that if the liquid is suspended on the tops of microstructures, θ will change to θCB*, given by cos θCB* = φ(cos θ + 1) − 1, where φ is the area fraction of the solid that touches the liquid. Liquid in the Cassie–Baxter state is more mobile than in the Wenzel state.
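A short numerical sketch (not part of the source text) can make the Wenzel and Cassie–Baxter relations above concrete; as discussed next, comparing the two predicted apparent angles indicates which wetting state is more likely. The roughness ratio, solid fraction, and intrinsic contact angle used here are arbitrary illustrative values.

```python
import math

def wenzel_angle(theta_deg, r):
    """Wenzel apparent contact angle: cos θW* = r·cos θ (r = actual/projected area)."""
    c = r * math.cos(math.radians(theta_deg))
    c = max(-1.0, min(1.0, c))          # clamp: complete wetting/drying beyond ±1
    return math.degrees(math.acos(c))

def cassie_baxter_angle(theta_deg, phi):
    """Cassie–Baxter apparent angle: cos θCB* = φ·(cos θ + 1) − 1 (φ = wetted solid fraction)."""
    c = phi * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

if __name__ == "__main__":
    theta, r, phi = 110.0, 1.8, 0.2     # illustrative values only
    w, cb = wenzel_angle(theta, r), cassie_baxter_angle(theta, phi)
    print(f"Wenzel: {w:.1f} deg, Cassie-Baxter: {cb:.1f} deg")
    # The state predicting the smaller apparent angle is the one expected to exist.
    print("Predicted state:", "Wenzel" if w < cb else "Cassie-Baxter")
```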
We can predict whether the Wenzel or Cassie–Baxter state should exist by calculating the new contact angle with both equations. By a free-energy minimization argument, the state whose equation predicts the smaller new contact angle is the one most likely to exist. Stated in mathematical terms, for the Cassie–Baxter state to exist, the following inequality must be true: cos θ < (φ − 1)/(r − φ). A recent alternative criterion for the Cassie–Baxter state asserts that the Cassie–Baxter state exists when the following two criteria are met: (1) contact line forces overcome the body forces of unsupported droplet weight, and (2) the microstructures are tall enough to prevent the liquid that bridges microstructures from touching their base. A new criterion for the switch between Wenzel and Cassie–Baxter states has been developed recently based on surface roughness and surface energy. The criterion focuses on the air-trapping capability under liquid droplets on rough surfaces, which can tell whether Wenzel's model or Cassie–Baxter's model should be used for a certain combination of surface roughness and energy. Contact angle is a measure of static hydrophobicity, and contact angle hysteresis and slide angle are dynamic measures. Contact angle hysteresis is a phenomenon that characterizes surface heterogeneity. When a pipette injects a liquid onto a solid, the liquid will form some contact angle. As the pipette injects more liquid, the droplet will increase in volume, the contact angle will increase, but its three-phase boundary will remain stationary until it suddenly advances outward. The contact angle the droplet had immediately before advancing outward is termed the advancing contact angle. The receding contact angle is then measured by pumping the liquid back out of the droplet. The droplet will decrease in volume, the contact angle will decrease, but its three-phase boundary will remain stationary until it suddenly recedes inward. The contact angle the droplet had immediately before receding inward is termed the receding contact angle. The difference between advancing and receding contact angles is termed contact angle hysteresis and can be used to characterize surface heterogeneity, roughness, and mobility. Surfaces that are not homogeneous will have domains that impede motion of the contact line. The slide angle is another dynamic measure of hydrophobicity and is measured by depositing a droplet on a surface and tilting the surface until the droplet begins to slide. In general, liquids in the Cassie–Baxter state exhibit lower slide angles and contact angle hysteresis than those in the Wenzel state. Research and development Dettre and Johnson discovered in 1964 that the superhydrophobic lotus effect phenomenon was related to rough hydrophobic surfaces, and they developed a theoretical model based on experiments with glass beads coated with paraffin or TFE telomer. The self-cleaning property of superhydrophobic micro-nanostructured surfaces was reported in 1977. Perfluoroalkyl, perfluoropolyether, and RF plasma-formed superhydrophobic materials were developed, used for electrowetting, and commercialized for bio-medical applications between 1986 and 1995. Other technology and applications have emerged since the mid-1990s. A durable superhydrophobic hierarchical composition, applied in one or two steps, was disclosed in 2002, comprising nano-sized particles ≤ 100 nanometers overlaying a surface having micrometer-sized features or particles ≤ 100 micrometers.
The larger particles were observed to protect the smaller particles from mechanical abrasion. In recent research, superhydrophobicity has been reported by allowing alkylketene dimer (AKD) to solidify into a nanostructured fractal surface. Many papers have since presented fabrication methods for producing superhydrophobic surfaces including particle deposition, sol-gel techniques, plasma treatments, vapor deposition, and casting techniques. Current opportunity for research impact lies mainly in fundamental research and practical manufacturing. Debates have recently emerged concerning the applicability of the Wenzel and Cassie–Baxter models. In an experiment designed to challenge the surface energy perspective of the Wenzel and Cassie–Baxter model and promote a contact line perspective, water drops were placed on a smooth hydrophobic spot in a rough hydrophobic field, a rough hydrophobic spot in a smooth hydrophobic field, and a hydrophilic spot in a hydrophobic field. Experiments showed that the surface chemistry and geometry at the contact line affected the contact angle and contact angle hysteresis, but the surface area inside the contact line had no effect. An argument that increased jaggedness in the contact line enhances droplet mobility has also been proposed. Many hydrophobic materials found in nature rely on Cassie's law and are biphasic on the submicrometer level with one component air. The lotus effect is based on this principle. Inspired by it, many functional superhydrophobic surfaces have been prepared. An example of a bionic or biomimetic superhydrophobic material in nanotechnology is nanopin film. One study presents a vanadium pentoxide surface that switches reversibly between superhydrophobicity and superhydrophilicity under the influence of UV radiation. According to the study, any surface can be modified to this effect by application of a suspension of rose-like V2O5 particles, for instance with an inkjet printer. Once again hydrophobicity is induced by interlaminar air pockets (separated by 2.1 nm distances). The UV effect is also explained. UV light creates electron-hole pairs, with the holes reacting with lattice oxygen, creating surface oxygen vacancies, while the electrons reduce V5+ to V3+. The oxygen vacancies are met by water, and it is this water absorbency by the vanadium surface that makes it hydrophilic. By extended storage in the dark, water is replaced by oxygen and hydrophilicity is once again lost. A significant majority of hydrophobic surfaces have their hydrophobic properties imparted by structural or chemical modification of a surface of a bulk material, through either coatings or surface treatments. That is to say, the presence of molecular species (usually organic) or structural features results in high contact angles of water. In recent years, rare earth oxides have been shown to possess intrinsic hydrophobicity. The intrinsic hydrophobicity of rare earth oxides depends on surface orientation and oxygen vacancy levels, and is naturally more robust than coatings or surface treatments, having potential applications in condensers and catalysts that can operate at high temperatures or corrosive environments. Applications and potential applications Hydrophobic concrete has been produced since the mid-20th century. Active recent research on superhydrophobic materials might eventually lead to more industrial applications. 
A simple routine of coating cotton fabric with silica or titania particles by sol-gel technique has been reported, which protects the fabric from UV light and makes it superhydrophobic. An efficient routine has been reported for making polyethylene superhydrophobic and thus self-cleaning. 99% of dirt on such a surface is easily washed away. Patterned superhydrophobic surfaces also have promise for lab-on-a-chip microfluidic devices and can drastically improve surface-based bioanalysis. In pharmaceuticals, hydrophobicity of pharmaceutical blends affects important quality attributes of final products, such as drug dissolution and hardness. Methods have been developed to measure the hydrophobicity of pharmaceutical materials. The development of hydrophobic passive daytime radiative cooling (PDRC) surfaces, whose effectiveness at solar reflectance and thermal emittance is predicated on their cleanliness, has improved the "self-cleaning" of these surfaces. Scalable and sustainable hydrophobic PDRCs that avoid VOCs have further been developed.
Harmonic analysis
Harmonic analysis is a branch of mathematics concerned with investigating the connections between a function and its representation in frequency. The frequency representation is found by using the Fourier transform for functions on unbounded domains such as the full real line or by Fourier series for functions on bounded domains, especially periodic functions on finite intervals. Generalizing these transforms to other domains is generally called Fourier analysis, although the term is sometimes used interchangeably with harmonic analysis. Harmonic analysis has become a vast subject with applications in areas as diverse as number theory, representation theory, signal processing, quantum mechanics, tidal analysis, spectral analysis, and neuroscience. The term "harmonics" originated from the Ancient Greek word harmonikos, meaning "skilled in music". In physical eigenvalue problems, it began to mean waves whose frequencies are integer multiples of one another, as are the frequencies of the harmonics of music notes. Still, the term has been generalized beyond its original meaning. Development of Harmonic Analysis Historically, harmonic functions first referred to the solutions of Laplace's equation. This terminology was extended to other special functions that solved related equations, then to eigenfunctions of general elliptic operators, and nowadays harmonic functions are considered as a generalization of periodic functions in function spaces defined on manifolds, for example as solutions of general, not necessarily elliptic, partial differential equations including some boundary conditions that may imply their symmetry or periodicity. Fourier Analysis The classical Fourier transform on Rn is still an area of ongoing research, particularly concerning Fourier transformation on more general objects such as tempered distributions. For instance, if we impose some requirements on a distribution f, we can attempt to translate these requirements into the Fourier transform of f. The Paley–Wiener theorem is an example. The Paley–Wiener theorem immediately implies that if f is a nonzero distribution of compact support (these include functions of compact support), then its Fourier transform is never compactly supported (i.e., if a signal is limited in one domain, it is unlimited in the other). This is an elementary form of an uncertainty principle in a harmonic-analysis setting. Fourier series can be conveniently studied in the context of Hilbert spaces, which provides a connection between harmonic analysis and functional analysis. There are four versions of the Fourier transform, dependent on the spaces that are mapped by the transformation: Discrete/periodic–discrete/periodic: Discrete Fourier transform Continuous/periodic–discrete/aperiodic: Fourier series Discrete/aperiodic–continuous/periodic: Discrete-time Fourier transform Continuous/aperiodic–continuous/aperiodic: Fourier transform As the spaces mapped by the Fourier transform are, in particular, subspaces of the space of tempered distributions it can be shown that the four versions of the Fourier transform are particular cases of the Fourier transform on tempered distributions. 
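As a concrete illustration of the discrete case listed above, the short NumPy sketch below (an illustrative example, not part of the source text) computes the discrete Fourier transform of a sampled signal and recovers the frequencies of the sinusoids it contains.

```python
import numpy as np

# Sample a signal made of two sinusoids: 5 Hz and 12 Hz (sampling rate 100 Hz).
fs, n = 100, 400
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# Discrete Fourier transform: the frequency-domain representation of the samples.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, d=1 / fs)

# The two largest spectral magnitudes appear at the component frequencies.
top = freqs[np.argsort(np.abs(X))[-2:]]
print(sorted(top.tolist()))   # -> [5.0, 12.0]
```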
Abstract harmonic analysis Abstract harmonic analysis is primarily concerned with how real- or complex-valued functions (often on very general domains) can be studied using symmetries such as translations or rotations (for instance via the Fourier transform and its relatives); this field is of course related to real-variable harmonic analysis, but is perhaps closer in spirit to representation theory and functional analysis. One of the most modern branches of harmonic analysis, having its roots in the mid-20th century, is analysis on topological groups. The core motivating ideas are the various Fourier transforms, which can be generalized to a transform of functions defined on Hausdorff locally compact topological groups. One of the major results in the theory of functions on abelian locally compact groups is called Pontryagin duality. Harmonic analysis studies the properties of that duality. Different generalizations of the Fourier transform attempt to extend those features to different settings, for instance, first to the case of general abelian topological groups and second to the case of non-abelian Lie groups. Harmonic analysis is closely related to the theory of unitary group representations for general non-abelian locally compact groups. For compact groups, the Peter–Weyl theorem explains how one may get harmonics by choosing one irreducible representation out of each equivalence class of representations. This choice of harmonics enjoys some of the valuable properties of the classical Fourier transform in terms of carrying convolutions to pointwise products or otherwise showing a certain understanding of the underlying group structure.
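The property mentioned above, that harmonics carry convolutions to pointwise products, can be checked numerically in the simplest setting, the finite cyclic group, where the harmonics are the discrete Fourier basis. The sketch below is an illustrative example and not part of the source text.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(8), rng.standard_normal(8)

# Circular convolution on the cyclic group Z/8Z, computed directly...
conv = np.array([sum(x[k] * y[(n - k) % 8] for k in range(8)) for n in range(8)])

# ...and via the DFT: the transform of a convolution equals the pointwise
# product of the transforms (the convolution theorem).
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

print(np.allclose(conv, via_fft))   # True
```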
HIV
The human immunodeficiency viruses (HIV) are two species of Lentivirus (a subgroup of retrovirus) that infect humans. Over time, they cause acquired immunodeficiency syndrome (AIDS), a condition in which progressive failure of the immune system allows life-threatening opportunistic infections and cancers to thrive. Without treatment, the average survival time after infection with HIV is estimated to be 9 to 11 years, depending on the HIV subtype. In most cases, HIV is a sexually transmitted infection and occurs by contact with or transfer of blood, pre-ejaculate, semen, and vaginal fluids. Non-sexual transmission can occur from an infected mother to her infant during pregnancy, during childbirth by exposure to her blood or vaginal fluid, and through breast milk. Within these bodily fluids, HIV is present as both free virus particles and virus within infected immune cells. Research has shown (for both same-sex and opposite-sex couples) that HIV is not contagious during sexual intercourse without a condom if the HIV-positive partner has a consistently undetectable viral load. HIV infects vital cells in the human immune system, such as helper T cells (specifically CD4+ T cells), macrophages, and dendritic cells. HIV infection leads to low levels of CD4+ T cells through a number of mechanisms, including pyroptosis of abortively infected T cells, apoptosis of uninfected bystander cells, direct viral killing of infected cells, and killing of infected CD4+ T cells by CD8+ cytotoxic lymphocytes that recognize infected cells. When CD4+ T cell numbers decline below a critical level, cell-mediated immunity is lost, and the body becomes progressively more susceptible to opportunistic infections, leading to the development of AIDS. Virology Classification HIV is a member of the genus Lentivirus, part of the family Retroviridae. Lentiviruses have many morphologies and biological properties in common. Many species are infected by lentiviruses, which are characteristically responsible for long-duration illnesses with a long incubation period. Lentiviruses are transmitted as single-stranded, positive-sense, enveloped RNA viruses. Upon entry into the target cell, the viral RNA genome is converted (reverse transcribed) into double-stranded DNA by a virally encoded enzyme, reverse transcriptase, that is transported along with the viral genome in the virus particle. The resulting viral DNA is then imported into the cell nucleus and integrated into the cellular DNA by a virally encoded enzyme, integrase, and host co-factors. Once integrated, the virus may become latent, allowing the virus and its host cell to avoid detection by the immune system, for an indeterminate amount of time. The virus can remain dormant in the human body for up to ten years after primary infection; during this period the virus does not cause symptoms. Alternatively, the integrated viral DNA may be transcribed, producing new RNA genomes and viral proteins, using host cell resources, that are packaged and released from the cell as new virus particles that will begin the replication cycle anew. Two types of HIV have been characterized: HIV-1 and HIV-2. HIV-1 is the virus that was initially discovered and termed both lymphadenopathy associated virus (LAV) and human T-lymphotropic virus 3 (HTLV-III). HIV-1 is more virulent and more infective than HIV-2, and is the cause of the majority of HIV infections globally. The lower infectivity of HIV-2, compared to HIV-1, implies that fewer of those exposed to HIV-2 will be infected per exposure. 
Due to its relatively poor capacity for transmission, HIV-2 is largely confined to West Africa. Both HIV-1 and HIV-2 have gained an additional classification according to the International Committee on Taxonomy of Viruses, with the change being approved in 2020, to belong to the species called "Lentivirus humimdef1" and "Lentivirus humimdef2" for HIV-1 and HIV-2 respectively. Structure and genome HIV is similar in structure to other retroviruses. It is roughly spherical with a diameter of about 120 nm, around 100,000 times smaller in volume than a red blood cell. It is composed of two copies of positive-sense single-stranded RNA that codes for the virus' nine genes enclosed by a conical capsid composed of 2,000 copies of the viral protein p24. The single-stranded RNA is tightly bound to nucleocapsid proteins, p7, and enzymes needed for the development of the virion such as reverse transcriptase, proteases, ribonuclease and integrase. A matrix composed of the viral protein p17 surrounds the capsid ensuring the integrity of the virion particle. This is, in turn, surrounded by the viral envelope, that is composed of the lipid bilayer taken from the membrane of a human host cell when the newly formed virus particle buds from the cell. The viral envelope contains proteins from the host cell and relatively few copies of the HIV envelope protein, which consists of a cap made of three molecules known as glycoprotein (gp) 120, and a stem consisting of three gp41 molecules that anchor the structure into the viral envelope. The envelope protein, encoded by the HIV env gene, allows the virus to attach to target cells and fuse the viral envelope with the target cell's membrane releasing the viral contents into the cell and initiating the infectious cycle. As the sole viral protein on the surface of the virus, the envelope protein is a major target for HIV vaccine efforts. Over half of the mass of the trimeric envelope spike is N-linked glycans. The density is high as the glycans shield the underlying viral protein from neutralisation by antibodies. This is one of the most densely glycosylated molecules known and the density is sufficiently high to prevent the normal maturation process of glycans during biogenesis in the endoplasmic and Golgi apparatus. The majority of the glycans are therefore stalled as immature 'high-mannose' glycans not normally present on human glycoproteins that are secreted or present on a cell surface. The unusual processing and high density means that almost all broadly neutralising antibodies that have so far been identified (from a subset of patients that have been infected for many months to years) bind to, or are adapted to cope with, these envelope glycans. The molecular structure of the viral spike has now been determined by X-ray crystallography and cryogenic electron microscopy. These advances in structural biology were made possible due to the development of stable recombinant forms of the viral spike by the introduction of an intersubunit disulphide bond and an isoleucine to proline mutation (radical replacement of an amino acid) in gp41. The so-called SOSIP trimers not only reproduce the antigenic properties of the native viral spike, but also display the same degree of immature glycans as presented on the native virus. Recombinant trimeric viral spikes are promising vaccine candidates as they display less non-neutralising epitopes than recombinant monomeric gp120, which act to suppress the immune response to target epitopes. 
The RNA genome consists of at least seven structural landmarks (LTR, TAR, RRE, PE, SLIP, CRS, and INS), and nine genes (gag, pol, and env, tat, rev, nef, vif, vpr, vpu, and sometimes a tenth tev, which is a fusion of tat, env and rev), encoding 19 proteins. Three of these genes, gag, pol, and env, contain information needed to make the structural proteins for new virus particles. For example, env codes for a protein called gp160 that is cut in two by a cellular protease to form gp120 and gp41. The six remaining genes, tat, rev, nef, vif, vpr, and vpu (or vpx in the case of HIV-2), are regulatory genes for proteins that control the ability of HIV to infect cells, produce new copies of virus (replicate), or cause disease. The two tat proteins (p16 and p14) are transcriptional transactivators for the LTR promoter acting by binding the TAR RNA element. The TAR may also be processed into microRNAs that regulate the apoptosis genes ERCC1 and IER3. The rev protein (p19) is involved in shuttling RNAs from the nucleus and the cytoplasm by binding to the RRE RNA element. The vif protein (p23) prevents the action of APOBEC3G (a cellular protein that deaminates cytidine to uridine in the single-stranded viral DNA and/or interferes with reverse transcription). The vpr protein (p14) arrests cell division at G2/M. The nef protein (p27) down-regulates CD4 (the major viral receptor), as well as the MHC class I and class II molecules. Nef also interacts with SH3 domains. The vpu protein (p16) influences the release of new virus particles from infected cells. The ends of each strand of HIV RNA contain an RNA sequence called a long terminal repeat (LTR). Regions in the LTR act as switches to control production of new viruses and can be triggered by proteins from either HIV or the host cell. The Psi element is involved in viral genome packaging and recognized by gag and rev proteins. The SLIP element () is involved in the frameshift in the gag-pol reading frame required to make functional pol. Tropism The term viral tropism refers to the cell types a virus infects. HIV can infect a variety of immune cells such as CD4+ T cells, macrophages, and microglial cells. HIV-1 entry to macrophages and CD4+ T cells is mediated through interaction of the virion envelope glycoproteins (gp120) with the CD4 molecule on the target cells' membrane and also with chemokine co-receptors. Macrophage-tropic (M-tropic) strains of HIV-1, or non-syncytia-inducing strains (NSI; now called R5 viruses) use the β-chemokine receptor, CCR5, for entry and are thus able to replicate in both macrophages and CD4+ T cells. This CCR5 co-receptor is used by almost all primary HIV-1 isolates regardless of viral genetic subtype. Indeed, macrophages play a key role in several critical aspects of HIV infection. They appear to be the first cells infected by HIV and perhaps the source of HIV production when CD4+ cells become depleted in the patient. Macrophages and microglial cells are the cells infected by HIV in the central nervous system. In the tonsils and adenoids of HIV-infected patients, macrophages fuse into multinucleated giant cells that produce huge amounts of virus. T-tropic strains of HIV-1, or syncytia-inducing strains (SI; now called X4 viruses) replicate in primary CD4+ T cells as well as in macrophages and use the α-chemokine receptor, CXCR4, for entry. Dual-tropic HIV-1 strains are thought to be transitional strains of HIV-1 and thus are able to use both CCR5 and CXCR4 as co-receptors for viral entry. 
The α-chemokine SDF-1, a ligand for CXCR4, suppresses replication of T-tropic HIV-1 isolates. It does this by down-regulating the expression of CXCR4 on the surface of HIV target cells. M-tropic HIV-1 isolates that use only the CCR5 receptor are termed R5; those that use only CXCR4 are termed X4, and those that use both, X4R5. However, the use of co-receptors alone does not explain viral tropism, as not all R5 viruses are able to use CCR5 on macrophages for a productive infection and HIV can also infect a subtype of myeloid dendritic cells, which probably constitute a reservoir that maintains infection when CD4+ T cell numbers have declined to extremely low levels. Some people are resistant to certain strains of HIV. For example, people with the CCR5-Δ32 mutation are resistant to infection by the R5 virus, as the mutation leaves HIV unable to bind to this co-receptor, reducing its ability to infect target cells. Sexual intercourse is the major mode of HIV transmission. Both X4 and R5 HIV are present in the seminal fluid, which enables the virus to be transmitted from a male to his sexual partner. The virions can then infect numerous cellular targets and disseminate into the whole organism. However, a selection process leads to a predominant transmission of the R5 virus through this pathway. In patients infected with subtype B HIV-1, there is often a co-receptor switch in late-stage disease and T-tropic variants that can infect a variety of T cells through CXCR4. These variants then replicate more aggressively with heightened virulence that causes rapid T cell depletion, immune system collapse, and opportunistic infections that mark the advent of AIDS. HIV-positive patients acquire an enormously broad spectrum of opportunistic infections, which was particularly problematic prior to the onset of HAART therapies; however, the same infections are reported among HIV-infected patients examined post-mortem following the onset of antiretroviral therapies. Thus, during the course of infection, viral adaptation to the use of CXCR4 instead of CCR5 may be a key step in the progression to AIDS. A number of studies with subtype B-infected individuals have determined that between 40 and 50 percent of AIDS patients can harbour viruses of the SI and, it is presumed, the X4 phenotypes. HIV-2 is much less pathogenic than HIV-1 and is restricted in its worldwide distribution to West Africa. The adoption of "accessory genes" by HIV-2 and its more promiscuous pattern of co-receptor usage (including CD4-independence) may assist the virus in its adaptation to avoid innate restriction factors present in host cells. Adaptation to use normal cellular machinery to enable transmission and productive infection has also aided the establishment of HIV-2 replication in humans. A survival strategy for any infectious agent is not to kill its host, but ultimately become a commensal organism. Having achieved a low pathogenicity, over time, variants that are more successful at transmission will be selected. Replication cycle Entry to the cell The HIV virion enters macrophages and CD4+ T cells by the adsorption of glycoproteins on its surface to receptors on the target cell followed by fusion of the viral envelope with the target cell membrane and the release of the HIV capsid into the cell. 
Entry to the cell begins through interaction of the trimeric envelope complex (gp160 spike) on the HIV viral envelope and both CD4 and a chemokine co-receptor (generally either CCR5 or CXCR4, but others are known to interact) on the target cell surface. Gp120 binds to integrin α4β7 activating LFA-1, the central integrin involved in the establishment of virological synapses, which facilitate efficient cell-to-cell spreading of HIV-1. The gp160 spike contains binding domains for both CD4 and chemokine receptors. The first step in fusion involves the high-affinity attachment of the CD4 binding domains of gp120 to CD4. Once gp120 is bound with the CD4 protein, the envelope complex undergoes a structural change, exposing the chemokine receptor binding domains of gp120 and allowing them to interact with the target chemokine receptor. This allows for a more stable two-pronged attachment, which allows the N-terminal fusion peptide gp41 to penetrate the cell membrane. Repeat sequences in gp41, HR1, and HR2 then interact, causing the collapse of the extracellular portion of gp41 into a hairpin shape. This loop structure brings the virus and cell membranes close together, allowing fusion of the membranes and subsequent entry of the viral capsid. After HIV has bound to the target cell, the HIV RNA and various enzymes, including reverse transcriptase, integrase, ribonuclease, and protease, are injected into the cell. During the microtubule-based transport to the nucleus, the viral single-strand RNA genome is transcribed into double-strand DNA, which is then integrated into a host chromosome. HIV can infect dendritic cells (DCs) by this CD4-CCR5 route, but another route using mannose-specific C-type lectin receptors such as DC-SIGN can also be used. DCs are one of the first cells encountered by the virus during sexual transmission. They are currently thought to play an important role by transmitting HIV to T cells when the virus is captured in the mucosa by DCs. The presence of FEZ-1, which occurs naturally in neurons, is believed to prevent the infection of cells by HIV. HIV-1 entry, as well as entry of many other retroviruses, has long been believed to occur exclusively at the plasma membrane. More recently, however, productive infection by pH-independent, clathrin-mediated endocytosis of HIV-1 has also been reported and was recently suggested to constitute the only route of productive entry. Replication and transcription Shortly after the viral capsid enters the cell, an enzyme called reverse transcriptase liberates the positive-sense single-stranded RNA genome from the attached viral proteins and copies it into a complementary DNA (cDNA) molecule. The process of reverse transcription is extremely error-prone, and the resulting mutations may cause drug resistance or allow the virus to evade the body's immune system. The reverse transcriptase also has ribonuclease activity that degrades the viral RNA during the synthesis of cDNA, as well as DNA-dependent DNA polymerase activity that creates a sense DNA from the antisense cDNA. Together, the cDNA and its complement form a double-stranded viral DNA that is then transported into the cell nucleus. The integration of the viral DNA into the host cell's genome is carried out by another viral enzyme called integrase. The integrated viral DNA may then lie dormant, in the latent stage of HIV infection. 
To actively produce the virus, certain cellular transcription factors need to be present, the most important of which is NF-κB (nuclear factor kappa B), which is upregulated when T cells become activated. This means that those cells most likely to be targeted, entered and subsequently killed by HIV are those actively fighting infection. During viral replication, the integrated DNA provirus is transcribed into RNA. The full-length genomic RNAs (gRNA) can be packaged into new viral particles in a pseudodiploid form. The selectivity in the packaging is explained by the structural properties of the dimeric conformer of the gRNA. The gRNA dimer is characterized by a tandem three-way junction within the gRNA monomer, in which the SD and AUG hairpins, responsible for splicing and translation respectively, are sequestered and the DIS (dimerization initiation signal) hairpin is exposed. The formation of the gRNA dimer is mediated by a 'kissing' interaction between the DIS hairpin loops of the gRNA monomers. At the same time, certain guanosine residues in the gRNA are made available for binding of the nucleocapsid (NC) protein leading to the subsequent virion assembly. The labile gRNA dimer has been also reported to achieve a more stable conformation following the NC binding, in which both the DIS and the U5:AUG regions of the gRNA participate in extensive base pairing. RNA can also be processed to produce mature messenger RNAs (mRNAs). In most cases, this processing involves RNA splicing to produce mRNAs that are shorter than the full-length genome. Which part of the RNA is removed during RNA splicing determines which of the HIV protein-coding sequences is translated. Mature HIV mRNAs are exported from the nucleus into the cytoplasm, where they are translated to produce HIV proteins, including Rev. As the newly produced Rev protein is produced it moves to the nucleus, where it binds to full-length, unspliced copies of virus RNAs and allows them to leave the nucleus. Some of these full-length RNAs function as mRNAs that are translated to produce the structural proteins Gag and Env. Gag proteins bind to copies of the virus RNA genome to package them into new virus particles. HIV-1 and HIV-2 appear to package their RNA differently. HIV-1 will bind to any appropriate RNA. HIV-2 will preferentially bind to the mRNA that was used to create the Gag protein itself. Recombination Two RNA genomes are encapsidated in each HIV-1 particle (see Structure and genome of HIV). Upon infection and replication catalyzed by reverse transcriptase, recombination between the two genomes can occur. Recombination occurs as the single-strand, positive-sense RNA genomes are reverse transcribed to form DNA. During reverse transcription, the nascent DNA can switch multiple times between the two copies of the viral RNA. This form of recombination is known as copy-choice. Recombination events may occur throughout the genome. Anywhere from two to 20 recombination events per genome may occur at each replication cycle, and these events can rapidly shuffle the genetic information that is transmitted from parental to progeny genomes. Viral recombination produces genetic variation that likely contributes to the evolution of resistance to anti-retroviral therapy. Recombination may also contribute, in principle, to overcoming the immune defenses of the host. 
Yet, for the adaptive advantages of genetic variation to be realized, the two viral genomes packaged in individual infecting virus particles need to have arisen from separate progenitor parental viruses of differing genetic constitution. It is unknown how often such mixed packaging occurs under natural conditions. Bonhoeffer et al. suggested that template switching by reverse transcriptase acts as a repair process to deal with breaks in the single-stranded RNA genome. In addition, Hu and Temin suggested that recombination is an adaptation for repair of damage in the RNA genomes. Strand switching (copy-choice recombination) by reverse transcriptase could generate an undamaged copy of genomic DNA from two damaged single-stranded RNA genome copies. This view of the adaptive benefit of recombination in HIV could explain why each HIV particle contains two complete genomes, rather than one. Furthermore, the view that recombination is a repair process implies that the benefit of repair can occur at each replication cycle, and that this benefit can be realized whether or not the two genomes differ genetically. On the view that recombination in HIV is a repair process, the generation of recombinational variation would be a consequence, but not the cause of, the evolution of template switching. HIV-1 infection causes chronic inflammation and production of reactive oxygen species. Thus, the HIV genome may be vulnerable to oxidative damage, including breaks in the single-stranded RNA. For HIV, as well as for viruses in general, successful infection depends on overcoming host defense strategies that often include production of genome-damaging reactive oxygen species. Thus, Michod et al. suggested that recombination by viruses is an adaptation for repair of genome damage, and that recombinational variation is a byproduct that may provide a separate benefit. Assembly and release The final step of the viral cycle, assembly of new HIV-1 virions, begins at the plasma membrane of the host cell. The Env polyprotein (gp160) goes through the endoplasmic reticulum and is transported to the Golgi apparatus where it is cleaved by furin resulting in the two HIV envelope glycoproteins, gp41 and gp120. These are transported to the plasma membrane of the host cell where gp41 anchors gp120 to the membrane of the infected cell. The Gag (p55) and Gag-Pol (p160) polyproteins also associate with the inner surface of the plasma membrane along with the HIV genomic RNA as the forming virion begins to bud from the host cell. The budded virion is still immature as the gag polyproteins still need to be cleaved into the actual matrix, capsid and nucleocapsid proteins. This cleavage is mediated by the packaged viral protease and can be inhibited by antiretroviral drugs of the protease inhibitor class. The various structural components then assemble to produce a mature HIV virion. Only mature virions are then able to infect another cell. Spread within the body The classical process of infection of a cell by a virion can be called "cell-free spread" to distinguish it from a more recently recognized process called "cell-to-cell spread". In cell-free spread (see figure), virus particles bud from an infected T cell, enter the blood or extracellular fluid and then infect another T cell following a chance encounter. HIV can also disseminate by direct transmission from one cell to another by a process of cell-to-cell spread, for which two pathways have been described. 
Firstly, an infected T cell can transmit virus directly to a target T cell via a virological synapse. Secondly, an antigen-presenting cell (APC), such as a macrophage or dendritic cell, can transmit HIV to T cells by a process that either involves productive infection (in the case of macrophages) or capture and transfer of virions in trans (in the case of dendritic cells). Whichever pathway is used, infection by cell-to-cell transfer is reported to be much more efficient than cell-free virus spread. A number of factors contribute to this increased efficiency, including polarised virus budding towards the site of cell-to-cell contact, close apposition of cells, which minimizes fluid-phase diffusion of virions, and clustering of HIV entry receptors on the target cell towards the contact zone. Cell-to-cell spread is thought to be particularly important in lymphoid tissues, where CD4+ T cells are densely packed and likely to interact frequently. Intravital imaging studies have supported the concept of the HIV virological synapse in vivo. The many dissemination mechanisms available to HIV contribute to the virus' ongoing replication in spite of anti-retroviral therapies. Genetic variability HIV differs from many viruses in that it has very high genetic variability. This diversity is a result of its fast replication cycle, with the generation of about 10¹⁰ virions every day, coupled with a high mutation rate of approximately 3 × 10⁻⁵ per nucleotide base per cycle of replication and recombinogenic properties of reverse transcriptase. This complex scenario leads to the generation of many variants of HIV in a single infected patient in the course of one day. This variability is compounded when a single cell is simultaneously infected by two or more different strains of HIV. When simultaneous infection occurs, the genome of progeny virions may be composed of RNA strands from two different strains. This hybrid virion then infects a new cell where it undergoes replication. As this happens, the reverse transcriptase, by jumping back and forth between the two different RNA templates, will generate a newly synthesized retroviral DNA sequence that is a recombinant between the two parental genomes. This recombination is most obvious when it occurs between subtypes. The closely related simian immunodeficiency virus (SIV) has evolved into many strains, classified by the natural host species. SIV strains of the African green monkey (SIVagm) and sooty mangabey (SIVsmm) are thought to have a long evolutionary history with their hosts. These hosts have adapted to the presence of the virus, which is present at high levels in the host's blood, but evokes only a mild immune response, does not cause the development of simian AIDS, and does not undergo the extensive mutation and recombination typical of HIV infection in humans. In contrast, when these strains infect species that have not adapted to SIV ("heterologous" or similar hosts such as rhesus or cynomolgus macaques), the animals develop AIDS and the virus generates genetic diversity similar to what is seen in human HIV infection. Chimpanzee SIV (SIVcpz), the closest genetic relative of HIV-1, is associated with increased mortality and AIDS-like symptoms in its natural host. SIVcpz appears to have been transmitted relatively recently to chimpanzee and human populations, so their hosts have not yet adapted to the virus. This virus has also lost a function of the nef gene that is present in most SIVs.
For non-pathogenic SIV variants, nef suppresses T cell activation through the CD3 marker. Nef function in non-pathogenic forms of SIV is to downregulate expression of inflammatory cytokines, MHC-1, and signals that affect T cell trafficking. In HIV-1 and SIVcpz, nef does not inhibit T-cell activation and it has lost this function. Without this function, T cell depletion is more likely, leading to immunodeficiency. Three groups of HIV-1 have been identified on the basis of differences in the envelope (env) region: M, N, and O. Group M is the most prevalent and is subdivided into eight subtypes (or clades), based on the whole genome, which are geographically distinct. The most prevalent are subtypes B (found mainly in North America and Europe), A and D (found mainly in Africa), and C (found mainly in Africa and Asia); these subtypes form branches in the phylogenetic tree representing the lineage of the M group of HIV-1. Co-infection with distinct subtypes gives rise to circulating recombinant forms (CRFs). In 2000, the last year in which an analysis of global subtype prevalence was made, 47.2% of infections worldwide were of subtype C, 26.7% were of subtype A/CRF02_AG, 12.3% were of subtype B, 5.3% were of subtype D, 3.2% were of CRF_AE, and the remaining 5.3% were composed of other subtypes and CRFs. Most HIV-1 research is focused on subtype B; few laboratories focus on the other subtypes. The existence of a fourth group, "P", has been hypothesised based on a virus isolated in 2009. The strain is apparently derived from gorilla SIV (SIVgor), first isolated from western lowland gorillas in 2006. HIV-2's closest relative is SIVsm, a strain of SIV found in sooty mangabees. Since HIV-1 is derived from SIVcpz, and HIV-2 from SIVsm, the genetic sequence of HIV-2 is only partially homologous to HIV-1 and more closely resembles that of SIVsm. Diagnosis Many HIV-positive people are unaware that they are infected with the virus. For example, in 2001 less than 1% of the sexually active urban population in Africa had been tested, and this proportion is even lower in rural populations. Furthermore, in 2001 only 0.5% of pregnant women attending urban health facilities were counselled, tested or received their test results. Again, this proportion is even lower in rural health facilities. Since donors may therefore be unaware of their infection, donor blood and blood products used in medicine and medical research are routinely screened for HIV. HIV-1 testing is initially done using an enzyme-linked immunosorbent assay (ELISA) to detect antibodies to HIV-1. Specimens with a non-reactive result from the initial ELISA are considered HIV-negative, unless new exposure to an infected partner or partner of unknown HIV status has occurred. Specimens with a reactive ELISA result are retested in duplicate. If the result of either duplicate test is reactive, the specimen is reported as repeatedly reactive and undergoes confirmatory testing with a more specific supplemental test (e.g., a polymerase chain reaction (PCR), western blot or, less commonly, an immunofluorescence assay (IFA)). Only specimens that are repeatedly reactive by ELISA and positive by IFA or PCR or reactive by western blot are considered HIV-positive and indicative of HIV infection. Specimens that are repeatedly ELISA-reactive occasionally provide an indeterminate western blot result, which may be either an incomplete antibody response to HIV in an infected person or nonspecific reactions in an uninfected person. 
Although IFA can be used to confirm infection in these ambiguous cases, this assay is not widely used. In general, a second specimen should be collected more than a month later and retested for persons with indeterminate western blot results. Although much less commonly available, nucleic acid testing (e.g., viral RNA or proviral DNA amplification methods) can also help diagnosis in certain situations. In addition, a few tested specimens might provide inconclusive results because of an insufficient specimen quantity. In these situations, a second specimen is collected and tested for HIV infection. Modern HIV testing is extremely accurate when the window period is taken into consideration. A single screening test is correct more than 99% of the time. The chance of a false-positive result in a standard two-step testing protocol is estimated to be about 1 in 250,000 in a low-risk population (an illustrative calculation is sketched below). Testing post-exposure is recommended immediately and then at six weeks, three months, and six months. The latest recommendations of the US Centers for Disease Control and Prevention (CDC) state that HIV testing must start with an immunoassay combination test for HIV-1 and HIV-2 antibodies and p24 antigen. A negative result rules out HIV exposure, while a positive one must be followed by an HIV-1/2 antibody differentiation immunoassay to detect which antibodies are present. This gives rise to four possible scenarios:
1. HIV-1 (+) & HIV-2 (−): HIV-1 antibodies detected
2. HIV-1 (−) & HIV-2 (+): HIV-2 antibodies detected
3. HIV-1 (+) & HIV-2 (+): both HIV-1 and HIV-2 antibodies detected
4. HIV-1 (−) or indeterminate & HIV-2 (−): a nucleic acid test must be carried out to detect acute HIV-1 infection or its absence
Research HIV/AIDS research includes all medical research that attempts to prevent, treat, or cure HIV/AIDS, as well as fundamental research about the nature of HIV as an infectious agent and AIDS as the disease caused by HIV. Many governments and research institutions participate in HIV/AIDS research. This research includes behavioral health interventions, such as research into sex education, and drug development, such as research into microbicides for sexually transmitted diseases, HIV vaccines, and anti-retroviral drugs. Other medical research areas include the topics of pre-exposure prophylaxis, post-exposure prophylaxis, circumcision, and accelerated aging effects. Treatment and transmission The management of HIV/AIDS typically involves the use of multiple antiretroviral drugs. In many parts of the world, HIV has become a chronic condition, with progression to AIDS increasingly rare. HIV latency, and the resulting viral reservoir in CD4+ T cells, dendritic cells, and macrophages, is the main barrier to eradication of the virus. While HIV is highly virulent, transmission through sexual contact does not occur when an HIV-positive individual maintains a consistently undetectable viral load (<50 copies/ml) due to antiretroviral treatment. This concept was first proposed by the Swiss Federal Commission for AIDS/HIV in 2008 in what is known as the Swiss Statement. Although initially controversial, subsequent studies have confirmed that the risk of transmitting HIV through sex is effectively zero when the HIV-positive person has a consistently undetectable viral load, a concept now widely known as U=U, or "Undetectable = Untransmittable." 
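As a rough sanity check on the false-positive figure quoted above, the sketch below multiplies per-test error rates under two purely illustrative assumptions that are not taken from the text: that each step of the two-step protocol has a specificity of about 99.8%, and that the two steps produce false positives independently.

# Illustrative only: how a two-step protocol can reach roughly 1-in-250,000
# false positives. Assumed (not from the text): ~99.8% specificity per step,
# and statistically independent errors between the two steps.
specificity_per_test = 0.998
false_positive_per_test = 1 - specificity_per_test      # 0.002, i.e. a single test "correct more than 99%" of the time
false_positive_two_step = false_positive_per_test ** 2  # 4e-06, i.e. about 1 in 250,000

print(f"single-test false-positive rate: {false_positive_per_test:.3%}")
print(f"two-step false-positive rate: about 1 in {1 / false_positive_two_step:,.0f}")

In a low-prevalence population even this small rate matters, which is why reactive screening results are followed by the confirmatory testing described above rather than being reported directly.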
Studies that established the U=U principle include Opposites Attract, PARTNER 1, PARTNER 2 (which focused on male-male couples), and HPTN052 (which focused on heterosexual couples). These studies involved couples where one partner was HIV-positive and one was HIV-negative, and included regular HIV testing. Across these four studies, a total of 4,097 couples participated from four continents, reporting 151,880 acts of condomless sex with zero phylogenetically-linked HIV transmissions when the positive partner had an undetectable viral load. Following these findings, the U=U consensus statement advocating the use of the term 'zero risk' was endorsed by numerous individuals and organizations, including the CDC, the British HIV Association, and The Lancet medical journal. Additionally, reactivation of herpes simplex virus-2 (HSV-2) in individuals with genital herpes is associated with an increase in CCR-5-enriched CD4+ T cells and inflammatory dendritic cells in the dermis of ulcerated genital skin, persisting even after ulcer healing. HIV's tropism for CCR-5-positive cells contributes to the two- to threefold increased risk of HIV acquisition in persons with genital herpes. Notably, daily antiviral medication, such as acyclovir, does not reduce the subclinical post-reactivation inflammation and therefore does not decrease the risk of HIV acquisition. History Discovery The first news story on "an exotic new disease" appeared May 18, 1981, in the gay newspaper New York Native. AIDS was first clinically observed in 1981 in the United States. The initial cases were a cluster of injection drug users and gay men with no known cause of impaired immunity who showed symptoms of Pneumocystis pneumonia (PCP or PJP, the latter term recognizing that the causative agent is now called Pneumocystis jirovecii), a rare opportunistic infection that was known to occur in people with very compromised immune systems. Soon thereafter, researchers at the NYU School of Medicine studied gay men developing a previously rare skin cancer called Kaposi's sarcoma (KS). Many more cases of PJP and KS emerged, alerting the U.S. Centers for Disease Control and Prevention (CDC), and a CDC task force was formed to monitor the outbreak. The earliest retrospectively described case of AIDS is believed to have been in Norway beginning in 1966. In the beginning, the CDC did not have an official name for the disease, often referring to it by way of the diseases that were associated with it, for example, lymphadenopathy, the disease after which the discoverers of HIV originally named the virus. They also used Kaposi's Sarcoma and Opportunistic Infections, the name by which a task force had been set up in 1981. In the general press, the term GRID, which stood for gay-related immune deficiency, had been coined. The CDC, in search of a name and looking at the infected communities, coined "the 4H disease", as it seemed to single out homosexuals, heroin users, hemophiliacs, and Haitians. However, after determining that AIDS was not isolated to the gay community, it was realized that the term GRID was misleading and AIDS was introduced at a meeting in July 1982. By September 1982 the CDC started using the name AIDS. In 1983, two separate research groups led by American Robert Gallo and French investigators Françoise Barré-Sinoussi and Luc Montagnier independently declared that a novel retrovirus may have been infecting AIDS patients, and published their findings in the same issue of the journal Science. 
Gallo claimed that a virus his group had isolated from a person with AIDS was strikingly similar in shape to other human T-lymphotropic viruses (HTLVs) his group had been the first to isolate. Gallo admitted in 1987 that the virus he claimed to have discovered in 1984 was in reality a virus sent to him from France the year before. Gallo's group called their newly isolated virus HTLV-III. Montagnier's group isolated a virus from a patient presenting with swelling of the lymph nodes of the neck and physical weakness, two classic symptoms of primary HIV infection. Contradicting the report from Gallo's group, Montagnier and his colleagues showed that core proteins of this virus were immunologically different from those of HTLV-I. Montagnier's group named their isolated virus lymphadenopathy-associated virus (LAV). As these two viruses turned out to be the same, in 1986 LAV and HTLV-III were renamed HIV. Another group working contemporaneously with the Montagnier and Gallo groups was that of Jay A. Levy at the University of California, San Francisco. He independently discovered the AIDS virus in 1983 and named it the AIDS-associated retrovirus (ARV). This virus was very different from the virus reported by the Montagnier and Gallo groups. The ARV strains indicated, for the first time, the heterogeneity of HIV isolates and several of these remain classic examples of the AIDS virus found in the United States. Origins Both HIV-1 and HIV-2 are believed to have originated in non-human primates in West-central Africa, and are believed to have transferred to humans (a process known as zoonosis) in the early 20th century. HIV-1 appears to have originated in southern Cameroon through the evolution of SIVcpz, a simian immunodeficiency virus (SIV) that infects wild chimpanzees (HIV-1 descends from the SIVcpz endemic in the chimpanzee subspecies Pan troglodytes troglodytes). The closest relative of HIV-2 is SIVsmm, a virus of the sooty mangabey (Cercocebus atys atys), an Old World monkey living in littoral West Africa (from southern Senegal to western Côte d'Ivoire). New World monkeys such as the owl monkey are resistant to HIV-1 infection, possibly because of a genomic fusion of two viral resistance genes. HIV-1 is thought to have jumped the species barrier on at least three separate occasions, giving rise to the three groups of the virus, M, N, and O. There is evidence that humans who participate in bushmeat activities, either as hunters or as bushmeat vendors, commonly acquire SIV. However, SIV is a weak virus, and it is typically suppressed by the human immune system within weeks of infection. It is thought that several transmissions of the virus from individual to individual in quick succession are necessary to allow it enough time to mutate into HIV. Furthermore, due to its relatively low person-to-person transmission rate, it can only spread throughout the population in the presence of one or more high-risk transmission channels, which are thought to have been absent in Africa prior to the 20th century. Specific proposed high-risk transmission channels, allowing the virus to adapt to humans and spread throughout the society, depend on the proposed timing of the animal-to-human crossing. Genetic studies of the virus suggest that the most recent common ancestor of the HIV-1 M group dates back to the early 20th century. 
Proponents of this dating link the HIV epidemic with the emergence of colonialism and growth of large colonial African cities, leading to social changes, including different patterns of sexual contact (especially multiple, concurrent partnerships), the spread of prostitution, and the concomitant high frequency of genital ulcer diseases (such as syphilis) in nascent colonial cities. While transmission rates of HIV during vaginal intercourse are typically low, they are increased manyfold if one of the partners has a sexually transmitted infection resulting in genital ulcers. Early 1900s colonial cities were notable for their high prevalence of prostitution and genital ulcers to the degree that as of 1928 as many as 45% of female residents of eastern Leopoldville (currently Kinshasa) were thought to have been prostitutes and as of 1933 around 15% of all residents of the same city were infected by one of the forms of syphilis. The earliest, well-documented case of HIV in a human dates back to 1959 in the Belgian Congo. The virus may have been present in the United States as early as the mid- to late 1960s, as a sixteen-year-old male named Robert Rayford presented with symptoms in 1966 and died in 1969. An alternative and likely complementary hypothesis points to the widespread use of unsafe medical practices in Africa during years following World War II, such as unsterile reuse of single-use syringes during mass vaccination, antibiotic, and anti-malaria treatment campaigns. Research on the timing of most recent common ancestor for HIV-1 groups M and O, as well as on HIV-2 groups A and B, indicates that SIV has given rise to transmissible HIV lineages throughout the twentieth century. The dispersed timing of these transmissions to humans implies that no single external factor is needed to explain the cross-species transmission of HIV. This observation is consistent with both of the two prevailing views of the origin of the HIV epidemics, namely SIV transmission to humans during the slaughter or butchering of infected primates, and the colonial expansion of sub-Saharan African cities.
Hematite
Hematite, also spelled as haematite, is a common iron oxide compound with the formula Fe2O3 and is widely found in rocks and soils. Hematite crystals belong to the rhombohedral lattice system, and the mineral is designated the alpha polymorph of Fe2O3 (α-Fe2O3). It has the same crystal structure as corundum (Al2O3) and ilmenite (FeTiO3), and with ilmenite it forms a complete solid solution at high temperatures. Hematite occurs naturally in black to steel or silver-gray, brown to reddish-brown, or red colors. It is mined as an important ore mineral of iron. It is electrically conductive. Hematite varieties include kidney ore, martite (pseudomorphs after magnetite), iron rose and specularite (specular hematite). While these forms vary, they all have a rust-red streak. Hematite is not only harder than pure iron, but also much more brittle. The term kidney ore may be broadly used to describe botryoidal, mammillary, or reniform hematite. Maghemite (γ-Fe2O3) is a polymorph of hematite with the same chemical formula, but with a spinel structure like magnetite. Large deposits of hematite are found in banded iron formations. Gray hematite is typically found in places that have still standing water or mineral hot springs, such as those in Yellowstone National Park in North America. The mineral may precipitate in the water and collect in layers at the bottom of the lake, spring, or other standing water. Hematite can also occur in the absence of water, usually as the result of volcanic activity. Clay-sized hematite crystals may also occur as a secondary mineral formed by weathering processes in soil; along with other iron oxides or oxyhydroxides such as goethite, it is responsible for the red color of many tropical, ancient, or otherwise highly weathered soils. Etymology and history The name hematite is derived from the Greek word for blood (haima), due to the red coloration found in some varieties of hematite. The color of hematite is often used as a pigment. The English name of the stone is derived from Middle French hématite pierre, which was taken from Latin lapis haematites in the 15th century, which in turn originated from Ancient Greek haimatitēs lithos ("blood-red stone"). Ochre is a clay that is colored by varying amounts of hematite, between 20% and 70%. Red ochre contains unhydrated hematite, whereas yellow ochre contains hydrated hematite (Fe2O3 · H2O). The principal use of ochre is for tinting with a permanent color. Use of the red chalk of this iron-oxide mineral in writing, drawing, and decoration is among the earliest in human history. To date, the earliest known human use of the powdery mineral is 164,000 years ago by the inhabitants of the Pinnacle Point caves in what is now South Africa, possibly for social purposes. Hematite residues are also found in graves from 80,000 years ago. Red chalk mines dating from 5000 BC, belonging to the Linear Pottery culture of the Upper Rhine, have been found near Rydno in Poland and Lovas in Hungary. Rich deposits of hematite that have been mined since the time of the Etruscans are found on the island of Elba. Underground hematite mining is classified as a carcinogenic hazard to humans. Magnetism Hematite shows only a very feeble response to a magnetic field. Unlike magnetite, it is not noticeably attracted to an ordinary magnet. Hematite is antiferromagnetic below the Morin transition temperature, a canted antiferromagnet or weakly ferromagnetic between the Morin transition and its Néel temperature, and paramagnetic above the Néel temperature. 
The magnetic structure of α-hematite was the subject of considerable discussion and debate during the 1950s, as it appeared to be ferromagnetic, with a high apparent Curie temperature but an extremely small magnetic moment (0.002 Bohr magnetons). Adding to the surprise was a transition, on cooling through the Morin transition temperature, to a phase with no net magnetic moment. It was shown that the system is essentially antiferromagnetic, but that the low symmetry of the cation sites allows spin–orbit coupling to cause canting of the moments when they lie in the plane perpendicular to the c axis. The disappearance of the moment on cooling through the Morin transition is caused by a change in the anisotropy, which causes the moments to align along the c axis; in this configuration, spin canting does not reduce the energy. The magnetic properties of bulk hematite differ from those of its nanoscale counterparts. For example, the Morin transition temperature of hematite decreases with decreasing particle size. The suppression of this transition has been observed in hematite nanoparticles and is attributed to the presence of impurities, water molecules and defects in the crystal lattice. Hematite is part of a complex solid-solution oxyhydroxide system having various contents of H2O (water), hydroxyl groups and vacancy substitutions that affect the mineral's magnetic and crystal chemical properties. Two other end-members are referred to as protohematite and hydrohematite. Enhanced magnetic coercivities for hematite have been achieved by dry-heating a two-line ferrihydrite precursor prepared from solution; the resulting hematite exhibited temperature-dependent magnetic coercivity values spanning a wide range. The origin of these high coercivity values has been interpreted as a consequence of the subparticle structure induced by the different particle and crystallite size growth rates at increasing annealing temperature. These differences in the growth rates are translated into a progressive development of a subparticle structure at the nanoscale. At lower annealing temperatures (350–600 °C), single particles crystallize, whereas at higher temperatures (600–1000 °C) the growth of crystalline aggregates with a subparticle structure is favored. Mine tailings Hematite is present in the waste tailings of iron mines. A recently developed process, magnetation, uses magnets to glean waste hematite from old mine tailings in Minnesota's vast Mesabi Range iron district. Falu red is a pigment used in traditional Swedish house paints. Originally, it was made from tailings of the Falu mine. Mars The spectral signature of hematite was seen on Mars by the infrared spectrometer on the NASA Mars Global Surveyor and 2001 Mars Odyssey spacecraft in orbit around the planet. The mineral was seen in abundance at two sites on the planet, the Terra Meridiani site, near the Martian equator at 0° longitude, and the Aram Chaos site near the Valles Marineris. Several other sites also showed hematite, such as Aureum Chaos. Because terrestrial hematite is typically a mineral formed in aqueous environments or by aqueous alteration, this detection was scientifically interesting enough that the second of the two Mars Exploration Rovers was sent to a site in the Terra Meridiani region designated Meridiani Planum. In-situ investigations by the Opportunity rover showed a significant amount of hematite, much of it in the form of small "Martian spherules" that were informally named "blueberries" by the science team. 
Analysis indicates that these spherules are apparently concretions formed from a water solution. "Knowing just how the hematite on Mars was formed will help us characterize the past environment and determine whether that environment was favorable for life". Jewelry Hematite is often shaped into beads, tumbling stones, and other jewellery components. Hematite was once used as mourning jewelry. Certain types of hematite- or iron-oxide-rich clay, especially Armenian bole, have been used in gilding. Hematite is also used in art, such as in the creation of intaglio engraved gems. Hematine is a synthetic material sold as magnetic hematite. Pigment Hematite has been used as a source of pigment since the earliest human pictorial depictions, such as those on cave walls and other surfaces, and has been employed continually in artwork through the eras. In Roman times, the pigment obtained by finely grinding hematite was known as sil atticum. Other names for the mineral when used in painting include colcotar and caput mortuum. In Spanish, it is called almagre or almagra, from the Arabic al-maghrah ("red earth"), a term which passed into English and Portuguese. Other ancient names for the pigment include ochra hispanica, sil atticum antiquorum, and Spanish brown. It forms the basis for red, purple, and brown iron-oxide pigments, as well as being an important component of ochre, sienna, and umber pigments. The main producer of hematite for the pigment industry is India, followed distantly by Spain. Industrial purposes As mentioned earlier, hematite is an important ore mineral of iron. The physical properties of hematite are also employed in the areas of medical equipment, shipping, and coal production. Because of its high density and its effectiveness as a barrier to X-rays, it is often incorporated into radiation shielding. As with other iron ores, it is often used as a component of ship ballast because of its density and economy. In the coal industry, it can be used to prepare a high-density suspension that helps separate coal powder from impurities.
Holocene extinction
The Holocene extinction, also referred to as the Anthropocene extinction, is an ongoing extinction event caused by human activities during the Holocene epoch. This extinction event spans numerous families of plants and animals, including mammals, birds, reptiles, amphibians, fish, and invertebrates, impacting both terrestrial and marine species. Widespread degradation of biodiversity hotspots such as coral reefs and rainforests has exacerbated the crisis. Many of these extinctions are undocumented, as the species are often undiscovered before their extinctions. Current extinction rates are estimated at 100 to 1,000 times higher than natural background extinction rates and are accelerating. Over the past 100–200 years, biodiversity loss has reached such alarming levels that some conservation biologists now believe human activities have triggered a mass extinction, or are on the cusp of doing so. As such, after the "Big Five" mass extinctions, the Holocene extinction event has been referred to as the sixth mass extinction. However, given the recent recognition of the Capitanian mass extinction, the term seventh mass extinction has also been proposed. The Holocene extinction was preceded by the extinction of most large (megafaunal) animals during the Late Pleistocene, a decline attributed in part to human hunting. The prevailing theory is that human overhunting, coinciding with existing stress conditions, likely contributed to this decline. Examples from regions such as New Zealand, Madagascar, and Hawaii have shown how human colonization and habitat destruction have led to significant biodiversity losses. While debates persist about the exact role of human predation and habitat alteration, certain extinctions have been directly linked to these activities. Additionally, climate shifts at the end of the Pleistocene likely compounded these effects. Over the course of the Late Holocene, human settlement of the previously uninhabited Pacific islands led to extinctions of hundreds of bird species, peaking around 1300 AD. Recent estimates suggest that roughly 12% of avian species have been lost to human activities over the last 126,000 years—double earlier estimates. In the 20th century, the human population quadrupled, and the global economy grew twenty-five-fold. This period, often called the Great Acceleration, has intensified species' extinction. Humanity has become an unprecedented "global superpredator", preying on adult apex predators, invading habitats of other species, and disrupting food webs. The Holocene extinction continues into the 21st century, driven by anthropogenic global warming, human population growth, and increasing consumption—particularly among affluent societies. Factors such as rising meat production, deforestation, and the destruction of critical habitats compound these issues. Other drivers include overexploitation of natural resources, pollution, and climate change-induced shifts in ecosystems. Major extinction events during this period have been recorded across all continents, including Africa, Asia, Europe, Australia, North and South America, and various islands. The cumulative effects of deforestation, overfishing, ocean acidification, and wetland destruction have further destabilized ecosystems. Decline in amphibian populations, in particular, serves as an early indicator of broader ecological collapse. Despite this grim outlook, there are efforts to mitigate biodiversity loss. 
Conservation initiatives, international treaties, and sustainable practices aim to address this crisis. However, without significant changes in global policies and individual behaviors, the Holocene extinction threatens to irreversibly alter the planet's ecosystems and the services they provide. Background Mass extinctions are characterized by the loss of at least 75% of species within a geologically short period of time (i.e., less than 2 million years). The Holocene extinction is also known as the "sixth extinction", as it is possibly the sixth mass extinction event, after the Ordovician–Silurian extinction events, the Late Devonian extinction, the Permian–Triassic extinction event, the Triassic–Jurassic extinction event, and the Cretaceous–Paleogene extinction event. If the Capitanian extinction event is included among the first-order mass extinctions, the Holocene extinction would correspondingly be known as the "seventh extinction". The Holocene is the current geological epoch. Overview The precise timing of the Holocene extinction event remains debated, with no clear consensus on when it began or whether it should be considered distinct from the Quaternary extinction event. However, most scientists agree that human activities are the primary driver of the Holocene extinction. A 1998 survey conducted by the American Museum of Natural History found that 70% of biologists acknowledged an ongoing anthropogenic extinction event. Some researchers suggested that the activities of earlier archaic humans may have contributed to earlier extinctions, especially in Australia, New Zealand, and Madagascar. Even modest hunting pressure, combined with the vulnerability of large animals on isolated islands, is thought to have been enough to wipe out entire species. Only in the more recent stages of the Holocene have plants suffered extensive losses, which are also linked to human activities such as deforestation and land conversion. Extinction rate The contemporary rate of extinction is estimated at 100 to 1,000 times higher than the natural background extinction rate—the typical rate of species loss through natural evolutionary processes. One estimation suggested the rate could be as high as 10,000 times the background extinction rate, though this figure remains controversial. Theoretical ecologist Stuart Pimm has noted that the extinction rate for plants alone is 100 times higher than normal. While some argue that the current extinction rates have not yet reached the catastrophic levels of past mass extinctions, Barnosky et al. (2011) and Hull et al. (2015) point out that extinction rates during past mass extinctions cannot be fully determined due to gaps in the fossil record. However, they agree that the ongoing biodiversity loss is nonetheless unprecedented. Estimates of species lost per year vary widely—from 1.5 to 40,000 species—but all indicate that human activity is driving this crisis. In The Future of Life (2002), biologist Edward Osborne Wilson predicted that, if current trend continues, half of Earth's higher lifeforms could be extinct by 2100. More recent studies further support this view. A 2015 study on Hawaiian snails suggested that up to 7% of Earth's species may already be extinct. A 2021 study also found that only around 3% of the planet's terrestrial surface remains ecologically and faunally intact—areas still with healthy populations of native species and minimal human footprint. 
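To give a sense of what the multipliers quoted above mean in absolute terms, the sketch below converts them into expected extinctions per year under two illustrative assumptions that are not from the text: a background rate of roughly one extinction per million species-years (E/MSY) and about two million described species.

# Illustrative only: turning "100 to 1,000 times the background rate" into
# expected extinctions per year. Assumed (not from the text): a background rate
# of ~1 extinction per million species-years; ~2 million described species.
BACKGROUND_RATE_E_MSY = 1.0     # assumed extinctions per million species-years
DESCRIBED_SPECIES = 2_000_000   # assumed count of described species

background_per_year = BACKGROUND_RATE_E_MSY * DESCRIBED_SPECIES / 1_000_000  # ~2 per year

for multiplier in (100, 1_000):  # the range quoted in the text
    print(f"{multiplier}x background: about {background_per_year * multiplier:,.0f} extinctions per year")

Under these assumptions the background expectation is on the order of a couple of extinctions per year among described species, so the quoted multipliers correspond to roughly 200 to 2,000 extinctions per year, which falls within the wide range of per-year estimates mentioned above.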
A 2022 study suggests that if global warming continues, between 13% and 27% of terrestrial vertebrate species could be driven to extinction by 2100, with habitat destructions and co-extinctions accounting for the rest. The 2019 Global Assessment Report on Biodiversity and Ecosystem Services, published by the United Nations IPBES, estimated that about one million species are currently at risk of extinction within decades due to human activities. Organized human existence is jeopardised by increasingly rapid destruction of the systems that support life on Earth, according to the report, the result of one of the most comprehensive studies of the health of the planet ever conducted. Moreover, the 2021 Economics of Biodiversity review, published by the UK government, asserts that "biodiversity is declining faster than at any time in human history." According to a 2022 study published in Frontiers in Ecology and the Environment, a survey of more than 3,000 experts says that the extent of the mass extinction might be greater than previously thought, and estimates that roughly 30% of species "have been globally threatened or driven extinct since the year 1500." In a 2022 report, IPBES listed unsustainable fishing, hunting, and logging as some of the primary drivers of the global extinction crisis. A 2023 study published in PLOS One shows that around two million species are threatened with extinction, double the estimate put forward in the 2019 IPBES report. According to a 2023 study published in PNAS, at least 73 genera of animals have gone extinct since 1500. If humans had never existed, the study estimates it would have taken 18,000 years for the same genera to have disappeared naturally, leading the authors to conclude that "the current generic extinction rates are 35 times higher than expected background rates prevailing in the last million years under the absence of human impacts" and that human civilization is causing the "rapid mutilation of the tree of life." Attribution There is widespread consensus among scientists that human activities—especially habitat destruction, resource consumption, and the elimination of species— are the main drivers of the current extinction crisis. Rising extinction rates among mammals, birds, reptiles, amphibians, and other groups have led many scientists to declare a global biodiversity crisis. Scientific debate The description of recent extinction as a mass extinction has been debated among scientists. Stuart Pimm, for example, asserts that the sixth mass extinction "is something that hasn't happened yet – we are on the edge of it." Several studies posit that the Earth has entered a sixth mass extinction event, including a 2015 paper by Barnosky et al. and a November 2017 statement titled "World Scientists' Warning to Humanity: A Second Notice", led by eight authors and signed by 15,364 scientists from 184 countries which asserted, among other things, that "we have unleashed a mass extinction event, the sixth in roughly 540 million years, wherein many current life forms could be extirpated or at least committed to extinction by the end of this century." 
The World Wide Fund for Nature's 2020 Living Planet Report says that wildlife populations have declined by 68% since 1970 as a result of overconsumption, population growth, and intensive farming, which is further evidence that humans have unleashed a sixth mass extinction event; however, this finding has been disputed by one 2020 study, which posits that this major decline was primarily driven by a few extreme outlier populations, and that when these outliers are removed, the trend shifts to that of a decline between the 1980s and 2000s, but a roughly positive trend after 2000. A 2021 report in Frontiers in Conservation Science, which cites both of the aforementioned studies, says "population sizes of vertebrate species that have been monitored across years have declined by an average of 68% over the last five decades, with certain population clusters in extreme decline, thus presaging the imminent extinction of their species," and asserts "that we are already on the path of a sixth major extinction is now scientifically undeniable." A January 2022 review article published in Biological Reviews builds upon previous studies documenting biodiversity decline to assert that a sixth mass extinction event caused by anthropogenic activity is currently under way. A December 2022 study published in Science Advances states that "the planet has entered the sixth mass extinction" and warns that current anthropogenic trends, particularly regarding climate and land-use changes, could result in the loss of more than a tenth of plant and animal species by the end of the century. 12% of all bird species are threatened with extinction. A 2023 study published in Biological Reviews found that, of 70,000 monitored species, some 48% are experiencing population declines from anthropogenic pressures, whereas only 3% have increasing populations. The UNDP's 2020 Human Development Report, The Next Frontier: Human Development and the Anthropocene, likewise frames biodiversity loss as one of the defining planetary pressures of the current era. The 2022 Living Planet Report found that vertebrate wildlife populations have plummeted by an average of almost 70% since 1970, with agriculture and fishing being the primary drivers of this decline. Some scientists, including Rodolfo Dirzo and Paul R. Ehrlich, contend that the sixth mass extinction is largely unknown to most people globally and is also misunderstood by many in the scientific community. They say it is not the disappearance of species, which gets the most attention, that is at the heart of the crisis, but "the existential threat of myriad population extinctions." Anthropocene The abundance of species extinctions considered anthropogenic, or due to human activity, has sometimes (especially when referring to hypothesized future events) been collectively called the "Anthropocene extinction". Anthropocene is a term introduced in 2000. Some now postulate that a new geological epoch has begun, with the most abrupt and widespread extinction of species since the Cretaceous–Paleogene extinction event 66 million years ago. The term "anthropocene" is being used more frequently by scientists, and some commentators may refer to the current and projected future extinctions as part of a longer Holocene extinction. The Holocene–Anthropocene boundary is contested, with some commentators asserting significant human influence on climate for much of what is normally regarded as the Holocene Epoch. Some experts mark the transition from the Holocene to the Anthropocene at the onset of the industrial revolution. 
They also note that the official use of this term in the near future will heavily rely on its usefulness, especially for Earth scientists studying late Holocene periods. It has been suggested that human activity has made the period starting from the mid-20th century different enough from the rest of the Holocene to consider it a new geological epoch, known as the Anthropocene, a term which was considered for inclusion in the timeline of Earth's history by the International Commission on Stratigraphy in 2016, but the proposal was rejected in 2024. To constitute the Holocene as an extinction event, scientists must determine exactly when anthropogenic greenhouse gas emissions began to measurably alter natural atmospheric levels on a global scale, and when these alterations caused changes to global climate. Using chemical proxies from Antarctic ice cores, researchers have estimated the fluctuations of carbon dioxide (CO2) and methane (CH4) gases in the Earth's atmosphere during the Late Pleistocene and Holocene epochs. Estimates of the fluctuations of these two gases in the atmosphere, using chemical proxies from Antarctic ice cores, generally indicate that the peak of the Anthropocene occurred within the previous two centuries: typically beginning with the Industrial Revolution, when the highest greenhouse gas levels were recorded. Human ecology A 2015 article in Science suggested that humans are unique in ecology as an unprecedented "global superpredator", regularly preying on large numbers of fully grown terrestrial and marine apex predators, and with a great deal of influence over food webs and climatic systems worldwide. Although significant debate exists as to how much human predation and indirect effects contributed to prehistoric extinctions, certain population crashes have been directly correlated with human arrival. Human activity has been the main cause of mammalian extinctions since the Late Pleistocene. A 2018 study published in PNAS found that since the dawn of human civilization, the biomass of wild mammals has decreased by 83%. The biomass decrease is 80% for marine mammals, 50% for plants, and 15% for fish. Currently, livestock make up 60% of the biomass of all mammals on Earth, followed by humans (36%) and wild mammals (4%). As for birds, 70% are domesticated, such as poultry, whereas only 30% are wild. Historic extinction Human activity Activities contributing to extinctions Extinction of animals, plants, and other organisms caused by human actions may go as far back as the late Pleistocene, over 12,000 years ago. There is a correlation between megafaunal extinction and the arrival of humans. Megafauna that are still extant also suffered severe declines that were highly correlated with human expansion and activity. Over the past 125,000 years, the average body size of wildlife has fallen by 14% as actions by prehistoric humans eradicated megafauna on all continents with the exception of Africa. Over the past 130,000 years, avian functional diversity has declined precipitously and disproportionately relative to phylogenetic diversity losses. Human civilization was founded on and grew from agriculture. The more land used for farming, the greater the population a civilization could sustain, and subsequent popularization of farming led to widespread habitat conversion. Habitat destruction by humans, thus replacing the original local ecosystems, is a major driver of extinction. 
The sustained conversion of biodiversity rich forests and wetlands into poorer fields and pastures (of lesser carrying capacity for wild species), over the last 10,000 years, has considerably reduced the Earth's carrying capacity for wild birds and mammals, among other organisms, in both population size and species count. Other, related human causes of the extinction event include deforestation, hunting, pollution, the introduction in various regions of non-native species, and the widespread transmission of infectious diseases spread through livestock and crops. Agriculture and climate change Recent investigations into the practice of landscape burning during the Neolithic Revolution have a major implication for the current debate about the timing of the Anthropocene and the role that humans may have played in the production of greenhouse gases prior to the Industrial Revolution. Studies of early hunter-gatherers raise questions about the current use of population size or density as a proxy for the amount of land clearance and anthropogenic burning that took place in pre-industrial times. Scientists have questioned the correlation between population size and early territorial alterations. Ruddiman and Ellis' research paper in 2009 makes the case that early farmers involved in systems of agriculture used more land per capita than growers later in the Holocene, who intensified their labor to produce more food per unit of area (thus, per laborer); arguing that agricultural involvement in rice production implemented thousands of years ago by relatively small populations created significant environmental impacts through large-scale means of deforestation. While a number of human-derived factors are recognized as contributing to rising atmospheric concentrations of CH4 (methane) and CO2 (carbon dioxide), deforestation and territorial clearance practices associated with agricultural development may have contributed most to these concentrations globally in earlier millennia. Scientists that are employing a variance of archaeological and paleoecological data argue that the processes contributing to substantial human modification of the environment spanned many thousands of years on a global scale and thus, not originating as late as the Industrial Revolution. Palaeoclimatologist William Ruddiman has argued that in the early Holocene 11,000 years ago, atmospheric carbon dioxide and methane levels fluctuated in a pattern which was different from the Pleistocene epoch before it. He argued that the patterns of the significant decline of CO2 levels during the last ice age of the Pleistocene inversely correlate to the Holocene where there have been dramatic increases of CO2 around 8000 years ago and CH4 levels 3000 years after that. The correlation between the decrease of CO2 in the Pleistocene and the increase of it during the Holocene implies that the causation of this spark of greenhouse gases into the atmosphere was the growth of human agriculture during the Holocene. Climate change One of the main theories explaining early Holocene extinctions is historic climate change. The climate change theory has suggested that a change in climate near the end of the late Pleistocene stressed the megafauna to the point of extinction. Some scientists favor abrupt climate change as the catalyst for the extinction of the megafauna at the end of the Pleistocene, most who believe increased hunting from early modern humans also played a part, with others even suggesting that the two interacted. 
In the Americas, a controversial explanation for the shift in climate is presented under the Younger Dryas impact hypothesis, which states that the impact of comets cooled global temperatures. Despite its popularity among nonscientists, this hypothesis has never been accepted by relevant experts, who dismiss it as a fringe theory. Contemporary extinction History Contemporary human overpopulation and continued population growth, along with per-capita consumption growth, prominently in the past two centuries, are regarded as the underlying causes of extinction. Inger Andersen, the executive director of the United Nations Environment Programme, stated that "we need to understand that the more people there are, the more we put the Earth under heavy pressure. As far as biodiversity is concerned, we are at war with nature." Some scholars assert that the emergence of capitalism as the dominant economic system has accelerated ecological exploitation and destruction, and has also exacerbated mass species extinction. CUNY professor David Harvey, for example, posits that the neoliberal era "happens to be the era of the fastest mass extinction of species in the Earth's recent history". Ecologist William E. Rees concludes that the "neoliberal paradigm contributes significantly to planetary unraveling" by treating the economy and the ecosphere as totally separate systems, and by neglecting the latter. Major lobbying organizations representing corporations in the agriculture, fisheries, forestry and paper, mining, and oil and gas industries, including the United States Chamber of Commerce, have been pushing back against legislation that could address the extinction crisis. A 2022 report by the climate think tank InfluenceMap stated that "although industry associations, especially in the US, appear reluctant to discuss the biodiversity crisis, they are clearly engaged on a wide range of policies with significant impacts on biodiversity loss." The loss of animal species from ecological communities, defaunation, is primarily driven by human activity. This has resulted in empty forests, ecological communities depleted of large vertebrates. This is not to be confused with extinction, as it includes both the disappearance of species and declines in abundance. Defaunation effects were first implied at the Symposium of Plant-Animal Interactions at the University of Campinas, Brazil in 1988 in the context of Neotropical forests. Since then, the term has gained broader usage in conservation biology as a global phenomenon. Big cat populations have severely declined over the last half-century and could face extinction in the following decades. According to 2011 IUCN estimates: lions are down to 25,000, from 450,000; leopards are down to 50,000, from 750,000; cheetahs are down to 12,000, from 45,000; tigers are down to 3,000 in the wild, from 50,000. A December 2016 study by the Zoological Society of London, Panthera Corporation and Wildlife Conservation Society showed that cheetahs are far closer to extinction than previously thought, with only 7,100 remaining in the wild, existing within only 9% of their historic range. Human pressures are to blame for the cheetah population crash, including prey loss due to overhunting by people, retaliatory killing from farmers, habitat loss and the illegal wildlife trade. Populations of brown bears have experienced similar population decline. 
The term pollinator decline refers to the reduction in abundance of insect and other animal pollinators in many ecosystems worldwide beginning at the end of the twentieth century, and continuing into the present day. Pollinators, which are necessary for 75% of food crops, are declining globally in both abundance and diversity. A 2017 study led by Radboud University's Hans de Kroon indicated that the biomass of insect life in Germany had declined by three-quarters in the previous 25 years. Participating researcher Dave Goulson of Sussex University stated that their study suggested that humans are making large parts of the planet uninhabitable for wildlife. Goulson characterized the situation as an approaching "ecological Armageddon", adding that "if we lose the insects then everything is going to collapse." A 2019 study found that over 40% of insect species are threatened with extinction. The most significant drivers in the decline of insect populations are associated with intensive farming practices, along with pesticide use and climate change. The world's insect population decreases by around 1 to 2% per year. Various species are predicted to become extinct in the near future, among them some species of rhinoceros, primates, and pangolins. Others, including several species of giraffe, are considered "vulnerable" and are experiencing significant population declines from anthropogenic impacts including hunting, deforestation and conflict. Hunting alone threatens bird and mammalian populations around the world. The direct killing of megafauna for meat and body parts is the primary driver of their destruction, with 70% of the 362 megafauna species in decline as of 2019. Mammals in particular have suffered such severe losses as the result of human activity (mainly during the Quaternary extinction event, but partly during the Holocene) that it could take several million years for them to recover. Contemporary assessments have discovered that roughly 41% of amphibians, 25% of mammals, 21% of reptiles and 14% of birds are threatened with extinction, which could disrupt ecosystems on a global scale and eliminate billions of years of phylogenetic diversity. 189 countries, which are signatory to the Convention on Biological Diversity (Rio Accord), have committed to preparing a Biodiversity Action Plan, a first step at identifying specific endangered species and habitats, country by country. A 2023 study published in Current Biology concluded that current biodiversity loss rates could reach a tipping point and inevitably trigger a total ecosystem collapse. Recent extinction Recent extinctions are more directly attributable to human influences, whereas prehistoric extinctions can be attributed to other factors. The International Union for Conservation of Nature (IUCN) characterizes 'recent' extinction as those that have occurred past the cut-off point of 1500, and at least 875 plant and animal species have gone extinct since that time and 2009. Some species, such as the Père David's deer and the Hawaiian crow, are extinct in the wild, and survive solely in captive populations. Other populations are only locally extinct (extirpated), still existent elsewhere, but reduced in distribution, as with the extinction of gray whales in the Atlantic, and of the leatherback sea turtle in Malaysia. 
Since the Late Pleistocene, humans (together with other factors) have been rapidly driving the largest vertebrate animals towards extinction, and in the process interrupting a 66-million-year-old feature of ecosystems, the relationship between diet and body mass, which researchers suggest could have unpredictable consequences. A 2019 study published in Nature Communications found that rapid biodiversity loss is impacting larger mammals and birds to a much greater extent than smaller ones, with the body mass of such animals expected to shrink by 25% over the next century. Another 2019 study published in Biology Letters found that extinction rates are perhaps much higher than previously estimated, in particular for bird species. The 2019 Global Assessment Report on Biodiversity and Ecosystem Services lists the primary causes of contemporary extinctions in descending order: (1) changes in land and sea use (primarily agriculture and overfishing respectively); (2) direct exploitation of organisms such as hunting; (3) anthropogenic climate change; (4) pollution and (5) invasive alien species spread by human trade. This report, along with the 2020 Living Planet Report by the WWF, both project that climate change will be the leading cause in the next several decades. A June 2020 study published in PNAS posits that the contemporary extinction crisis "may be the most serious environmental threat to the persistence of civilization, because it is irreversible" and that its acceleration "is certain because of the still fast growth in human numbers and consumption rates." The study found that more than 500 vertebrate species are poised to be lost in the next two decades. Habitat destruction Humans both create and destroy crop cultivar and domesticated animal varieties. Advances in transportation and industrial farming has led to monoculture and the extinction of many cultivars. The use of certain plants and animals for food has also resulted in their extinction, including silphium and the passenger pigeon. It was estimated in 2012 that 13% of Earth's ice-free land surface is used as row-crop agricultural sites, 26% used as pastures, and 4% urban-industrial areas. In March 2019, Nature Climate Change published a study by ecologists from Yale University, who found that over the next half century, human land use will reduce the habitats of 1,700 species by up to 50%, pushing them closer to extinction. That same month PLOS Biology published a similar study drawing on work at the University of Queensland, which found that "more than 1,200 species globally face threats to their survival in more than 90% of their habitat and will almost certainly face extinction without conservation intervention". Since 1970, the populations of migratory freshwater fish have declined by 76%, according to research published by the Zoological Society of London in July 2020. Overall, around one in three freshwater fish species are threatened with extinction due to human-driven habitat degradation and overfishing. Some scientists and academics assert that industrial agriculture and the growing demand for meat is contributing to significant global biodiversity loss as this is a significant driver of deforestation and habitat destruction; species-rich habitats, such as the Amazon region and Indonesia being converted to agriculture. A 2017 study by the World Wildlife Fund (WWF) found that 60% of biodiversity loss can be attributed to the vast scale of feed crop cultivation required to rear tens of billions of farm animals. 
Moreover, a 2006 report by the Food and Agriculture Organization (FAO) of the United Nations, Livestock's Long Shadow, also found that the livestock sector is a "leading player" in biodiversity loss. More recently, in 2019, the IPBES Global Assessment Report on Biodiversity and Ecosystem Services attributed much of this ecological destruction to agriculture and fishing, with the meat and dairy industries having a very significant impact. Since the 1970s food production has soared to feed a growing human population and bolster economic growth, but at a huge price to the environment and other species. The report says some 25% of the Earth's ice-free land is used for cattle grazing. A 2020 study published in Nature Communications warned that human impacts from housing, industrial agriculture and in particular meat consumption are wiping out a combined 50 billion years of Earth's evolutionary history (defined as phylogenetic diversity) and driving to extinction some of the "most unique animals on the planet," among them the aye-aye lemur, the Chinese crocodile lizard and the pangolin. Urbanization has also been cited as a significant driver of biodiversity loss, particularly of plant life. A 1999 study of local plant extirpations in Great Britain found that urbanization contributed at least as much to local plant extinction as did agriculture. Climate change Climate change is expected to be a major driver of extinctions from the 21st century onward. Rising levels of carbon dioxide are resulting in an influx of this gas into the ocean, increasing its acidity. Marine organisms which possess calcium carbonate shells or exoskeletons experience physiological pressure as the carbonate reacts with acid. For example, this is already resulting in coral bleaching on various coral reefs worldwide, which provide valuable habitat and maintain high biodiversity. Marine gastropods, bivalves, and other invertebrates are also affected, as are the organisms that feed on them. Some studies have suggested that it is not climate change that is driving the current extinction crisis, but the demands of contemporary human civilization on nature. However, a rise in average global temperatures greater than 5.2 °C is projected to cause a mass extinction similar to the "Big Five" mass extinction events of the Phanerozoic, even without other anthropogenic impacts on biodiversity. Overexploitation Overhunting can reduce the local population of game animals by more than half, as well as reducing population density, and may lead to extinction for some species. Populations located nearer to villages are significantly more at risk of depletion. Several conservationist organizations, among them IFAW and HSUS, assert that trophy hunters, particularly from the United States, are playing a significant role in the decline of giraffes, which they refer to as a "silent extinction". The surge in mass killings by poachers involved in the illegal ivory trade, along with habitat loss, is threatening African elephant populations. In 1979, their populations stood at 1.7 million; at present there are fewer than 400,000 remaining. Prior to European colonization, scientists believe Africa was home to roughly 20 million elephants. According to the Great Elephant Census, 30% of African elephants (or 144,000 individuals) disappeared over a seven-year period, 2007 to 2014. African elephants could become extinct by 2035 if poaching rates continue. 
Fishing has had a devastating effect on marine organism populations for several centuries even before the explosion of destructive and highly effective fishing practices like trawling. Humans are unique among predators in that they regularly prey on other adult apex predators, particularly in marine environments; bluefin tuna, blue whales, North Atlantic right whales, and over fifty species of sharks and rays are vulnerable to predation pressure from human fishing, in particular commercial fishing. A 2016 study published in Science concludes that humans tend to hunt larger species, and this could disrupt ocean ecosystems for millions of years. A 2020 study published in Science Advances found that around 18% of marine megafauna, including iconic species such as the Great white shark, are at risk of extinction from human pressures over the next century. In a worst-case scenario, 40% could go extinct over the same time period. According to a 2021 study published in Nature, 71% of oceanic shark and ray populations have been destroyed by overfishing (the primary driver of ocean defaunation) from 1970 to 2018, and are nearing the "point of no return" as 24 of the 31 species are now threatened with extinction, with several being classified as critically endangered. Almost two-thirds of sharks and rays around coral reefs are threatened with extinction from overfishing, with 14 of 134 species being critically endangered. Disease The decline of amphibian populations has also been identified as an indicator of environmental degradation. As well as habitat loss, introduced predators and pollution, Chytridiomycosis, a fungal infection accidentally spread by human travel, globalization, and the wildlife trade, has caused severe population drops of over 500 amphibian species, and perhaps 90 extinctions, including (among many others) the extinction of the golden toad in Costa Rica, the Gastric-brooding frog in Australia, the Rabb's Fringe-limbed Treefrog and the extinction of the Panamanian golden frog in the wild. Chytrid fungus has spread across Australia, New Zealand, Central America and Africa, including countries with high amphibian diversity such as cloud forests in Honduras and Madagascar. Batrachochytrium salamandrivorans is a similar infection currently threatening salamanders. Amphibians are now the most endangered vertebrate group, having existed for more than 300 million years through three other mass extinctions. Millions of bats in the US have been dying off since 2012 due to a fungal infection known as white-nose syndrome that spread from European bats, who appear to be immune. Population drops have been as great as 90% within five years, and extinction of at least one bat species is predicted. There is currently no form of treatment, and such declines have been described as "unprecedented" in bat evolutionary history by Alan Hicks of the New York State Department of Environmental Conservation. Between 2007 and 2013, over ten million beehives were abandoned due to colony collapse disorder, which causes worker bees to abandon the queen. Though no single cause has gained widespread acceptance by the scientific community, proposals include infections with Varroa and Acarapis mites; malnutrition; various pathogens; genetic factors; immunodeficiencies; loss of habitat; changing beekeeping practices; or a combination of factors. By region Megafauna were once found on every continent of the world, but are now almost exclusively found on the continent of Africa. 
In some regions, megafauna experienced population crashes and trophic cascades shortly after the arrival of the earliest human settlers. Worldwide, 178 species of the world's largest mammals died out between 52,000 and 9,000 BC; it has been suggested that a higher proportion of African megafauna survived because they evolved alongside humans. The timing of South American megafaunal extinction appears to precede human arrival, although the possibility that human activity at the time impacted the global climate enough to cause such an extinction has been suggested.
Africa
Africa experienced the smallest decline in megafauna compared to the other continents. This is presumably because African megafauna evolved alongside humans, and thus developed a healthy fear of them, unlike the comparatively tame animals of other continents.
Eurasia
Unlike other continents, the megafauna of Eurasia went extinct over a relatively long period of time, possibly due to climate fluctuations fragmenting and decreasing populations, leaving them vulnerable to over-exploitation, as with the steppe bison (Bison priscus). The warming of the Arctic region caused the rapid decline of grasslands, which had a negative effect on the grazing megafauna of Eurasia. Most of what once was mammoth steppe was converted to mire, rendering the environment incapable of supporting them, notably the woolly mammoth. However, all these megafauna had survived previous interglacials with the same or more intense warming, suggesting that even during warm periods refugia may have existed, and that human hunting may have been the critical factor in their extinction. In the western Mediterranean region, anthropogenic forest degradation began around 4,000 BP, during the Chalcolithic, and became especially pronounced during the Roman era. The reasons for the decline of forest ecosystems stem from agriculture, grazing, and mining. During the twilight years of the Western Roman Empire, forests in northwestern Europe rebounded from losses incurred throughout the Roman period, though deforestation on a large scale resumed once again around 800 BP, during the High Middle Ages. In southern China, human land use is believed to have permanently altered the trend of vegetation dynamics in the region, which was previously governed by temperature; this is evidenced by high fluxes of charcoal from that time interval.
Americas
There has been a debate as to the extent to which the disappearance of megafauna at the end of the last glacial period can be attributed to human activities such as hunting, or even the slaughter of prey populations. Discoveries at Monte Verde in South America and at Meadowcroft Rock Shelter in Pennsylvania have caused a controversy regarding the Clovis culture. There likely would have been human settlements prior to the Clovis culture, and the history of humans in the Americas may extend back many thousands of years before the Clovis culture. The degree of correlation between human arrival and megafauna extinction is still being debated: for example, on Wrangel Island in Siberia the extinction of dwarf woolly mammoths (approximately 2000 BC) did not coincide with the arrival of humans, nor did megafaunal mass extinction on the South American continent, although it has been suggested that climate changes induced by anthropogenic effects elsewhere in the world may have contributed.
Comparisons are sometimes made between recent extinctions (approximately since the Industrial Revolution) and the Pleistocene extinction near the end of the last glacial period. The latter is exemplified by the extinction of large herbivores such as the woolly mammoth and the carnivores that preyed on them. Humans of this era actively hunted the mammoth and the mastodon, but it is not known if this hunting was the cause of the subsequent massive ecological changes, widespread extinctions and climate changes. The ecosystems encountered by the first Americans had not been exposed to human interaction, and may have been far less resilient to human made changes than the ecosystems encountered by industrial era humans. Therefore, the actions of the Clovis people, despite seeming insignificant by today's standards could indeed have had a profound effect on the ecosystems and wild life which was entirely unused to human influence. In the Yukon, the mammoth steppe ecosystem collapsed between 13,500 and 10,000 BP, though wild horses and woolly mammoths somehow persisted in the region for millennia after this collapse. In what is now Texas, a drop in local plant and animal biodiversity occurred during the Younger Dryas cooling, though while plant diversity recovered after the Younger Dryas, animal diversity did not. In the Channel Islands, multiple terrestrial species went extinct around the same time as human arrival, but direct evidence for an anthropogenic cause of their extinction remains lacking. In the montane forests of the Colombian Andes, spores of coprophilous fungi indicate megafaunal extinction occurred in two waves, the first occurring around 22,900 BP and the second around 10,990 BP. A 2023 study of megafaunal extinctions in the Junín Plateau of Peru found that the timing of the disappearance of megafauna was concurrent with a large uptick in fire activity attributed to human actions, implicating humans as the cause of their local extinction on the plateau. New Guinea Humans in New Guinea used volcanically fertilised soil following major eruptions and interfered with vegetation succession patterns since the Late Pleistocene, with this process intensifying in the Holocene. Australia Since European colonisation Australia has lost over 100 plant and animal species, including 10% of its mammal species, the highest of any continent. Australia was once home to a large assemblage of megafauna, with many parallels to those found on the African continent today. Australia's fauna is characterized by primarily marsupial mammals, and many reptiles and birds, all existing as giant forms until recently. Humans arrived on the continent very early, about 50,000 years ago. The extent human arrival contributed is controversial; climatic drying of Australia 40,000–60,000 years ago was an unlikely cause, as it was less severe in speed or magnitude than previous regional climate change which failed to kill off megafauna. Extinctions in Australia continued from original settlement until today in both plants and animals, while many more animals and plants have declined or are endangered. Due to the older timeframe and the soil chemistry on the continent, very little subfossil preservation evidence exists relative to elsewhere. 
However, continent-wide extinction of all genera weighing over 100 kilograms, and six of seven genera weighing between 45 and 100 kilograms occurred around 46,400 years ago (4,000 years after human arrival) and the fact that megafauna survived until a later date on the island of Tasmania following the establishment of a land bridge suggest direct hunting or anthropogenic ecosystem disruption such as fire-stick farming as likely causes. The first evidence of direct human predation leading to extinction in Australia was published in 2016. A 2021 study found that the rate of extinction of Australia's megafauna is rather unusual, with some generalistic species having gone extinct earlier while highly specialized ones having become extinct later or even still surviving today. A mosaic cause of extinction with different anthropogenic and environmental pressures has been proposed. The arrival of invasive species such as feral cats and cane toads has further devastated Australia's ecosystems. Caribbean Human arrival in the Caribbean around 6,000 years ago is correlated with the extinction of many species. These include many different genera of ground and arboreal sloths across all islands. These sloths were generally smaller than those found on the South American continent. Megalocnus were the largest genus at up to , Acratocnus were medium-sized relatives of modern two-toed sloths endemic to Cuba, Imagocnus also of Cuba, Neocnus and many others. Macaronesia The arrival of the first human settlers in the Azores saw the introduction of invasive plants and livestock to the archipelago, resulting in the extinction of at least two plant species on Pico Island. On Faial Island, the decline of Prunus lusitanica has been hypothesized by some scholars to have been related to the tree species being endozoochoric, with the extirpation or extinction of various bird species drastically limiting its seed dispersal. Lacustrine ecosystems were ravaged by human colonization, as evidenced by hydrogen isotopes from C30 fatty acids recording hypoxic bottom waters caused by eutrophication in Lake Funda on Flores Island beginning between 1500 and 1600 AD. The arrival of humans on the archipelago of Madeira caused the extinction of approximately two-thirds of its endemic bird species, with two non-endemic birds also being locally extirpated from the archipelago. Of thirty-four land snail species collected in a subfossil sample from eastern Madeira Island, nine became extinct following the arrival of humans. On the Desertas Islands, of forty-five land snail species known to exist before human colonization, eighteen are extinct and five are no longer present on the islands. Eurya stigmosa, whose extinction is typically attributed to climate change following the end of the Pleistocene rather than humans, may have survived until the colonization of the archipelago by the Portuguese and gone extinct as a result of human activity. Introduced mice have been implicated as a leading driver of extinction on Madeira following its discovery and settlement by humans. In the Canary Islands, native thermophilous woodlands were decimated and two tree taxa were driven extinct following the arrival of its first humans, primarily as a result of increased fire clearance and soil erosion and the introduction of invasive pigs, goats, and rats. Invasive species introductions accelerated during the Age of Discovery when Europeans first settled the Macaronesian archipelago. 
The archipelago's laurel forests, though still negatively impacted, fared better due to being less suitable for human economic use. Cabo Verde, like the Canary Islands, witnessed precipitous deforestation upon the arrival of European settlers and various invasive species brought by them in the archipelago, with the archipelago's thermophilous woodlands suffering the greatest destruction. Introduced species, overgrazing, increased fire incidence, and soil degradation have been attributed as the chief causes of Cabo Verde's ecological devastation. Pacific Archaeological and paleontological digs on 70 different Pacific islands suggested that numerous species became extinct as people moved across the Pacific, starting 30,000 years ago in the Bismarck Archipelago and Solomon Islands. It is currently estimated that among the bird species of the Pacific, some 2000 species have gone extinct since the arrival of humans, representing a 20% drop in the biodiversity of birds worldwide. In Polynesia, the Late Holocene declines in avifaunas only abated after they were heavily depleted and there were increasingly fewer bird species able to be driven to extinction. Iguanas were likewise decimated by the spread of humans. Additionally, the endemic faunas of Pacific archipelagos are exceptionally at risk in the coming decades due to rising sea levels caused by global warming. Lord Howe Island, which remained uninhabited until the arrival of Europeans in the South Pacific in the 18th century, lost much of its endemic avifauna when it became a whaling station in the early 19th century. Another wave of bird extinctions occurred following the introduction of black rats in 1918. The endemic megafaunal meiolaniid turtles of Vanuatu became extinct immediately following the first human arrivals and remains of them containing evidence of butchery by humans have been found. The arrival of humans in New Caledonia marked the commencement of coastal forest and mangrove decline on the island. The archipelago's megafauna was still extant when humans arrived, but indisputable evidence for the anthropogenicity of their extinction remains elusive. In Fiji, the giant iguanas Brachylophus gibbonsi and Lapitiguana impensa both succumbed to human-induced extinction shortly after encountering the first humans on the island. In American Samoa, deposits dating back to the period of initial human colonisation contain elevated quantities of bird, turtle, and fish remains caused by increased predation pressure. On Mangaia in the Cook Islands, human colonisation was associated with a major extinction of endemic avifauna, along with deforestation, erosion of volcanic hillsides, and increased charcoal influx, causing additional environmental damage. On Rapa in the Austral Archipelago, human arrival, marked by the increase in charcoal and in taro pollen in the palynological record, is associated with the extinction of an endemic palm. Henderson Island, once thought to be untouched by humans, was colonised and later abandoned by Polynesians. The ecological collapse on the island caused by the anthropogenic extinctions is believed to have caused the island's abandonment. The first human settlers of the Hawaiian Islands are thought to have arrived between 300 and 800 AD, with European arrival in the 16th century. Hawaii is notable for its endemism of plants, birds, insects, mollusks and fish; 30% of its organisms are endemic. 
Many of its species are endangered or have gone extinct, primarily due to accidentally introduced species and livestock grazing. Over 40% of its bird species have gone extinct, and it is the location of 75% of extinctions in the United States. Evidence suggests that the introduction of the Polynesian rat, above all other factors, drove the ecocide of the endemic forests of the archipelago. Extinction has increased in Hawaii over the last 200 years and is relatively well documented, with extinctions among native snails used as estimates for global extinction rates. High rates of habitat fragmentation on the archipelago have further reduced biodiversity. The extinction of endemic Hawaiian avifauna is likely to accelerate even further as anthropogenic global warming adds additional pressure on top of land-use changes and invasive species.
Madagascar
Within centuries of the arrival of humans around the 1st millennium AD, nearly all of Madagascar's distinct, endemic, and geographically isolated megafauna became extinct. The largest animals, of more than , went extinct very shortly after the first human arrival, with large and medium-sized species, as well as 17 species of "giant" lemurs, dying out after prolonged hunting pressure from an expanding human population moving into more remote regions of the island around 1000 years ago. Some of these lemurs typically weighed over , and their fossils have provided evidence of human butchery on many species. Other megafauna present on the island included the Malagasy hippopotamuses as well as the large flightless elephant birds; both groups are thought to have gone extinct in the interval 750–1050 AD. Smaller fauna experienced initial increases due to decreased competition, and then subsequent declines over the last 500 years. All fauna weighing over died out. The primary reasons for the decline of Madagascar's biota, which at the time was already stressed by natural aridification, were human hunting, herding, farming, and forest clearing, all of which persist and threaten Madagascar's remaining taxa today. The natural ecosystems of Madagascar as a whole were further impacted by the much greater incidence of fire as a result of anthropogenic fire production; evidence from Lake Amparihibe on the island of Nosy Be indicates a shift in local vegetation from intact rainforest to a fire-disturbed patchwork of grassland and woodland between 1300 and 1000 BP.
New Zealand
New Zealand is characterized by its geographic isolation and island biogeography, and had been isolated from mainland Australia for 80 million years. It was the last large land mass to be colonized by humans. Upon the arrival of Polynesian settlers in the late 13th century, the native biota suffered a catastrophic decline due to deforestation, hunting, and the introduction of invasive species. The extinction of all of the islands' megafaunal birds occurred within several hundred years of human arrival. The moa, large flightless ratites, were thriving during the Late Holocene, but became extinct within 200 years of the arrival of human settlers, as did the enormous Haast's eagle, their primary predator, and at least two species of large, flightless geese. The Polynesians also introduced the Polynesian rat. This may have put some pressure on other birds, but at the time of early European contact (18th century) and colonization (19th century), the bird life was prolific.
The megafaunal extinction happened extremely rapidly despite a very small population density, which never exceeded 0.01 people per km². Extinctions of parasites followed the extinction of New Zealand's megafauna. With them, the Europeans brought various invasive species, including ship rats, possums, cats and mustelids, which devastated native bird life, some of which had adapted flightlessness and ground-nesting habits and had no defensive behavior as a result of having no native mammalian predators. The kākāpō, the world's biggest parrot, which is flightless, now only exists in managed breeding sanctuaries. New Zealand's national emblem, the kiwi, is on the endangered bird list.
Mitigation
Stabilizing human populations; reining in capitalism, decreasing economic demands, and shifting them to economic activities with low impacts on biodiversity; transitioning to plant-based diets; and increasing the number and size of terrestrial and marine protected areas have been suggested to avoid or limit biodiversity loss and a possible sixth mass extinction. Rodolfo Dirzo and Paul R. Ehrlich suggest that "the one fundamental, necessary, 'simple' cure, ... is reducing the scale of the human enterprise." According to a 2021 paper published in Frontiers in Conservation Science, humanity almost certainly faces a "ghastly future" of mass extinction, biodiversity collapse, climate change, and their impacts unless major efforts to change human industry and activity are rapidly undertaken. Reducing human population growth has been suggested as a means of mitigating climate change and the biodiversity crisis, although many scholars believe it has been largely ignored in mainstream policy discourse. An alternative proposal is greater agricultural efficiency and sustainability: much non-arable land can be converted into arable land suitable for growing food crops, and mushrooms have also been known to help repair damaged soil. A 2018 article in Science advocated for the global community to designate 30% of the planet by 2030, and 50% by 2050, as protected areas to mitigate the contemporary extinction crisis. It highlighted that the human population is projected to grow to 10 billion by the middle of the century, and consumption of food and water resources is projected to double by this time. A 2022 report published in Science warned that 44% of Earth's terrestrial surface, or , must be conserved and made "ecologically sound" to prevent further biodiversity loss. In November 2018, the UN's biodiversity chief Cristiana Pașca Palmer urged people worldwide to pressure governments to implement significant protections for wildlife by 2020. She called biodiversity loss a "silent killer" as dangerous as global warming but said it had received little attention by comparison. "It's different from climate change, where people feel the impact in everyday life. With biodiversity, it is not so clear, but by the time you feel what is happening, it may be too late." In January 2020, the UN Convention on Biological Diversity drafted a Paris-style plan to stop biodiversity and ecosystem collapse by setting the deadline of 2030 to protect 30% of the Earth's land and oceans and to reduce pollution by 50%, with the aim of allowing for the restoration of ecosystems by 2050. The world failed to meet the Aichi Biodiversity Targets for 2020 set by the convention during a summit in Japan in 2010. Of the 20 biodiversity targets proposed, only six were "partially achieved" by the deadline.
It was called a global failure by Inger Andersen, head of the United Nations Environment Programme. Some scientists have proposed keeping extinctions below 20 per year for the next century as a global target to reduce species loss, which is the biodiversity equivalent of the 2 °C climate target, although it is still much higher than the normal background rate of two per year that prevailed prior to anthropogenic impacts on the natural world. An October 2020 report on the "era of pandemics" from IPBES found that many of the same human activities that contribute to biodiversity loss and climate change, including deforestation and the wildlife trade, have also increased the risk of future pandemics. The report offers several policy options to reduce such risk, such as taxing meat production and consumption, cracking down on the illegal wildlife trade, removing high disease-risk species from the legal wildlife trade, and eliminating subsidies to businesses which are harmful to the environment. According to marine zoologist John Spicer, "the COVID-19 crisis is not just another crisis alongside the biodiversity crisis and the climate change crisis. Make no mistake, this is one big crisis – the greatest that humans have ever faced." In December 2022, nearly every country on Earth, with the United States and the Holy See being the only exceptions, signed onto the Kunming-Montreal Global Biodiversity Framework agreement formulated at the 2022 United Nations Biodiversity Conference (COP 15), which includes protecting 30% of land and oceans by 2030 and 22 other targets intended to mitigate the extinction crisis. The agreement is weaker than the Aichi Targets of 2010, and it was criticized by some countries for being rushed and not going far enough to protect endangered species.
Hydrogen atom
A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral hydrogen atom contains a single positively charged proton in the nucleus, and a single negatively charged electron bound to the nucleus by the Coulomb force. Atomic hydrogen constitutes about 75% of the baryonic mass of the universe. In everyday life on Earth, isolated hydrogen atoms (called "atomic hydrogen") are extremely rare. Instead, a hydrogen atom tends to combine with other atoms in compounds, or with another hydrogen atom to form ordinary (diatomic) hydrogen gas, H2. "Atomic hydrogen" and "hydrogen atom" in ordinary English use have overlapping, yet distinct, meanings. For example, a water molecule contains two hydrogen atoms, but does not contain atomic hydrogen (which would refer to isolated hydrogen atoms). Atomic spectroscopy shows that there is a discrete infinite set of states in which a hydrogen (or any) atom can exist, contrary to the predictions of classical physics. Attempts to develop a theoretical understanding of the states of the hydrogen atom have been important to the history of quantum mechanics, since all other atoms can be roughly understood by knowing in detail about this simplest atomic structure. Isotopes The most abundant isotope, protium (1H), or light hydrogen, contains no neutrons and is simply a proton and an electron. Protium is stable and makes up 99.985% of naturally occurring hydrogen atoms. Deuterium (2H) contains one neutron and one proton in its nucleus. Deuterium is stable, makes up 0.0156% of naturally occurring hydrogen, and is used in industrial processes like nuclear reactors and Nuclear Magnetic Resonance. Tritium (3H) contains two neutrons and one proton in its nucleus and is not stable, decaying with a half-life of 12.32 years. Because of its short half-life, tritium does not exist in nature except in trace amounts. Heavier isotopes of hydrogen are only created artificially in particle accelerators and have half-lives on the order of 10−22 seconds. They are unbound resonances located beyond the neutron drip line; this results in prompt emission of a neutron. The formulas below are valid for all three isotopes of hydrogen, but slightly different values of the Rydberg constant (correction formula given below) must be used for each hydrogen isotope. Hydrogen ion Lone neutral hydrogen atoms are rare under normal conditions. However, neutral hydrogen is common when it is covalently bound to another atom, and hydrogen atoms can also exist in cationic and anionic forms. If a neutral hydrogen atom loses its electron, it becomes a cation. The resulting ion, which consists solely of a proton for the usual isotope, is written as "H+" and sometimes called hydron. Free protons are common in the interstellar medium, and solar wind. In the context of aqueous solutions of classical Brønsted–Lowry acids, such as hydrochloric acid, it is actually hydronium, H3O+, that is meant. Instead of a literal ionized single hydrogen atom being formed, the acid transfers the hydrogen to H2O, forming H3O+. If instead a hydrogen atom gains a second electron, it becomes an anion. The hydrogen anion is written as "H–" and called hydride. Theoretical analysis The hydrogen atom has special significance in quantum mechanics and quantum field theory as a simple two-body problem physical system which has yielded many simple analytical solutions in closed-form. 
Failed classical description
Experiments by Ernest Rutherford in 1909 showed the structure of the atom to be a dense, positive nucleus with a tenuous negative charge cloud around it. This immediately raised questions about how such a system could be stable. Classical electromagnetism had shown that any accelerating charge radiates energy, as shown by the Larmor formula. If the electron is assumed to orbit in a perfect circle and radiates energy continuously, it would rapidly spiral into the nucleus with a fall time of
$$t_\text{fall} \approx \frac{a_0^3}{4 r_0^2 c} \approx 1.6 \times 10^{-11}\ \text{s},$$
where $a_0$ is the Bohr radius and $r_0$ is the classical electron radius. If this were true, all atoms would instantly collapse. However, atoms seem to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller. Instead, atoms were observed to emit only discrete frequencies of radiation. The resolution would lie in the development of quantum mechanics.
Bohr–Sommerfeld model
In 1913, Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simple assumptions in order to correct the failed classical model. The assumptions included:
Electrons can only be in certain, discrete circular orbits or stationary states, thereby having a discrete set of possible radii and energies.
Electrons do not emit radiation while in one of these stationary states.
An electron can gain or lose energy by jumping from one discrete orbit to another.
Bohr supposed that the electron's angular momentum is quantized with possible values
$$L = n\hbar, \qquad n = 1, 2, 3, \ldots,$$
where $\hbar$ is the Planck constant over $2\pi$. He also supposed that the centripetal force which keeps the electron in its orbit is provided by the Coulomb force, and that energy is conserved. Bohr derived the energy of each orbit of the hydrogen atom to be
$$E_n = -\frac{m_e e^4}{8 \varepsilon_0^2 h^2}\,\frac{1}{n^2},$$
where $m_e$ is the electron mass, $e$ is the electron charge, $\varepsilon_0$ is the vacuum permittivity, and $n$ is the quantum number (now known as the principal quantum number). Bohr's predictions matched experiments measuring the hydrogen spectral series to the first order, giving more confidence to a theory that used quantized values. For $n = 1$, the value $E_1 = -13.6\ \text{eV}$ is called the Rydberg unit of energy. It is related to the Rydberg constant $R_\infty$ of atomic physics by $1\ \text{Ry} = h c R_\infty$. The exact value of the Rydberg constant assumes that the nucleus is infinitely massive with respect to the electron. For hydrogen-1, hydrogen-2 (deuterium), and hydrogen-3 (tritium), which have finite mass, the constant must be slightly modified to use the reduced mass of the system, rather than simply the mass of the electron. This includes the kinetic energy of the nucleus in the problem, because the total (electron plus nuclear) kinetic energy is equivalent to the kinetic energy of the reduced mass moving with a velocity equal to the electron velocity relative to the nucleus. However, since the nucleus is much heavier than the electron, the electron mass and reduced mass are nearly the same. The Rydberg constant $R_M$ for a hydrogen atom (one electron) is given by
$$R_M = \frac{R_\infty}{1 + m_e/M},$$
where $M$ is the mass of the atomic nucleus. For hydrogen-1, the quantity $m_e/M$ is about 1/1836 (i.e. the electron-to-proton mass ratio). For deuterium and tritium, the ratios are about 1/3670 and 1/5497 respectively. These figures, when added to 1 in the denominator, represent very small corrections in the value of $R_M$, and thus only small corrections to all energy levels in corresponding hydrogen isotopes.
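As an illustration of the arithmetic above (not part of the original article), the following Python sketch evaluates the Bohr energies $E_n = -13.6\ \text{eV}/n^2$ and the reduced-mass-corrected Rydberg constant for the three hydrogen isotopes; the constants are rounded CODATA-style values chosen only for this example.

```python
# Illustrative sketch: Bohr energy levels and the reduced-mass correction
# R_M = R_inf / (1 + m_e / M) for the three hydrogen isotopes.

M_E = 9.1093837e-31        # electron mass, kg (approximate)
RYDBERG_EV = 13.605693     # Rydberg unit of energy, eV (approximate)
R_INF = 1.0973731568e7     # Rydberg constant for infinite nuclear mass, 1/m

# Approximate nuclear masses (kg) for protium, deuterium, and tritium.
NUCLEAR_MASS = {
    "1H": 1.67262192e-27,
    "2H": 3.34358377e-27,
    "3H": 5.00735630e-27,
}

def bohr_energy(n: int) -> float:
    """Bohr energy of level n in eV: E_n = -13.6 eV / n^2 (infinite nuclear mass)."""
    return -RYDBERG_EV / n**2

def rydberg_for_isotope(nuclear_mass: float) -> float:
    """Reduced-mass-corrected Rydberg constant R_M = R_inf / (1 + m_e/M)."""
    return R_INF / (1.0 + M_E / nuclear_mass)

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(f"E_{n} = {bohr_energy(n):+.3f} eV")
    for isotope, mass in NUCLEAR_MASS.items():
        print(f"R({isotope}) = {rydberg_for_isotope(mass):.6e} m^-1")
```

Running the sketch shows that the isotope-dependent corrections to the Rydberg constant sit in the fourth significant figure, consistent with the text's statement that they shift the energy levels only slightly.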
There were still problems with Bohr's model:
it failed to predict other spectral details such as fine structure and hyperfine structure;
it could only predict energy levels with any accuracy for single-electron atoms (hydrogen-like atoms);
the predicted values were only correct to order $\alpha^2$, where $\alpha$ is the fine-structure constant.
Most of these shortcomings were resolved by Arnold Sommerfeld's modification of the Bohr model. Sommerfeld introduced two additional degrees of freedom, allowing an electron to move on an elliptical orbit characterized by its eccentricity and declination with respect to a chosen axis. This introduced two additional quantum numbers, which correspond to the orbital angular momentum and its projection on the chosen axis. Thus the correct multiplicity of states (except for the factor 2 accounting for the yet unknown electron spin) was found. Further, by applying special relativity to the elliptic orbits, Sommerfeld succeeded in deriving the correct expression for the fine structure of hydrogen spectra (which happens to be exactly the same as in the most elaborate Dirac theory). However, some observed phenomena, such as the anomalous Zeeman effect, remained unexplained. These issues were resolved with the full development of quantum mechanics and the Dirac equation. It is often alleged that the Schrödinger equation is superior to the Bohr–Sommerfeld theory in describing the hydrogen atom. This is not the case, as most of the results of both approaches coincide or are very close (a remarkable exception is the problem of a hydrogen atom in crossed electric and magnetic fields, which cannot be self-consistently solved in the framework of the Bohr–Sommerfeld theory), and in both theories the main shortcomings result from the absence of the electron spin. It was the complete failure of the Bohr–Sommerfeld theory to explain many-electron systems (such as the helium atom or the hydrogen molecule) which demonstrated its inadequacy in describing quantum phenomena.
Schrödinger equation
The Schrödinger equation is the standard quantum-mechanics model; it allows one to calculate the stationary states and also the time evolution of quantum systems. Exact analytical answers are available for the nonrelativistic hydrogen atom. Before presenting a formal account, here we give an elementary overview. Given that the hydrogen atom contains a nucleus and an electron, quantum mechanics allows one to predict the probability of finding the electron at any given radial distance $r$. It is given by the square of a mathematical function known as the "wavefunction", which is a solution of the Schrödinger equation. The lowest energy equilibrium state of the hydrogen atom is known as the ground state. The ground state wave function is known as the $\psi_{1\mathrm{s}}$ wavefunction. It is written as
$$\psi_{1\mathrm{s}}(r) = \frac{1}{\sqrt{\pi}\,a_0^{3/2}}\, e^{-r/a_0}.$$
Here, $a_0$ is the numerical value of the Bohr radius. The probability density of finding the electron at a distance $r$ in any radial direction is the squared value of the wavefunction:
$$|\psi_{1\mathrm{s}}(r)|^2 = \frac{1}{\pi a_0^3}\, e^{-2r/a_0}.$$
The $\psi_{1\mathrm{s}}$ wavefunction is spherically symmetric, and the surface area of a shell at distance $r$ is $4\pi r^2$, so the total probability $P(r)\,dr$ of the electron being in a shell at a distance $r$ and thickness $dr$ is
$$P(r)\,dr = 4\pi r^2\,|\psi_{1\mathrm{s}}(r)|^2\,dr.$$
It turns out that this is a maximum at $r = a_0$. That is, the Bohr picture of an electron orbiting the nucleus at radius $a_0$ corresponds to the most probable radius. Actually, there is a finite probability that the electron may be found at any place $r$, with the probability indicated by the square of the wavefunction.
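The claim that the radial probability $4\pi r^2 |\psi_{1\mathrm{s}}|^2$ peaks at the Bohr radius can be checked numerically. The following Python sketch is only an illustration, with $a_0$ set to 1; it confirms both the location of the maximum and the normalization.

```python
# Illustrative sketch (assumption: ground-state wavefunction
# psi_1s(r) = exp(-r/a0) / sqrt(pi * a0^3)); verifies numerically that the
# radial probability density P(r) = 4*pi*r^2*|psi_1s|^2 peaks at r = a0.
import numpy as np

A0 = 1.0  # work in units of the Bohr radius

def psi_1s(r):
    return np.exp(-r / A0) / np.sqrt(np.pi * A0**3)

def radial_probability(r):
    return 4.0 * np.pi * r**2 * psi_1s(r)**2

r = np.linspace(1e-6, 10.0, 100_000)
dr = r[1] - r[0]
P = radial_probability(r)

print("most probable radius:", r[np.argmax(P)])   # ~1.0, i.e. r = a0
print("total probability:", np.sum(P) * dr)        # ~1.0, i.e. normalized
```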
Since the probability of finding the electron somewhere in the whole volume is unity, the integral of $P(r)\,dr$ is unity. Then we say that the wavefunction is properly normalized. As discussed below, the ground state is also indicated by the quantum numbers $(n, \ell, m) = (1, 0, 0)$. The second lowest energy states, just above the ground state, are given by the quantum numbers $n = 2$ with $\ell = 0$ and $\ell = 1$. These states all have the same energy and are known as the $2\mathrm{s}$ and $2\mathrm{p}$ states. There is one $2\mathrm{s}$ state and there are three $2\mathrm{p}$ states. An electron in the $2\mathrm{s}$ or $2\mathrm{p}$ state is most likely to be found in the second Bohr orbit, with energy given by the Bohr formula.
Wavefunction
The Hamiltonian of the hydrogen atom is the radial kinetic energy operator plus the Coulomb electrostatic potential energy between the positive proton and the negative electron. Using the time-independent Schrödinger equation, ignoring all spin-coupling interactions and using the reduced mass $\mu$, the equation is written as
$$-\frac{\hbar^2}{2\mu}\nabla^2\psi(r,\theta,\varphi) - \frac{e^2}{4\pi\varepsilon_0 r}\,\psi(r,\theta,\varphi) = E\,\psi(r,\theta,\varphi).$$
Expanding the Laplacian in spherical coordinates gives
$$-\frac{\hbar^{2}}{2\mu}\left[\frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial \psi}{\partial r}\right)+\frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}\psi}{\partial\varphi^{2}}\right]-\frac{e^{2}}{4\pi\varepsilon_{0}r}\,\psi=E\,\psi.$$
This is a separable partial differential equation which can be solved in terms of special functions. When the wavefunction is separated as a product of functions $R(r)$, $\Theta(\theta)$, and $\Phi(\varphi)$, three independent differential equations appear (one radial, one polar, and one azimuthal), with $A$ and $B$ being the separation constants. The normalized position wavefunctions, given in spherical coordinates, are
$$\psi_{n\ell m}(r,\theta,\varphi) = \sqrt{\left(\frac{2}{n a_0^{*}}\right)^{3}\frac{(n-\ell-1)!}{2n\,(n+\ell)!}}\; e^{-\rho/2}\,\rho^{\ell}\, L_{n-\ell-1}^{2\ell+1}(\rho)\; Y_{\ell}^{m}(\theta,\varphi),$$
where $\rho = \frac{2r}{n a_0^{*}}$, $a_0^{*}$ is the reduced Bohr radius, $L_{n-\ell-1}^{2\ell+1}(\rho)$ is a generalized Laguerre polynomial of degree $n-\ell-1$, and $Y_{\ell}^{m}(\theta,\varphi)$ is a spherical harmonic function of degree $\ell$ and order $m$. Note that the generalized Laguerre polynomials are defined differently by different authors. The usage here is consistent with the definitions used by Messiah and Mathematica; in other conventions the Laguerre polynomial carries an extra factorial factor, or the polynomial appearing in the hydrogen wave function is indexed differently. The quantum numbers can take the following values:
$n = 1, 2, 3, \ldots$ (principal quantum number)
$\ell = 0, 1, \ldots, n - 1$ (azimuthal quantum number)
$m = -\ell, \ldots, \ell$ (magnetic quantum number).
Additionally, these wavefunctions are normalized (i.e., the integral of their modulus square equals 1) and orthogonal:
$$\langle n, \ell, m \,|\, n', \ell', m' \rangle = \int \psi^{*}_{n\ell m}\,\psi_{n'\ell'm'}\; d^{3}\mathbf{r} = \delta_{nn'}\,\delta_{\ell\ell'}\,\delta_{mm'},$$
where $|n, \ell, m\rangle$ is the state represented by the wavefunction $\psi_{n\ell m}$ in Dirac notation, and $\delta$ is the Kronecker delta function. The wavefunctions in momentum space are related to the wavefunctions in position space through a Fourier transform which, for the bound states, yields a closed-form expression whose radial part involves a Gegenbauer polynomial, with the momentum expressed in units of $\hbar/a_0^{*}$. The solutions to the Schrödinger equation for hydrogen are analytical, giving a simple expression for the hydrogen energy levels and thus for the frequencies of the hydrogen spectral lines, fully reproducing the Bohr model and going beyond it. The solution also yields two other quantum numbers and the shape of the electron's wave function ("orbital") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds. The Schrödinger equation also applies to more complicated atoms and molecules. When there is more than one electron or nucleus the solution is not analytical and either computer calculations are necessary or simplifying assumptions must be made. Since the Schrödinger equation is only valid for non-relativistic quantum mechanics, the solutions it yields for the hydrogen atom are not entirely correct. The Dirac equation of relativistic quantum theory improves these solutions (see below).
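A minimal numerical check of the radial part of these wavefunctions can be written with SciPy's generalized Laguerre polynomials, which follow the same convention noted above. The sketch below is illustrative only (atomic units with $a_0^{*} = 1$) and verifies the radial normalization for a few $(n, \ell)$ pairs.

```python
# Illustrative sketch: builds the hydrogen radial functions R_nl from
# generalized Laguerre polynomials and numerically checks the radial
# normalization integral of |R_nl|^2 r^2 dr. Reduced Bohr radius set to 1.
import numpy as np
from math import factorial
from scipy.special import genlaguerre

A0 = 1.0

def radial_wavefunction(n, l, r):
    rho = 2.0 * r / (n * A0)
    norm = np.sqrt((2.0 / (n * A0))**3
                   * factorial(n - l - 1) / (2 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

r = np.linspace(1e-8, 60.0, 200_000)
dr = r[1] - r[0]
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    R = radial_wavefunction(n, l, r)
    norm = np.sum(R**2 * r**2) * dr
    print(f"n={n}, l={l}: integral of |R|^2 r^2 dr = {norm:.4f}")  # ~1.0 each
```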
Results of Schrödinger equation
The solution of the Schrödinger equation (wave equation) for the hydrogen atom uses the fact that the Coulomb potential produced by the nucleus is isotropic (it is radially symmetric in space and only depends on the distance to the nucleus). Although the resulting energy eigenfunctions (the orbitals) are not necessarily isotropic themselves, their dependence on the angular coordinates follows completely generally from this isotropy of the underlying potential: the eigenstates of the Hamiltonian (that is, the energy eigenstates) can be chosen as simultaneous eigenstates of the angular momentum operator. This corresponds to the fact that angular momentum is conserved in the orbital motion of the electron around the nucleus. Therefore, the energy eigenstates may be classified by two angular momentum quantum numbers, $\ell$ and $m$ (both are integers). The angular momentum quantum number $\ell$ determines the magnitude of the angular momentum. The magnetic quantum number $m$ determines the projection of the angular momentum on the (arbitrarily chosen) $z$-axis. In addition to mathematical expressions for total angular momentum and angular momentum projection of wavefunctions, an expression for the radial dependence of the wave functions must be found. It is only here that the details of the Coulomb potential enter (leading to Laguerre polynomials in $r$). This leads to a third quantum number, the principal quantum number $n$. The principal quantum number in hydrogen is related to the atom's total energy. Note that the maximum value of the angular momentum quantum number is limited by the principal quantum number: it can run only up to $n - 1$, i.e., $\ell = 0, 1, \ldots, n - 1$. Due to angular momentum conservation, states of the same $\ell$ but different $m$ have the same energy (this holds for all problems with rotational symmetry). In addition, for the hydrogen atom, states of the same $n$ but different $\ell$ are also degenerate (i.e., they have the same energy). However, this is a specific property of hydrogen and is no longer true for more complicated atoms which have an (effective) potential differing from the $1/r$ form (due to the presence of the inner electrons shielding the nucleus potential). Taking into account the spin of the electron adds a last quantum number, the projection of the electron's spin angular momentum along the $z$-axis, which can take on two values. Therefore, any eigenstate of the electron in the hydrogen atom is described fully by four quantum numbers. According to the usual rules of quantum mechanics, the actual state of the electron may be any superposition of these states. This explains also why the choice of $z$-axis for the directional quantization of the angular momentum vector is immaterial: an orbital of given $\ell$ and $m$ obtained for another preferred axis $z'$ can always be represented as a suitable superposition of the various states of different $m$ (but same $\ell$) that have been obtained for $z$.
Mathematical summary of eigenstates of hydrogen atom
In 1928, Paul Dirac found an equation that was fully compatible with special relativity, and (as a consequence) made the wave function a 4-component "Dirac spinor" including "up" and "down" spin components, with both positive and "negative" energy (or matter and antimatter). The solution to this equation gave the following results, more accurate than the Schrödinger solution.
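The degeneracy pattern described above is easy to enumerate: for each $n$ there are $n^2$ spatial states, or $2n^2$ once spin is included. The short Python sketch below (purely illustrative) counts them explicitly.

```python
# Illustrative sketch: enumerates the allowed (l, m) pairs for a given principal
# quantum number n, reproducing the n^2 spatial degeneracy of hydrogen levels
# (2*n^2 once the two spin projections are included).
def spatial_states(n: int):
    return [(l, m) for l in range(n) for m in range(-l, l + 1)]

for n in (1, 2, 3, 4):
    states = spatial_states(n)
    print(f"n={n}: {len(states)} spatial states (expected n^2 = {n**2}), "
          f"{2 * len(states)} including spin")
```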
Energy levels
The energy levels of hydrogen, including fine structure (excluding Lamb shift and hyperfine structure), are given, to first order in $\alpha^2$, by the Sommerfeld fine-structure expression
$$E_{n,j} = -\frac{13.6\ \text{eV}}{n^{2}}\left[1 + \frac{\alpha^{2}}{n^{2}}\left(\frac{n}{j+\tfrac{1}{2}} - \frac{3}{4}\right)\right],$$
where $\alpha$ is the fine-structure constant and $j$ is the total angular momentum quantum number, which is equal to $\ell \pm \tfrac{1}{2}$ depending on the orientation of the electron spin relative to the orbital angular momentum. This formula represents a small correction to the energy obtained by Bohr and Schrödinger as given above. The factor in square brackets in the last expression is nearly one; the extra term arises from relativistic effects (for details, see the features going beyond the Schrödinger solution, below). It is worth noting that this expression was first obtained by A. Sommerfeld in 1916 based on the relativistic version of the old Bohr theory. Sommerfeld, however, used different notation for the quantum numbers.
Visualizing the hydrogen electron orbitals
The image to the right shows the first few hydrogen atom orbitals (energy eigenfunctions). These are cross-sections of the probability density that are color-coded (black represents zero density and white represents the highest density). The angular momentum (orbital) quantum number ℓ is denoted in each column, using the usual spectroscopic letter code (s means ℓ = 0, p means ℓ = 1, d means ℓ = 2). The main (principal) quantum number n (= 1, 2, 3, ...) is marked to the right of each row. For all pictures the magnetic quantum number m has been set to 0, and the cross-sectional plane is the xz-plane (z is the vertical axis). The probability density in three-dimensional space is obtained by rotating the one shown here around the z-axis. The "ground state", i.e. the state of lowest energy, in which the electron is usually found, is the first one, the 1s state (principal quantum level n = 1, ℓ = 0). Black lines occur in each but the first orbital: these are the nodes of the wavefunction, i.e. where the probability density is zero. (More precisely, the nodes are spherical harmonics that appear as a result of solving the Schrödinger equation in spherical coordinates.) The quantum numbers determine the layout of these nodes. There are $n - 1$ total nodes, of which $\ell$ are angular nodes:
$m$ angular nodes go around the $\varphi$ axis (in the xy plane). (The figure above does not show these nodes since it plots cross-sections through the xz-plane.)
$\ell - m$ (the remaining angular nodes) occur on the $\theta$ (vertical) axis.
$n - \ell - 1$ (the remaining non-angular nodes) are radial nodes.
Oscillation of orbitals
The frequency associated with a state in level n is $\omega_n = E_n/\hbar$, so in the case of a superposition of multiple orbitals they would oscillate due to the difference in frequency. For example, for two states $\psi_1$ and $\psi_2$, the combined wavefunction is given by
$$\Psi(t) = \psi_1\, e^{-iE_1 t/\hbar} + \psi_2\, e^{-iE_2 t/\hbar}$$
and the probability function is
$$|\Psi(t)|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\!\left[\psi_1^{*}\psi_2\, e^{-i(E_2-E_1)t/\hbar}\right].$$
The result is a rotating wavefunction. The movement of electrons and the change of quantum states radiates light at the frequency of the cosine term, $(E_2 - E_1)/\hbar$.
Features going beyond the Schrödinger solution
There are several important effects that are neglected by the Schrödinger equation and which are responsible for certain small but measurable deviations of the real spectral lines from the predicted ones: Although the mean speed of the electron in hydrogen is only 1/137th of the speed of light, many modern experiments are sufficiently precise that a complete theoretical explanation requires a fully relativistic treatment of the problem. A relativistic treatment results in a momentum increase of about 1 part in 37,000 for the electron.
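To give a sense of the size of this correction, the following Python sketch (an illustration using the first-order expansion above, with approximate constants, not the exact Sommerfeld/Dirac expression) evaluates the fine-structure splitting between the $2P_{1/2}$ and $2P_{3/2}$ levels; the result is of order $10^{-5}$ eV, roughly 10–11 GHz.

```python
# Illustrative sketch (assumes the first-order fine-structure expansion):
# E_{n,j} ~ -(13.6 eV / n^2) * [1 + (alpha^2 / n^2) * (n/(j + 1/2) - 3/4)].
# Evaluates the 2P(1/2)-2P(3/2) splitting and converts it to a frequency.
ALPHA = 1.0 / 137.035999    # fine-structure constant (approximate)
RYDBERG_EV = 13.605693      # Rydberg unit of energy, eV
H_EV_S = 4.135667696e-15    # Planck constant, eV*s

def fine_structure_energy(n: int, j: float) -> float:
    """Energy of level (n, j) in eV, to first order in alpha^2."""
    correction = 1.0 + (ALPHA**2 / n**2) * (n / (j + 0.5) - 0.75)
    return -(RYDBERG_EV / n**2) * correction

split_ev = abs(fine_structure_energy(2, 0.5) - fine_structure_energy(2, 1.5))
print(f"2P(1/2)-2P(3/2) splitting: {split_ev:.3e} eV "
      f"(~{split_ev / H_EV_S / 1e9:.1f} GHz)")  # roughly 10-11 GHz
```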
Since the electron's wavelength is determined by its momentum, orbitals containing higher speed electrons show contraction due to smaller wavelengths. Even when there is no external magnetic field, in the inertial frame of the moving electron, the electromagnetic field of the nucleus has a magnetic component. The spin of the electron has an associated magnetic moment which interacts with this magnetic field. This effect is also explained by special relativity, and it leads to the so-called spin–orbit coupling, i.e., an interaction between the electron's orbital motion around the nucleus, and its spin. Both of these features (and more) are incorporated in the relativistic Dirac equation, with predictions that come still closer to experiment. Again the Dirac equation may be solved analytically in the special case of a two-body system, such as the hydrogen atom. The resulting solution quantum states now must be classified by the total angular momentum quantum number $j$ (arising through the coupling between electron spin and orbital angular momentum). States of the same $n$ and the same $j$ are still degenerate. Thus, direct analytical solution of the Dirac equation predicts the $2S_{1/2}$ and $2P_{1/2}$ levels of hydrogen to have exactly the same energy, which is in contradiction with observations (Lamb–Retherford experiment). There are always vacuum fluctuations of the electromagnetic field, according to quantum mechanics. Due to such fluctuations the degeneracy between states of the same $j$ but different $\ell$ is lifted, giving them slightly different energies. This has been demonstrated in the famous Lamb–Retherford experiment and was the starting point for the development of the theory of quantum electrodynamics (which is able to deal with these vacuum fluctuations and employs the famous Feynman diagrams for approximations using perturbation theory). This effect is now called the Lamb shift. For these developments, it was essential that the solution of the Dirac equation for the hydrogen atom could be worked out exactly, such that any experimentally observed deviation had to be taken seriously as a signal of failure of the theory.
Alternatives to the Schrödinger theory
In the language of Heisenberg's matrix mechanics, the hydrogen atom was first solved by Wolfgang Pauli using a rotational symmetry in four dimensions [O(4)-symmetry] generated by the angular momentum and the Laplace–Runge–Lenz vector. By extending the symmetry group O(4) to the dynamical group O(4,2), the entire spectrum and all transitions were embedded in a single irreducible group representation. In 1979 the (non-relativistic) hydrogen atom was solved for the first time within Feynman's path integral formulation of quantum mechanics by Duru and Kleinert. This work greatly extended the range of applicability of Feynman's method. Further alternative models are Bohmian mechanics and the complex Hamilton–Jacobi formulation of quantum mechanics.
Homeopathy
Homeopathy or homoeopathy is a pseudoscientific system of alternative medicine. It was conceived in 1796 by the German physician Samuel Hahnemann. Its practitioners, called homeopaths or homeopathic physicians, believe that a substance that causes symptoms of a disease in healthy people can cure similar symptoms in sick people; this doctrine is called similia similibus curentur, or "like cures like". Homeopathic preparations are termed remedies and are made using homeopathic dilution. In this process, the selected substance is repeatedly diluted until the final product is chemically indistinguishable from the diluent. Often not even a single molecule of the original substance can be expected to remain in the product. Between each dilution homeopaths may hit and/or shake the product, claiming this makes the diluent "remember" the original substance after its removal. Practitioners claim that such preparations, upon oral intake, can treat or cure disease. All relevant scientific knowledge about physics, chemistry, biochemistry and biology contradicts homeopathy. Homeopathic remedies are typically biochemically inert, and have no effect on any known disease. Its theory of disease, centered around principles Hahnemann termed miasms, is inconsistent with subsequent identification of viruses and bacteria as causes of disease. Clinical trials have been conducted and generally demonstrated no objective effect from homeopathic preparations. The fundamental implausibility of homeopathy as well as a lack of demonstrable effectiveness has led to it being characterized within the scientific and medical communities as quackery and fraud. Homeopathy achieved its greatest popularity in the 19th century. It was introduced to the United States in 1825, and the first American homeopathic school opened in 1835. Throughout the 19th century, dozens of homeopathic institutions appeared in Europe and the United States. During this period, homeopathy was able to appear relatively successful, as other forms of treatment could be harmful and ineffective. By the end of the century the practice began to wane, with the last exclusively homeopathic medical school in the United States closing in 1920. During the 1970s, homeopathy made a significant comeback, with sales of some homeopathic products increasing tenfold. The trend corresponded with the rise of the New Age movement, and may be in part due to chemophobia, an irrational aversion to synthetic chemicals, and the longer consultation times homeopathic practitioners provided. In the 21st century, a series of meta-analyses have shown that the therapeutic claims of homeopathy lack scientific justification. As a result, national and international bodies have recommended the withdrawal of government funding for homeopathy in healthcare. National bodies from Australia, the United Kingdom, Switzerland and France, as well as the European Academies' Science Advisory Council and the Russian Academy of Sciences have all concluded that homeopathy is ineffective, and recommended against the practice receiving any further funding. The National Health Service in England no longer provides funding for homeopathic remedies and asked the Department of Health to add homeopathic remedies to the list of forbidden prescription items. France removed funding in 2021, while Spain has also announced moves to ban homeopathy and other pseudotherapies from health centers. History Homeopathy was created in 1796 by Samuel Hahnemann. 
Hahnemann rejected the mainstream medicine of the late 18th century as irrational and inadvisable, because it was largely ineffective and often harmful. He advocated the use of single drugs at lower doses and promoted an immaterial, vitalistic view of how living organisms function. The term homeopathy was coined by Hahnemann and first appeared in print in 1807. He also coined the expression "allopathic medicine", which was used to pejoratively refer to traditional Western medicine. Concept Hahnemann conceived of homeopathy while translating a medical treatise by the Scottish physician and chemist William Cullen into German. Being sceptical of Cullen's theory that cinchona cured malaria because it was bitter, Hahnemann ingested some bark specifically to investigate what would happen. He experienced fever, shivering and joint pain: symptoms similar to those of malaria itself. From this, Hahnemann came to believe that all effective drugs produce symptoms in healthy individuals similar to those of the diseases that they treat. This led to the name "homeopathy", which comes from the hómoios, "-like" and páthos, "suffering". The doctrine that those drugs are effective which produce symptoms similar to the symptoms caused by the diseases they treat, called "the law of similars", was expressed by Hahnemann with the Latin phrase similia similibus curentur, or "like cures like". Hahnemann's law of similars is unproven and does not derive from the scientific method. An account of the effects of eating cinchona bark noted by Oliver Wendell Holmes, published in 1861, failed to reproduce the symptoms Hahnemann reported. Subsequent scientific work showed that cinchona cures malaria because it contains quinine, which kills the Plasmodium falciparum parasite that causes the disease; the mechanism of action is unrelated to Hahnemann's ideas. Provings Hahnemann began to test what effects various substances may produce in humans, a procedure later called "homeopathic proving". These tests required subjects to test the effects of ingesting substances by recording all their symptoms as well as the ancillary conditions under which they appeared. He published a collection of provings in 1805, and a second collection of 65 preparations appeared in his book, Materia Medica Pura (1810). As Hahnemann believed that large doses of drugs that caused similar symptoms would only aggravate illness, he advocated for extreme dilutions. A technique was devised for making dilutions that Hahnemann claimed would preserve the substance's therapeutic properties while removing its harmful effects. Hahnemann believed that this process enhanced "the spirit-like medicinal powers of the crude substances". He gathered and published an overview of his new medical system in his book, The Organon of the Healing Art (1810), with a sixth edition published in 1921 that homeopaths still use today. Miasms and disease In the Organon, Hahnemann introduced the concept of "miasms" as the "infectious principles" underlying chronic disease and as "peculiar morbid derangement[s] of vital force". Hahnemann associated each miasm with specific diseases, and thought that initial exposure to miasms causes local symptoms, such as skin or venereal diseases. His assertion was that if these symptoms were suppressed by medication, the cause went deeper and began to manifest itself as diseases of the internal organs. 
Homeopathy maintains that treating diseases by directly alleviating their symptoms, as is sometimes done in conventional medicine, is ineffective because all "disease can generally be traced to some latent, deep-seated, underlying chronic, or inherited tendency". The underlying imputed miasm still remains, and deep-seated ailments can be corrected only by removing the deeper disturbance of the vital force. Hahnemann's hypotheses for miasms originally presented only three local symptoms: psora (the itch), syphilis (venereal disease) or sycosis (fig-wart disease). Of these the most important was psora, described as being related to any itching diseases of the skin and was claimed to be the foundation of many further disease conditions. Hahnemann believed it to be the cause of such diseases as epilepsy, cancer, jaundice, deafness, and cataracts. Since Hahnemann's time, other miasms have been proposed, some replacing illnesses previously attributed to the psora, including tuberculosis and cancer miasms. Hahnemann's miasm theory remains disputed and controversial within homeopathy even in modern times. The theory of miasms has been criticized as an explanation developed to preserve the system of homeopathy in the face of treatment failures, and for being inadequate to cover the many hundreds of sorts of diseases, as well as for failing to explain disease predispositions, as well as genetics, environmental factors, and the unique disease history of each patient. 19th century: rise to popularity and early criticism Homeopathy achieved its greatest popularity in the 19th century. It was introduced to the United States in 1825 by Hans Birch Gram, a student of Hahnemann. The first homeopathic school in the United States opened in 1835 and the American Institute of Homeopathy was established in 1844. Throughout the 19th century, dozens of homeopathic institutions appeared in Europe and the United States, and by 1900, there were 22 homeopathic colleges and 15,000 practitioners in the United States. Because medical practice of the time relied on treatments which were often ineffective and harmful, patients of homeopaths often had better outcomes than those being treated by medical practitioners. Though ineffective, homeopathic preparations are rarely detrimental, thus users are less likely to be harmed by the treatment that is supposed to be helping them. The relative success of homeopathy in the 19th century may have led to the abandonment of the ineffective and harmful treatments of bloodletting and purging and begun the move towards more effective, science-based medicine. One reason for the growing popularity of homeopathy was its apparent success in treating people suffering from infectious disease epidemics. During 19th-century epidemics of diseases such as cholera, death rates in homeopathic hospitals were often lower than in conventional hospitals, where the treatments used at the time were often harmful and did little or nothing to combat the diseases. Even during its rise in popularity, homeopathy was criticized by scientists and physicians. Sir John Forbes, physician to Queen Victoria, said in 1843 that the extremely small doses of homeopathy were regularly derided as useless and considered it "an outrage to human reason". James Young Simpson said in 1853 of the highly diluted drugs: "No poison, however strong or powerful, the billionth or decillionth of which would in the least degree affect a man or harm a fly." 
Nineteenth-century American physician and author Oliver Wendell Holmes was also a vocal critic of homeopathy and published an essay entitled Homœopathy and Its Kindred Delusions (1842). The members of the French Homeopathic Society observed in 1867 that some leading homeopaths of Europe not only were abandoning the practice of administering infinitesimal doses but were also no longer defending it. The last school in the United States exclusively teaching homeopathy closed in 1920.
Revival in the 20th century
According to academics Paul Unschuld and Edzard Ernst, the Nazi regime in Germany was fond of homeopathy and spent large sums of money on researching its mechanisms, but without gaining a positive result. Unschuld also states that homeopathy never subsequently took root in the United States, but remained more deeply established in European thinking. In the United States, the Food, Drug, and Cosmetic Act of 1938 (sponsored by Royal Copeland, a Senator from New York and homeopathic physician) recognized homeopathic preparations as drugs. In the 1950s, there were only 75 solely homeopathic practitioners in the U.S. By the mid to late 1970s, homeopathy made a significant comeback and the sales of some homeopathic companies increased tenfold. Some homeopaths credit the revival to Greek homeopath George Vithoulkas, who conducted a "great deal of research to update the scenarios and refine the theories and practice of homeopathy" in the 1970s, but Ernst and Simon Singh consider it to be linked to the rise of the New Age movement. Bruce Hood has argued that the increased popularity of homeopathy in recent times may be due to the comparatively long consultations practitioners are willing to give their patients, and to a preference for "natural" products, which people think are the basis of homeopathic preparations. Towards the end of the century, opposition to homeopathy began to increase again, with William T. Jarvis, the President of the National Council Against Health Fraud, saying that "Homeopathy is a fraud perpetrated on the public with the government's blessing, thanks to the abuse of political power of Sen. Royal S. Copeland."
21st century: renewed criticism
Since the beginning of the 21st century, a series of meta-analyses have further shown that the therapeutic claims of homeopathy lack scientific justification. This has led to a decrease or suspension of funding by many governments. In a 2010 report, the Science and Technology Committee of the United Kingdom House of Commons recommended that homeopathy should no longer receive National Health Service (NHS) funding due to its lack of scientific credibility; NHS funding for homeopathy ceased in 2017. They also asked the Department of Health in the UK to add homeopathic remedies to the list of forbidden prescription items. In 2015, the National Health and Medical Research Council of Australia found that "there are no health conditions for which there is reliable evidence that homeopathy is effective". The federal government only ended up accepting three of the 45 recommendations made by the 2018 review of Pharmacy Remuneration and Regulation. The same year the US Food and Drug Administration (FDA) held a hearing requesting public comment on the regulation of homeopathic drugs. In 2017 the FDA announced it would strengthen regulation of homeopathic products. The American non-profit Center for Inquiry (CFI) filed a lawsuit in 2018 against the CVS pharmacy for consumer fraud over its sale of homeopathic medicines.
It claimed that CVS was selling homeopathic products on an easier-to-obtain basis than standard medication. In 2019, CFI brought a similar lawsuit against Walmart for "committing wide-scale consumer fraud and endangering the health of its customers through its sale and marketing of homeopathic medicines". They also conducted a survey in which they found consumers felt ripped off when informed of the lack of evidence for the efficacy of homeopathic remedies, such as those sold by Walmart and CVS. In 2021, the French healthcare minister phased out social security reimbursements for homeopathic drugs. France has long had a stronger belief in the virtues of homeopathic drugs than many other countries and the world's biggest manufacturer of alternative medicine drugs, Boiron, is located in that country. Spain has also announced moves to ban homeopathy and other pseudotherapies. In 2016, the University of Barcelona cancelled its master's degree in Homeopathy citing "lack of scientific basis", after advice from the Spanish Ministry of Health. Shortly afterwards the University of Valencia announced the elimination of its Masters in Homeopathy. Preparations and treatment Homeopathic preparations are referred to as "homeopathic remedies". Practitioners rely on two types of reference when prescribing: Materia medica and repertories. A homeopathic materia medica is a collection of "drug pictures", organized alphabetically. A homeopathic repertory is a quick reference version of the materia medica that indexes the symptoms and then the associated remedies for each. In both cases different compilers may dispute particular inclusions in the references. The first symptomatic homeopathic materia medica was arranged by Hahnemann. The first homeopathic repertory was Georg Jahr's Symptomenkodex, published in German in 1835, and translated into English as the Repertory to the more Characteristic Symptoms of Materia Medica in 1838. This version was less focused on disease categories and was the forerunner to later works by James Tyler Kent. There are over 118 repertories published in English, with Kent's being one of the most used. Consultation Homeopaths generally begin with a consultation, which can be a 10–15 minute appointment or last for over an hour, where the patient describes their medical history. The patient describes the "modalities", or whether their symptoms change depending on the weather and other external factors. The practitioner also solicits information on mood, likes and dislikes, physical, mental and emotional states, life circumstances, and any physical or emotional illnesses. This information (also called the "symptom picture") is matched to the "drug picture" in the materia medica or repertory and used to determine the appropriate homeopathic remedies. In classical homeopathy, the practitioner attempts to match a single preparation to the totality of symptoms (the simillimum), while "clinical homeopathy" involves combinations of preparations based on the illness's symptoms. Preparation Homeopathy uses animal, plant, mineral, and synthetic substances in its preparations, generally referring to them using Latin names. Examples include arsenicum album (arsenic oxide), natrum muriaticum (sodium chloride or table salt), Lachesis muta (the venom of the bushmaster snake), opium, and thyroidinum (thyroid hormone). Homeopaths say this is to ensure accuracy. In the USA the common name must be displayed, although the Latin one can also be present. 
Homeopathic pills are made from an inert substance (often sugars, typically lactose), upon which a drop of liquid homeopathic preparation is placed and allowed to evaporate. Isopathy is a therapy derived from homeopathy in which the preparations come from diseased or pathological products such as fecal, urinary and respiratory discharges, blood, and tissue. They are called nosodes (from the Greek nosos, disease) with preparations made from "healthy" specimens being termed "sarcodes". Many so-called "homeopathic vaccines" are a form of isopathy. Tautopathy is a form of isopathy where the preparations are composed of drugs or vaccines that a person has consumed in the past, in the belief that this can reverse the supposed lingering damage caused by the initial use. There is no convincing scientific evidence for isopathy as an effective method of treatment. Some modern homeopaths use preparations they call "imponderables" because they do not originate from a substance but some other phenomenon presumed to have been "captured" by alcohol or lactose. Examples include X-rays and sunlight. Another derivative is electrohomeopathy, where an electric bio-energy of therapeutic value is supposedly extracted from plants. Popular in the late nineteenth century, electrohomeopathy is considered pseudoscientific. In 2012, the Allahabad High Court in Uttar Pradesh, India, handed down a decree stating that electrohomeopathy was quackery and no longer recognized it as a system of medicine. Other minority practices include paper preparations, in which the terms for substances and dilutions are written on pieces of paper and either pinned to the patients' clothing, put in their pockets, or placed under glasses of water that are then given to the patients. Radionics, the use of electromagnetic radiation such as radio waves, can also be used to manufacture preparations. Such practices have been strongly criticized by classical homeopaths as unfounded, speculative, and verging upon magic and superstition. Flower preparations are produced by placing flowers in water and exposing them to sunlight. The most famous of these are the Bach flower remedies, which were developed by Edward Bach. Dilutions Hahnemann claimed that undiluted doses caused reactions, sometimes dangerous ones, and thus advocated that preparations be given at the lowest possible dose. A solution that is more dilute is described as having a higher "potency", and more dilute preparations are thus claimed to be stronger and deeper-acting. The general method of dilution is serial dilution, where solvent is added to part of the previous mixture, but the "Korsakovian" method may also be used. In the Korsakovian method, the vessel in which the preparations are manufactured is emptied, refilled with solvent, with the volume of fluid adhering to the walls of the vessel deemed sufficient for the new batch. The Korsakovian method is sometimes referred to as K on the label of a homeopathic preparation. Another method is Fluxion, which dilutes the substance by continuously passing water through the vial. Insoluble solids, such as granite, diamond, and platinum, are diluted by grinding them with lactose ("trituration"). Three main logarithmic dilution scales are in regular use in homeopathy. Hahnemann created the "centesimal" or "C scale", diluting a substance by a factor of 100 at each stage. There is also a decimal dilution scale (notated as "X" or "D") in which the preparation is diluted by a factor of 10 at each stage. 
The centesimal scale was favoured by Hahnemann for most of his life, although in his last ten years Hahnemann developed a quintamillesimal (Q) scale which diluted the drug 1 part in 50,000. A 2C dilution works out to one part of the original substance in 10,000 parts of the solution. In standard chemistry, this produces a substance with a concentration of 0.01% (volume-volume percentage). A 6C dilution ends up with the original substance diluted by a factor of 100^6 = 10^12 (one part in one trillion). The end product is usually so diluted as to be indistinguishable from the diluent (pure water, sugar or alcohol). The greatest dilution reasonably likely to contain at least one molecule of the original substance is approximately 12C. Hahnemann advocated dilutions of 1 part to 10^60, or 30C. Hahnemann regularly used dilutions of up to 30C but opined that "there must be a limit to the matter". To counter the reduced potency at high dilutions he formed the view that vigorous shaking by striking on an elastic surface – a process termed succussion – was necessary. Homeopaths are unable to agree on the number and force of strikes needed, and there is no way that the claimed results of succussion can be tested. Critics of homeopathy commonly emphasize the dilutions involved in homeopathy, using analogies. One mathematically correct example is that a 12C solution is equivalent to "a pinch of salt in both the North and South Atlantic Oceans". One-third of a drop of some original substance diluted into all the water on Earth would produce a preparation with a concentration of about 13C. Robert L. Park points out that a 200C dilution of duck liver, marketed under the name Oscillococcinum, would require 10^320 universes' worth of molecules to contain just one original molecule in the final substance. The high dilutions characteristically used are often considered to be the most controversial and implausible aspect of homeopathy.
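The arithmetic behind these dilution scales is easy to check directly. The short Python sketch below is only an illustration (the function names and the assumption of starting from one mole of original substance are not taken from any homeopathic source); it computes the overall dilution factor for the C and X scales and the expected number of original molecules remaining, and reproduces the roughly 12C limit implied by the Avogadro constant:

AVOGADRO = 6.022e23  # molecules per mole

def dilution_factor(potency: int, scale: str = "C") -> float:
    """Total dilution factor for a potency on the C (1:100) or X/D (1:10) scale."""
    step = 100 if scale.upper() == "C" else 10
    return float(step) ** potency

def expected_molecules(potency: int, scale: str = "C", starting_molecules: float = AVOGADRO) -> float:
    """Expected number of original molecules left after serial dilution."""
    return starting_molecules / dilution_factor(potency, scale)

for potency in (2, 6, 12, 30, 200):
    n = expected_molecules(potency, "C")
    print(f"{potency:>3}C: dilution 1 in 10^{2 * potency}, "
          f"~{n:.3g} original molecules expected per mole of starting material")

# The output shows roughly 6e11 molecules surviving a 6C dilution, fewer than
# one at 12C (the Avogadro-number limit noted above), and effectively zero at
# 30C and 200C.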
Provings Homeopaths claim that they can determine the properties of their preparations by following a method which they call "proving". As performed by Hahnemann, provings involved administering various preparations to healthy volunteers. The volunteers were then observed, often for months at a time. They were made to keep extensive journals detailing all of their symptoms at specific times throughout the day. They were forbidden from consuming coffee, tea, spices, or wine for the duration of the experiment; playing chess was also prohibited because Hahnemann considered it to be "too exciting", though they were allowed to drink beer and encouraged to exercise in moderation. At first Hahnemann used undiluted doses for provings, but he later advocated provings with preparations at a 30C dilution, and most modern provings are carried out using ultra-dilute preparations. Provings are claimed to have been important in the development of the clinical trial, due to their early use of simple control groups, systematic and quantitative procedures, and some of the first application of statistics in medicine. The lengthy records of self-experimentation by homeopaths have occasionally proven useful in the development of modern drugs: for example, evidence that nitroglycerin might be useful as a treatment for angina was discovered by looking through homeopathic provings, though homeopaths themselves never used it for that purpose at that time. The first recorded provings were published by Hahnemann in his 1796 Essay on a New Principle. His Fragmenta de Viribus (1805) contained the results of 27 provings, and his 1810 Materia Medica Pura contained 65. For James Tyler Kent's 1905 Lectures on Homoeopathic Materia Medica, 217 preparations underwent provings and newer substances are continually added to contemporary versions. Though the proving process has superficial similarities with clinical trials, it is fundamentally different in that the process is subjective, not blinded, and modern provings are unlikely to use pharmacologically active levels of the substance under proving. As early as 1842, Oliver Holmes had noted that provings were impossibly vague, and the purported effect was not repeatable among different subjects. Evidence and efficacy Outside of the alternative medicine community, scientists have long considered homeopathy a sham or a pseudoscience, and the medical community regards it as quackery. There is an overall absence of sound statistical evidence of therapeutic efficacy, which is consistent with the lack of any biologically plausible pharmacological agent or mechanism. Proponents argue that homeopathic medicines must work by some, as yet undefined, biophysical mechanism. No homeopathic preparation has been shown to be different from placebo. Lack of scientific evidence The lack of convincing scientific evidence supporting its efficacy and its use of preparations without active ingredients have led to characterizations of homeopathy as pseudoscience and quackery, or, in the words of a 1998 medical review, "placebo therapy at best and quackery at worst". The Russian Academy of Sciences considers homeopathy a "dangerous 'pseudoscience' that does not work", and urges people to treat homeopathy "on a par with magic". The Chief Medical Officer for England, Dame Sally Davies, has stated that homeopathic preparations are "rubbish" and do not serve as anything more than placebos. In 2013, Mark Walport, the UK Government Chief Scientific Adviser and head of the Government Office for Science, said "homeopathy is nonsense, it is non-science." His predecessor, John Beddington, also said that homeopathy "has no underpinning of scientific basis" and is being "fundamentally ignored" by the Government. Jack Killen, acting deputy director of the National Center for Complementary and Alternative Medicine, says homeopathy "goes beyond current understanding of chemistry and physics". He adds: "There is, to my knowledge, no condition for which homeopathy has been proven to be an effective treatment." Ben Goldacre says that homeopaths who misrepresent scientific evidence to a scientifically illiterate public have "... walled themselves off from academic medicine, and critique has been all too often met with avoidance rather than argument". Homeopaths often prefer to ignore meta-analyses in favour of cherry-picked positive results, such as by promoting a particular observational study (one which Goldacre describes as "little more than a customer-satisfaction survey") as if it were more informative than a series of randomized controlled trials. In an article entitled "Should We Maintain an Open Mind about Homeopathy?" published in the American Journal of Medicine, Michael Baum and Edzard Ernst, writing to other physicians, wrote that "Homeopathy is among the worst examples of faith-based medicine... These axioms [of homeopathy] are not only out of line with scientific facts but also directly opposed to them. If homeopathy is correct, much of physics, chemistry, and pharmacology must be incorrect...". 
Plausibility of dilutions The exceedingly low concentration of homeopathic preparations, which often lack even a single molecule of the diluted substance, has been the basis of questions about the effects of the preparations since the 19th century. The laws of chemistry give this dilution limit, which is related to the Avogadro number, as being roughly equal to 12C homeopathic dilutions (1 part in 10^24). James Randi and the 10:23 campaign groups have highlighted the lack of active ingredients by taking large 'overdoses'. None of the hundreds of demonstrators in the UK, Australia, New Zealand, Canada and the US were injured and "no one was cured of anything, either". Modern advocates of homeopathy have proposed a concept of "water memory", according to which water "remembers" the substances mixed in it, and transmits the effect of those substances when consumed. This concept is inconsistent with the current understanding of matter, and water memory has never been demonstrated to have any detectable effect, biological or otherwise. Existence of a pharmacological effect in the absence of any true active ingredient is inconsistent with the law of mass action and the observed dose-response relationships characteristic of therapeutic drugs. Homeopaths contend that their methods produce a therapeutically active preparation, selectively including only the intended substance, though in reality any water will have been in contact with millions of different substances throughout its history, and homeopaths cannot account for the selected homeopathic substance being isolated as a special case in their process. Practitioners also hold that higher dilutions produce stronger medicinal effects. This idea is also inconsistent with observed dose-response relationships, where effects are dependent on the concentration of the active ingredient in the body. Some contend that the phenomenon of hormesis may support the idea of dilution increasing potency, but the dose-response relationship outside the zone of hormesis declines with dilution as normal, and nonlinear pharmacological effects do not provide any credible support for homeopathy. Efficacy No individual homeopathic preparation has been unambiguously shown by research to be different from placebo. The methodological quality of the early primary research was low, with problems such as weaknesses in study design and reporting, small sample size, and selection bias. Since better quality trials have become available, the evidence for efficacy of homeopathy preparations has diminished; the highest-quality trials indicate that the preparations themselves exert no intrinsic effect. A review conducted in 2010 of all the pertinent studies of "best evidence" produced by the Cochrane Collaboration concluded that this evidence "fails to demonstrate that homeopathic medicines have effects beyond placebo." In 2009, the United Kingdom's House of Commons Science and Technology Committee concluded that there was no compelling evidence of effect other than placebo. The Australian National Health and Medical Research Council completed a comprehensive review of the effectiveness of homeopathic preparations in 2015, in which it concluded that "there were no health conditions for which there was reliable evidence that homeopathy was effective." The European Academies' Science Advisory Council (EASAC) published its official analysis in 2017 finding a lack of evidence that homeopathic products are effective, and raising concerns about quality control. 
In contrast, a 2011 book was published, purportedly financed by the Swiss government, that concluded that homeopathy was effective and cost-efficient. Although hailed by proponents as proof that homeopathy works, it was found to be scientifically, logically and ethically flawed, with most authors having a conflict of interest. The Swiss Federal Office of Public Health later released a statement saying the book was published without the consent of the Swiss government. Meta-analyses, essential tools to summarize evidence of therapeutic efficacy, and systematic reviews have found that the methodological quality in the majority of randomized trials in homeopathy has shortcomings and that such trials were generally of lower quality than trials of conventional medicine. A major issue has been publication bias, where positive results are more likely to be published in journals. This has been particularly marked in alternative medicine journals, where few of the published articles (just 5% during the year 2000) tend to report null results. A systematic review of the available systematic reviews confirmed in 2002 that higher-quality trials tended to have less positive results, and found no convincing evidence that any homeopathic preparation exerts clinical effects different from placebo. The same conclusion was also reached in 2005 in a meta-analysis published in The Lancet. A 2017 systematic review and meta-analysis found that the most reliable evidence did not support the effectiveness of non-individualized homeopathy. Health organizations, including the UK's National Health Service, the American Medical Association, the FASEB, and the National Health and Medical Research Council of Australia, have issued statements saying that there is no good-quality evidence that homeopathy is effective as a treatment for any health condition. In 2009, World Health Organization official Mario Raviglione criticized the use of homeopathy to treat tuberculosis; similarly, another WHO spokesperson argued there was no evidence homeopathy would be an effective treatment for diarrhoea. They warned against the use of homeopathy for serious conditions such as depression, HIV and malaria. The American College of Medical Toxicology and the American Academy of Clinical Toxicology recommend that no one use homeopathic treatment for disease or as a preventive health measure. These organizations report that no evidence exists that homeopathic treatment is effective, but that there is evidence that using these treatments produces harm and can bring indirect health risks by delaying conventional treatment. Purported effects in other biological systems While some articles have suggested that homeopathic solutions of high dilution can have statistically significant effects on organic processes including the growth of grain and enzyme reactions, such evidence is disputed since attempts to replicate them have failed. In 2001 and 2004, Madeleine Ennis published a number of studies that reported that homeopathic dilutions of histamine exerted an effect on the activity of basophils. In response to the first of these studies, Horizon aired a programme in which British scientists attempted to replicate Ennis' results; they were unable to do so. A 2007 systematic review of high-dilution experiments found that none of the experiments with positive results could be reproduced by all investigators. In 1988, French immunologist Jacques Benveniste published a paper in the journal Nature while working at INSERM. 
The paper purported to have discovered that basophils released histamine when exposed to a homeopathic dilution of anti-immunoglobulin E antibody. Skeptical of the findings, Nature assembled an independent investigative team to determine the accuracy of the research. After investigation the team found that the experiments were "statistically ill-controlled", "interpretation has been clouded by the exclusion of measurements in conflict with the claim", and concluded, "We believe that experimental data have been uncritically assessed and their imperfections inadequately reported." Ethics and safety The provision of homeopathic preparations has been described as unethical. Michael Baum, professor emeritus of surgery and visiting professor of medical humanities at University College London (UCL), has described homeopathy as a "cruel deception". Edzard Ernst, the first professor of complementary medicine in the United Kingdom and a former homeopathic practitioner, has expressed his concerns about pharmacists who violate their ethical code by failing to provide customers with "necessary and relevant information" about the true nature of the homeopathic products they advertise and sell. In 2013 the UK Advertising Standards Authority concluded that the Society of Homeopaths were targeting vulnerable ill people and discouraging the use of essential medical treatment while making misleading claims of efficacy for homeopathic products. In 2015 the Federal Court of Australia imposed penalties on a homeopathic company for making false or misleading statements about the efficacy of the whooping cough vaccine and recommending homeopathic remedies as an alternative. A 2000 review by homeopaths reported that homeopathic preparations are "unlikely to provoke severe adverse reactions". In 2012, a systematic review evaluating evidence of homeopathy's possible adverse effects concluded that "homeopathy has the potential to harm patients and consumers in both direct and indirect ways". A 2016 systematic review and meta-analysis found that, in homeopathic clinical trials, adverse effects were reported among the patients who received homeopathy about as often as they were reported among patients who received placebo or conventional medicine. Some homeopathic preparations involve poisons such as Belladonna, arsenic, and poison ivy. In rare cases, the original ingredients are present at detectable levels. This may be due to improper preparation or intentional low dilution. Serious adverse effects such as seizures and death have been reported or associated with some homeopathic preparations. Instances of arsenic poisoning have occurred. In 2009, the FDA advised consumers to stop using three discontinued cold remedy Zicam products because they could cause permanent damage to users' sense of smell. In 2016 the FDA issued a safety alert to consumers warning against the use of homeopathic teething gels and tablets following reports of adverse events after their use. A previous FDA investigation had found that these products were improperly diluted and contained "unsafe levels of belladonna" and that the reports of serious adverse events in children using this product were "consistent with belladonna toxicity". Patients who choose to use homeopathy rather than evidence-based medicine risk missing timely diagnosis and effective treatment, thereby worsening the outcomes of serious conditions such as cancer. 
The Russian Commission on Pseudoscience has said homeopathy is not safe because "patients spend significant amounts of money, buying medicines that do not work and disregard already known effective treatment." Critics have cited cases of patients failing to receive proper treatment for diseases that could have been easily managed with conventional medicine and who have died as a result. They have also condemned the "marketing practice" of criticizing and downplaying the effectiveness of medicine. Homeopaths claim that use of conventional medicines will "push the disease deeper" and cause more serious conditions, a process referred to as "suppression". In 1978, Anthony Campbell, a consultant physician at the Royal London Homeopathic Hospital, criticized statements by George Vithoulkas claiming that syphilis, when treated with antibiotics, would develop into secondary and tertiary syphilis with involvement of the central nervous system. Vithoulkas' claims echo the idea that treating a disease with external medication used to treat the symptoms would only drive it deeper into the body and conflict with scientific studies, which indicate that penicillin treatment produces a complete cure of syphilis in more than 90% of cases. The use of homeopathy as a preventive for serious infectious diseases, called homeoprophylaxis, is especially controversial. Some homeopaths (particularly those who are non-physicians) advise their patients against immunization. Others have suggested that vaccines be replaced with homeopathic "nosodes". While Hahnemann was opposed to such preparations, modern homeopaths often use them although there is no evidence to indicate they have any beneficial effects. Promotion of homeopathic alternatives to vaccines has been characterized as dangerous, inappropriate and irresponsible. In December 2014, the Australian homeopathy supplier Homeopathy Plus! was found to have acted deceptively in promoting homeopathic alternatives to vaccines. In 2019, an investigative journalism piece by the Telegraph revealed that homeopathy practitioners were actively discouraging patients from vaccinating their children. Cases of homeopaths advising against the use of anti-malarial drugs have also been identified, putting visitors to the tropics in severe danger. A 2006 review recommends that pharmacy colleges include a required course where ethical dilemmas inherent in recommending products lacking proven safety and efficacy data be discussed and that students should be taught where unproven systems such as homeopathy depart from evidence-based medicine. Regulation and prevalence Homeopathy is fairly common in some countries while being uncommon in others; is highly regulated in some countries and mostly unregulated in others. It is practiced worldwide and professional qualifications and licences are needed in most countries. A 2019 WHO report found that 100 out of 133 Member States surveyed in 2012 acknowledged that their population used homeopathy, with 22 saying the practice was regulated and 13 providing health insurance coverage. In some countries, there are no specific legal regulations concerning the use of homeopathy, while in others, licences or degrees in conventional medicine from accredited universities are required. In 2001 homeopathy had been integrated into the national health care systems of many countries, including India, Mexico, Pakistan, Sri Lanka, and the United Kingdom. 
Regulation Some homeopathic treatment is covered by the public health service of several European countries, including Scotland and Luxembourg. It used to be covered in France until 2021. In other countries, such as Belgium, homeopathy is not covered. In Austria, the public health service requires scientific proof of effectiveness in order to reimburse medical treatments and homeopathy is listed as not reimbursable, but exceptions can be made; private health insurance policies sometimes include homeopathic treatments. In 2018, Austria's Medical University of Vienna stopped teaching homeopathy. The Swiss government withdrew coverage of homeopathy and four other complementary treatments in 2005, stating that they did not meet efficacy and cost-effectiveness criteria, but following a referendum in 2009 the five therapies were reinstated for a further 6-year trial period. In Germany, homeopathic treatments are covered by 70 percent of government medical plans, and available in almost every pharmacy. In January 2024, German health minister Karl Lauterbach announced plans to withdraw all statutory health insurance coverage for homeopathic and anthroposophic treatments, citing a lack of scientific evidence for their efficacy. The English NHS recommended against prescribing homeopathic preparations in 2017. In 2018, prescriptions worth £55,000 were written in defiance of the guidelines, representing less than 0.001% of the total NHS prescribing budget. In 2016 the UK's Committee of Advertising Practice compliance team wrote to homeopaths in the UK to "remind them of the rules that govern what they can and can't say in their marketing materials". The letter told homeopaths to "ensure that they do not make any direct or implied claims that homeopathy can treat medical conditions" and asked them to review their marketing communications "including websites and social media pages" to ensure compliance. Homeopathic services offered at Bristol Homeopathic Hospital in the UK ceased in October 2015. Member states of the European Union are required to ensure that homeopathic products are registered, although this process does not require any proof of efficacy. In Spain, the Association for the protection of patients from pseudo-scientific therapies is lobbying to get rid of the easy registration procedure for homeopathic remedies. In Bulgaria, Hungary, Latvia, Romania and Slovenia homeopathy, by law, can only be practiced by medical practitioners. However, in Slovenia if doctors practice homeopathy their medical license will be revoked. In Germany, to become a homeopathic physician, one must attend a three-year training program, while France, Austria and Denmark mandate licences to diagnose any illness or dispense any product whose purpose is to treat any illness. Homeopaths in the UK are under no legal regulations, meaning anyone can call themselves a homeopath and administer homeopathic remedies. The Indian government recognizes homeopathy as one of its national systems of medicine, and homeopathic preparations are sold with medical claims. It has established the Department of Ayurveda, Yoga and Naturopathy, Unani, Siddha and Homoeopathy (AYUSH) under the Ministry of Health & Family Welfare. The south Indian state of Kerala also has a cabinet-level AYUSH department. The Central Council of Homoeopathy was established in 1973 to monitor higher education in homeopathy, and the National Institute of Homoeopathy in 1975. Principles and standards for homeopathic products are covered by the Homoeopathic pharmacopoeia of India. 
A minimum of a recognized diploma in homeopathy and registration on a state register or the Central Register of Homoeopathy is required to practice homeopathy in India. Some medical schools in Pakistan, India, and Bangladesh offer an undergraduate degree programme in homeopathy. Upon completion, the college may award a Bachelor of Homoeopathic Medicine and Surgery (B.H.M.S.) degree. In the United States each state is responsible for the laws and licensing requirements for homeopathy. In 2015, the FDA held a hearing on homeopathic product regulation. At the hearing, representatives from the Center for Inquiry and the Committee for Skeptical Inquiry summarized the harm that is done to the general public from homeopathic products and proposed regulatory actions. In 2016 the United States Federal Trade Commission (FTC) issued an "Enforcement Policy Statement Regarding Marketing Claims for Over-the-Counter Homeopathic Drugs" which specified that the FTC will apply the same standard to homeopathic drugs that it applies to other products claiming similar benefits. A related report concluded that claims of homeopathy effectiveness "are not accepted by most modern medical experts and do not constitute competent and reliable scientific evidence that these products have the claimed treatment effects." In 2019, the FDA removed an enforcement policy that permitted unapproved homeopathic products to be sold. Currently no homeopathic products are approved by the FDA. Homeopathic remedies are regulated as natural health products in Canada. Ontario became the first province in the country to regulate the practice of homeopathy, a move that was widely criticized by scientists and doctors. Health Canada requires all products to have a licence before being sold and applicants have to submit evidence on "the safety, efficacy and quality of a homeopathic medicine". In 2015 the Canadian Broadcasting Corporation tested the system by applying for and then receiving a government-approved licence for a made-up drug aimed at children. In Australia, the sale of homeopathic products is regulated by the Therapeutic Goods Administration. In 2015, the National Health and Medical Research Council of Australia concluded that there is "no reliable evidence that homeopathy is effective", and that it should not be used to treat health conditions that are chronic, serious, or could become serious. They recommended anyone considering using homeopathy should first get advice from a registered health practitioner. A 2017 review into Pharmacy Remuneration and Regulation recommended that homeopathic products be banned from pharmacies; while noting the concerns, the government did not adopt the recommendation. In New Zealand there are no regulations specific to homeopathy and the New Zealand Medical Association does not oppose the use of homeopathy, a stance that has been called unethical by some doctors. Prevalence Homeopathy is one of the most commonly used forms of alternative medicines and it has a large worldwide market. The exact size is uncertain, but information available on homeopathic sales suggests it forms a large share of the medical market. In 1999, about 1000 UK doctors practiced homeopathy, most being general practitioners who prescribe a limited number of remedies. A further 1500 homeopaths with no medical training are also thought to practice. Over ten thousand German and French doctors use homeopathy. In the United States a National Health Interview Survey estimated 5 million adults and 1 million children used homeopathy in 2011. 
An analysis of this survey concluded that most cases were self-prescribed for colds and musculoskeletal pain. Major retailers like Walmart, CVS, and Walgreens sell homeopathic products that are packaged to resemble conventional medicines. The homeopathic drug market in Germany is worth about 650 million euro with a 2014 survey finding that 60 percent of Germans reported trying homeopathy. A 2009 survey found that only 17 percent of respondents knew how homeopathic medicine was made. France spent more than US$408 million on homeopathic products in 2008. In the United States the homeopathic market is worth about $3 billion-a-year; with 2.9 billion spent in 2007. Australia spent US$7.3 million on homeopathic medicines in 2008. In India, a 2014 national health survey found that homeopathy was used by about 3% of the population. Homeopathy is used in China, although it arrived a lot later than in many other countries, partly due to the restriction on foreigners that persisted until late in the nineteenth century. Throughout Africa there is a high reliance on traditional medicines, which can be attributed to the cost of modern medicines and the relative prevalence of practitioners. Many African countries do not have any official training facilities. Veterinary use Using homeopathy as a treatment for animals is termed "veterinary homeopathy" and dates back to the inception of homeopathy; Hahnemann himself wrote and spoke of the use of homeopathy in animals other than humans. The use of homeopathy in the organic farming industry is heavily promoted. Given that homeopathy's effects in humans are due to the placebo effect and the counseling aspects of the consultation, such treatments are even less effective in animals. Studies have also found that giving animals placebos can play active roles in influencing pet owners to believe in the effectiveness of the treatment when none exists. This means that animals given homeopathic remedies will continue to suffer, resulting in animal welfare concerns. Little existing research on the subject is of a high enough scientific standard to provide reliable data on efficacy. A 2016 review of peer-reviewed articles from 1981 to 2014 by scientists from the University of Kassel, Germany, concluded that there is not enough evidence to support homeopathy as an effective treatment of infectious diseases in livestock. The UK's Department for Environment, Food and Rural Affairs (Defra) has adopted a robust position against use of "alternative" pet preparations including homeopathy. The British Veterinary Association's position statement on alternative medicines says that it "cannot endorse" homeopathy, and the Australian Veterinary Association includes it on its list of "ineffective therapies".
Biology and health sciences
Alternative and traditional medicine
Health
14263
https://en.wikipedia.org/wiki/Horner%27s%20method
Horner's method
In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians. After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials.

The algorithm is based on Horner's rule, in which a polynomial is written in nested form:

a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... + a_n x^n = a_0 + x(a_1 + x(a_2 + x(a_3 + ... + x(a_{n-1} + x a_n)...)))

This allows the evaluation of a polynomial of degree n with only n multiplications and n additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations.

Alternatively, Horner's method also refers to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970.

Polynomial evaluation and long division

Given the polynomial

p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n,

where a_0, ..., a_n are constant coefficients, the problem is to evaluate the polynomial at a specific value x_0 of x. For this, a new sequence of constants is defined recursively as follows:

b_n := a_n
b_{n-1} := a_{n-1} + b_n x_0
...
b_1 := a_1 + b_2 x_0
b_0 := a_0 + b_1 x_0

Then b_0 is the value of p(x_0). To see why this works, the polynomial can be written in the form

p(x) = a_0 + x(a_1 + x(a_2 + ... + x(a_{n-1} + a_n x)...))

Thus, by iteratively substituting the b_i into the expression,

p(x_0) = a_0 + x_0(a_1 + x_0(a_2 + ... + x_0(a_{n-1} + b_n x_0)...)) = ... = a_0 + x_0 b_1 = b_0.

Now, it can be proven that

p(x) = (b_1 + b_2 x + b_3 x^2 + ... + b_n x^{n-1})(x − x_0) + b_0.

This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of p(x) / (x − x_0), with b_0 (which is equal to p(x_0)) being the division's remainder, as is demonstrated by the examples below. If x_0 is a root of p(x), then b_0 = 0 (meaning the remainder is 0), which means you can factor p(x) as (x − x_0)(b_1 + b_2 x + ... + b_n x^{n-1}). To find the consecutive b-values, you start by determining b_n, which is simply equal to a_n. You then work recursively using the formula b_{k-1} = a_{k-1} + b_k x_0 till you arrive at b_0.

Examples

Evaluate f(x) = 2x^3 − 6x^2 + 2x − 1 for x = 3. We use synthetic division as follows:

 x_0 │  x^3   x^2   x^1   x^0
  3  │   2    −6     2    −1
     │         6     0     6
     └───────────────────────
         2     0     2     5

The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of f(x) on division by x − 3 is 5. But by the polynomial remainder theorem, we know that the remainder is f(3). Thus, f(3) = 5. In this example, if a_3 = 2, a_2 = −6, a_1 = 2, a_0 = −1, we can see that b_3 = 2, b_2 = 0, b_1 = 2, b_0 = 5, the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method. As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial 2x^2 + 0x + 2, the quotient of f(x) on division by x − 3. The remainder is 5. This makes Horner's method useful for polynomial long division.

Divide x^3 − 6x^2 + 11x − 6 by x − 2:

  2 │   1    −6    11    −6
    │         2    −8     6
    └───────────────────────
        1    −4     3     0

The quotient is x^2 − 4x + 3.

Let f_1(x) = 4x^4 − 6x^3 + 3x − 5 and f_2(x) = 2x − 1. Divide f_1(x) by f_2(x) using Horner's method.

 0.5 │   4    −6     0     3    −5
     │         2    −2    −1     1
     └─────────────────────────────
         2    −2    −1     1    −4

The third row is the sum of the first two rows, divided by 2 (the final entry, −4, is the remainder and is not divided). Each entry in the second row is the product of 1 (that is, 2 × 0.5) with the third-row entry to the left. The answer is

f_1(x) / f_2(x) = 2x^3 − 2x^2 − x + 1 − 4/(2x − 1).
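To make the recursion above concrete, here is a minimal Python sketch (illustrative function names, not taken from Horner's paper) of Horner's rule and of synthetic division, checked against the worked examples above:

def horner(coeffs, x0):
    """Evaluate the polynomial at x0 using Horner's rule.

    coeffs = [a_n, a_{n-1}, ..., a_1, a_0]. Returns b_0 = p(x0).
    """
    result = 0
    for a in coeffs:
        result = result * x0 + a    # b_k = a_k + b_{k+1} * x0
    return result

def synthetic_division(coeffs, x0):
    """Divide the polynomial by (x - x0).

    Returns (quotient_coeffs, remainder); by the polynomial remainder
    theorem the remainder equals p(x0).
    """
    rows = []
    b = 0
    for a in coeffs:
        b = b * x0 + a
        rows.append(b)
    remainder = rows.pop()          # the last value is b_0, the remainder
    return rows, remainder

# The examples above: f(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3.
print(horner([2, -6, 2, -1], 3))                # 5
print(synthetic_division([2, -6, 2, -1], 3))    # ([2, 0, 2], 5)  -> 2x^2 + 0x + 2, remainder 5
print(synthetic_division([1, -6, 11, -6], 2))   # ([1, -4, 3], 0) -> x^2 - 4x + 3, remainder 0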
Efficiency

Evaluation using the monomial form of a degree-n polynomial requires at most n additions and (n^2 + n)/2 multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to n additions and 2n − 1 multiplications by evaluating the powers of x by iteration. If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately 2n times the number of bits of x: the evaluated polynomial has approximate magnitude x^n, and one must also store x^n itself. By contrast, Horner's method requires only n additions and n multiplications, and its storage requirements are only n times the number of bits of x. Alternatively, Horner's method can be computed with n fused multiply–adds. Horner's method can also be extended to evaluate the first k derivatives of the polynomial with kn additions and multiplications.

Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal. Victor Pan proved in 1966 that the number of multiplications is minimal. However, when x is a matrix, Horner's method is not optimal. This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-n polynomial can be evaluated using only ⌊n/2⌋ + 2 multiplications and n additions.

Parallel evaluation

A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation. If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:

p(x) = (a_0 + a_2 x^2 + a_4 x^4 + ...) + x(a_1 + a_3 x^2 + a_5 x^4 + ...)

More generally, the summation can be broken into k parts:

p(x) = sum over j = 0, ..., k−1 of x^j q_j(x^k), where q_j(y) = a_j + a_{j+k} y + a_{j+2k} y^2 + ...

where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows k-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math.

Application to floating-point multiplication and division

Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) the coefficients a_i are the bits of that number (each 0 or 1), and x = 2. Then, x (or x to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2, so powers of 2 are repeatedly factored out.

Example

For example, to find the product of two numbers (0.15625) and m:

(0.15625) m = (0.00101_2) m = (2^−3 + 2^−5) m = (2^−3) m + (2^−5) m = 2^−3 (m + 2^−2 m)

Method

To find the product of two binary numbers d and m: A register holding the intermediate result is initialized to d. Begin with the least significant (rightmost) non-zero bit in m. If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add d to the intermediate result, and continue in step 2 with the next most significant bit in m.
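As a rough illustration of the shift-and-add idea (the function name, the choice of processing bits from the most significant end, and the use of an integer constant are assumptions made for this sketch, not taken from the article), the following Python treats the bits of a constant as the coefficients of a polynomial in x = 2 and applies Horner's rule using only shifts and additions. The fractional constant 0.15625 equals 5/32, so multiplying by the integer 5 and shifting right by 5 bits at the end mirrors the fractional example above:

def shift_add_multiply(m: int, d: int) -> int:
    """Compute m * d using Horner's rule over the bits of d (MSB first)."""
    result = 0
    for bit_position in reversed(range(d.bit_length())):
        result <<= 1                  # multiply the running sum by x = 2
        if (d >> bit_position) & 1:   # coefficient d_i is 1: add m
            result += m
    return result

m = 44
print(shift_add_multiply(m, 5), m * 5)    # 220 220
print(shift_add_multiply(m, 5) >> 5)      # truncated m * 0.15625 = 6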
Derivation

In general, for a binary number with bit values (d_3 d_2 d_1 d_0) the product is

m d = d_3 2^3 m + d_2 2^2 m + d_1 2^1 m + d_0 2^0 m.

At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted, thus the problem of multiplication or division by zero is not an issue, despite this implication in the factored equation:

m d = d_3 2^3 (m + (d_2/d_3) 2^−1 (m + (d_1/d_2) 2^−1 (m + (d_0/d_1) 2^−1 m)))

The denominators all equal one (or the term is absent), so this reduces to

m d = d_3 2^3 (m + 2^−1 (m + 2^−1 (m + 2^−1 m)))

or equivalently (as consistent with the "method" described above)

m d = 2^3 (d_3 m + 2^−1 (d_2 m + 2^−1 (d_1 m + 2^−1 (d_0 m)))).

In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor 2^−1 is a right arithmetic shift, a factor of 2^0 results in no operation (since 2^0 = 1 is the multiplicative identity element), and a factor of 2^1 results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction. The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy, however it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.

Other applications

Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the a_i coefficients are the digits of the base-x representation of a given number – and can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known.

Polynomial root finding

Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial p_n(x) of degree n with zeros z_n < z_{n−1} < ... < z_1, make some initial guess x_0 such that x_0 > z_1. Now iterate the following two steps: Using Newton's method, find the largest zero z_1 of p_n(x) using the guess x_0. Using Horner's method, divide out (x − z_1) to obtain p_{n−1}(x). Return to step 1 but use the polynomial p_{n−1}(x) and the initial guess z_1. These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials.
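A compact Python sketch of this procedure (the names, the fixed number of Newton iterations, and the rounding are illustrative assumptions), applied to the worked example that follows:

def horner_with_derivative(coeffs, x):
    """Return (p(x), p'(x)) in one Horner pass; coeffs run from a_n down to a_0."""
    p, dp = 0.0, 0.0
    for a in coeffs:
        dp = dp * x + p
        p = p * x + a
    return p, dp

def deflate(coeffs, root):
    """Synthetic division by (x - root); the remainder is discarded."""
    quotient, b = [], 0.0
    for a in coeffs[:-1]:
        b = b * root + a
        quotient.append(b)
    return quotient

def newton(coeffs, x, iterations=50):
    for _ in range(iterations):
        p, dp = horner_with_derivative(coeffs, x)
        x -= p / dp
    return x

# p6(x) = (x+8)(x+5)(x+3)(x-2)(x-3)(x-7), expanded as in the example below.
coeffs = [1, 4, -72, -214, 1127, 1602, -5040]
roots, guess = [], 8.0
while len(coeffs) > 1:
    r = newton(coeffs, guess)
    roots.append(round(r, 6))
    coeffs = deflate(coeffs, r)   # divide out the zero just found
    guess = r                     # the found zero seeds the next search
print(roots)                      # approximately [7.0, 3.0, 2.0, -3.0, -5.0, -8.0]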
Example

Consider the polynomial

p_6(x) = (x + 8)(x + 5)(x + 3)(x − 2)(x − 3)(x − 7)

which can be expanded to

p_6(x) = x^6 + 4x^5 − 72x^4 − 214x^3 + 1127x^2 + 1602x − 5040.

From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method the first zero of 7 is found as shown in black in the figure to the right. Next p_6(x) is divided by (x − 7) to obtain

p_5(x) = x^5 + 11x^4 + 5x^3 − 179x^2 − 126x + 720

which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial which corresponds to the second largest zero of the original polynomial is found at 3 and is circled in red. The degree 5 polynomial is now divided by (x − 3) to obtain

p_4(x) = x^4 + 14x^3 + 47x^2 − 38x − 240

which is shown in yellow. The zero for this polynomial is found at 2 again using Newton's method and is circled in yellow. Horner's method is now used to obtain

p_3(x) = x^3 + 16x^2 + 79x + 120

which is shown in green and found to have a zero at −3. This polynomial is further reduced to

p_2(x) = x^2 + 13x + 40

which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p_2(x) and solving the linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.

Divided difference of a polynomial

Horner's method can be modified to compute the divided difference (p(y) − p(x)) / (y − x). Given the polynomial (as before)

p(x) = a_n x^n + a_{n−1} x^{n−1} + ... + a_1 x + a_0,

proceed as follows:

b_n = a_n,                  d_n = b_n,
b_{n−1} = a_{n−1} + b_n x,  d_{n−1} = b_{n−1} + d_n y,
...
b_1 = a_1 + b_2 x,          d_1 = b_1 + d_2 y,
b_0 = a_0 + b_1 x.

At completion, we have

p(x) = b_0,
(p(y) − p(x)) / (y − x) = d_1,
p(y) = b_0 + (y − x) d_1.

This computation of the divided difference is subject to less round-off error than evaluating p(x) and p(y) separately, particularly when x ≈ y. Substituting y = x in this method gives d_1 = p′(x), the derivative of p(x).

History

Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation", was read before the Royal Society of London, at its meeting on July 1, 1819, with a sequel in 1823. Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820). Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonneycastle's book on algebra, though he neglected the work of Paolo Ruffini.

Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to: Paolo Ruffini in 1809 (see Ruffini's rule); Isaac Newton in 1669; the Chinese mathematician Zhu Shijie in the 14th century; the Chinese mathematician Qin Jiushao in his Mathematical Treatise in Nine Sections in the 13th century; the Persian mathematician Sharaf al-Dīn al-Ṭūsī in the 12th century (the first to use that method in a general case of cubic equation); the Chinese mathematician Jia Xian in the 11th century (Song dynasty); and The Nine Chapters on the Mathematical Art, a Chinese work of the Han dynasty (202 BC – 220 AD) edited by Liu Hui (fl. 3rd century). Qin Jiushao, in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami discussed Qin's procedure in Development of Mathematics in China and Japan (Leipzig 1913). Ulrich Libbrecht concluded: It is obvious that this procedure is a Chinese invention ... the method was not known in India. He said, Fibonacci probably learned of it from Arabs, who perhaps borrowed from the Chinese. 
The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing.
Mathematics
Other algebra topics
null
14283
https://en.wikipedia.org/wiki/Heavy%20water
Heavy water
Heavy water (deuterium oxide, ²H₂O or D₂O) is a form of water in which hydrogen atoms are all deuterium (²H or D, also known as heavy hydrogen) rather than the common hydrogen-1 isotope (¹H, also called protium) that makes up most of the hydrogen in normal water. The presence of the heavier isotope gives the water different nuclear properties, and the increase in mass gives it slightly different physical and chemical properties when compared to normal water. Deuterium is a heavy hydrogen isotope. Heavy water contains deuterium atoms and is used in nuclear reactors. Semiheavy water (HDO) is more common than pure heavy water, while heavy-oxygen water is denser but lacks unique properties. Tritiated water is radioactive due to tritium content. Heavy water has different physical properties from regular water, such as being 10.6% denser and having a higher melting point. Heavy water is less dissociated at a given temperature, and it does not have the slightly blue color of regular water. It can taste slightly sweeter than regular water, though not to a significant degree. Heavy water affects biological systems by altering enzymes, hydrogen bonds, and cell division in eukaryotes. It can be lethal to multicellular organisms at concentrations over 50%. However, some prokaryotes like bacteria can survive in a heavy hydrogen environment. Heavy water can be toxic to humans, but a large amount would be needed for poisoning to occur. The most cost-effective process for producing heavy water is the Girdler sulfide process. Heavy water is used in various industries and is sold in different grades of purity. Some of its applications include nuclear magnetic resonance, infrared spectroscopy, neutron moderation, neutrino detection, metabolic rate testing, neutron capture therapy, and the production of radioactive materials such as plutonium and tritium. Composition The deuterium nucleus consists of a neutron and a proton; the nucleus of a protium (normal hydrogen) atom consists of just a proton. The additional neutron makes a deuterium atom roughly twice as heavy as a protium atom. A molecule of heavy water has two deuterium atoms in place of the two protium atoms of ordinary water. The term heavy water as defined by the IUPAC Gold Book can also refer to water in which a higher than usual proportion of hydrogen atoms are deuterium. For comparison, Vienna Standard Mean Ocean Water (the "ordinary water" used for a deuterium standard) contains about 156 deuterium atoms per million hydrogen atoms; that is, 0.0156% of the hydrogen atoms are ²H. Thus heavy water as defined by the Gold Book includes semiheavy water (hydrogen-deuterium oxide, HDO) and other mixtures of D₂O, H₂O, and HDO in which the proportion of deuterium is greater than usual. For instance, the heavy water used in CANDU reactors is a highly enriched water mixture that is mostly deuterium oxide (D₂O), but also some hydrogen-deuterium oxide (HDO) and a smaller amount of ordinary water (H₂O). It is 99.75% enriched by hydrogen atom-fraction; that is, 99.75% of the hydrogen atoms are of the heavy type; however, heavy water in the Gold Book sense need not be so highly enriched. The weight of a heavy water molecule, however, is not very different from that of a normal water molecule, because about 89% of the mass of the molecule comes from the single oxygen atom rather than the two hydrogen atoms. Heavy water is not radioactive. In its pure form, it has a density about 11% greater than water but is otherwise physically and chemically similar. 
Nevertheless, the various differences in deuterium-containing water (especially affecting the biological properties) are larger than in any other commonly occurring isotope-substituted compound because deuterium is unique among heavy stable isotopes in being twice as heavy as the lightest isotope. This difference increases the strength of water's hydrogen–oxygen bonds, and this in turn is enough to cause differences that are important to some biochemical reactions. The human body naturally contains deuterium equivalent to about five grams of heavy water, which is harmless. When a large fraction of water (> 50%) in higher organisms is replaced by heavy water, the result is cell dysfunction and death. Heavy water was first produced in 1932, a few months after the discovery of deuterium. With the discovery of nuclear fission in late 1938, and the need for a neutron moderator that captured few neutrons, heavy water became a component of early nuclear energy research. Since then, heavy water has been an essential component in some types of reactors, both those that generate power and those designed to produce isotopes for nuclear weapons. These heavy water reactors have the advantage of being able to run on natural uranium without using graphite moderators that pose radiological and dust explosion hazards in the decommissioning phase. The graphite moderated Soviet RBMK design tried to avoid using either enriched uranium or heavy water (being cooled with ordinary water instead) which produced the positive void coefficient that was one of a series of flaws in reactor design leading to the Chernobyl disaster. Most modern reactors use enriched uranium with ordinary water as the moderator. Other heavy forms of water Semiheavy water Semiheavy water, HDO, exists whenever there is water with light hydrogen (protium, ¹H) and deuterium (²H or D) in the mix. This is because hydrogen atoms (¹H and ²H) are rapidly exchanged between water molecules. Water containing 50% ¹H and 50% ²H in its hydrogen is actually about 50% HDO and 25% each of H₂O and D₂O, in dynamic equilibrium. In normal water, about 1 molecule in 3,200 is HDO (one hydrogen in 6,400 is ²H), and heavy water molecules (D₂O) only occur in a proportion of about 1 molecule in 41 million (i.e. one in 6,400²). Thus semiheavy water molecules are far more common than "pure" (homoisotopic) heavy water molecules.
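The semiheavy-water proportions quoted above follow from simple statistics of random isotope exchange. A minimal Python check (assuming ideal, fully random mixing of the hydrogen isotopes, which is only approximately true because of small equilibrium isotope effects):

def speciation(deuterium_fraction: float):
    """Equilibrium fractions of H2O, HDO and D2O for a given D atom fraction x."""
    x = deuterium_fraction
    return {"H2O": (1 - x) ** 2, "HDO": 2 * x * (1 - x), "D2O": x ** 2}

# Natural water: about one hydrogen atom in 6,400 is deuterium.
natural = speciation(1 / 6400)
print(f"HDO: about 1 molecule in {1 / natural['HDO']:,.0f}")   # ~1 in 3,200
print(f"D2O: about 1 molecule in {1 / natural['D2O']:,.0f}")   # ~1 in 41,000,000

# A 50/50 H/D mixture gives roughly 25% H2O, 50% HDO, 25% D2O, as stated above.
print(speciation(0.5))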
Tritiated water Tritiated water contains tritium (³H) in place of protium (¹H) or deuterium (²H). Since tritium is radioactive, tritiated water is also radioactive. Physical properties The physical properties of water and heavy water differ in several respects. Heavy water is less dissociated than light water at a given temperature, and the true concentration of D⁺ ions is less than that of H⁺ ions in light water at the same temperature. The same is true of OD⁻ vs. OH⁻ ions. For heavy water, Kw(D₂O) (25.0 °C) = 1.35 × 10⁻¹⁵, and [D⁺] must equal [OD⁻] for neutral water. Thus pKw(D₂O) = p[OD⁻] + p[D⁺] = 7.44 + 7.44 = 14.87 (25.0 °C), and the p[D⁺] of neutral heavy water at 25.0 °C is 7.44. The pD of heavy water is generally measured using pH electrodes, which give an apparent pH value, or pHa, and at various temperatures a true acidic pD can be estimated from the directly measured pHa, such that pD = pHa (apparent reading from the pH meter) + 0.41. The electrode correction for alkaline conditions in heavy water is 0.456, so the alkaline correction is pD = pHa (apparent reading from the pH meter) + 0.456. These corrections differ slightly from the difference of about 0.44 between the p[D⁺] and p[OD⁻] of neutral heavy water and the corresponding values for neutral light water. Heavy water is 10.6% denser than ordinary water, and heavy water's physically different properties can be seen without equipment if a frozen sample is dropped into normal water, as it will sink. If the water is ice-cold, the higher melting temperature of heavy ice can also be observed: it melts at 3.7 °C, and thus does not melt in ice-cold normal water. A 1935 experiment reported not the "slightest difference" in taste between ordinary and heavy water. However, a more recent study confirmed anecdotal observations that heavy water tastes slightly sweet to humans, with the effect mediated by the TAS1R2/TAS1R3 taste receptor. Rats given a choice between distilled normal water and heavy water were able to avoid the heavy water based on smell, and it may have a different taste to them. Some people report that minerals in water affect taste, e.g. potassium lending a sweet taste to hard water, but there are many factors of a perceived taste in water besides mineral content. Heavy water lacks the characteristic blue color of light water; this is because the molecular vibration harmonics, which in light water cause weak absorption in the red part of the visible spectrum, are shifted into the infrared, and thus heavy water does not absorb red light. No physical properties are listed for "pure" semiheavy water, because it is unstable as a bulk liquid. In the liquid state, a few water molecules are always in an ionized state, which means the hydrogen atoms can exchange among different oxygen atoms. Semiheavy water could, in theory, be created via a chemical method, but it would rapidly transform into a dynamic mixture of 25% light water, 25% heavy water, and 50% semiheavy water. However, if it were made in the gas phase and directly deposited into a solid, semiheavy water in the form of ice could be stable. This is because collisions between water vapor molecules are almost completely negligible in the gas phase at standard temperatures, and once crystallized, collisions between the molecules cease altogether due to the rigid lattice structure of solid ice. Heavy water left in contact with atmospheric water vapor exchanges hydrogen with it until it reaches the usual hydrogen-isotopic ratio. History The US scientist and Nobel laureate Harold Urey discovered the isotope deuterium in 1931 and was later able to concentrate it in water. 
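A small Python sketch of the electrode corrections quoted in the physical-properties discussion above may make the arithmetic concrete. The constants (+0.41 on the acid side, +0.456 on the alkaline side, pKw = 14.87 at 25.0 °C) are taken from the text; the function name and the example meter readings are illustrative assumptions, not measured values.

# Sketch: estimating pD from an apparent pH-meter reading taken in D2O,
# using the corrections quoted above.
PKW_D2O_25C = 14.87   # ion-product exponent of D2O at 25.0 °C (from the text)

def pD_from_meter(pH_apparent, alkaline=False):
    """Apply the acid-side (+0.41) or alkaline-side (+0.456) correction."""
    return pH_apparent + (0.456 if alkaline else 0.41)

# Hypothetical readings: one acidic and one alkaline D2O solution
print(pD_from_meter(3.00))                  # 3.41
print(pD_from_meter(10.00, alkaline=True))  # 10.456
# For neutral heavy water, p[D+] = pKw/2 = 7.435, i.e. about 7.44 as quoted above.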
Urey's mentor Gilbert Newton Lewis isolated the first sample of pure heavy water by electrolysis in 1933. George de Hevesy and Erich Hofer used heavy water in 1934 in one of the first biological tracer experiments, to estimate the rate of turnover of water in the human body. The history of large-quantity production and use of heavy water, in early nuclear experiments, is described below. Emilian Bratu and Otto Redlich studied the autodissociation of heavy water in 1934. In the 1930s, it was suspected by the United States and Soviet Union that Austrian chemist Fritz Johann Hansgirg built a pilot plant for the Empire of Japan in Japanese ruled northern Korea to produce heavy water by using a new process he had invented. During the second World War, the company Fosfatbolaget in Ljungaverk, Sweden, produced 2,300 liters per year of heavy water. The heavy water was then sold both to Germany and to the Manhattan Project for the price of 1,40 SEK per gram of heavy water. In October 1939, Soviet physicists Yakov Borisovich Zel'dovich and Yulii Borisovich Khariton concluded that heavy water and carbon were the only feasible moderators for a natural uranium reactor, and in August 1940, along with Georgy Flyorov, submitted a plan to the Russian Academy of Sciences calculating that 15 tons of heavy water were needed for a reactor. With the Soviet Union having no uranium mines at the time, young Academy workers were sent to Leningrad photographic shops to buy uranium nitrate, but the entire heavy water project was halted in 1941 when German forces invaded during Operation Barbarossa. By 1943, Soviet scientists had discovered that all scientific literature relating to heavy water had disappeared from the West, which Flyorov in a letter warned Soviet leader Joseph Stalin about, and at which time there was only 2–3 kg of heavy water in the entire country. In late 1943, the Soviet purchasing commission in the U.S. obtained 1 kg of heavy water and a further 100 kg in February 1945, and upon World War II ending, the NKVD took over the project. In October 1946, as part of the Russian Alsos, the NKVD deported to the Soviet Union from Germany the German scientists who had worked on heavy water production during the war, including Karl-Hermann Geib, the inventor of the Girdler sulfide process. These German scientists worked under the supervision of German physical chemist Max Volmer at the Institute of Physical Chemistry in Moscow with the plant they constructed producing large quantities of heavy water by 1948. Effect on biological systems Different isotopes of chemical elements have slightly different chemical behaviors, but for most elements the differences are far too small to have a biological effect. In the case of hydrogen, larger differences in chemical properties among protium, deuterium, and tritium occur because chemical bond energy depends on the reduced mass of the nucleus–electron system; this is altered in heavy-hydrogen compounds (hydrogen-deuterium oxide is the most common) more than for heavy-isotope substitution involving other chemical elements. The isotope effects are especially relevant in biological systems, which are very sensitive to even the smaller changes, due to isotopically influenced properties of water when it acts as a solvent. To perform their tasks, enzymes rely on their finely tuned networks of hydrogen bonds, both in the active center with their substrates and outside the active center, to stabilize their tertiary structures. 
As a hydrogen bond with deuterium is slightly stronger than one involving ordinary hydrogen, in a highly deuterated environment some normal reactions in cells are disrupted. Particularly hard-hit by heavy water are the delicate assemblies of mitotic spindle formations necessary for cell division in eukaryotes. Plants stop growing and seeds do not germinate when given only heavy water, because heavy water stops eukaryotic cell division. Tobacco seeds do not germinate, but wheat seeds do. Cells grown in heavy water are larger, and their direction of division is altered. The cell membrane, which is the first structure to respond to heavy water, also changes. In 1972, it was demonstrated that an increase in the percentage of deuterium in water reduces plant growth. Research conducted on the growth of prokaryote microorganisms in artificial conditions of a heavy hydrogen environment showed that in this environment, all the hydrogen atoms of water could be replaced with deuterium. Experiments showed that bacteria can live in 98% heavy water. Concentrations over 50% are lethal to multicellular organisms, but a few exceptions are known: plant species such as switchgrass (Panicum virgatum), which is able to grow on 50% D₂O; Arabidopsis thaliana (70% D₂O); Vesicularia dubyana (85% D₂O); Funaria hygrometrica (90% D₂O); and the anhydrobiotic species of nematode Panagrolaimus superbus (nearly 100% D₂O). A comprehensive study of heavy water on the fission yeast Schizosaccharomyces pombe showed that the cells displayed an altered glucose metabolism and slow growth at high concentrations of heavy water. In addition, the cells activated the heat-shock response pathway and the cell integrity pathway, and mutants in the cell integrity pathway displayed increased tolerance to heavy water. Despite its toxicity at high levels, heavy water has been observed to extend the lifespan of certain yeasts by up to 85%, with the hypothesized mechanism being the reduction of reactive oxygen species turnover. Heavy water affects the period of circadian oscillations, consistently increasing the length of each cycle. The effect has been demonstrated in unicellular organisms, green plants, isopods, insects, birds, mice, and hamsters. The mechanism is unknown. Like ethanol, heavy water temporarily changes the density of the cupula relative to the endolymph in the vestibular organ, causing positional nystagmus, illusions of bodily rotation, dizziness, and nausea. However, the nystagmus is in the opposite direction from that caused by ethanol, since heavy water is denser than water rather than lighter. Effect on animals Experiments with mice, rats, and dogs have shown that a degree of 25% deuteration prevents gametes or zygotes from developing, causing (sometimes irreversible) sterility. High concentrations of heavy water (90%) rapidly kill fish, tadpoles, flatworms, and Drosophila. Mice raised from birth with 30% heavy water have 25% deuteration in their body fluids and 10% in their brains. They are normal except for sterility. Deuteration during pregnancy induces fetal abnormality. Higher deuteration in body fluid causes death. Mammals (for example, rats) given heavy water to drink die after a week, at a time when their body water approaches about 50% deuteration. The mode of death appears to be the same as that in cytotoxic poisoning (such as chemotherapy) or in acute radiation syndrome (though deuterium is not radioactive), and is caused by deuterium's action in generally inhibiting cell division. 
It is more toxic to malignant cells than normal cells, but the concentrations needed are too high for regular use. As may occur in chemotherapy, deuterium-poisoned mammals die of a failure of bone marrow (producing bleeding and infections) and of intestinal-barrier functions (producing diarrhea and loss of fluids). Despite the problems of plants and animals in living with too much deuterium, prokaryotic organisms such as bacteria, which do not have the mitotic problems induced by deuterium, may be grown and propagated in fully deuterated conditions, resulting in replacement of all hydrogen atoms in the bacterial proteins and DNA with the deuterium isotope. This leads to a process of bootstrapping. With prokaryotes producing fully deuterated glucose, fully deuterated Escherichia coli and Torula were raised, and they could produce even more complex fully deuterated chemicals. Molds like Aspergillus could not replicate under fully deuterated conditions. In higher organisms, full replacement with heavy isotopes can be accomplished with other non-radioactive heavy isotopes (such as carbon-13, nitrogen-15, and oxygen-18), but this cannot be done for deuterium. This is a consequence of the ratio of nuclear masses between the isotopes of hydrogen, which is much greater than for any other element. Deuterium oxide is used to enhance boron neutron capture therapy, but this effect does not rely on the biological or chemical effects of deuterium, but instead on deuterium's ability to moderate (slow) neutrons without capturing them. 2021 experimental evidence indicates that systemic administration of deuterium oxide (30% drinking water supplementation) suppresses tumor growth in a standard mouse model of human melanoma, an effect attributed to selective induction of cellular stress signaling and gene expression in tumor cells. Toxicity in humans Because it would take a very large amount of heavy water to replace 25% to 50% of a human being's body water (water being in turn 50–75% of body weight) with heavy water, accidental or intentional poisoning with heavy water is unlikely to the point of practical disregard. Poisoning would require that the victim ingest large amounts of heavy water without significant normal water intake for many days to produce any noticeable toxic effects. Oral doses of heavy water in the range of several grams, as well as heavy oxygen O, are routinely used in human metabolic experiments. (See doubly labeled water testing.) Since one in about every 6,400 hydrogen atoms is deuterium, a human containing of body water would normally contain enough deuterium (about ) to make of pure heavy water, so roughly this dose is required to double the amount of deuterium in the body. A loss of blood pressure may partially explain the reported incidence of dizziness upon ingestion of heavy water. However, it is more likely that this symptom can be attributed to altered vestibular function. Heavy water, like ethanol, causes a temporary difference in the density of endolymph within the cupula, which confuses the vestibulo-ocular reflex and causes motion sickness symptoms. Heavy water radiation contamination confusion Although many people associate heavy water primarily with its use in nuclear reactors, pure heavy water is not radioactive. Commercial-grade heavy water is slightly radioactive due to the presence of minute traces of natural tritium, but the same is true of ordinary water. 
Heavy water that has been used as a coolant in nuclear power plants contains substantially more tritium as a result of neutron bombardment of the deuterium in the heavy water (tritium is a health risk when ingested in large quantities). In 1990, a disgruntled employee at the Point Lepreau Nuclear Generating Station in Canada obtained a sample (estimated as about a "half cup") of heavy water from the primary heat transport loop of the nuclear reactor, and loaded it into a cafeteria drink dispenser. Eight employees drank some of the contaminated water. The incident was discovered when employees began leaving bioassay urine samples with elevated tritium levels. The quantity of heavy water involved was far below levels that could induce heavy water toxicity, but several employees received elevated radiation doses from tritium and neutron-activated chemicals in the water. This was not an incident of heavy water poisoning, but rather radiation poisoning from other isotopes in the heavy water. Some news services were not careful to distinguish these points, and some of the public were left with the impression that heavy water is normally radioactive and more severely toxic than it actually is. Even if pure heavy water had been used in the water dispenser indefinitely, it is not likely the incident would have been detected or caused harm, since no employee would be expected to get much more than 25% of their daily drinking water from such a source. Production methods The most cost-effective process for producing heavy water is the dual temperature exchange sulfide process (known as the Girdler sulfide process) developed in parallel by Karl-Hermann Geib and Jerome S. Spevack in 1943. An alternative process, patented by Graham M. Keyser, uses lasers to selectively dissociate deuterated hydrofluorocarbons to form deuterium fluoride, which can then be separated by physical means. Although the energy consumption for this process is much less than for the Girdler sulfide process, this method is currently uneconomical due to the expense of procuring the necessary hydrofluorocarbons. As noted, modern commercial heavy water is almost universally referred to, and sold as, deuterium oxide. It is most often sold in various grades of purity, from 98% enrichment to 99.75–99.98% deuterium enrichment (nuclear reactor grade) and occasionally even higher isotopic purity. Production by country Argentina Argentina was the main producer of heavy water, using an ammonia/hydrogen exchange based plant supplied by Switzerland's Sulzer company. It was also a major exporter to Canada, Germany, the US and other countries. The heavy water production facility located in Arroyito was the world's largest heavy water production facility. Argentina produced of heavy water per year in 2015 using the monothermal ammonia-hydrogen isotopic exchange method. Since 2017, the Arroyito plant has not been operational. United States During the Manhattan Project the United States constructed three heavy water production plants as part of the P-9 Project at Morgantown Ordnance Works, near Morgantown, West Virginia; at the Wabash River Ordnance Works, near Dana and Newport, Indiana; and at the Alabama Ordnance Works, near Childersburg and Sylacauga, Alabama. Heavy water was also acquired from the Cominco plant in Trail, British Columbia, Canada. The Chicago Pile-3 experimental reactor used heavy water as a moderator and went critical in 1944. The three domestic production plants were shut down in 1945 after producing around of product. 
The Wabash plant resumed heavy water production in 1952. In 1953, the United States began using heavy water in plutonium production reactors at the Savannah River Site. The first of the five heavy water reactors came online in 1953, and the last was placed in cold shutdown in 1996. The reactors were heavy water reactors so that they could produce both plutonium and tritium for the US nuclear weapons program. The U.S. developed the Girdler sulfide chemical exchange production process—which was first demonstrated on a large scale at the Dana, Indiana plant in 1945 and at the Savannah River Site in 1952. India India is the world's largest producer of heavy water through its Heavy Water Board. It exports heavy water to countries including the Republic of Korea, China, and the United States. Norway In 1934, Norsk Hydro built the first commercial heavy water plant at Vemork, Tinn, eventually producing per day. From 1940 and throughout World War II, the plant was under German control, and the Allies decided to destroy the plant and its heavy water to inhibit German development of nuclear weapons. In late 1942, a planned raid called Operation Freshman by British airborne troops failed, both gliders crashing. The raiders were killed in the crash or subsequently executed by the Germans. On the night of 27 February 1943 Operation Gunnerside succeeded. Norwegian commandos and local resistance managed to demolish small, but key parts of the electrolytic cells, dumping the accumulated heavy water down the factory drains. On 16 November 1943, the Allied air forces dropped more than 400 bombs on the site. The Allied air raid prompted the Nazi government to move all available heavy water to Germany for safekeeping. On 20 February 1944, a Norwegian partisan sank the ferry M/F Hydro carrying heavy water across Lake Tinn, at the cost of 14 Norwegian civilian lives, and most of the heavy water was presumably lost. A few of the barrels were only half full, hence buoyant, and may have been salvaged and transported to Germany. Recent investigation of production records at Norsk Hydro and analysis of an intact barrel that was salvaged in 2004 revealed that although the barrels in this shipment contained water of pH 14—indicative of the alkaline electrolytic refinement process—they did not contain high concentrations of DO. Despite the apparent size of the shipment, the total quantity of pure heavy water was quite small, most barrels only containing 0.5–1% pure heavy water. The Germans would have needed about 5 tons of heavy water to get a nuclear reactor running. The manifest clearly indicated that there was only half a ton of heavy water being transported to Germany. Hydro was carrying far too little heavy water for one reactor, let alone the 10 or more tons needed to make enough plutonium for a nuclear weapon. The German nuclear weapons program was much less advanced than the Manhattan Project, and no reactor constructed in Nazi Germany came close to reaching criticality. No amount of heavy water would have changed that. Israel admitted running the Dimona reactor with Norwegian heavy water sold to it in 1959. Through re-export using Romania and Germany, India probably also used Norwegian heavy water. Canada As part of its contribution to the Manhattan Project, Canada built and operated a per month (design capacity) electrolytic heavy water plant at Trail, British Columbia, which started operation in 1943. 
The Atomic Energy of Canada Limited (AECL) design of power reactor requires large quantities of heavy water to act as a neutron moderator and coolant. AECL ordered two heavy water plants, which were built and operated in Atlantic Canada at Glace Bay, Nova Scotia (by Deuterium of Canada Limited) and Point Tupper, Richmond County, Nova Scotia (by Canadian General Electric). These plants proved to have significant design, construction and production problems. The Glace Bay plant reached full production in 1984 after being taken over by AECL in 1971. The Point Tupper plant reached full production in 1974 and AECL purchased the plant in 1975. Design changes from the Point Tupper plant were carried through as AECL built the Bruce Heavy Water Plant (), which it later sold to Ontario Hydro, to ensure a reliable supply of heavy water for future power plants. The two Nova Scotia plants were shut down in 1985 when their production proved unnecessary. The Bruce Heavy Water Plant (BHWP) in Ontario was the world's largest heavy water production plant with a capacity of 1600 tonnes per year at its peak (800 tonnes per year per full plant, two fully operational plants at its peak). It used the Girdler sulfide process to produce heavy water, and required 340,000 tonnes of feed water to produce one tonne of heavy water. It was part of a complex that included eight CANDU reactors, which provided heat and power for the heavy water plant. The site was located at Douglas Point/Bruce Nuclear Generating Station near Tiverton, Ontario, on Lake Huron where it had access to the waters of the Great Lakes. AECL issued the construction contract in 1969 for the first BHWP unit (BHWP A). Commissioning of BHWP A was done by Ontario Hydro from 1971 through 1973, with the plant entering service on 28 June 1973, and design production capacity being achieved in April 1974. Due to the success of BHWP A and the large amount of heavy water that would be required for the large numbers of upcoming planned CANDU nuclear power plant construction projects, Ontario Hydro commissioned three additional heavy water production plants for the Bruce site (BHWP B, C, and D). BHWP B was placed into service in 1979. These first two plants were significantly more efficient than planned, and the number of CANDU construction projects ended up being significantly lower than originally planned, which led to the cancellation of construction on BHWP C & D. In 1984, BHWP A was shut down. By 1993 Ontario Hydro had produced enough heavy water to meet all of its anticipated domestic needs (which were lower than expected due to improved efficiency in the use and recycling of heavy water), so they shut down and demolished half of the capacity of BHWP B. The remaining capacity continued to operate in order to fulfil demand for heavy water exports until it was permanently shut down in 1997, after which the plant was gradually dismantled and the site cleared. AECL is currently researching other more efficient and environmentally benign processes for creating heavy water. This is relevant for CANDU reactors since heavy water represented about 15–20% of the total capital cost of each CANDU plant in the 1970s and 1980s. Iran Since 1996 a plant for production of heavy water was being constructed at Khondab near Arak. On 26 August 2006, Iranian President Ahmadinejad inaugurated the expansion of the country's heavy-water plant. 
Iran has indicated that the heavy-water production facility will operate in tandem with a 40 MW research reactor that had a scheduled completion date in 2009. Iran produced deuterated solvents in early 2011 for the first time. Under the nuclear agreement of July 2015, the core of the IR-40 is to be redesigned. Under the Joint Comprehensive Plan of Action, Iran is permitted to store only a limited quantity of heavy water. Iran exports its excess production, making it the world's third largest exporter of heavy water. As of 2023, Iran sells heavy water, with customers reportedly offering prices of over 1,000 dollars per liter. Pakistan In Pakistan, there are two heavy water production sites, both based in Punjab. Commissioned in 1995–96, the Khushab Nuclear Complex is a central element of Pakistan's stockpile program for production of weapon-grade plutonium, deuterium, and tritium for advanced compact warheads (i.e. thermonuclear weapons). Another heavy water production facility is located in Multan; it sells heavy water to the nuclear power plants in Karachi and Chashma. In the early 1980s, Pakistan succeeded in acquiring a tritium purification and storage plant and deuterium and tritium precursor materials from two former East German firms. Unlike India and Iran, the heavy water produced by Pakistan is neither exported nor available for purchase by any nation and is solely used for its weapons complex and for energy generation at its local nuclear power plants. Other countries Romania produced heavy water at the now-decommissioned Drobeta Girdler sulfide plant for domestic and export purposes. France operated a small plant during the 1950s and 1960s. Applications Nuclear magnetic resonance Deuterium oxide is used in nuclear magnetic resonance spectroscopy when using water as a solvent if the nuclide of interest is hydrogen. This is because the signal from light-water (H₂O) solvent molecules would overwhelm the signal from the molecule of interest dissolved in it. Deuterium has a different magnetic moment and therefore does not contribute to the ¹H-NMR signal at the hydrogen-1 resonance frequency. For some experiments, it may be desirable to identify the labile hydrogens on a compound, that is, hydrogens that can easily exchange away as H⁺ ions from some positions in a molecule. With addition of D₂O, sometimes referred to as a D₂O shake, labile hydrogens exchange between the compound of interest and the solvent, leading to replacement of those specific ¹H atoms in the compound with ²H. These positions in the molecule then do not appear in the ¹H-NMR spectrum. Organic chemistry Deuterium oxide is often used as the source of deuterium for preparing specifically labelled isotopologues of organic compounds. For example, C-H bonds adjacent to ketonic carbonyl groups can be replaced by C-D bonds, using acid or base catalysis. Trimethylsulfoxonium iodide, made from dimethyl sulfoxide and methyl iodide, can be recrystallized from deuterium oxide and then dissociated to regenerate methyl iodide and dimethyl sulfoxide, both deuterium labelled. In cases where specific double labelling by deuterium and tritium is contemplated, the researcher must be aware that deuterium oxide, depending upon age and origin, can contain some tritium. Infrared spectroscopy Deuterium oxide is often used instead of water when collecting FTIR spectra of proteins in solution. H₂O creates a strong band that overlaps with the amide I region of proteins. The band from D₂O is shifted away from the amide I region. 
Neutron moderator Heavy water is used in certain types of nuclear reactors, where it acts as a neutron moderator to slow down neutrons so that they are more likely to react with the fissile uranium-235 than with uranium-238, which captures neutrons without fissioning. The CANDU reactor uses this design. Light water also acts as a moderator, but because light water absorbs more neutrons than heavy water, reactors using light water for a reactor moderator must use enriched uranium rather than natural uranium, otherwise criticality is impossible. A significant fraction of outdated power reactors, such as the RBMK reactors in the USSR, were constructed using normal water for cooling but graphite as a moderator. However, the danger of graphite in power reactors (graphite fires in part led to the Chernobyl disaster) has led to the discontinuation of graphite in standard reactor designs. The breeding and extraction of plutonium can be a relatively rapid and cheap route to building a nuclear weapon, as chemical separation of plutonium from fuel is easier than isotopic separation of U-235 from natural uranium. Among current and past nuclear weapons states, Israel, India, and North Korea first used plutonium from heavy water moderated reactors burning natural uranium, while China, South Africa and Pakistan first built weapons using highly enriched uranium. The Nazi nuclear program, operating with more modest means than the contemporary Manhattan Project and hampered by many leading scientists having been driven into exile (many of them ending up working for the Manhattan Project), as well as continuous infighting, wrongly dismissed graphite as a moderator due to not recognizing the effect of impurities. Given that isotope separation of uranium was deemed too big a hurdle, this left heavy water as a potential moderator. Other problems were the ideological aversion regarding what propaganda dismissed as "Jewish physics" and the mistrust between those who had been enthusiastic Nazis even before 1933 and those who were Mitläufer or trying to keep a low profile. In part due to allied sabotage and commando raids on Norsk Hydro (then the world's largest producer of heavy water) as well as the aforementioned infighting, the German nuclear program never managed to assemble enough uranium and heavy water in one place to achieve criticality despite possessing enough of both by the end of the war. In the U.S., however, the first experimental atomic reactor (1942), as well as the Manhattan Project Hanford production reactors that produced the plutonium for the Trinity test and Fat Man bombs, all used pure carbon (graphite) neutron moderators combined with normal water cooling pipes. They functioned with neither enriched uranium nor heavy water. Russian and British plutonium production also used graphite-moderated reactors. There is no evidence that civilian heavy water power reactors—such as the CANDU or Atucha designs—have been used to produce military fissile materials. In nations that do not already possess nuclear weapons, nuclear material at these facilities is under IAEA safeguards to discourage any diversion. Due to its potential for use in nuclear weapons programs, the possession or import/export of large industrial quantities of heavy water are subject to government control in several countries. Suppliers of heavy water and heavy water production technology typically apply IAEA (International Atomic Energy Agency) administered safeguards and material accounting to heavy water. 
(In Australia, the Nuclear Non-Proliferation (Safeguards) Act 1987.) In the U.S. and Canada, non-industrial quantities of heavy water (i.e., in the gram to kg range) are routinely available without special license through chemical supply dealers and commercial companies such as the world's former major producer Ontario Hydro. Neutrino detector The Sudbury Neutrino Observatory (SNO) in Sudbury, Ontario uses 1,000 tonnes of heavy water on loan from Atomic Energy of Canada Limited. The neutrino detector is underground in a mine, to shield it from muons produced by cosmic rays. SNO was built to answer the question of whether or not electron-type neutrinos produced by fusion in the Sun (the only type the Sun should be producing directly, according to theory) might be able to turn into other types of neutrinos on the way to Earth. SNO detects the Cherenkov radiation in the water from high-energy electrons produced from electron-type neutrinos as they undergo charged current (CC) interactions with neutrons in deuterium, turning them into protons and electrons (however, only the electrons are fast enough to produce Cherenkov radiation for detection). SNO also detects neutrino electron scattering (ES) events, where the neutrino transfers energy to the electron, which then proceeds to generate Cherenkov radiation distinguishable from that produced by CC events. The first of these two reactions is produced only by electron-type neutrinos, while the second can be caused by all of the neutrino flavors. The use of deuterium is critical to the SNO function, because all three "flavours" (types) of neutrinos may be detected in a third type of reaction as well, neutrino-disintegration, in which a neutrino of any type (electron, muon, or tau) scatters from a deuterium nucleus (deuteron), transferring enough energy to break up the loosely bound deuteron into a free neutron and proton via a neutral current (NC) interaction. This event is detected when the free neutron is absorbed by 35Cl− present from NaCl deliberately dissolved in the heavy water, causing emission of characteristic capture gamma rays. Thus, in this experiment, heavy water not only provides the transparent medium necessary to produce and visualize Cherenkov radiation, but it also provides deuterium to detect exotic mu type (μ) and tau (τ) neutrinos, as well as a non-absorbent moderator medium to preserve free neutrons from this reaction, until they can be absorbed by an easily detected neutron-activated isotope. Metabolic rate and water turnover testing in physiology and biology Heavy water is employed as part of a mixture with HO for a common and safe test of mean metabolic rate in humans and animals undergoing their normal activities.The elimination rate of deuterium alone is a measure of body water turnover. This is highly variable between individuals and depends on environmental conditions as well as subject size, sex, age and physical activity. Tritium production Tritium is the active substance in self-powered lighting and controlled nuclear fusion, its other uses including autoradiography and radioactive labeling. It is also used in nuclear weapon design for boosted fission weapons and initiators. Tritium undergoes beta decay into helium-3, which is a stable, but rare, isotope of helium that is itself highly sought after. Some tritium is created in heavy water moderated reactors when deuterium captures a neutron. 
This reaction has a small cross-section (probability of a single neutron-capture event) and produces only small amounts of tritium, although enough to justify cleaning tritium from the moderator every few years to reduce the environmental risk of tritium escape. Given that helium-3 is a neutron poison with a capture cross section orders of magnitude higher than any component of heavy or tritiated water, its accumulation in a heavy water neutron moderator or target for tritium production must be kept to a minimum. Producing a lot of tritium in this way would require reactors with very high neutron fluxes, or with a very high proportion of heavy water to nuclear fuel and very low neutron absorption by other reactor material. The tritium would then have to be recovered by isotope separation from a much larger quantity of deuterium, unlike production from lithium-6 (the present method), where only chemical separation is needed. Deuterium's absorption cross section for thermal neutrons is 0.52 millibarn (5.2 × 10⁻³² m²; 1 barn = 10⁻²⁸ m²), while those of oxygen-16 and oxygen-17 are 0.19 and 240 millibarns, respectively. ¹⁷O makes up 0.038% of natural oxygen, making the overall cross section of natural oxygen 0.28 millibarns. Therefore, in D₂O with natural oxygen, 21% of neutron captures are on oxygen, rising higher as ¹⁷O builds up from neutron capture on ¹⁶O. Also, ¹⁷O may emit an alpha particle on neutron capture, producing radioactive carbon-14.
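The 21% figure follows directly from the cross sections quoted above, since each D₂O molecule presents two deuterium atoms and one oxygen atom to the neutron flux. The short Python sketch below verifies the arithmetic; the variable names are illustrative and the cross-section values are simply those given in the text.

# Sketch: fraction of thermal-neutron captures in D2O that land on oxygen,
# using the cross sections (in millibarns) quoted above.
sigma_D   = 0.52      # per deuterium atom
sigma_O16 = 0.19
sigma_O17 = 240.0
f_O17     = 0.00038   # natural abundance of oxygen-17

sigma_O = (1 - f_O17) * sigma_O16 + f_O17 * sigma_O17   # ~0.28 mb for natural oxygen
capture_on_oxygen = sigma_O / (2 * sigma_D + sigma_O)

print(f"effective oxygen cross section: {sigma_O:.2f} mb")
print(f"fraction of captures on oxygen: {capture_on_oxygen:.0%}")   # ~21%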
Hammerhead shark
The hammerhead sharks are a group of sharks that form the family Sphyrnidae, named for the unusual and distinctive form of their heads, which are flattened and laterally extended into a cephalofoil (a T-shape or "hammer"). The shark's eyes are placed one on each end of this T-shaped structure, with their small mouths directly centered and underneath. Most hammerhead species are placed in the genus Sphyrna, while the winghead shark is placed in its own genus, Eusphyra. Many different—but not necessarily mutually exclusive—functions have been postulated for the cephalofoil, including sensory reception, manoeuvering, and prey manipulation. The cephalofoil gives the shark superior binocular vision and depth perception. Hammerheads are found worldwide, preferring life in warmer waters along coastlines and continental shelves. Unlike most sharks, some hammerhead species will congregate and swim in large schools during the day, becoming solitary hunters at night. Description The known species vary in size, ranging from in length and weighing from 3 to 580 kg (6.6 to 1,300 lb). One specimen caught off the Florida coast in 1906 weighed over . They are usually light gray and have a greenish tint. Their bellies are white, which allows them to blend into the background when viewed from below and sneak up to their prey. Their heads have lateral projections that give them a hammer-like shape. While overall similar, this shape differs somewhat between species; examples are: a distinct T-shape in the great hammerhead, a rounded head with a central notch in the scalloped hammerhead, and an unnotched rounded head in the smooth hammerhead. Hammerheads have disproportionately small mouths compared to other shark species. Some species are also known to form schools. In the evening, like most other sharks, they become solitary hunters. National Geographic explained that hammerheads can be found in warm, tropical waters, but during the summer, they begin a mass migration period in search of colder waters. Taxonomy and evolution Since sharks do not have mineralized bones and rarely fossilize, only their teeth are commonly found as fossils. Their closest relatives are the requiem sharks (Carcharinidae). Based on DNA studies and fossils, the ancestor of the hammerheads probably lived in the Early Miocene epoch about 20 million years ago. Using mitochondrial DNA, a phylogenetic tree of the hammerhead sharks showed the winghead shark as sister to the rest of the hammerhead sharks. As the winghead shark has proportionately the largest "hammer" of the hammerhead sharks, this suggests that the first ancestral hammerhead sharks also had large hammers. Cephalofoil The hammer-like shape of the head may have evolved at least in part to enhance the animal's vision. The positioning of the eyes, mounted on the sides of the shark's distinctive hammer head, allows 360° of vision in the vertical plane, meaning the animals can see above and below them at all times. They also have an increased binocular vision and depth of visual field as a result of the cephalofoil. The shape of the head was previously thought to help the shark find food, aiding in close-quarters maneuverability, and allowing sharp turning movement without losing stability. The unusual structure of its vertebrae, though, has been found to be instrumental in making the turns correctly, more often than the shape of its head, though it would also shift and provide lift. 
From what is known about the winghead shark, the shape of the hammerhead apparently has to do with an evolved sensory function. Like all sharks, hammerheads have electroreceptory sensory pores called ampullae of Lorenzini. The pores on the shark's head lead to sensory tubes, which detect electric fields generated by other living creatures. By distributing the receptors over a wider area, like a larger radio antenna, hammerheads can sweep for prey more effectively. Reproduction Reproduction occurs only once a year for hammerhead sharks, and usually occurs with the male shark biting the female shark violently until she agrees to mate with him. The hammerhead sharks exhibit a viviparous mode of reproduction with females giving birth to live young. Like other sharks, fertilization is internal, with the male transferring sperm to the female through one of two intromittent organs called claspers. The developing embryos are at first sustained by a yolk sac. When the supply of yolk is exhausted, the depleted yolk sac transforms into a structure analogous to a mammalian placenta (called a "yolk sac placenta" or "pseudoplacenta"), through which the mother delivers sustenance until birth. Once the baby sharks are born, they are not taken care of by the parents in any way. Usually, a litter consists of 12 to 15 pups, except for the great hammerhead, which gives birth to litters of 20 to 40 pups. These baby sharks huddle together and swim toward warmer water until they are old and large enough to survive on their own. In 2007, the bonnethead shark was found to be capable of asexual reproduction via automictic parthenogenesis, in which a female's ovum fuses with a polar body to form a zygote without the need for a male. This was the first shark known to do this. Diet Hammerhead sharks eat a large range of prey such as fish (including other sharks), squid, octopus, and crustaceans. Stingrays are a particular favorite, with the positioning of their (comparatively) smaller, crescent-shaped mouths underneath their T-shaped heads allowing for skilled skate, ray, and flounder hunting, among other seafloor-dwellers. These sharks will often be found swimming above the sand along the bottom of the ocean, stalking their prey. Their unique heads are further utilized as a tool (or weapon) if hunting rays and flatfishes; the shark uses its head to pin down and briefly stun the prey, and only eats once their quarry is clearly weakened and in shock. The great hammerhead, tending to be larger and more aggressive to its own kind than other hammerheads, occasionally engages in cannibalism, eating other hammerhead sharks, including mothers consuming their own young. In addition to the typical animal prey, bonnetheads have been found to feed on seagrass, which sometimes makes up as much as half their stomach contents. They may swallow it unintentionally, but they are able to partially digest it. At the time of discovery, this was the only known case of a potentially omnivorous species of shark (since then, whale sharks were also found to be omnivorous). Species There are ten distinct species of Hammerhead shark in the wild: Relationship with humans According to the International Shark Attack File, humans have been subjects of 17 documented, unprovoked attacks by hammerhead sharks within the genus Sphyrna since AD 1580. No human fatalities have been recorded. Most hammerhead shark species are too small to inflict serious damage to humans. 
The great and the scalloped hammerheads are listed on the World Conservation Union's (IUCN) 2008 Red List as endangered, whereas the smalleye hammerhead is listed as vulnerable. The status given to these sharks is as a result of overfishing and demand for their fins, an expensive delicacy. Among others, scientists expressed their concern about the plight of the scalloped hammerhead at the American Association for the Advancement of Science annual meeting in Boston. The young swim mostly in shallow waters along shores all over the world to avoid predators. Shark fins are prized as a delicacy in certain countries in Asia (such as China), and overfishing is putting many hammerhead sharks at risk of extinction. Fishermen who harvest the animals typically cut off the fins and toss the remainder of the fish, which is often still alive, back into the sea. This practice, known as finning, is lethal to the shark. In captivity The relatively small bonnethead is regular at public aquariums, as it has proven easier to keep in captivity than the larger hammerhead species, and it has been bred at a handful of facilities. Nevertheless, at up to in length and with highly specialized requirements, very few private aquarists have the experience and resources necessary to maintain a bonnethead in captivity. The larger hammerhead species can reach more than twice that size and are considered difficult, even compared to most other similar-sized sharks (such as Carcharhinus species, lemon shark, and sand tiger shark) regularly kept by public aquariums. They are particularly vulnerable during transport between facilities, may rub on surfaces in tanks, and may collide with rocks, causing injuries to their heads, so they require very large, specially adapted tanks. As a consequence, relatively few public aquaria have kept them for long periods. The scalloped hammerhead is the most frequently maintained large species, and it has been kept long term at public aquaria in most continents, but primarily in North America, Europe, and Asia. In 2014, fewer than 15 public aquaria in the world kept scalloped hammerheads. Great hammerheads have been kept at a few facilities in North America, including Atlantis Paradise Island Resort (Bahamas), Adventure Aquarium (New Jersey), Georgia Aquarium (Atlanta), Mote Marine Laboratory (Florida), and the Shark Reef at Mandalay Bay (Las Vegas). Smooth hammerheads have also been kept in the past. Protection Humans are the main threat to hammerhead sharks. Although they are not usually the primary target, hammerhead sharks are caught in fisheries all over the world. Tropical fisheries are the most common place for hammerheads to be caught because of their preference to reside in warm waters. The total number of hammerheads caught in fisheries is recorded in the Food and Agriculture Organization of the United Nations Global Capture Production dataset. The number steadily increased from 75 metric tons in 1990, to 6,313 metric tons by 2010. Shark fin traders say that hammerheads have some of the best quality fin needles which makes them good to eat when prepared properly. Hong Kong is the world's largest fin trade market and accounts for about 1.5% of the total annual amount of fins traded. It is estimated that around 375,000 great hammerhead sharks alone are traded per year which is equivalent to 21,000 metric tons of biomass. However, most sharks that are caught are only used for their fins and then discarded. The meat of hammerheads is generally unwanted. 
Consumption of hammerhead meat has been recorded in Trinidad and Tobago, Venezuela, Kenya and Japan. In March 2013, three endangered, commercially valuable sharks, the hammerheads, the oceanic whitetip, and porbeagle, were added to Appendix II of CITES, bringing shark fishing and commerce of these species under licensing and regulation. Cultural significance Among Torres Strait Islanders, the hammerhead shark, known as the beizam, is a common family totem and often represented in cultural artefacts such as the elaborate headdresses worn for ceremonial dances, known as dhari (dari). They are associated with law and order. Renowned artist Ken Thaiday Snr is known for his representations of beizam in his sculptural dari and other works. In native Hawaiian culture, sharks are considered to be gods of the sea, protectors of humans, and cleaners of excessive ocean life. Some of these sharks are believed to be family members who died and have been reincarnated into shark form, but others are considered man-eaters, also known as niuhi. These sharks include great white sharks, tiger sharks, and bull sharks. The hammerhead shark, also known as mano kihikihi, is not considered a man-eater or niuhi; it is considered to be one of the most respected sharks of the ocean, an aumakua. Many Hawaiian families believe that they have an aumakua watching over them and protecting them from the niuhi. The hammerhead shark is thought to be the birth animal of some children. Hawaiian children who are born with the hammerhead shark as an animal sign are believed to be warriors and are meant to sail the oceans. Hammerhead sharks rarely pass through the waters of Maui, but many Maui natives believe that their swimming by is a sign that the gods are watching over the families, and the oceans are clean and balanced.
Hall effect
The Hall effect is the production of a potential difference (the Hall voltage) across an electrical conductor that is transverse to an electric current in the conductor and to an applied magnetic field perpendicular to the current. It was discovered by Edwin Hall in 1879. The Hall coefficient is defined as the ratio of the induced electric field to the product of the current density and the applied magnetic field. It is a characteristic of the material from which the conductor is made, since its value depends on the type, number, and properties of the charge carriers that constitute the current. Discovery Wires carrying current in a magnetic field experience a mechanical force perpendicular to both the current and magnetic field. In the 1820s, André-Marie Ampère observed this underlying mechanism that led to the discovery of the Hall effect. However it was not until a solid mathematical basis for electromagnetism was systematized by James Clerk Maxwell's "On Physical Lines of Force" (published in 1861–1862) that details of the interaction between magnets and electric current could be understood. Edwin Hall then explored the question of whether magnetic fields interacted with the conductors or the electric current, and reasoned that if the force was specifically acting on the current, it should crowd current to one side of the wire, producing a small measurable voltage. In 1879, he discovered this Hall effect while he was working on his doctoral degree at Johns Hopkins University in Baltimore, Maryland. Eighteen years before the electron was discovered, his measurements of the tiny effect produced in the apparatus he used were an experimental tour de force, published under the name "On a New Action of the Magnet on Electric Currents". Hall effect within voids The term ordinary Hall effect can be used to distinguish the effect described in the introduction from a related effect which occurs across a void or hole in a semiconductor or metal plate when current is injected via contacts that lie on the boundary or edge of the void. The charge then flows outside the void, within the metal or semiconductor material. The effect becomes observable, in a perpendicular applied magnetic field, as a Hall voltage appearing on either side of a line connecting the current-contacts. It exhibits apparent sign reversal in comparison to the "ordinary" effect occurring in the simply connected specimen. It depends only on the current injected from within the void. Hall effect superposition Superposition of these two forms of the effect, the ordinary and void effects, can also be realized. First imagine the "ordinary" configuration, a simply connected (void-less) thin rectangular homogeneous element with current-contacts on the (external) boundary. This develops a Hall voltage, in a perpendicular magnetic field. Next, imagine placing a rectangular void within this ordinary configuration, with current-contacts, as mentioned above, on the interior boundary of the void. (For simplicity, imagine the contacts on the boundary of the void lined up with the ordinary-configuration contacts on the exterior boundary.) In such a combined configuration, the two Hall effects may be realized and observed simultaneously in the same doubly connected device: A Hall effect on the external boundary that is proportional to the current injected only via the outer boundary, and an apparently sign-reversed Hall effect on the interior boundary that is proportional to the current injected only via the interior boundary. 
The superposition of multiple Hall effects may be realized by placing multiple voids within the Hall element, with current and voltage contacts on the boundary of each void. Further "Hall effects" may have additional physical mechanisms but are built on these basics. Theory The Hall effect is due to the nature of the current in a conductor. Current consists of the movement of many small charge carriers, typically electrons, holes, ions (see Electromigration) or all three. When a magnetic field is present, these charges experience a force, called the Lorentz force. When such a magnetic field is absent, the charges follow approximately straight paths between collisions with impurities, phonons, etc. However, when a magnetic field with a perpendicular component is applied, their paths between collisions are curved; thus, moving charges accumulate on one face of the material. This leaves equal and opposite charges exposed on the other face, where there is a scarcity of mobile charges. The result is an asymmetric distribution of charge density across the Hall element, arising from a force that is perpendicular to both the straight path and the applied magnetic field. The separation of charge establishes an electric field that opposes the migration of further charge, so a steady electric potential is established for as long as the charge is flowing. In classical electromagnetism electrons move in the opposite direction of the current (by convention "current" describes a theoretical "hole flow"). In some metals and semiconductors it appears "holes" are actually flowing because the direction of the voltage is opposite to the derivation below. For a simple metal where there is only one type of charge carrier (electrons), the Hall voltage V_H can be derived by using the Lorentz force F = q(E + v × B) and seeing that, in the steady-state condition, charges are not moving in the y-axis direction. Thus, the magnetic force on each electron in the y-axis direction is cancelled by a y-axis electrical force due to the buildup of charges. The term v_x is the drift velocity of the current, which is assumed at this point to consist of holes by convention. The term v_x B_z is negative in the y-axis direction by the right-hand rule. In steady state, F = 0, so 0 = E_y − v_x B_z, where E_y is assigned in the direction of the y-axis (and not in the direction of the induced electric field set up by the accumulated electrons, which points in the −y direction). In wires, electrons instead of holes are flowing, so v_x → −v_x and q → −q. Also, E_y = −V_H / w, where w is the width of the element. Substituting these changes gives V_H = v_x B_z w. The conventional "hole" current is in the negative direction of the electron current and carries the negative of the electrical charge, which gives I_x = n t w (−v_x)(−e) = n t w v_x e, where n is the charge carrier density, t w is the cross-sectional area, and −e is the charge of each electron. Solving for v_x and plugging into the above gives the Hall voltage: V_H = I_x B_z / (n t e). If the charge build-up had been positive (as it appears in some metals and semiconductors), then the V_H assigned here would have been negative (positive charge would have built up on the left side). The Hall coefficient is defined as R_H = E_y / (j_x B_z), where j_x is the current density of the carrier electrons and E_y is the induced electric field. In SI units, this becomes R_H = E_y / (j_x B) = V_H t / (I B) = −1 / (n e). (The units of R_H are usually expressed as m³/C, or Ω·cm/G, or other variants.) As a result, the Hall effect is very useful as a means to measure either the carrier density or the magnetic field. 
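As a quick numerical illustration of the single-carrier result derived above, V_H = I B / (n t e), the Python sketch below evaluates the Hall voltage for an assumed copper strip and then inverts the same relation to recover the carrier density. The copper free-electron density and the geometry are illustrative assumptions, not measured data.

# Sketch: Hall voltage of a single-carrier conductor, and the inverse
# problem of estimating carrier density from a measured Hall voltage.
E_CHARGE = 1.602e-19        # elementary charge, C

def hall_voltage(current_A, field_T, n_per_m3, thickness_m):
    """V_H = I B / (n t e) for a plate of the given thickness."""
    return current_A * field_T / (n_per_m3 * thickness_m * E_CHARGE)

def carrier_density(current_A, field_T, hall_voltage_V, thickness_m):
    """Invert the same relation to estimate the carrier density."""
    return current_A * field_T / (hall_voltage_V * thickness_m * E_CHARGE)

# Example: 1 A through a 0.1 mm thick copper strip in a 1 T field
n_cu = 8.5e28               # assumed free-electron density of copper, m^-3
v_h = hall_voltage(1.0, 1.0, n_cu, 1e-4)
print(f"V_H ≈ {v_h * 1e6:.2f} µV")
print(f"recovered n ≈ {carrier_density(1.0, 1.0, v_h, 1e-4):.2e} m^-3")

The sub-microvolt result for a metal illustrates why the effect is far easier to exploit in semiconductors, whose much lower carrier densities give proportionally larger Hall voltages.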
One very important feature of the Hall effect is that it differentiates between positive charges moving in one direction and negative charges moving in the opposite. In the diagram above, the Hall effect with a negative charge carrier (the electron) is presented. But consider the same magnetic field and current are applied but the current is carried inside the Hall effect device by a positive particle. The particle would of course have to be moving in the opposite direction of the electron in order for the current to be the same—down in the diagram, not up like the electron is. And thus, mnemonically speaking, your thumb in the Lorentz force law, representing (conventional) current, would be pointing the same direction as before, because current is the same—an electron moving up is the same current as a positive charge moving down. And with the fingers (magnetic field) also being the same, interestingly the charge carrier gets deflected to the left in the diagram regardless of whether it is positive or negative. But if positive carriers are deflected to the left, they would build a relatively positive voltage on the left whereas if negative carriers (namely electrons) are, they build up a negative voltage on the left as shown in the diagram. Thus for the same current and magnetic field, the electric polarity of the Hall voltage is dependent on the internal nature of the conductor and is useful to elucidate its inner workings. This property of the Hall effect offered the first real proof that electric currents in most metals are carried by moving electrons, not by protons. It also showed that in some substances (especially p-type semiconductors), it is contrarily more appropriate to think of the current as positive "holes" moving rather than negative electrons. A common source of confusion with the Hall effect in such materials is that holes moving one way are really electrons moving the opposite way, so one expects the Hall voltage polarity to be the same as if electrons were the charge carriers as in most metals and n-type semiconductors. Yet we observe the opposite polarity of Hall voltage, indicating positive charge carriers. However, of course there are no actual positrons or other positive elementary particles carrying the charge in p-type semiconductors, hence the name "holes". In the same way as the oversimplistic picture of light in glass as photons being absorbed and re-emitted to explain refraction breaks down upon closer scrutiny, this apparent contradiction too can only be resolved by the modern quantum mechanical theory of quasiparticles wherein the collective quantized motion of multiple particles can, in a real physical sense, be considered to be a particle in its own right (albeit not an elementary one). Unrelatedly, inhomogeneity in the conductive sample can result in a spurious sign of the Hall effect, even in ideal van der Pauw configuration of electrodes. For example, a Hall effect consistent with positive carriers was observed in evidently n-type semiconductors. Another source of artefact, in uniform materials, occurs when the sample's aspect ratio is not long enough: the full Hall voltage only develops far away from the current-introducing contacts, since at the contacts the transverse voltage is shorted out to zero. Hall effect in semiconductors When a current-carrying semiconductor is kept in a magnetic field, the charge carriers of the semiconductor experience a force in a direction perpendicular to both the magnetic field and the current. 
At equilibrium, a voltage appears at the semiconductor edges. The simple formula for the Hall coefficient given above is usually a good explanation when conduction is dominated by a single charge carrier. However, in semiconductors and many metals the theory is more complex, because in these materials conduction can involve significant, simultaneous contributions from both electrons and holes, which may be present in different concentrations and have different mobilities. For moderate magnetic fields the Hall coefficient is \(R_\mathrm{H} = \frac{p\mu_\mathrm{h}^2 - n\mu_\mathrm{e}^2}{e\,(p\mu_\mathrm{h} + n\mu_\mathrm{e})^2}\) or equivalently \(R_\mathrm{H} = \frac{p - nb^2}{e\,(p + nb)^2}\) with \(b = \frac{\mu_\mathrm{e}}{\mu_\mathrm{h}}\). Here \(n\) is the electron concentration, \(p\) the hole concentration, \(\mu_\mathrm{e}\) the electron mobility, \(\mu_\mathrm{h}\) the hole mobility and \(e\) the elementary charge. For large applied fields the simpler expression analogous to that for a single carrier type holds, \(R_\mathrm{H} = \frac{1}{(p - n)\,e}\). Relationship with star formation Although it is well known that magnetic fields play an important role in star formation, research models indicate that Hall diffusion critically influences the dynamics of gravitational collapse that forms protostars. Quantum Hall effect For a two-dimensional electron system, which can be produced in a MOSFET, in the presence of large magnetic field strength and low temperature, one can observe the quantum Hall effect, in which the Hall conductance undergoes quantum Hall transitions to take on the quantized values \(\sigma = \nu\,\frac{e^2}{h}\), where \(\nu\) is an integer or a simple fraction. Spin Hall effect The spin Hall effect consists in the spin accumulation on the lateral boundaries of a current-carrying sample. No magnetic field is needed. It was predicted by Mikhail Dyakonov and V. I. Perel in 1971 and observed experimentally more than 30 years later, both in semiconductors and in metals, at cryogenic as well as at room temperatures. The quantity describing the strength of the spin Hall effect is known as the spin Hall angle, and it is defined as the ratio \(\theta_\mathrm{SH} = \frac{j_\mathrm{s}}{j}\), where \(j_\mathrm{s}\) is the spin current generated by the applied current density \(j\). Quantum spin Hall effect For mercury telluride two-dimensional quantum wells with strong spin-orbit coupling, in zero magnetic field, at low temperature, the quantum spin Hall effect was observed in 2007. Anomalous Hall effect In ferromagnetic materials (and paramagnetic materials in a magnetic field), the Hall resistivity includes an additional contribution, known as the anomalous Hall effect (or the extraordinary Hall effect), which depends directly on the magnetization of the material, and is often much larger than the ordinary Hall effect. (Note that this effect is not due to the contribution of the magnetization to the total magnetic field.) For example, in nickel, the anomalous Hall coefficient is about 100 times larger than the ordinary Hall coefficient near the Curie temperature, but the two are similar at very low temperatures. Although a well-recognized phenomenon, there is still debate about its origins in the various materials. The anomalous Hall effect can be either an extrinsic (disorder-related) effect due to spin-dependent scattering of the charge carriers, or an intrinsic effect which can be described in terms of the Berry phase effect in the crystal momentum space (\(k\)-space). Hall effect in ionized gases The Hall effect in an ionized gas (plasma) is significantly different from the Hall effect in solids (where the Hall parameter is always much less than unity). In a plasma, the Hall parameter can take any value.
The Hall parameter, \(\beta\), in a plasma is the ratio between the electron gyrofrequency, \(\omega_\mathrm{e}\), and the electron-heavy-particle collision frequency, \(\nu\): \(\beta = \frac{\omega_\mathrm{e}}{\nu} = \frac{eB}{m_\mathrm{e}\nu}\), where \(e\) is the elementary charge (approximately 1.602 × 10⁻¹⁹ C), \(B\) is the magnetic field (in teslas) and \(m_\mathrm{e}\) is the electron mass (approximately 9.109 × 10⁻³¹ kg). The Hall parameter value increases with the magnetic field strength. Physically, the trajectories of electrons are curved by the Lorentz force. Nevertheless, when the Hall parameter is low, their motion between two encounters with heavy particles (neutral or ion) is almost linear. But if the Hall parameter is high, the electron movements are highly curved. The current density vector, \(\mathbf{J}\), is no longer collinear with the electric field vector, \(\mathbf{E}\). The two vectors \(\mathbf{J}\) and \(\mathbf{E}\) make the Hall angle, \(\theta\), which also gives the Hall parameter: \(\beta = \tan(\theta)\). Other Hall effects The family of Hall effects has expanded to encompass other quasi-particles in semiconductor nanostructures. Specifically, a set of Hall effects has emerged based on excitons and exciton-polaritons in 2D materials and quantum wells. Applications Hall sensors amplify and use the Hall effect for a variety of sensing applications. Corbino effect The Corbino effect, named after its discoverer Orso Mario Corbino, is a phenomenon involving the Hall effect, but a disc-shaped metal sample is used in place of a rectangular one. Because of its shape, the Corbino disc allows the observation of Hall effect–based magnetoresistance without the associated Hall voltage. A radial current through a circular disc, subjected to a magnetic field perpendicular to the plane of the disc, produces a "circular" current through the disc. The absence of the free transverse boundaries renders the interpretation of the Corbino effect simpler than that of the Hall effect.
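Because the Hall parameter is simply \(\beta = eB/(m_\mathrm{e}\nu)\) and the Hall angle satisfies \(\tan\theta = \beta\), it is easy to see numerically how a plasma can reach \(\beta \gg 1\) while solids stay well below unity. The Python sketch below evaluates both quantities; the field strengths and the collision frequency used are illustrative assumptions only.

```python
import math

E_CHARGE = 1.602e-19    # elementary charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg

def hall_parameter(B_tesla, collision_freq_hz):
    """beta = electron gyrofrequency / electron-heavy-particle collision frequency."""
    gyro_freq = E_CHARGE * B_tesla / M_ELECTRON   # omega_e = e*B/m_e, in rad/s
    return gyro_freq / collision_freq_hz

def hall_angle_deg(beta):
    """Hall angle between current density J and electric field E: tan(theta) = beta."""
    return math.degrees(math.atan(beta))

if __name__ == "__main__":
    # Illustrative values only: fields of 0.01-1 T and a 1 GHz collision frequency.
    for B in (0.01, 0.1, 1.0):
        beta = hall_parameter(B, collision_freq_hz=1e9)
        print(f"B = {B:4} T -> beta = {beta:8.2f}, Hall angle = {hall_angle_deg(beta):5.1f} deg")
```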
Physical sciences
Electrodynamics
Physics
14308
https://en.wikipedia.org/wiki/Hoover%20Dam
Hoover Dam
Hoover Dam is a concrete arch-gravity dam in the Black Canyon of the Colorado River, on the border between the U.S. states of Nevada and Arizona. Constructed between 1931 and 1936, during the Great Depression, it was dedicated on September 30, 1935, by President Franklin D. Roosevelt. Its construction was the result of a massive effort involving thousands of workers, and cost over 100 lives. Bills passed by Congress during its construction referred to it as Hoover Dam (after President Herbert Hoover), but the Roosevelt administration named it Boulder Dam. In 1947, Congress restored the name Hoover Dam. Since about 1900, the Black Canyon and nearby Boulder Canyon had been investigated for their potential to support a dam that would control floods, provide irrigation water, and produce hydroelectric power. In 1928, Congress authorized the project. The winning bid to build the dam was submitted by a consortium named Six Companies, Inc., which began construction in early 1931. Such a large concrete structure had never been built before, and some of the techniques used were unproven. The torrid summer weather and lack of facilities near the site also presented difficulties. Nevertheless, Six Companies turned the dam over to the federal government on March 1, 1936, more than two years ahead of schedule. Hoover Dam impounds Lake Mead and is located near Boulder City, Nevada, a municipality originally constructed for workers on the construction project, about southeast of Las Vegas, Nevada. The dam's generators provide power for public and private utilities in Nevada, Arizona, and California. Hoover Dam is a major tourist attraction, with 7 million tourists a year. The heavily traveled U.S. Route 93 (US 93) ran along the dam's crest until October 2010, when the Hoover Dam Bypass opened. Background Search for resources As the United States developed the Southwest, the Colorado River was seen as a potential source of irrigation water. An initial attempt at diverting the river for irrigation purposes occurred in the late 1890s, when land speculator William Beatty built the Alamo Canal just north of the Mexican border; the canal dipped into Mexico before running to a desolate area Beatty named the Imperial Valley. Though water from the Alamo Canal allowed for the widespread settlement of the valley, the canal proved expensive to operate. After a catastrophic breach that caused the Colorado River to fill the Salton Sea, the Southern Pacific Railroad spent $3 million in 1906–07 to stabilize the waterway, an amount it hoped in vain that it would be reimbursed for by the federal government. Even after the waterway was stabilized, it proved unsatisfactory because of constant disputes with landowners on the Mexican side of the border. As the technology of electric power transmission improved, the Lower Colorado was considered for its hydroelectric-power potential. In 1902, the Edison Electric Company of Los Angeles surveyed the river in the hope of building a rock dam which could generate . However, at the time, the limit of transmission of electric power was , and there were few customers (mostly mines) within that limit. Edison allowed land options it held on the river to lapse—including an option for what became the site of Hoover Dam. In the following years, the Bureau of Reclamation (BOR), known as the Reclamation Service at the time, also considered the Lower Colorado as the site for a dam. 
Service chief Arthur Powell Davis proposed using dynamite to collapse the walls of Boulder Canyon, north of the eventual dam site, into the river. The river would carry off the smaller pieces of debris, and a dam would be built incorporating the remaining rubble. In 1922, after considering it for several years, the Reclamation Service finally rejected the proposal, citing doubts about the unproven technique and questions as to whether it would, in fact, save money. Planning and agreements In 1922, the Reclamation Service presented a report calling for the development of a dam on the Colorado River for flood control and electric power generation. The report was principally authored by Davis and was called the Fall-Davis report after Interior Secretary Albert Fall. The Fall-Davis report cited use of the Colorado River as a federal concern because the river's basin covered several states, and the river eventually entered Mexico. Though the Fall-Davis report called for a dam "at or near Boulder Canyon", the Reclamation Service (which was renamed the Bureau of Reclamation the following year) found that canyon unsuitable. One potential site at Boulder Canyon was bisected by a geologic fault; two others were so narrow there was no space for a construction camp at the bottom of the canyon or for a spillway. The Service investigated Black Canyon and found it ideal; a railway could be laid from the railhead in Las Vegas to the top of the dam site. Despite the site change, the dam project was referred to as the "Boulder Canyon Project". With little guidance on water allocation from the Supreme Court, proponents of the dam feared endless litigation. Delph Carpenter, a Colorado attorney, proposed that the seven states which fell within the river's basin (California, Nevada, Arizona, Utah, New Mexico, Colorado and Wyoming) form an interstate compact, with the approval of Congress. Such compacts were authorized by Article I of the United States Constitution but had never been concluded among more than two states. In 1922, representatives of seven states met with then-Secretary of Commerce Herbert Hoover. Initial talks produced no result, but when the Supreme Court handed down the Wyoming v. Colorado decision undermining the claims of the upstream states, they became anxious to reach an agreement. The resulting Colorado River Compact was signed on November 24, 1922. Legislation to authorize the dam was introduced repeatedly by two California Republicans, Representative Phil Swing and Senator Hiram Johnson, but representatives from other parts of the country considered the project hugely expensive and one that would mostly benefit California. The 1927 Mississippi flood made Midwestern and Southern congressmen and senators more sympathetic toward the dam project. On March 12, 1928, the failure of the St. Francis Dam, constructed by the city of Los Angeles, caused a disastrous flood that killed up to 600 people. As that dam was of a curved-gravity type, similar in design to the arch-gravity dam proposed for Black Canyon, opponents claimed that the Black Canyon dam's safety could not be guaranteed. Congress authorized a board of engineers to review plans for the proposed dam. The Colorado River Board found the project feasible, but warned that should the dam fail, every downstream Colorado River community would be destroyed, and that the river might change course and empty into the Salton Sea.
The Board cautioned: "To avoid such possibilities, the proposed dam should be constructed on conservative if not ultra-conservative lines." On December 21, 1928, President Coolidge signed the bill authorizing the dam. The Boulder Canyon Project Act appropriated $165 million for the project along with the downstream Imperial Dam and All-American Canal, a replacement for Beatty's canal entirely on the U.S. side of the border. It also permitted the compact to go into effect when at least six of the seven states approved it. This occurred on March 6, 1929, with Utah's ratification; Arizona did not approve it until 1944. Design, preparation and contracting Even before Congress approved the Boulder Canyon Project, the Bureau of Reclamation was considering what kind of dam should be used. Officials eventually decided on a massive concrete arch-gravity dam, the design of which was overseen by the Bureau's chief design engineer John L. Savage. The monolithic dam would be thick at the bottom and thin near the top and would present a convex face towards the water above the dam. The curving arch of the dam would transmit the water's force into the abutments, in this case the rock walls of the canyon. The wedge-shaped dam would be thick at the bottom, narrowing to at the top, leaving room for a highway connecting Nevada and Arizona. On January 10, 1931, the Bureau made the bid documents available to interested parties, at five dollars a copy. The government was to provide the materials, and the contractor was to prepare the site and build the dam. The dam was described in minute detail, covering 100 pages of text and 76 drawings. A $2 million bid bond was to accompany each bid; the winner would have to post a $5 million performance bond. The contractor had seven years to build the dam, or penalties would ensue. The Wattis Brothers, heads of the Utah Construction Company, were interested in bidding on the project, but lacked the money for the performance bond. They lacked sufficient resources even in combination with their longtime partners, Morrison-Knudsen, which employed the nation's leading dam builder, Frank Crowe. They formed a joint venture to bid for the project with Pacific Bridge Company of Portland, Oregon; Henry J. Kaiser & W. A. Bechtel Company of San Francisco; MacDonald & Kahn Ltd. of Los Angeles; and the J.F. Shea Company of Portland, Oregon. The joint venture was called Six Companies, Inc. as Bechtel and Kaiser were considered one company for purposes of Six in the name. The name was descriptive and was an inside joke among the San Franciscans in the bid, where "Six Companies" was also a Chinese benevolent association in the city. There were three valid bids, and Six Companies' bid of $48,890,955 was the lowest, within $24,000 of the confidential government estimate of what the dam would cost to build, and five million dollars less than the next-lowest bid. The city of Las Vegas had lobbied hard to be the headquarters for the dam construction, closing its many speakeasies when the decision maker, Secretary of the Interior Ray Wilbur, came to town. Instead, Wilbur announced in early 1930 that a model city was to be built in the desert near the dam site. This town became known as Boulder City, Nevada. Construction of a rail line joining Las Vegas and the dam site began in September 1930. Construction Labor force Soon after the dam was authorized, increasing numbers of unemployed people converged on southern Nevada. 
Las Vegas, then a small city of some 5,000, saw between 10,000 and 20,000 unemployed descend on it. A government camp was established for surveyors and other personnel near the dam site; this soon became surrounded by a squatters' camp. Known as McKeeversville, the camp was home to men hoping for work on the project, together with their families. Another camp, on the flats along the Colorado River, was officially called Williamsville, but was known to its inhabitants as "Ragtown". When construction began, Six Companies hired large numbers of workers, with more than 3,000 on the payroll by 1932 and with employment peaking at 5,251 in July 1934. "Mongolian" (Chinese) labor was prevented by the construction contract, while the number of black people employed by Six Companies never exceeded thirty, mostly lowest-pay-scale laborers in a segregated crew, who were issued separate water buckets. As part of the contract, Six Companies, Inc. was to build Boulder City to house the workers. The original timetable called for Boulder City to be built before the dam project began, but President Hoover ordered work on the dam to begin in March 1931 rather than in October. The company built bunkhouses, attached to the canyon wall, to house 480 single men at what became known as River Camp. Workers with families were left to provide their own accommodations until Boulder City could be completed, and many lived in Ragtown. The site of Hoover Dam endures extremely hot weather, and the summer of 1931 was especially torrid, with the daytime high averaging . Sixteen workers and other riverbank residents died of heat prostration between June 25 and July 26, 1931. The Industrial Workers of the World (IWW or "Wobblies"), though much-reduced from their heyday as militant labor organizers in the early years of the century, hoped to unionize the Six Companies workers by capitalizing on their discontent. They sent eleven organizers, several of whom were arrested by Las Vegas police. On August 7, 1931, the company cut wages for all tunnel workers. Although the workers sent the organizers away, not wanting to be associated with the "Wobblies", they formed a committee to represent them with the company. The committee drew up a list of demands that evening and presented them to Crowe the following morning. He was noncommittal. The workers hoped that Crowe, the general superintendent of the job, would be sympathetic; instead, he gave a scathing interview to a newspaper, describing the workers as "malcontents". On the morning of the 9th, Crowe met with the committee and told them that management refused their demands, was stopping all work, and was laying off the entire work force, except for a few office workers and carpenters. The workers were given until 5 p.m. to vacate the premises. Concerned that a violent confrontation was imminent, most workers took their paychecks and left for Las Vegas to await developments. Two days later, the remainder were talked into leaving by law enforcement. On August 13, the company began hiring workers again, and two days later, the strike was called off. While the workers received none of their demands, the company guaranteed there would be no further reductions in wages. Living conditions began to improve as the first residents moved into Boulder City in late 1931. A second labor action took place in July 1935, as construction on the dam wound down. When a Six Companies manager altered working times to force workers to take lunch on their own time, workers responded with a strike. 
Emboldened by Crowe's reversal of the lunch decree, workers raised their demands to include a $1-per-day raise. The company agreed to ask the Federal government to supplement the pay, but no money was forthcoming from Washington. The strike ended. River diversion Before the dam could be built, the Colorado River needed to be diverted away from the construction site. To accomplish this, four diversion tunnels were driven through the canyon walls, two on the Nevada side and two on the Arizona side. These tunnels were in diameter. Their combined length was nearly 16,000 ft, or more than . The contract required these tunnels to be completed by October 1, 1933, with a $3,000-per-day fine to be assessed for any delay. To meet the deadline, Six Companies had to complete work by early 1933, since only in late fall and winter was the water level in the river low enough to safely divert. Tunneling began at the lower portals of the Nevada tunnels in May 1931. Shortly afterward, work began on two similar tunnels in the Arizona canyon wall. In March 1932, work began on lining the tunnels with concrete. First the base, or invert, was poured. Gantry cranes, running on rails through the entire length of each tunnel were used to place the concrete. The sidewalls were poured next. Movable sections of steel forms were used for the sidewalls. Finally, using pneumatic guns, the overheads were filled in. The concrete lining is thick, reducing the finished tunnel diameter to . The river was diverted into the two Arizona tunnels on November 13, 1932; the Nevada tunnels were kept in reserve for high water. This was done by exploding a temporary cofferdam protecting the Arizona tunnels while at the same time dumping rubble into the river until its natural course was blocked. Following the completion of the dam, the entrances to the two outer diversion tunnels were sealed at the opening and halfway through the tunnels with large concrete plugs. The downstream halves of the tunnels following the inner plugs are now the main bodies of the spillway tunnels. The inner diversion tunnels were plugged at approximately one-third of their length, beyond which they now carry steel pipes connecting the intake towers to the power plant and outlet works. The inner tunnels' outlets are equipped with gates that can be closed to drain the tunnels for maintenance. Groundworks, rock clearance and grout curtain To protect the construction site from the Colorado River and to facilitate the river's diversion, two cofferdams were constructed. Work on the upper cofferdam began in September 1932, even though the river had not yet been diverted. The cofferdams were designed to protect against the possibility of the river's flooding a site at which two thousand men might be at work, and their specifications were covered in the bid documents in nearly as much detail as the dam itself. The upper cofferdam was high, and thick at its base, thicker than the dam itself. It contained of material. When the cofferdams were in place and the construction site was drained of water, excavation for the dam foundation began. For the dam to rest on solid rock, it was necessary to remove accumulated erosion soils and other loose materials in the riverbed until sound bedrock was reached. Work on the foundation excavations was completed in June 1933. During this excavation, approximately of material was removed. Since the dam was an arch-gravity type, the side-walls of the canyon would bear the force of the impounded lake. 
Therefore, the side-walls were also excavated to reach virgin rock, as weathered rock might provide pathways for water seepage. Shovels for the excavation came from the Marion Power Shovel Company. The men who removed this rock were called "high scalers". While suspended from the top of the canyon with ropes, the high-scalers climbed down the canyon walls and removed the loose rock with jackhammers and dynamite. Falling objects were the most common cause of death on the dam site; the high scalers' work thus helped ensure worker safety. One high scaler was able to save a life in a more direct manner: when a government inspector lost his grip on a safety line and began tumbling down a slope towards almost certain death, a high scaler was able to intercept him and pull him into the air. The construction site had become a magnet for tourists. The high scalers were prime attractions and showed off for the watchers. The high scalers received considerable media attention, with one worker dubbed the "Human Pendulum" for swinging co-workers (and, at other times, cases of dynamite) across the canyon. To protect themselves against falling objects, some high scalers dipped cloth hats in tar and allowed them to harden. When workers wearing such headgear were struck hard enough to inflict broken jaws, they sustained no skull damage. Six Companies ordered thousands of what initially were called "hard boiled hats" (later "hard hats") and strongly encouraged their use. The cleared, underlying rock foundation of the dam site was reinforced with grout, forming a grout curtain. Holes were driven into the walls and base of the canyon, as deep as into the rock, and any cavities encountered were to be filled with grout. This was done to stabilize the rock, to prevent water from seeping past the dam through the canyon rock, and to limit "uplift"—upward pressure from water seeping under the dam. The workers were under severe time constraints due to the beginning of the concrete pour. When they encountered hot springs or cavities too large to readily fill, they moved on without resolving the problem. A total of 58 of the 393 holes were incompletely filled. After the dam was completed and the lake began to fill, large numbers of significant leaks caused the Bureau of Reclamation to examine the situation. It found that the work had been incompletely done, and was based on less than a full understanding of the canyon's geology. New holes were drilled from inspection galleries inside the dam into the surrounding bedrock. It took nine years (1938–47) under relative secrecy to complete the supplemental grout curtain. Concrete The first concrete was poured into the dam on June 6, 1933, 18 months ahead of schedule. Since concrete heats and contracts as it cures, the potential for uneven cooling and contraction of the concrete posed a serious problem. Bureau of Reclamation engineers calculated that if the dam were to be built in a single continuous pour, the concrete would take 125 years to cool, and the resulting stresses would cause the dam to crack and crumble. Instead, the ground where the dam would rise was marked with rectangles, and concrete blocks in columns were poured, some as large as and high. Each five-foot form contained a set of steel pipes; cool river water would be poured through the pipes, followed by ice-cold water from a refrigeration plant. When an individual block had cured and had stopped contracting, the pipes were filled with grout. 
Grout was also used to fill the hairline spaces between columns, which were grooved to increase the strength of the joints. The concrete was delivered in huge steel buckets and almost 7 feet in diameter; Crowe was awarded two patents for their design. These buckets, which weighed when full, were filled at two massive concrete plants on the Nevada side, and were delivered to the site in special railcars. The buckets were then suspended from aerial cableways which were used to deliver the bucket to a specific column. As the required grade of aggregate in the concrete differed depending on placement in the dam (from pea-sized gravel to stones), it was vital that the bucket be maneuvered to the proper column. When the bottom of the bucket opened up, disgorging of concrete, a team of men worked it throughout the form. Although there are myths that men were caught in the pour and are entombed in the dam to this day, each bucket deepened the concrete in a form by only , and Six Companies engineers would not have permitted a flaw caused by the presence of a human body. A total of of concrete was used in the dam before concrete pouring ceased on May 29, 1935. In addition, were used in the power plant and other works. More than of cooling pipes were placed within the concrete. Overall, there is enough concrete in the dam to pave a two-lane highway from San Francisco to New York. Concrete cores were removed from the dam for testing in 1995; they showed that "Hoover Dam's concrete has continued to slowly gain strength" and the dam is composed of a "durable concrete having a compressive strength exceeding the range typically found in normal mass concrete". Hoover Dam concrete is not subject to alkali–silica reaction (ASR), as the Hoover Dam builders happened to use nonreactive aggregate, unlike that at downstream Parker Dam, where ASR has caused measurable deterioration. Dedication and completion With most work finished on the dam itself (the powerhouse remained uncompleted), a formal dedication ceremony was arranged for September 30, 1935, to coincide with a western tour being made by President Franklin D. Roosevelt. The morning of the dedication, it was moved forward three hours from 2 p.m. Pacific time to 11 a.m.; this was done because Secretary of the Interior Harold L. Ickes had reserved a radio slot for the President for 2 p.m. but officials did not realize until the day of the ceremony that the slot was for 2 p.m. Eastern Time. Despite the change in the ceremony time, and temperatures of , 10,000 people were present for the President's speech, in which he avoided mentioning the name of former President Hoover, who was not invited to the ceremony. To mark the occasion, a three-cent stamp was issued by the United States Post Office Department—bearing the name "Boulder Dam", the official name of the dam between 1933 and 1947. After the ceremony, Roosevelt made the first visit by any American president to Las Vegas. Most work had been completed by the dedication, and Six Companies negotiated with the government through late 1935 and early 1936 to settle all claims and arrange for the formal transfer of the dam to the Federal Government. The parties came to an agreement and on March 1, 1936, Secretary Ickes formally accepted the dam on behalf of the government. Six Companies was not required to complete work on one item, a concrete plug for one of the bypass tunnels, as the tunnel had to be used to take in irrigation water until the powerhouse went into operation. 
Construction deaths There were 112 deaths reported as associated with the construction of the dam. The first was Bureau of Reclamation employee Harold Connelly who died on May 15, 1921, after falling from a barge while surveying the Colorado River for an ideal spot for the dam. Surveyor John Gregory ("J.G.") Tierney, who drowned on December 20, 1922, in a flash flood while looking for an ideal spot for the dam was the second person. The official list's final death occurred on December 20, 1935, when Patrick Tierney, electrician's helper and the son of J.G. Tierney, fell from one of the two Arizona-side intake towers. Included in the fatality list are three workers who took their own lives on site, one in 1932 and two in 1933. Of the 112 fatalities, 91 were Six Companies employees, three were Bureau of Reclamation employees, and one was a visitor to the site; the remainder were employees of various contractors not part of Six Companies. Ninety-six of the deaths occurred during construction at the site. Not included in the official number of fatalities were deaths that were recorded as pneumonia. Workers alleged that this diagnosis was a cover for death from carbon monoxide poisoning (brought on by the use of gasoline-fueled vehicles in the diversion tunnels), and a classification used by Six Companies to avoid paying compensation claims. The site's diversion tunnels frequently reached , enveloped in thick plumes of vehicle exhaust gases. A total of 42 workers were recorded as having died from pneumonia and were not included in the above total; none were listed as having died from carbon monoxide poisoning. No deaths of non-workers from pneumonia were recorded in Boulder City during the construction period. Architectural style The initial plans for the facade of the dam, the power plant, the outlet tunnels and ornaments clashed with the modern look of an arch dam. The Bureau of Reclamation, more concerned with the dam's functionality, adorned it with a Gothic-inspired balustrade and eagle statues. This initial design was criticized by many as being too plain and unremarkable for a project of such immense scale, so Los Angeles-based architect Gordon B. Kaufmann, then the supervising architect to the Bureau of Reclamation, was brought in to redesign the exteriors. Kaufmann greatly streamlined the design and applied an elegant Art Deco style to the entire project. He designed sculpted turrets rising seamlessly from the dam face and clock faces on the intake towers set for the time in Nevada and Arizona—both states are in different time zones, but since Arizona does not observe daylight saving time, the clocks display the same time for more than half the year. At Kaufmann's request, Denver artist Allen Tupper True was hired to handle the design and decoration of the walls and floors of the new dam. True's design scheme incorporated motifs of the Navajo and Pueblo tribes of the region. Although some were initially opposed to these designs, True was given the go-ahead and was officially appointed consulting artist. With the assistance of the National Laboratory of Anthropology, True researched authentic decorative motifs from Indian sand paintings, textiles, baskets and ceramics. The images and colors are based on Native American visions of rain, lightning, water, clouds, and local animals—lizards, serpents, birds—and on the Southwestern landscape of stepped mesas. 
In these works, which are integrated into the walkways and interior halls of the dam, True also reflected on the machinery of the operation, making the symbolic patterns appear both ancient and modern. With the agreement of Kaufmann and the engineers, True also devised for the pipes and machinery an innovative color-coding which was implemented throughout all BOR projects. True's consulting artist job lasted through 1942; it was extended so he could complete design work for the Parker, Shasta and Grand Coulee dams and power plants. True's work on the Hoover Dam was humorously referred to in a poem published in The New Yorker, part of which read, "lose the spark, and justify the dream; but also worthy of remark will be the color scheme". Complementing Kaufmann and True's work, sculptor Oskar J. W. Hansen designed many of the sculptures on and around the dam. His works include the monument of dedication plaza, a plaque to memorialize the workers killed and the bas-reliefs on the elevator towers. In his words, Hansen wanted his work to express "the immutable calm of intellectual resolution, and the enormous power of trained physical strength, equally enthroned in placid triumph of scientific accomplishment", because "[t]he building of Hoover Dam belongs to the sagas of the daring." Hansen's dedication plaza, on the Nevada abutment, contains a sculpture of two winged figures flanking a flagpole. Surrounding the base of the monument is a terrazzo floor embedded with a "star map". The map depicts the Northern Hemisphere sky at the moment of President Roosevelt's dedication of the dam. This is intended to help future astronomers, if necessary, calculate the exact date of dedication. The bronze figures, dubbed Winged Figures of the Republic, were both formed in a continuous pour. To put such large bronzes into place without marring the highly polished bronze surface, they were placed on ice and guided into position as the ice melted. Hansen's bas-relief on the Nevada elevator tower depicts the benefits of the dam: flood control, navigation, irrigation, water storage, and power. The bas-relief on the Arizona elevator depicts, in his words, "the visages of those Indian tribes who have inhabited mountains and plains from ages distant." Operation Power plant and water demands Excavation for the powerhouse was carried out simultaneously with the excavation for the dam foundation and abutments. The excavation of this U-shaped structure located at the downstream toe of the dam was completed in late 1933 with the first concrete placed in November 1933. Filling of Lake Mead began February 1, 1935, even before the last of the concrete was poured that May. The powerhouse was one of the projects uncompleted at the time of the formal dedication on September 30, 1935; a crew of 500 men remained to finish it and other structures. To make the powerhouse roof bombproof, it was constructed of layers of concrete, rock, and steel with a total thickness of about , topped with layers of sand and tar. In the latter half of 1936, water levels in Lake Mead were high enough to permit power generation, and the first three Allis Chalmers built Francis turbine-generators, all on the Nevada side, began operating. In March 1937, one more Nevada generator went online and the first Arizona generator by August. By September 1939, four more generators were operating, and the dam's power plant became the largest hydroelectricity facility in the world. 
The final generator was not placed in service until 1961, bringing the maximum generating capacity to 1,345 megawatts at the time. Original plans called for 16 large generators, eight on each side of the river, but two smaller generators were installed instead of one large one on the Arizona side, for a total of 17. The smaller generators were used to serve smaller communities at a time when the output of each generator was dedicated to a single municipality, before the dam's total power output was placed on the grid and made arbitrarily distributable. Before water from Lake Mead reaches the turbines, it enters the intake towers and then four gradually narrowing penstocks which funnel the water down towards the powerhouse. The intakes provide a maximum hydraulic head (water pressure) of as the water reaches a speed of about . The entire flow of the Colorado River usually passes through the turbines. The spillways and outlet works (jet-flow gates) are rarely used. The jet-flow gates, located in concrete structures above the river and also at the outlets of the inner diversion tunnels at river level, may be used to divert water around the dam in emergency or flood conditions, but have never done so, and in practice are used only to drain water from the penstocks for maintenance. Following an uprating project from 1986 to 1993, the total gross power rating for the plant, including two 2.4-megawatt Pelton turbine-generators that power Hoover Dam's own operations, is a maximum of 2,080 megawatts. The annual generation of Hoover Dam varies. The maximum net generation was 10.348 TWh in 1984, and the minimum since 1940 was 2.648 TWh in 1956. The average power generated was 4.2 TWh/year for 1947–2008. In 2015, the dam generated 3.6 TWh. The amount of electricity generated by Hoover Dam has been decreasing along with the falling water level in Lake Mead, due to the prolonged drought since 2000 and high demand for the Colorado River's water. By 2014, its generating capacity had been downrated by 23% to 1,592 MW, and the dam was providing power only during periods of peak demand. Lake Mead fell to a new record low elevation of on July 1, 2016, before beginning to rebound slowly. Under its original design, the dam would no longer be able to generate power once the water level fell below , which might have occurred in 2017 had water restrictions not been enforced. To lower the minimum power pool elevation, five wide-head turbines, designed to work efficiently with less flow, were installed. Water levels were maintained at over in 2018 and 2019, but fell to a new record low of on June 10, 2021, and were projected to fall below by the end of 2021. Control of water was the primary concern in the building of the dam. Power generation has allowed the dam project to be self-sustaining: proceeds from the sale of power repaid the 50-year construction loan, and those revenues also finance the multimillion-dollar yearly maintenance budget. Power is generated in step with and only with the release of water in response to downstream water demands. Lake Mead and downstream releases from the dam also provide water for both municipal and irrigation uses. Water released from the Hoover Dam eventually reaches several canals. The Colorado River Aqueduct and Central Arizona Project branch off Lake Havasu, while the All-American Canal is supplied by the Imperial Dam. In total, water from Lake Mead serves 18 million people in Arizona, Nevada, and California and supplies the irrigation of over of land.
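The dependence of output on hydraulic head and flow described above follows the standard hydropower relation \(P \approx \eta\,\rho\,g\,Q\,H\). The Python sketch below evaluates it as a rough order-of-magnitude check; the efficiency, flow, and head figures are round illustrative assumptions, not Bureau of Reclamation data.

```python
# Rough hydroelectric-power estimate: P = eta * rho * g * Q * H.
# All numbers below are round, illustrative assumptions, not official figures.

RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def hydro_power_mw(flow_m3_per_s, head_m, efficiency=0.9):
    """Electrical output in megawatts for a given flow and hydraulic head."""
    return efficiency * RHO_WATER * G * flow_m3_per_s * head_m / 1e6

if __name__ == "__main__":
    # For example, an assumed ~150 m of head and ~1,000 m^3/s of flow give on the
    # order of 1.3 GW, the same order of magnitude as the plant's rated capacity.
    print(f"{hydro_power_mw(1000, 150):.0f} MW")
```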
In 2018, the Los Angeles Department of Water and Power (LADWP) proposed a $3 billion pumped-storage hydroelectricity project—a "battery" of sorts—that would use wind and solar power to recirculate water back up to Lake Mead from a pumping station downriver. Power distribution Electricity from the dam's powerhouse was originally sold pursuant to a fifty-year contract, authorized by Congress in 1934, which ran from 1937 to 1987. In 1984, Congress passed a new statute which set power allocations to southern California, Arizona, and Nevada from the dam from 1987 to 2017. The powerhouse was run under the original authorization by the Los Angeles Department of Water and Power and Southern California Edison; in 1987, the Bureau of Reclamation assumed control. In 2011, Congress enacted legislation extending the current contracts until 2067, after setting aside 5% of Hoover Dam's power for sale to Native American tribes, electric cooperatives, and other entities. The new arrangement began on October 1, 2017. The Bureau of Reclamation has published the allocation of the energy generated under the contracts that ended in 2017. Spillways The dam is protected against over-topping by two spillways. The spillway entrances are located behind each dam abutment, running roughly parallel to the canyon walls. The spillway entrance arrangement forms a classic side-flow weir, with each spillway containing four steel-drum gates. Each gate can be operated manually or automatically. Gates are raised and lowered depending on water levels in the reservoir and flood conditions. The gates cannot entirely prevent water from entering the spillways but can hold back additional lake level. Water flowing over the spillways falls dramatically into the spillway tunnels before connecting to the outer diversion tunnels and reentering the main river channel below the dam. This complex spillway entrance arrangement, combined with the large elevation drop from the top of the reservoir to the river below, was a difficult engineering problem and posed numerous design challenges. Each spillway's capacity was empirically verified in post-construction tests in 1941. The large spillway tunnels have only been used twice, for testing in 1941 and because of flooding in 1983. Both times, when inspecting the tunnels after the spillways were used, engineers found major damage to the concrete linings and underlying rock. The 1941 damage was attributed to a slight misalignment of the tunnel invert (or base), which caused cavitation, a phenomenon in fast-flowing liquids in which vapor bubbles collapse with explosive force. In response to this finding, the tunnels were patched with special heavy-duty concrete and the surface of the concrete was polished mirror-smooth. The spillways were modified in 1947 by adding flip buckets, which both slow the water and decrease the spillway's effective capacity, in an attempt to eliminate conditions thought to have contributed to the 1941 damage. The 1983 damage, also due to cavitation, led to the installation of aerators in the spillways. Tests at Grand Coulee Dam showed that the technique worked, in principle. Roadway and tourism There are two lanes for automobile traffic across the top of the dam, which formerly served as the Colorado River crossing for U.S. Route 93. In the wake of the September 11 terrorist attacks, authorities expressed security concerns and the Hoover Dam Bypass project was expedited.
Pending the completion of the bypass, restricted traffic was permitted over Hoover Dam. Some types of vehicles were inspected prior to crossing the dam while semi-trailer trucks, buses carrying luggage, and enclosed-box trucks over long were not allowed on the dam at all, and were diverted to U.S. Route 95 or Nevada State Routes 163/68. The four-lane Hoover Dam Bypass opened on October 19, 2010. It includes a composite steel and concrete arch bridge, the Mike O'Callaghan–Pat Tillman Memorial Bridge, downstream from the dam. With the opening of the bypass, through traffic is no longer allowed across Hoover Dam; dam visitors are allowed to use the existing roadway to approach from the Nevada side and cross to parking lots and other facilities on the Arizona side. Hoover Dam opened for tours in 1937 after its completion but following Japan's attack on Pearl Harbor on December 7, 1941, it was closed to the public when the United States entered World War II, during which only authorized traffic, in convoys, was permitted. After the war, it reopened September 2, 1945, and by 1953, annual attendance had risen to 448,081. The dam closed on November 25, 1963, and March 31, 1969, days of mourning in remembrance of Presidents Kennedy and Eisenhower. In 1995, a new visitors' center was built, and the following year, visits exceeded one million for the first time. The dam closed again to the public on September 11, 2001; modified tours were resumed in December and a new "Discovery Tour" was added the following year. Today, nearly a million people per year take the tours of the dam offered by the Bureau of Reclamation. The government's increased security concerns have led to the exclusion of visitors from most of the interior structures. As a result, few of True's decorations can now be seen by visitors. Visitors can only purchase tickets on-site and have the options of a guided tour of the whole facility or only the power plant area. The only self-guided tour option is for the visitor center itself, where visitors can view various exhibits and enjoy a 360-degree view of the dam. Environmental impact The changes in water flow and use caused by Hoover Dam's construction and operation have had a large impact on the Colorado River Delta. The construction of the dam has been implicated in causing the decline of this estuarine ecosystem. For six years after the construction of the dam, while Lake Mead filled, virtually no water reached the mouth of the river. The delta's estuary, which once had a freshwater-saltwater mixing zone stretching south of the river's mouth, was turned into an inverse estuary where the level of salinity was higher close to the river's mouth. The Colorado River had experienced natural flooding before the construction of the Hoover Dam. The dam eliminated the natural flooding, threatening many species adapted to the flooding, including both plants and animals. The construction of the dam devastated the populations of native fish in the river downstream from the dam. Four species of fish native to the Colorado River, the Bonytail chub, Colorado pikeminnow, Humpback chub, and Razorback sucker, are listed as endangered. Naming controversy During the years of lobbying leading up to the passage of legislation authorizing the dam in 1928, the press generally referred to the dam as "Boulder Dam" or as "Boulder Canyon Dam", even though the proposed site had shifted to Black Canyon. The Boulder Canyon Project Act of 1928 (BCPA) never mentioned a proposed name or title for the dam. 
The BCPA merely allows the government to "construct, operate, and maintain a dam and incidental works in the main stream of the Colorado River at Black Canyon or Boulder Canyon". When Secretary of the Interior Ray Wilbur spoke at the ceremony starting the building of the railway between Las Vegas and the dam site on September 17, 1930, he named the dam "Hoover Dam", citing a tradition of naming dams after Presidents, though none had been so honored during their terms of office. Wilbur justified his choice on the ground that Hoover was "the great engineer whose vision and persistence ... has done so much to make [the dam] possible". One writer complained in response that "the Great Engineer had quickly drained, ditched, and dammed the country." After Hoover's election defeat in 1932 and the accession of the Roosevelt administration, Secretary Ickes ordered on May 13, 1933, that the dam be referred to as Boulder Dam. Ickes stated that Wilbur had been imprudent in naming the dam after a sitting president, that Congress had never ratified his choice, and that it had long been referred to as Boulder Dam. Unknown to the general public, Attorney General Homer Cummings informed Ickes that Congress had indeed used the name "Hoover Dam" in five different bills appropriating money for construction of the dam. The official status this conferred to the name "Hoover Dam" had been noted on the floor of the House of Representatives by Congressman Edward T. Taylor of Colorado on December 12, 1930, but was likewise ignored by Ickes. When Ickes spoke at the dedication ceremony on September 30, 1935, he was determined, as he recorded in his diary, "to try to nail down for good and all the name Boulder Dam." At one point in the speech, he spoke the words "Boulder Dam" five times within thirty seconds. Further, he suggested that if the dam were to be named after any one person, it should be for California Senator Hiram Johnson, a lead sponsor of the authorizing legislation. Roosevelt also referred to the dam as Boulder Dam, and the Republican-leaning Los Angeles Times, which at the time of Ickes' name change had run an editorial cartoon showing Ickes ineffectively chipping away at an enormous sign "HOOVER DAM", reran it showing Roosevelt reinforcing Ickes, but having no greater success. In the following years, the name "Boulder Dam" failed to fully take hold, with many Americans using both names interchangeably and mapmakers divided as to which name should be printed. Memories of the Great Depression faded, and Hoover to some extent rehabilitated himself through good works during and after World War II. In 1947, a bill passed both Houses of Congress unanimously restoring the name "Hoover Dam." Ickes, who was by then a private citizen, opposed the change, stating, "I didn't know Hoover was that small a man to take credit for something he had nothing to do with." Recognition Hoover Dam was recognized as a National Historic Civil Engineering Landmark in 1984. It was listed on the National Register of Historic Places in 1981 and was designated a National Historic Landmark in 1985, cited for its engineering innovations.
Technology
Hydraulic infrastructure
null
14313
https://en.wikipedia.org/wiki/Hair
Hair
Hair is a protein filament that grows from follicles found in the dermis. Hair is one of the defining characteristics of mammals. The human body, apart from areas of glabrous skin, is covered in follicles which produce thick terminal and fine vellus hair. Most common interest in hair is focused on hair growth, hair types, and hair care, but hair is also an important biomaterial primarily composed of protein, notably alpha-keratin. Attitudes towards different forms of hair, such as hairstyles and hair removal, vary widely across different cultures and historical periods, but it is often used to indicate a person's personal beliefs or social position, such as their age, gender, or religion. Overview Meaning The word "hair" usually refers to two distinct structures: the part beneath the skin, called the hair follicle, or, when pulled from the skin, the bulb or root. This organ is located in the dermis and maintains stem cells, which not only re-grow the hair after it falls out, but also are recruited to regrow skin after a wound. the hair shaft, which is the hard filamentous part that extends above the skin surface. It is made of multi-layered keratinized (dead) flat cells whose rope-like filaments provide structure and strength to it. The protein called keratin makes up most of its volume. A cross section of the hair shaft may be divided roughly into three zones. Hair fibers have a structure consisting of several layers, starting from the outside: the cuticle, which consists of several layers of flat, thin cells laid out overlapping one another as roof shingles the cortex, which contains the keratin bundles in cell structures that remain roughly rod-like the medulla, a disorganized and open area at the fiber's center Etymology The word "hair" is derived from and , in turn derived from and , with influence from . Both the Old English and Old Norse words derive from and are related to terms for hair in other Germanic languages such as , Dutch and , and . The now broadly obsolete word "fax" refers specifically to head hair and is found in compounds such as Fairfax and Halifax. It is derived from and is cognate with terms such as Old Norse and . Description Each strand of hair is made up of the medulla, cortex, and cuticle. The innermost region, the medulla, is an open and unstructured region that is not always present. The highly structural and organized cortex, or second of three layers of the hair, is the primary source of mechanical strength and water uptake. The cortex contains melanin, which colors the fiber based on the number, distribution and types of melanin granules. The melanin may be evenly spaced or cluster around the edges of the hair. The shape of the follicle determines the shape of the cortex, and the shape of the fiber is related to how straight or curly the hair is. People with straight hair have round hair fibers. Oval and other shaped fibers are generally more wavy or curly. The cuticle is the outer covering. Its complex structure slides as the hair swells and is covered with a single molecular layer of lipid that makes the hair repel water. The diameter of human hair varies from . 
Some of these characteristics in humans' head hair vary by race: people of mostly African ancestry tend to have hair with a diameter of 60–90 μm and a flat cross-section, while people of mostly European or Middle Eastern ancestry tend to have hair with a diameter of 70–100 μm and an oval cross-section, and people of mostly Asian or Native American ancestry tend to have hair with a diameter of 90–120 μm and a round cross-section. There are roughly two million small, tubular glands and sweat glands that produce watery fluids that cool the body by evaporation. The glands at the opening of the hair produce a fatty secretion that lubricates the hair. Hair growth begins inside the hair follicle. The only "living" portion of the hair is found in the follicle. The hair that is visible is the hair shaft, which exhibits no biochemical activity and is considered "dead". The base of a hair's root (the "bulb") contains the cells that produce the hair shaft. Other structures of the hair follicle include the oil producing sebaceous gland which lubricates the hair and the arrector pili muscles, which are responsible for causing hairs to stand up. In humans with little body hair, the effect results in goose bumps. Root of the hair The root of the hair ends in an enlargement, the hair bulb, which is whiter in color and softer in texture than the shaft and is lodged in a follicular involution of the epidermis called the hair follicle. The bulb of hair consists of fibrous connective tissue, glassy membrane, external root sheath, internal root sheath composed of epithelium stratum (Henle's layer) and granular stratum (Huxley's layer), cuticle, cortex and medulla. Natural color All natural hair colors are the result of two types of hair pigments. Both of these pigments are melanin types, produced inside the hair follicle and packed into granules found in the fibers. Eumelanin is the dominant pigment in brown hair and black hair, while pheomelanin is dominant in red hair. Blond hair is the result of having little pigmentation in the hair strand. Gray hair occurs when melanin production decreases or stops, while poliosis is white hair (and often the skin to which the hair is attached), typically in spots that never possessed melanin at all, or ceased for natural reasons, generally genetic, in the first years of life. Human hair growth Hair grows everywhere on the external body except for mucous membranes and glabrous skin, such as that found on the palms of the hands, soles of the feet, and lips. The body has different types of hair, including vellus hair and androgenic hair, each with its own type of cellular construction. The different construction gives the hair unique characteristics, serving specific purposes, mainly, warmth and protection. The three stages of hair growth are the anagen, catagen, and telogen phases. Each strand of hair on the human body is at its own stage of development. Once the cycle is complete, it restarts and a new strand of hair begins to form. The growth rate of hair varies from individual to individual depending on their age, genetic predisposition and a number of environmental factors. It is commonly stated that hair grows about 1 cm per month on average; however reality is more complex, since not all hair grows at once. Scalp hair was reported to grow between 0.6 cm and 3.36 cm per month. The growth rate of scalp hair somewhat depends on age (hair tends to grow more slowly with age), sex, and ethnicity. 
Thicker hair (>60 μm) generally grows faster (11.4 mm per month) than thinner (20–30 μm) hair (7.6 mm per month). It was previously thought that Caucasian hair grew more quickly than Asian hair and that the growth rate of women's hair was faster than that of men. However, more recent research has shown that the growth rate of hair in men and women does not significantly differ and that the hair of Chinese people grows more quickly than the hair of French Caucasians and West and Central Africans. The quantity of hair hovers in a certain range depending on hair colour. An average blonde person has 150,000 hairs, a brown-haired person has 110,000, a black-haired person has 100,000, and a redhead has 90,000. Hair growth stops after a human's death. Visible growth of hair on the dead body happens only because of the skin drying out due to water loss. The world record for the longest hair on a living person is held by Smita Srivastava of Uttar Pradesh, India. At 7 feet and 9 inches long, her hair broke a Guinness World Record in November 2023, having been grown for 32 years (a rough calculation of the average growth rate this implies is sketched at the end of this article). Texture Hair exists in a variety of textures. Three main aspects of hair texture are the curl pattern, volume, and consistency. All mammalian hair is composed of keratin, so the make-up of hair follicles is not the source of varying hair patterns. There is a range of theories pertaining to the curl patterns of hair. Scientists have come to believe that the shape of the hair shaft has an effect on the curliness of the individual's hair. A very round shaft allows for fewer disulfide bonds to be present in the hair strand. This means the bonds present are directly in line with one another, resulting in straight hair. The flatter the hair shaft becomes, the curlier hair gets, because the shape allows more cysteines to become compacted together, resulting in a bent shape that, with every additional disulfide bond, becomes curlier in form. As the hair follicle shape determines curl pattern, the hair follicle size determines thickness: as the circumference of the hair follicle expands, so does the thickness of the hair. An individual's hair volume, as a result, can be thin, normal, or thick. The consistency of hair can almost always be grouped into three categories: fine, medium, and coarse. This trait is determined by the hair follicle volume and the condition of the strand. Fine hair has the smallest circumference, coarse hair has the largest circumference, and medium hair is anywhere between the other two. Coarse hair has a more open cuticle than thin or medium hair, causing it to be the most porous. Classification systems There are various systems that people use to classify their curl patterns. Knowing an individual's hair type is a good start to knowing how to take care of it. There is not just one method of discovering one's hair type. Additionally, it is possible, and quite normal, to have more than one kind of hair type, for instance having a mixture of both type 3a and 3b curls. Andre Walker system The Andre Walker Hair Typing System is the most widely used system to classify hair. The system was created by Oprah Winfrey's hairstylist, Andre Walker. According to this system there are four types of hair: straight, wavy, curly, and kinky. Type 1 is straight hair, which reflects the most sheen and is also the most resilient of all of the hair types. It is hard to damage and immensely difficult to curl.
Because the sebum easily spreads from the scalp to the ends without curls or kinks to interrupt its path, it is the most oily hair texture of all. Type 2 is wavy hair, whose texture and sheen ranges somewhere between straight and curly hair. Wavy hair is also more likely to become frizzy than straight hair. While type A waves can easily alternate between straight and curly styles, type B and C wavy hair is resistant to styling. Type 3 is curly hair known to have an S-shape. The curl pattern may resemble a lowercase "s", uppercase "S", or sometimes an uppercase "Z" or lowercase "z". Lack of proper care causes less defined curls. Type 4 is kinky hair, which features a tightly coiled curl pattern (or no discernible curl pattern at all) that is often fragile with a very high density. This type of hair shrinks when wet and because it has fewer cuticle layers than other hair types it is more susceptible to damage. FIA system This is a method which classifies the hair by curl pattern, hair-strand thickness and overall hair volume. Composition Hair is mainly composed of keratin proteins and keratin-associated proteins (KRTAPs). The human genome encodes 54 different keratin proteins which are present in various amounts in hair. Similarly, humans encode more than 100 different KRTAPs which crosslink keratins in hair. The content of KRTAPs ranges from less than 3% in human hair to 30–40% in echidna quill. Functions Many mammals have fur and other hairs that serve different functions. Hair provides thermal regulation and camouflage for many animals; for others it provides signals to other animals such as warnings, mating, or other communicative displays; and for some animals hair provides defensive functions and, rarely, even offensive protection. Hair also has a sensory function, extending the sense of touch beyond the surface of the skin. Guard hairs give warnings that may trigger a recoiling reaction. Warmth While humans have developed clothing and other means of keeping warm, the hair found on the head serves primarily as a source of heat insulation and cooling (when sweat evaporates from soaked hair) as well as protection from ultra-violet radiation exposure. The function of hair in other locations is debated. Hats and coats are still required while doing outdoor activities in cold weather to prevent frostbite and hypothermia, but the hair on the human body does help to keep the internal temperature regulated. When the body is too cold, the arrector pili muscles found attached to hair follicles stand up, causing the hair in these follicles to do the same. These hairs then form a heat-trapping layer above the epidermis. This process is formally called piloerection, derived from the Latin words 'pilus' ('hair') and 'erectio' ('rising up'), but is more commonly known as 'having goose bumps' in English. This is more effective in other mammals whose fur fluffs up to create air pockets between hairs that insulate the body from the cold. The opposite actions occur when the body is too warm; the arrector muscles make the hair lie flat on the skin which allows heat to leave. Protection In some mammals, such as hedgehogs and porcupines, the hairs have been modified into hard spines or quills. These are covered with thick plates of keratin and serve as protection against predators. Thick hair such as that of the lion's mane and grizzly bear's fur do offer some protection from physical damages such as bites and scratches. 
Touch sense Displacement and vibration of hair shafts are detected by hair follicle nerve receptors and nerve receptors within the skin. Hairs can sense movements of air as well as touch by physical objects and they provide sensory awareness of the presence of ectoparasites. Some hairs, such as eyelashes, are especially sensitive to the presence of potentially harmful matter. Eyebrows and eyelashes The eyebrows provide moderate protection to the eyes from dirt, sweat and rain. They also play a key role in non-verbal communication by displaying emotions such as sadness, anger, surprise and excitement. In many other mammals, they contain much longer, whisker-like hairs that act as tactile sensors. The eyelash grows at the edges of the eyelid and protects the eye from dirt. The eyelash is to humans, camels, horses, ostriches etc., what whiskers are to cats; they are used to sense when dirt, dust, or any other potentially harmful object is too close to the eye. The eye reflexively closes as a result of this sensation. Eyebrows and eyelashes do not grow beyond a certain length (eyelashes are rarely more than 10 mm long). However, trichomegaly can cause the lashes to grow remarkably long and prominent (in some cases the upper lashes grow to 15 mm long). Evolution Hair has its origins in the common ancestor of mammals, the synapsids, about 300 million years ago. It is currently unknown at what stage the synapsids acquired mammalian characteristics such as body hair and mammary glands, as the fossils only rarely provide direct evidence for soft tissues. Skin impression of the belly and lower tail of a pelycosaur, possibly Haptodus shows the basal synapsid stock bore transverse rows of rectangular scutes, similar to those of a modern crocodile, so the age of acquirement of hair logically could not have been earlier than ≈299 ma, based on the current understanding of the animal's phylogeny. An exceptionally well-preserved skull of Estemmenosuchus, a therapsid from the Upper Permian, shows smooth, hairless skin with what appears to be glandular depressions, though as a semi-aquatic species it might not have been particularly useful to determine the integument of terrestrial species. The oldest undisputed known fossils showing unambiguous imprints of hair are the Callovian (late middle Jurassic) Castorocauda and several contemporary haramiyidans, both near-mammal cynodonts, giving the age as no later than ≈220 ma based on the modern phylogenetic understanding of these clades. More recently, studies on terminal Permian Russian coprolites may suggest that non-mammalian synapsids from that era had fur. If this is the case, these are the oldest hair remnants known, showcasing that fur occurred as far back as the latest Paleozoic. Some modern mammals have a special gland in front of each orbit used to preen the fur, called the harderian gland. Imprints of this structure are found in the skull of the small early mammals like Morganucodon, but not in their cynodont ancestors like Thrinaxodon. The hairs of the fur in modern animals are all connected to nerves, and so the fur also serves as a transmitter for sensory input. Fur could have evolved from sensory hair (whiskers). The signals from this sensory apparatus is interpreted in the neocortex, a section of the brain that expanded markedly in animals like Morganucodon and Hadrocodium. The more advanced therapsids could have had a combination of naked skin, whiskers, and scutes. A full pelage likely did not evolve until the therapsid-mammal transition. 
The more advanced, smaller therapsids could have had a combination of hair and scutes, a combination still found in some modern mammals, such as rodents and the opossum. The high interspecific variability of the size, color, and microstructure of hair often enables the identification of species based on single hair filaments. In varying degrees, most mammals have some skin areas without natural hair. On the human body, glabrous skin is found on the ventral portion of the fingers, palms, soles of feet and lips, which are all parts of the body most closely associated with interacting with the world around us, as are the labia minora and glans penis. There are four main types of mechanoreceptors in the glabrous skin of humans: Pacinian corpuscles, Meissner's corpuscles, Merkel's discs, and Ruffini corpuscles. The naked mole-rat (Heterocephalus glaber) has evolved skin lacking a general pelage (hair covering), yet has retained long, very sparsely scattered tactile hairs over its body. Glabrousness is a trait that may be associated with neoteny. Human hairlessness Evolutionary variation Primates are relatively hairless compared to other mammals, and Hominini such as chimpanzees have less dense hair than would be expected for a primate of their body size. Evolutionary biologists suggest that the genus Homo arose in East Africa approximately 2 million years ago. Part of this evolution was the development of endurance running and of venturing out during the hot times of the day, which required efficient thermoregulation through perspiration. The loss of heat through the evaporation of sweat is aided by air currents next to the skin surface, which are facilitated by the loss of body hair. Another factor in human evolution that also occurred in the prehistoric past was a preferential selection for neoteny, particularly in females. The idea that adult humans exhibit certain neotenous (juvenile) features, not evinced in the other great apes, is about a century old. Louis Bolk made a long list of such traits, and Stephen Jay Gould published a short list in Ontogeny and Phylogeny. In addition, paedomorphic characteristics in women are often acknowledged as desirable by men in developed countries. For instance, vellus hair is a juvenile characteristic. However, while men develop longer, coarser, thicker, and darker terminal hair through sexual differentiation, women do not, leaving their vellus hair visible. Texture Curly hair Jablonski asserts that head hair was evolutionarily advantageous for pre-humans to retain because it protected the scalp as they walked upright in the intense African (equatorial) UV light. While some might argue that, by this logic, humans should also express hairy shoulders because these body parts would putatively be exposed to similar conditions, the protection of the head, the seat of the brain that enabled humanity to become one of the most successful species on the planet (and which also is very vulnerable at birth), was arguably a more urgent issue (axillary hair in the underarms and groin was also retained as a sign of sexual maturity). Sometime during the gradual process by which Homo erectus began a transition from furry skin to the naked skin expressed by Homo sapiens, hair texture putatively gradually changed from straight hair (the condition of most mammals, including humanity's closest cousins—chimpanzees) to Afro-textured hair or 'kinky' (i.e. tightly coiled).
This argument assumes that curly hair better impedes the passage of UV light into the body relative to straight hair (thus curly or coiled hair would be particularly advantageous for light-skinned hominids living at the equator). It is substantiated by Iyengar's findings (1998) that UV light can enter into straight human hair roots (and thus into the body through the skin) via the hair shaft. Specifically, the results of that study suggest that this phenomenon resembles the passage of light through fiber optic tubes (which do not function as effectively when kinked or sharply curved or coiled). In this sense, when hominids (i.e. Homo erectus) were gradually losing their straight body hair and thereby exposing the initially pale skin underneath their fur to the sun, straight hair would have been an adaptive liability. By inverse logic, later, as humans traveled farther from Africa and/or the equator, straight hair may have (initially) evolved to aid the entry of UV light into the body during the transition from dark, UV-protected skin to paler skin. Jablonski's assertions suggest that the adjective "woolly" in reference to Afro-hair is a misnomer in connoting the high heat insulation derivable from the true wool of sheep. Instead, the relatively sparse density of Afro-hair, combined with its springy coils actually results in an airy, almost sponge-like structure that in turn, Jablonski argues, more likely facilitates an increase in the circulation of cool air onto the scalp. Further, wet Afro-hair does not stick to the neck and scalp unless totally drenched and instead tends to retain its basic springy puffiness because it less easily responds to moisture and sweat than straight hair does. In this sense, the trait may enhance comfort levels in intense equatorial climates more than straight hair (which, on the other hand, tends to naturally fall over the ears and neck to a degree that provides slightly enhanced comfort levels in cold climates relative to tightly coiled hair). Further, it is notable that the most pervasive expression of this hair texture can be found in sub-Saharan Africa; a region of the world that abundant genetic and paleo-anthropological evidence suggests, was the relatively recent (≈200,000-year-old) point of origin for modern humanity. In fact, although genetic findings (Tishkoff, 2009) suggest that sub-Saharan Africans are the most genetically diverse continental group on Earth, Afro-textured hair approaches ubiquity in this region. This points to a strong, long-term selective pressure that, in stark contrast to most other regions of the genomes of sub-Saharan groups, left little room for genetic variation at the determining loci. Such a pattern, again, does not seem to support human sexual aesthetics as being the sole or primary cause of this distribution. The EDAR locus A group of studies have recently shown that genetic patterns at the EDAR locus, a region of the modern human genome that contributes to hair texture variation among most individuals of East Asian descent, support the hypothesis that (East Asian) straight hair likely developed in this branch of the modern human lineage subsequent to the original expression of tightly coiled natural afro-hair. Specifically, the relevant findings indicate that the EDAR mutation coding for the predominant East Asian 'coarse' or thick, straight hair texture arose within the past ≈65,000 years, which is a time frame that covers from the earliest of the 'Out of Africa' migrations up to now. 
Disease Ringworm is a fungal disease that targets hairy skin. Premature greying of hair is another condition that results in greying before the age of 20 years in Europeans, before 25 years in Asians, and before 30 years in Africans. Hair care Hair care involves the hygiene and cosmetology of hair including hair on the scalp, facial hair (beard and moustache), pubic hair and other body hair. Hair care routines differ according to an individual's culture and the physical characteristics of one's hair. Hair may be colored, trimmed, shaved, plucked, or otherwise removed with treatments such as waxing, sugaring, and threading. Removal practices Depilation is the removal of hair from the surface of the skin. This can be achieved through methods such as shaving. Epilation is the removal of the entire hair strand, including the part of the hair that has not yet left the follicle. A popular way to epilate hair is through waxing. Shaving Shaving is accomplished with bladed instruments, such as razors. The blade is brought close to the skin and stroked over the hair in the desired area to cut the terminal hairs and leave the skin feeling smooth. Depending upon the rate of growth, one can begin to feel the hair growing back within hours of shaving. This is especially evident in men who develop a five o'clock shadow after having shaved their faces. This new growth is called stubble. Stubble typically appears to grow back thicker because the shaved hairs are blunted instead of tapered off at the end, although the hair never actually grows back thicker. Waxing Waxing involves using a sticky wax and strip of paper or cloth to pull hair from the root. Waxing is the ideal hair removal technique to keep an area hair-free for long periods of time. It can take three to five weeks for waxed hair to begin to resurface again. Hair in areas that have been waxed consistently is known to grow back finer and thinner, especially compared to hair that has been shaved with a razor. Laser removal Laser hair removal is a cosmetic method where a small laser beam pulses selective heat on dark target matter in the area that causes hair growth without harming the skin tissue. This process is repeated several times over the course of many months to a couple of years with hair regrowing less frequently until it finally stops; this is used as a more permanent solution to waxing or shaving. Laser removal is practiced in many clinics along with many at-home products. Cutting and trimming Because the hair on one's head is normally longer than other types of body hair, it is cut with scissors or clippers. People with longer hair will most often use scissors to cut their hair, whereas shorter hair is maintained using a trimmer. Depending on the desired length and overall health of the hair, periods without cutting or trimming the hair can vary. Cut hair may be used in wigs. Global imports of hair in 2010 was worth $US 1.24 billion. Social role Hair has great social significance for human beings. It can grow on most external areas of the human body, except on the palms of the hands and the soles of the feet (among other areas). Hair is most noticeable on most people in a small number of areas, which are also the ones that are most commonly trimmed, plucked, or shaved. These include the face, ears, head, eyebrows, legs, and armpits, as well as the pubic region. The highly visible differences between male and female body and facial hair are a notable secondary sex characteristic. 
The world's longest documented hair belongs to Xie Qiuping (in China), at 5.627 m (18 ft 5.54 in) when measured on 8 May 2004. She has been growing her hair since 1973, from the age of 13. Indication of status Healthy hair indicates health and youth (important in evolutionary biology). Hair color and texture can be a sign of ethnic ancestry. Facial hair is a sign of puberty in men. White or gray hair is a sign of age or genetics, which may be concealed with hair dye (not easily for some), although many prefer to assume it (especially if it is a poliosis characteristic of the person since childhood). Pattern baldness in men is usually seen as a sign of aging that may be concealed with a toupee, hats, or religious and cultural adornments; however, the condition can be triggered by various hormonal factors at any age following puberty and is not uncommon in younger men. Although pattern baldness can be slowed down by drugs such as Finasteride and Minoxidil or treated with hair transplants, many men see this as unnecessary effort for the sake of vanity and instead shave their heads. In early modern China, the queue was a male hairstyle in which the hair at the front and top was shaved every 10 days in a style mimicking pattern baldness, while the remaining hair at the back was braided into a long pigtail. A hairstyle may be an indicator of group membership. During the English Civil War, followers of Oliver Cromwell cropped their hair close to their head in an act of defiance against the curls and ringlets of the king's men, which led to them being nicknamed Roundheads. Recent isotopic analysis of hair is helping to shed further light on sociocultural interaction, giving information on food procurement and consumption in the 19th century. Having bobbed hair was popular among the flappers in the 1920s as a sign of rebellion against traditional roles for women. Female art students known as the Cropheads also adopted the style, notably at the Slade School in London. Regional variations in hirsutism has caused practices regarding hair on the arms and legs to differ. Some religious groups may follow certain rules regarding hair as part of religious observance. The rules often differ for men and women. Many subcultures have hairstyles which may indicate an unofficial membership. Many hippies, metalheads, and Indian sadhus have long hair, as well many older hipsters. Many punks wear a hairstyle known as a mohawk or other spiked and dyed hairstyles, while skinheads have short-cropped or completely shaved heads. Long stylized bangs were very common for emos, scene kids, and younger hipsters in the 2000s and early 2010s. Heads were shaved in concentration camps, and head-shaving has been used as punishment, especially for women with long hair. The shaven head is common in military haircuts, while Western monks are known for the tonsure. By contrast, among some Indian holy men, the hair is worn extremely long. In the time of Confucius (5th century BCE), the Chinese grew out their hair and often tied it, as a symbol of filial piety. Regular hairdressing in some cultures is considered a sign of wealth or status. The dreadlocks of the Rastafari movement were despised early in the movement's history. In some cultures, having one's hair cut can symbolize a liberation from one's past, usually after a trying time in one's life. Cutting the hair also may be a sign of mourning. Tightly coiled hair in its natural state may be worn in an Afro. 
This hairstyle was once worn among African Americans as a symbol of racial pride. Given that the coiled texture is the natural state of some African Americans' hair, or perceived as being more "African", this simple style is now often seen as a sign of self-acceptance and an affirmation that the beauty norms of the (eurocentric) dominant culture are not absolute. African Americans as a whole have a variety of hair textures, as they are not an ethnically homogeneous group, but an ad hoc grouping of people with different racial admixtures. The film Easy Rider (1969) includes the assumption that the two main characters could have their long hair forcibly shaved with a rusty razor when jailed, symbolizing the intolerance of some conservative groups toward members of the counterculture. At the conclusion of England's 1971 Oz trials, the defendants had their heads shaved by the police, causing public outcry. During the appeal trial, they appeared in the dock wearing wigs. A case in which a 14-year-old student was expelled from school in Brazil in the mid-2000s, allegedly because of his fauxhawk haircut, sparked national debate and legal action that resulted in compensation. Religious practices Women's hair may be hidden using headscarves, a common part of the hijab in Islam and a symbol of modesty required for certain religious rituals in Eastern Orthodoxy. The Russian Orthodox Church requires all married women to wear headscarves inside the church; this tradition is often extended to all women, regardless of marital status. Orthodox Judaism also commands the use of scarves and other head coverings for married women for modesty reasons. Certain Hindu sects also wear head scarves for religious reasons. Sikhs have an obligation not to cut their hair (a Sikh who cuts their hair becomes 'apostate', meaning fallen from the religion), and men keep it tied in a bun on the head, which is then covered appropriately using a turban. Multiple religions, both ancient and contemporary, require or advise adherents to allow their hair to become dreadlocks, though people also wear them for fashion. Islam, Orthodox Judaism, Orthodox Christianity, Roman Catholicism, and other religious groups have at various times recommended or required men to cover the head and sections of the hair, and some have dictates relating to the cutting of men's facial and head hair. Some Christian sects throughout history and up to modern times have also religiously proscribed the cutting of women's hair. For some Sunni madhabs, the donning of a kufi or topi is a form of sunnah. Brahmin males are prescribed to shave their heads, but leave a tuft of hair unshaved, worn in the form of a topknot. In Arabic poetry Since ancient times, women's long, thick, wavy hair has featured prominently in Arabic poetry. Pre-Islamic poets used only limited imagery to describe women's hair. For example, al-A'sha wrote a verse comparing a lover's hair to "a garden whose grapes dangle down upon me", but Bashshar ibn Burd considered this unusual. One comparison used by early poets, such as Imru al-Qays, was to bunches of dates. In Abbasid times, however, the imagery for hair expanded significantly, particularly for the then-fashionable "love-locks" (sudgh) framing the temples, which came into style at the court of the caliph al-Amin. Hair curls were compared to hooks and chains, letters (such as fa, waw, lam, and nun), scorpions, annelids, and polo sticks. An example was the poet Ibn al-Mu'tazz, who compared a lock of hair and a birthmark to a polo stick driving a ball.
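As a rough, illustrative cross-check of the growth figures quoted in the growth section above, the following sketch (in Python, purely for illustration; the function name and unit constants are assumptions of this sketch, while the record length, growing time, and the 0.6–3.36 cm per month range are taken from the text above) converts the cited world-record hair length and growing time into an implied average monthly growth rate.

```python
# Back-of-the-envelope check: average growth rate implied by the record length
# cited above (7 ft 9 in grown over 32 years). Illustrative sketch only; it
# ignores trimming and breakage and assumes the full length accumulated evenly.

INCHES_PER_FOOT = 12
CM_PER_INCH = 2.54

def average_growth_cm_per_month(feet, inches, years):
    """Return the average growth rate in cm/month for a given length and time."""
    total_cm = (feet * INCHES_PER_FOOT + inches) * CM_PER_INCH
    return total_cm / (years * 12)

rate = average_growth_cm_per_month(7, 9, 32)
print(f"{rate:.2f} cm/month")  # about 0.62 cm/month, near the low end of the
                               # 0.6-3.36 cm/month range reported for scalp hair
```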
Biology and health sciences
Integumentary system
null
14340
https://en.wikipedia.org/wiki/Hydraulic%20ram
Hydraulic ram
A hydraulic ram pump, ram pump, or hydram is a cyclic water pump powered by hydropower. It takes in water at one "hydraulic head" (pressure) and flow rate, and outputs water at a higher hydraulic head and lower flow rate. The device uses the water hammer effect to develop pressure that allows a portion of the input water that powers the pump to be lifted to a point higher than where the water originally started. The hydraulic ram is sometimes used in remote areas, where there is both a source of low-head hydropower and a need for pumping water to a destination higher in elevation than the source. In this situation, the ram is often useful, since it requires no outside source of power other than the kinetic energy of flowing water. History In 1772, John Whitehurst of Cheshire, England, invented a manually controlled precursor of the hydraulic ram called the "pulsation engine" and installed the first one at Oulton, Cheshire to raise water to a height of . In 1783, he installed another in Ireland. He did not patent it, and details are obscure, but it is known to have had an air vessel. The first self-acting ram pump was invented by the Frenchman Joseph Michel Montgolfier (best known as a co-inventor of the hot air balloon) in 1796 for raising water in his paper mill at Voiron. His friend Matthew Boulton took out a British patent on his behalf in 1797. The sons of Montgolfier obtained a British patent for an improved version in 1816, and this was acquired, together with Whitehurst's design, in 1820 by Josiah Easton, a Somerset-born engineer who had just moved to London. Easton's firm, inherited by his son James (1796–1871), grew during the nineteenth century to become one of the more important engineering manufacturers in England, with a large works at Erith, Kent. They specialised in water supply and sewerage systems worldwide, as well as land drainage projects. Eastons had a good business supplying rams for water supply purposes to large country houses, farms, and village communities. Some of their installations still survived as of 2004, one such example being at the hamlet of Toller Whelme, in Dorset. Until about 1958 when the mains water arrived, the hamlet of East Dundry just south of Bristol had three working rams – their noisy "thump" every minute or so resonated through the valley night and day: these rams served farms that needed much water for their dairy herds. The firm closed in 1909, but the ram business was continued by James R. Easton. In 1929, it was acquired by Green & Carter of Winchester, Hampshire, who were engaged in the manufacturing and installation of Vulcan and Vacher Rams. The first US patent was issued to Joseph Cerneau (or Curneau) and Stephen (Étienne) S. Hallet (1755-1825) in 1809. US interest in hydraulic rams picked up around 1840, as further patents were issued and domestic companies started offering rams for sale. Toward the end of the 19th century, interest waned as electricity and electric pumps became widely available. Priestly's Hydraulic Ram, built in 1890 in Idaho, was a "marvelous" invention, apparently independent, which lifted water to provide irrigation. The ram survives and is listed on the U.S. National Register of Historic Places. By the end of the twentieth century, interest in hydraulic rams has revived, due to the needs of sustainable technology in developing countries, and energy conservation in developed ones. 
An example is Aid Foundation International in the Philippines, who won an Ashden Award for their work developing ram pumps that could be easily maintained for use in remote villages. The hydraulic ram principle has been used in some proposals for exploiting wave power, one of which was discussed as long ago as 1931 by Hanns Günther in his book In hundert Jahren. Some later ram designs in the UK called compound rams were designed to pump treated water using an untreated drive water source, which overcomes some of the problems of having drinking water sourced from an open stream. In 1996 English engineer Frederick Philip Selwyn patented a more compact hydraulic ram pump where the waste valve used the venturi effect and was arranged concentrically around the input pipe. Initially patented as a fluid pressure amplifier due to its different design, it is currently sold as the "Papa Pump". Additionally to this a large scale version named the "Venturo Pump" is also being manufactured. Construction and principle of operation A traditional hydraulic ram has only two moving parts, a spring or weight loaded "waste" valve sometimes known as the "clack" valve and a "delivery" check valve, making it cheap to build, easy to maintain, and very reliable. Priestly's Hydraulic Ram, described in detail in the 1947 Encyclopedia Britannica, has no moving parts. Sequence of operation A simplified hydraulic ram is shown in Figure 2. Initially, the waste valve [4] is open (i.e. lowered) because of its own weight, and the delivery valve [5] is closed under the pressure caused by the water column from the outlet [3]. The water in the inlet pipe [1] starts to flow under the force of gravity and picks up speed and kinetic energy until the increasing drag force lifts the waste valve's weight and closes it. The momentum of the water flow in the inlet pipe against the now closed waste valve causes a water hammer that raises the pressure in the pump beyond the pressure caused by the water column pressing down from the outlet. This pressure differential now opens the delivery valve [5], and forces some water to flow into the delivery pipe [3]. Because this water is being forced uphill through the delivery pipe farther than it is falling downhill from the source, the flow slows; when the flow reverses, the delivery check valve [5] closes. Meanwhile, the water hammer from the closing of the waste valve also produces a pressure pulse which propagates back up the inlet pipe to the source where it converts to a suction pulse that propagates back down the inlet pipe. This suction pulse, with the weight or spring on the valve, pulls the waste valve back open and allows the process to begin again. A pressure vessel [6] containing air cushions the hydraulic pressure shock when the waste valve closes, and it also improves the pumping efficiency by allowing a more constant flow through the delivery pipe. Although the pump could in theory work without it, the efficiency would drop drastically and the pump would be subject to extraordinary stresses that could shorten its life considerably. One problem is that the pressurized air will gradually dissolve into the water until none remains. One solution to this problem is to have the air separated from the water by an elastic diaphragm (similar to an expansion tank); however, this solution can be problematic in developing countries where replacements are difficult to procure. Another solution is a snifting valve installed close to the drive side of the delivery valve. 
This automatically inhales a small amount of air each time the delivery valve shuts and the partial vacuum develops. Another solution is to insert an inner tube of a car or bicycle tire into the pressure vessel with some air in it and the valve closed. This tube is in effect the same as the diaphragm, but it is implemented with more widely available materials. The air in the tube cushions the shock of the water the same as the air in other configurations does. Efficiency A typical energy efficiency is 60%, but up to 80% is possible. This should not be confused with the volumetric efficiency, which relates the volume of water delivered to the total water taken from the source. The portion of water available at the delivery pipe will be reduced by the ratio of the delivery head to the supply head. Thus if the source is above the ram and the water is lifted to above the ram, only 20% of the supplied water can be available, the other 80% being spilled via the waste valve. These ratios assume 100% energy efficiency. Actual water delivered will be further reduced by the energy efficiency factor. In the above example, if the energy efficiency is 70%, the water delivered will be 70% of 20%, i.e. 14%. Assuming a delivery head twice the supply head (a 2-to-1 delivery-to-supply head ratio) and 70% efficiency, the delivered water would be 70% of 50%, i.e. 35%. Very high ratios of delivery to supply head usually result in lowered energy efficiency. Suppliers of rams often provide tables giving expected volume ratios based on actual tests. (A short worked sketch of these volume and efficiency relationships is given at the end of this article.) Drive and delivery pipe design Since both efficiency and reliable cycling depend on water hammer effects, the drive pipe design is important. It should be between 3 and 7 times as long as the vertical distance between the source and the ram. Commercial rams may have an input fitting designed to accommodate this optimum slope. The diameter of the supply pipe would normally match the diameter of the input fitting on the ram, which in turn is based on its pumping capacity. The drive pipe should be of constant diameter and material, and should be as straight as possible. Where bends are necessary, they should be smooth, large-diameter curves. Even a large spiral is allowed, but elbows are to be avoided. PVC will work in some installations, but steel pipe is preferred, although much more expensive. If valves are used, they should be a free-flow type such as a ball valve or gate valve. The delivery pipe is much less critical, since the pressure vessel prevents water hammer effects from traveling up it. Its overall design would be determined by the allowable pressure drop based on the expected flow. Typically the pipe size will be about half that of the supply pipe, but for very long runs a larger size may be indicated. PVC pipe and any necessary valves are not a problem. Starting operation A ram newly placed into operation, or one that has stopped cycling, should start automatically if the waste valve weight or spring pressure is adjusted correctly, but it can be restarted as follows: If the waste valve is in the raised (closed) position, it must be pushed down manually into the open position and released. If the flow is sufficient, it will then cycle at least once. If it does not continue to cycle, it must be pushed down repeatedly until it cycles continuously on its own, usually after three or four manual cycles.
If the ram stops with the waste valve in the down (open) position, it must be lifted manually and kept up for as long as necessary for the supply pipe to fill with water and for any air bubbles to travel up the pipe to the source. This may take some time, depending on supply pipe length and diameter. Then it can be started manually by pushing it down a few times as described above. Having a valve on the delivery pipe at the ram makes starting easier: the valve is kept closed until the ram starts cycling, then gradually opened to fill the delivery pipe. If it is opened too quickly, the cycle will stop. Once the delivery pipe is full, the valve can be left open. Common operational problems Failure to deliver sufficient water may be due to improper adjustment of the waste valve, having too little air in the pressure vessel, or simply attempting to raise the water higher than the ram is capable of lifting it. The ram may be damaged by freezing in winter, or by loss of air in the pressure vessel leading to excess stress on the ram parts. These failures will require welding or other repair methods and perhaps parts replacement. It is not uncommon for an operating ram to require occasional restarts. The cycling may stop due to poor adjustment of the waste valve, or insufficient water flow at the source. Air can enter if the supply water level is not at least a few inches above the input end of the supply pipe. Other problems are blockage of the valves with debris, or improper installation, such as using a supply pipe of non-uniform diameter or material, having sharp bends or a rough interior, or one that is too long or short for the drop, or is made of an insufficiently rigid material. A PVC supply pipe will work in some installations, but a steel pipe is better.
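As a rough illustration of the volume and efficiency relationships and the drive-pipe rule of thumb described above, the following sketch (in Python) estimates the delivered flow from the supply flow, supply head, delivery head, and an assumed energy efficiency. The function names and the example figures (a 10 L/min source falling 2 m to the ram and delivering to 10 m at 70% efficiency) are hypothetical assumptions for illustration, not data from any particular installation.

```python
# Illustrative sketch of the hydraulic ram sizing relationships described above.
# All function names and example numbers are assumptions for demonstration only.

def delivered_flow(source_flow, supply_head, delivery_head, energy_efficiency=0.6):
    """Estimate delivered flow.

    The volumetric fraction is roughly supply_head / delivery_head, further
    reduced by the energy efficiency factor (typically ~60%, up to ~80%).
    """
    if delivery_head <= supply_head:
        raise ValueError("a ram is only useful when the delivery head exceeds the supply head")
    return source_flow * (supply_head / delivery_head) * energy_efficiency

def drive_pipe_length_ok(pipe_length, vertical_drop):
    """Check the rule of thumb: drive pipe 3 to 7 times the vertical drop."""
    return 3 * vertical_drop <= pipe_length <= 7 * vertical_drop

# Hypothetical installation: 10 L/min source, 2 m fall to the ram,
# water lifted 10 m above the ram, 70% energy efficiency.
q = delivered_flow(source_flow=10.0, supply_head=2.0,
                   delivery_head=10.0, energy_efficiency=0.7)
print(f"Delivered flow: {q:.1f} L/min")                            # 10 * (2/10) * 0.7 = 1.4 L/min
print(drive_pipe_length_ok(pipe_length=12.0, vertical_drop=2.0))   # True (6 m <= 12 m <= 14 m)
```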
Technology
Hydraulics and pneumatics
null
14347
https://en.wikipedia.org/wiki/Homininae
Homininae
Homininae (the hominines) is a subfamily of the family Hominidae (hominids). (The Homininae encompass humans and are also called "African hominids" or "African apes".) This subfamily includes two tribes, Hominini and Gorillini, both having extant (or living) species as well as extinct species. Tribe Hominini includes: the extant genus Homo, which comprises only one extant species, the modern humans (Homo sapiens), and numerous extinct human species; and the extant genus Pan, which includes two extant species, chimpanzees and bonobos. Tribe Gorillini (gorillas) contains one extant genus, Gorilla, with two extant species, with variants, and one known extinct genus. Alternatively, the genus Pan is considered by some to belong, instead of to a subtribe Panina, to its own separate tribe, the so-called "Panini", which would be a third tribe for Homininae. Some classification schemes provide a more comprehensive account of extinct groups (see section "Taxonomic classification", below). For example, tribe Hominini shows two subtribes: subtribe Hominina, which contains at least two extinct genera; and subtribe Panina, which presents only the extant genus, Pan (chimpanzees/bonobos), as fossils of extinct chimpanzees/bonobos are very rarely found. The Homininae comprise all hominids that arose after the subfamily Ponginae (orangutans) split from the line of the great apes. The Homininae cladogram has three main branches, leading to gorillas (via the tribe Gorillini) and to humans and chimpanzees (via the tribe Hominini and the subtribes Hominina and Panina); see the graphic "Evolutionary tree", below. There are two living species of Panina (chimpanzees and bonobos) and two living species of gorillas, plus one known extinct genus. Traces of extinct Homo species, including Homo floresiensis, have been found with dates as recent as 40,000 years ago. Individual members of this subfamily are called hominine or hominines—not to be confused with the terms hominins or Hominini. History of discoveries and classification Until 1970, the family (and term) Hominidae meant humans only; the non-human great apes were assigned to the then-family Pongidae. Later discoveries led to revised classifications, with the great apes then united with humans (now in subfamily Homininae) as members of the family Hominidae. By 1990, it was recognized that gorillas and chimpanzees are more closely related to humans than they are to orangutans, leading to their (gorillas' and chimpanzees') placement in subfamily Homininae as well. The subfamily Homininae can be further subdivided into three branches, the tribe Gorillini (gorillas), the tribe Hominini with subtribes Panina (chimpanzees/bonobos) and Hominina (humans and their extinct relatives), and the extinct tribe Dryopithecini. The Late Miocene fossil Nakalipithecus nakayamai, described in 2007, is a basal member of this clade, as is, perhaps, its contemporary Ouranopithecus; that is, they are not assignable to any of the three extant branches. Their existence suggests that the Homininae tribes diverged not earlier than about 8 million years ago (see Human evolutionary genetics). Today, chimpanzees and gorillas live in tropical forests with acid soils that rarely preserve fossils. Although no fossil gorillas have been reported, four chimpanzee teeth about 500,000 years old have been discovered in the East African rift valley (Kapthurin Formation, Kenya), where many fossils from the human lineage (hominins) have been found. This shows that some chimpanzees lived close to Homo (H. erectus or H.
rhodesiensis) at the time; the same is likely true for gorillas. Taxonomic classification Homininae Tribe Dryopithecini† Kenyapithecus (?) Kenyapitheus wickeri Ouranopithecus Ouranopithecus macedoniensis Otavipithecus Otavipithecus namibiensis Oreopithecus (?) Oreopithecus bambolii Nakalipithecus Nakalipithecus nakayamai Anoiapithecus Anoiapithecus brevirostris Dryopithecus Dryopithecus fontani Hispanopithecus (?) Hispanopithecus laietanus Hispanopithecus crusafonti Pierolapithecus Pierolapithecus catalaunicus Rudapithecus (?) Rudapithecus hungaricus Samburupithecus Samburupithecus kiptalami Danuvius Danuvius guggenmosi Tribe Gorillini Chororapithecus † Chororapithecus abyssinicus Genus Gorilla Western gorilla, Gorilla gorilla Western lowland gorilla, Gorilla gorilla gorilla Cross River gorilla, Gorilla gorilla diehli Eastern gorilla, Gorilla beringei Mountain gorilla, Gorilla beringei beringei Eastern lowland gorilla, Gorilla beringei graueri Tribe Hominini Subtribe Panina Genus Pan Chimpanzee (common chimpanzee), Pan troglodytes Central chimpanzee, Pan troglodytes troglodytes Western chimpanzee, Pan troglodytes verus Nigeria-Cameroon chimpanzee, Pan troglodytes ellioti Eastern chimpanzee, Pan troglodytes schweinfurthii Bonobo (pygmy chimpanzee), Pan paniscus Subtribe Hominina Graecopithecus † Graecopithecus freybergi. Note: Graecopithecus has also been subsumed by other authors into Dryopithecus. The placement of Graecopithecus within the Hominina, as shown here, represents a hypothesis, but not scientific consensus. Sahelanthropus (?)† Sahelanthropus tchadensis Orrorin† Orrorin tugenensis Orrorin praegens Ardipithecus† Ardipithecus ramidus Ardipithecus kadabba Kenyanthropus† Kenyanthropus platyops Australopithecus† Australopithecus bahrelghazali Australopithecus anamensis Australopithecus afarensis Australopithecus africanus Australopithecus garhi Australopithecus sediba Paranthropus† Paranthropus aethiopicus Paranthropus robustus Paranthropus boisei Homo – immediate ancestors of modern humans Homo gautengensis† (probable H. habilis specimens) Homo rudolfensis† Homo habilis† Homo floresiensis† Homo erectus† Homo ergaster† Homo antecessor† Homo heidelbergensis† Homo cepranensis† (probable early H. sapiens specimens) Denisovans (scientific name has not yet been assigned)† Homo neanderthalensis† Homo rhodesiensis† (probable late H. heidelbergensis specimens) Homo sapiens Anatomically modern human, Homo sapiens sapiens Archaic Homo sapiens (Cro-magnon)† Red Deer Cave people† (scientific name has not yet been assigned) Homo sapiens idaltu† (classification not widely accepted) Evolution The age of the subfamily Homininae (of the Homininae–Ponginae last common ancestor) is estimated at some 14 to 12.5 million years (Sivapithecus). Its separation into Gorillini and Hominini (the "gorilla–human last common ancestor", GHLCA) is estimated to have occurred at about (TGHLCA) during the late Miocene, close to the age of Nakalipithecus nakayamai. There is evidence there was interbreeding of Gorillas and the Pan–Homo ancestors until right up to the Pan–Homo split. Evolution of bipedalism Recent studies of Ardipithecus ramidus (4.4 million years old) and Orrorin tugenensis (6 million years old) suggest some degree of bipedalism. Australopithecus and early Paranthropus may have been bipedal. Very early hominins such as Ardipithecus ramidus may have possessed an arboreal type of bipedalism. 
The evolution of bipedalism encouraged multiple changes among hominins, especially in humans, who were now able to do many other things once they began to walk on two feet. These changes included the ability to use the hands to create tools or to carry things, the ability to travel longer distances at greater speed, and the ability to hunt for food. According to researchers, bipedalism in humans arose through natural selection. Darwin himself believed that larger brains in humans made an upright gait necessary, but had no hypothesis for how the mechanism evolved. The first major theory attempting to directly explain the origins of bipedalism was the savannah hypothesis (Dart, 1925). This theory hypothesized that hominins became bipedal due to the environment of the savanna, such as the tall grass and dry climate. This was later shown to be incorrect by fossil records indicating that hominins were still climbing trees during this era. Anthropologist Owen Lovejoy has suggested that bipedalism was a result of sexual dimorphism and efforts to help with the collecting of food. In his male provisioning hypothesis, introduced in 1981, lowered birth rates in early hominids increased pressure on males to provide for females and offspring. While females groomed and cared for their children within the family group, males ranged to seek food and returned bipedally with full arms. Males who could better provide for females in this model were more likely to mate and produce offspring. Anthropologist Yohannes Haile-Selassie, an expert on Australopithecus anamensis, discusses the evidence that Australopithecus was one of the first hominins to evolve into obligate bipedalists. The remains of this subfamily are very important in the field of research, as they present possible information regarding how these primates adapted from life in the trees to terrestrial life. This was a major adaptation, as it encouraged many evolutionary changes within hominins, including the ability to use the hands to make tools and gather food, as well as larger brain development due to the change in diet. Brain size evolution There has been a gradual increase in brain volume (brain size) as the ancestors of modern humans progressed along the timeline of human evolution, starting from about 600 cm3 in Homo habilis up to 1500 cm3 in Homo neanderthalensis. However, modern Homo sapiens have a brain volume slightly smaller (1250 cm3) than Neanderthals, women have a brain slightly smaller than men, and the Flores hominids (Homo floresiensis), nicknamed hobbits, had a cranial capacity of about 380 cm3 (considered small even for a chimpanzee), about a third of the Homo erectus average. It is proposed that they evolved from H. erectus as a case of insular dwarfism. In spite of their smaller brain, there is evidence that H. floresiensis used fire and made stone tools at least as sophisticated as those of their proposed ancestor H. erectus. In this case, it seems that for intelligence, the structure of the brain is more important than its size. The current size of the human brain is a major distinguishing factor separating humans from other primates. Recent examination of the human brain shows that it is more than four times the size of a great ape's brain and about 20 times larger than that of an Old World monkey.
A study was conducted to help determine the evolution of brain size within the subfamily Homininae by testing the genes ASPM (abnormal spindle-like microcephaly associated) and MCPH1 (microcephalin-1) and their association with the human brain. In this study, researchers discovered that the increase in brain size is correlated with changes in both ASPM and MCPH1. MCPH1 is very polymorphic in humans compared to gibbons and Old World monkeys. This gene helps encourage the growth of the brain. Further research indicated that the MCPH1 gene in humans could have also been an encouraging factor in population expansion. Other researchers have concluded that diet was also an encouraging factor for brain size: as protein intake increased, this helped brain development. Evolution of family structure and sexuality Sexuality is related to family structure and partly shapes it. The involvement of fathers in the education of their offspring is quite unique to humans, at least when compared to other Homininae. Concealed ovulation and menopause in women also occur in a few other primates, but are uncommon in other species. Testis and penis size seem to be related to family structure: monogamy, promiscuity, or harem in humans, chimpanzees, or gorillas, respectively. The levels of sexual dimorphism are generally seen as a marker of sexual selection. Studies have suggested that the earliest hominins were dimorphic and that this lessened over the course of the evolution of the genus Homo, correlating with humans becoming more monogamous, whereas gorillas, who live in harems, show a large degree of sexual dimorphism. Concealed (or "hidden") ovulation means that the phase of fertility is not detectable in women, whereas chimpanzees advertise ovulation via an obvious swelling of the genitals. Women can be partly aware of their ovulation across the phases of the menstrual cycle, but men are essentially unable to detect ovulation in women. Most primates have semi-concealed ovulation, so one can surmise that the common ancestor had semi-concealed ovulation, which was inherited by gorillas, and which later evolved into concealed ovulation in humans and advertised ovulation in chimpanzees. Menopause also occurs in rhesus monkeys, and possibly in chimpanzees, but does not occur in gorillas and is quite uncommon in other primates (and other mammal groups).
Biology and health sciences
Apes
Animals
14348
https://en.wikipedia.org/wiki/Homo%20habilis
Homo habilis
Homo habilis ( 'handy man') is an extinct species of archaic human from the Early Pleistocene of East and South Africa, from about 2.4 million years ago to 1.4 million years ago (mya). Upon species description in 1964, H. habilis was highly contested, with many researchers recommending it be synonymised with Australopithecus africanus, the only other early hominin known at the time, but H. habilis received more recognition as time went on and more relevant discoveries were made. By the 1980s, H. habilis was proposed to have been a human ancestor, directly evolving into Homo erectus, which directly led to modern humans. This viewpoint is now debated. Several specimens with insecure species identification were assigned to H. habilis, leading to arguments for splitting, namely into "H. rudolfensis" and "H. gautengensis", of which only the former has received wide support. Like contemporary Homo, H. habilis brain size generally varied from . The body proportions of H. habilis are only known from two highly fragmentary skeletons, and are based largely on the assumption of an anatomy similar to that of the earlier australopithecines. Because of this, it has also been proposed that H. habilis be moved to the genus Australopithecus as Australopithecus habilis. However, the interpretation of H. habilis as a small-statured human with inefficient long-distance travel capabilities has been challenged. The presumed female specimen OH 62 is traditionally interpreted as having been in height and in weight assuming australopithecine-like proportions, but assuming humanlike proportions she would have been about and . Nonetheless, H. habilis may have been at least partially arboreal, as is postulated for australopithecines. Early hominins are typically reconstructed as having thick hair and marked sexual dimorphism, with males much larger than females, though relative male and female size is not definitively known. H. habilis manufactured the Oldowan stone-tool industry and mainly used tools in butchering. Early Homo, compared to australopithecines, are generally thought to have consumed high quantities of meat and, in the case of H. habilis, scavenged meat. Typically, early hominins are interpreted as having lived in polygynous societies, though this is highly speculative. Assuming H. habilis society was similar to that of modern savanna chimpanzees and baboons, groups may have numbered 70–85 members. This configuration would be advantageous, with multiple males to defend against open savanna predators, such as big cats, hyenas and crocodiles. H. habilis coexisted with H. rudolfensis, H. ergaster / H. erectus and Paranthropus boisei. Taxonomy Research history The first recognised remains—OH 7, a partial juvenile skull, hand, and foot bones dating to 1.75 million years ago (mya)—were discovered in Olduvai Gorge, Tanzania, in 1960 by Jonathan Leakey, together with native African workers who dug at Olduvai Gorge and worked for Jonathan Leakey. However, the actual first remains—OH 4, a molar—had been discovered in 1959 by Heselon Mukiri, the senior assistant of Louis and Mary Leakey (Jonathan's parents), together with other native African workers, but this was not realised at the time. By this time, the Leakeys had spent 29 years excavating in Olduvai Gorge for early hominin remains, but had instead recovered mainly other animal remains as well as the Oldowan stone-tool industry.
The industry had been ascribed to Paranthropus boisei (at the time "Zinjanthropus") in 1959, as it was the first and only hominin recovered in the area, but this was revised upon OH 7's discovery. In 1964, Louis, South African palaeoanthropologist Phillip V. Tobias, and British primatologist John R. Napier officially assigned the remains to the genus Homo, and, on the recommendation of Australian anthropologist Raymond Dart, the specific name H. habilis, meaning "able, handy, mentally skillful, vigorous" in Latin. The specimen's association with the Oldowan (then considered evidence of advanced cognitive ability) was also used as justification for classifying it into Homo. OH 7 was designated the holotype specimen. After description, it was hotly debated whether H. habilis should be reclassified as Australopithecus africanus (the only other early hominin known at the time), in part because the remains were so old and at the time Homo was presumed to have evolved in Asia (with the australopithecines having no living descendants). Also, the brain size was smaller than the threshold Wilfrid Le Gros Clark had proposed in 1955 when considering Homo. The classification H. habilis began to receive wider acceptance as more fossil elements and species were unearthed. In 1983, Tobias proposed that A. africanus was a direct ancestor of Paranthropus and Homo (the two were sister taxa), and that A. africanus evolved into H. habilis, which evolved into H. erectus, which evolved into modern humans (by a process of cladogenesis). He further said that there was a major evolutionary leap between A. africanus and H. habilis, and that thereafter human evolution progressed gradually, because H. habilis brain size had nearly doubled compared to its australopithecine predecessors. Many accepted Tobias' model and assigned Late Pliocene to Early Pleistocene hominin remains falling outside the range of Paranthropus and H. erectus to H. habilis. For non-skull elements, this was done on the basis of size, as there was a lack of clear diagnostic characteristics. Because of these practices, the range of variation for the species became quite wide, and the terms H. habilis sensu stricto (i.e. strictly) and H. habilis sensu lato (i.e. broadly) were in use to exclude and include, respectively, more discrepant morphs. To address this controversy, English palaeoanthropologist Bernard Wood proposed in 1985 that the comparatively massive skull KNM-ER 1470 from Lake Turkana, Kenya, discovered in 1972 and assigned to H. habilis, actually represented a different species, now referred to as Homo rudolfensis. It has also been argued that it instead represents a male specimen whereas other H. habilis specimens are female. Early Homo from South Africa have variously been assigned to H. habilis or H. ergaster / H. erectus, but species designation has largely been unclear. In 2010, Australian archaeologist Darren Curnoe proposed splitting off South African early Homo into a new species, "Homo gautengensis". In 1986, OH 62, a fragmentary skeleton, was discovered by American anthropologist Tim D. White in association with H. habilis skull fragments, definitively establishing aspects of H. habilis skeletal anatomy for the first time, and revealing more Australopithecus-like than Homo-like features. Because of this, as well as similarities in dental adaptations, Wood and biological anthropologist Mark Collard suggested moving the species to Australopithecus in 1999. However, a reevaluation of OH 62 as having a more humanlike physiology, if correct, would cast doubt on this.
The discovery of the 1.8 Ma Georgian Dmanisi skulls in the early 2000s, which exhibit several similarities with early Homo, has led to suggestions that all contemporary groups of early Homo in Africa, including H. habilis and H. rudolfensis, are the same species and should be assigned to H. erectus. Classification There is still no wide consensus as to whether or not H. habilis is ancestral to H. ergaster / H. erectus or is an offshoot of the human line, and whether or not all specimens assigned to H. habilis are correctly assigned or the species is an assemblage of different Australopithecus and Homo species. Studies of the dental morphology of H. habilis have suggested that it shares greater similarity with Australopithecus than with later Homo species. Nonetheless, H. habilis and H. rudolfensis generally are recognised members of the genus at the base of the family tree, with arguments for synonymisation or removal from the genus not widely adopted. Though it is now largely agreed upon that Homo evolved from Australopithecus, the timing and placement of this split has been much debated, with many Australopithecus species having been proposed as the ancestor. The discovery of LD 350-1, the oldest Homo specimen, dating to 2.8 mya, in the Afar Region of Ethiopia may indicate that the genus evolved from A. afarensis around this time. This specimen was initially classified as Homo sp., though subsequent studies have suggested that it also shares characteristics with Australopithecus and that it is clearly distinct from H. habilis. The oldest H. habilis specimen, A.L. 666-1, dates to 2.3 mya, but is anatomically more derived (has less ancestral, or basal, traits) than the younger OH 7, suggesting derived and basal morphs lived concurrently, and that the H. habilis lineage began before 2.3 mya. Based on 2.1-million-year-old stone tools from Shangchen, China, H. habilis or an ancestral species may have dispersed across Asia. The youngest H. habilis specimen, OH 13, dates to about 1.65 mya. Anatomy Skull It has generally been thought that brain size increased along the human line especially rapidly at the transition between species, with H. habilis brain size smaller than that of H. ergaster / H. erectus, jumping from about in H. habilis to about in H. ergaster and H. erectus. However, a 2015 study showed that the brain sizes of H. habilis, H. rudolfensis, and H. ergaster generally ranged between after reappraising the brain volume of OH 7 from to . This does, nonetheless, indicate a jump from australopithecine brain size which generally ranged from . The brain anatomy of all Homo features an expanded cerebrum in comparison to australopithecines. The pattern of striations on the teeth of OH 65 slant right, which may have been accidentally self-inflicted when the individual was pulling a piece of meat with its teeth and the left hand while trying to cut it with a stone tool using the right hand. If correct, this could indicate right handedness, and handedness is associated with major reorganisation of the brain and the lateralisation of brain function between the left and right hemispheres. This scenario has also been hypothesised for some Neanderthal specimens. Lateralisation could be implicated in tool use. In modern humans, lateralisation is weakly associated with language. The tooth rows of H. habilis were V-shaped as opposed to U-shaped in later Homo, and the mouth jutted outwards (was prognathic), though the face was flat from the nose up. 
Build Based on the fragmentary skeletons OH 62 (presumed female) and KNM-ER 3735 (presumed male), H. habilis body anatomy has generally been considered to have been more apelike than even that of the earlier A. afarensis and consistent with an at least partially arboreal lifestyle in the trees as is assumed in australopithecines. Based on OH 62 and assuming comparable body dimensions to australopithecines, H. habilis has generally been interpreted as having been small-bodied like australopithecines, with OH 62 generally estimated at in height and in weight. However, assuming longer, modern humanlike legs, OH 62 would have been about and , and KNM-ER 3735 about the same size. For comparison, modern human men and women in the year 1900 averaged and , respectively. It is generally assumed that pre-H. ergaster hominins, including H. habilis, exhibited notable sexual dimorphism with males markedly bigger than females. However, relative female body mass is unknown in this species. Early hominins, including H. habilis, are thought to have had thick body hair coverage like modern non-human apes because they appear to have inhabited colder regions and are thought to have had a less active lifestyle than (presumed hairless) post-ergaster species. Consequently, they probably required thick body hair to stay warm. Based on dental development rates, H. habilis is assumed to have had an accelerated growth rate compared to modern humans, more like that of modern non-human apes. Limbs The arms of H. habilis and australopithecines have generally been considered to have been proportionally long and so adapted for climbing and swinging. In 2004, anthropologists Martin Haeusler and Henry McHenry argued that, because the humerus to femur ratio of OH 62 is within the range of variation for modern humans, and KNM-ER 3735 is close to the modern human average, it is unsafe to assume apelike proportions. Nonetheless, the humerus of OH 62 measured long and the ulna (forearm) , which is closer to the proportion seen in chimpanzees. The hand bones of OH 7 suggest precision gripping, important in dexterity, as well as adaptations for climbing. In regard to the femur, traditionally comparisons with the A. afarensis specimen AL 288-1 have been used to reconstruct stout legs for H. habilis, but Haeusler and McHenry suggested the more gracile OH 24 femur (either belonging to H. ergaster / H. erectus or P. boisei) may be a more apt comparison. In this instance, H. habilis would have had longer, humanlike legs and have been effective long-distance travellers as is assumed to have been the case in H. ergaster. However, estimating the unpreserved length of a fossil is highly problematic. The thickness of the limb bones in OH 62 is more similar to chimpanzees than H. ergaster / H. erectus and modern humans, which may indicate different load bearing capabilities more suitable for arboreality in H. habilis. The strong fibula of OH 35 (though this may belong to P. boisei) is more like that of non-human apes, and consistent with arboreality and vertical climbing. OH 8, a foot, is better suited for terrestrial movement than the foot of A. afarensis, though it still retains many apelike features consistent with climbing. However, the foot has projected toe bone and compacted mid-foot joint structures, which restrict rotation between the foot and ankle as well as at the front foot. 
Foot stability enhances the efficiency of force transfer between the leg and the foot and vice versa, and is implicated in the plantar arch elastic spring mechanism which generates energy while running (but not walking). This could possibly indicate H. habilis was capable of some degree of endurance running, which is typically thought to have evolved later in H. ergaster / H. erectus. Culture Society Typically, H. ergaster / H. erectus is considered to have been the first human to have lived in a monogamous society, and all preceding hominins were polygynous. However, it is highly difficult to speculate with any confidence on the group dynamics of early hominins. The degree of sexual dimorphism (the size disparity between males and females) is often used to infer the mating system, with high disparity correlating with polygyny and low disparity with monogamy, based on general trends (though not without exceptions) seen in modern primates. Rates of sexual dimorphism are difficult to determine as early hominin anatomy is poorly represented in the fossil record. In some cases, sex is arbitrarily determined in large part based on perceived size and apparent robustness in the absence of more reliable elements for sex identification (namely the pelvis). Mating systems are also inferred from dental anatomy, but early hominins possess a mosaic anatomy of different traits not seen together in modern primates; the enlarged cheek teeth would suggest marked size-related dimorphism and thus intense male–male conflict over mates and a polygynous society, but the small canines should indicate the opposite. Other selective pressures, including diet, can also dramatically impact dental anatomy. The spatial distribution of tools and processed animal bones at the FLK Zinj and PTK sites in Olduvai Gorge indicates the inhabitants used this area as a communal butchering and eating ground, as opposed to the nuclear family system of modern hunter-gatherers, where the group is subdivided into smaller units each with their own butchering and eating grounds. The behaviour of early Homo, including H. habilis, is sometimes modelled on that of savanna chimps and baboons. These communities consist of several males (as opposed to a harem society) in order to defend the group in the dangerous and exposed habitat, sometimes engaging in group displays of throwing sticks and stones against enemies and predators. The left foot OH 8 seems to have been bitten off by a crocodile, possibly Crocodylus anthropophagus, and the leg OH 35, which either belongs to P. boisei or H. habilis, shows evidence of leopard predation. H. habilis and contemporary hominins were likely predated upon by other large carnivores of the time, such as (in Olduvai Gorge) the hunting hyena Chasmaporthetes nitidula, and the saber-toothed cats Dinofelis and Megantereon. In 1993, American palaeoanthropologist Leslie C. Aiello and British evolutionary psychologist Robin Dunbar estimated that H. habilis group size ranged from 70 to 85 members—on the upper end of chimp and baboon group size—based on trends seen in neocortex size and group size in modern non-human primates. H. habilis coexisted with H. rudolfensis, H. ergaster / H. erectus, and P. boisei. It is unclear how all of these species interacted. To explain why P. boisei was associated with Oldowan tools despite not being the knapper (the one who made the tools), Leakey and colleagues, when describing H. habilis, suggested that one possibility was that P. boisei had been killed by H. habilis, perhaps for food. However, when describing P. 
boisei five years earlier, Louis Leakey said, "There is no reason whatever, in this case, to believe that the skull represents the victim of a cannibalistic feast by some hypothetical more advanced type of man." Diet It is thought H. habilis derived meat from scavenging rather than hunting (scavenger hypothesis), acting as a confrontational scavenger and stealing kills from smaller predators such as jackals or cheetahs. Fruit was likely also an important dietary component, indicated by dental erosion consistent with repetitive exposure to acidity. Based on dental microwear-texture analysis, H. habilis (like other early Homo) likely did not regularly consume tough foods. Microwear-texture complexity is, on average, somewhere between that of tough-food eaters and leaf eaters (folivores), and points to an increasingly generalised and omnivorous diet. Freshwater fish likely were also consumed, as evidenced by finds of fish remains at archaeological sites most likely associated with H. habilis. It is typically thought that the diets of H. habilis and other early Homo had a greater proportion of meat than those of Australopithecus, and that this led to brain growth. The main hypotheses regarding this are that meat is energy- and nutrient-rich and placed evolutionary pressure on the development of enhanced cognitive skills to facilitate strategic scavenging and monopolise fresh carcasses, or that meat allowed the large and calorie-expensive ape gut to decrease in size, allowing this energy to be diverted to brain growth. Alternatively, it is also suggested that early Homo, in a drying climate with scarcer food options, relied primarily on underground storage organs (such as tubers) and food sharing, which facilitated social bonding among both male and female group members. However, unlike what is presumed for H. ergaster and later Homo, short-statured early Homo are generally considered to have been incapable of endurance running and hunting, and the long and Australopithecus-like forearm of H. habilis could indicate early Homo were still arboreal to a degree. Also, organised hunting and gathering is thought to have emerged in H. ergaster. Nonetheless, the proposed food-gathering models to explain large brain growth necessitate increased daily travel distance. It has also been argued that H. habilis instead had long, modern humanlike legs and was fully capable of effective long-distance travel, while still remaining at least partially arboreal. Large incisor size in H. habilis compared to Australopithecus predecessors implies this species relied on incisors more. The bodies of the mandibles of H. habilis and other early Homo are thicker than those of modern humans and all living apes, more comparable to Australopithecus. The mandibular body resists torsion from the bite force or chewing, meaning their jaws could produce unusually powerful stresses while eating. The greater molar cusp relief in H. habilis compared to Australopithecus suggests the former used tools to fracture tough foods (such as pliable plant parts or meat), otherwise the cusps would have been more worn down. Nonetheless, the jaw adaptations for processing mechanically challenging food indicate that technological advancement did not greatly affect diet. Technology H. habilis is associated with the Early Stone Age Oldowan stone tool industry. Individuals likely used these tools primarily to butcher and skin animals and crush bones, but also sometimes to saw and scrape wood and cut soft plants. 
Knappers – individuals shaping stones – appear to have carefully selected lithic cores and knew that certain rocks would break in a specific way when struck hard enough and on the right spot, and they produced several different tool types, including choppers, polyhedrons, and discoids. Nonetheless, specific shapes were likely not thought of in advance and probably stem from a lack of standardisation in producing such tools, as well as from the types of raw materials at the knappers' disposal. For example, spheroids are common at Olduvai, which features an abundance of large and soft quartz and quartzite pieces, whereas Koobi Fora lacks spheroids and provides predominantly hard basalt lava rocks. Unlike the later Acheulean culture invented by H. ergaster / H. erectus, Oldowan technology does not require planning and foresight to manufacture, and thus does not indicate high cognition in Oldowan knappers, though it does require a degree of coordination and some knowledge of mechanics. Oldowan tools infrequently exhibit retouching and were probably discarded immediately after use most of the time. The Oldowan was first reported in 1934, but it was not until the 1960s that it became widely accepted as the earliest culture, dating to 1.8 mya, and as having been manufactured by H. habilis. Since then, more discoveries have pushed the origins of material culture substantially further back in time, with Oldowan tools discovered at Ledi-Geraru and Gona in Ethiopia dating to 2.6 mya, perhaps associated with the evolution of the genus. Australopithecines are also known to have manufactured tools, such as the 3.3 Ma Lomekwi stone tool industry, and there is some evidence of butchering from about 3.4 mya. Nonetheless, the comparatively sharp-edged Oldowan culture was a major innovation over australopithecine technology, and it would have allowed different feeding strategies and the ability to process a wider range of foods, which would have been advantageous in the changing climate of the time. It is unclear if the Oldowan was independently invented or if it was the result of hominin experimentation with rocks over hundreds of thousands of years across multiple species. In 1962, a circle made with volcanic rocks was discovered in Olduvai Gorge. At intervals, rocks were piled up to high. Mary Leakey suggested the rock piles were used to support poles stuck into the ground, possibly to support a windbreak or a rough hut. Some modern-day nomadic tribes build similar low-lying rock walls to build temporary shelters upon, bending upright branches as poles and using grasses or animal hide as a screen. Dating to 1.75 mya, it is attributed to some early Homo, and is the oldest claimed evidence of architecture.
Biology and health sciences
Evolution
null
14352
https://en.wikipedia.org/wiki/Hops
Hops
Hops are the flowers (also called seed cones or strobiles) of the hop plant Humulus lupulus, a member of the Cannabaceae family of flowering plants. They are used primarily as a bittering, flavouring, and stability agent in beer, to which, in addition to bitterness, they impart floral, fruity, or citrus flavours and aromas. Hops are also used for various purposes in other beverages and herbal medicine. The hop plant has separate female and male individuals, and only female plants are used for commercial production. The hop plant is a vigorous climbing herbaceous perennial, usually trained to grow up strings in a field called a hopfield, hop garden (in the South of England), or hop yard (in the West Country and United States) when grown commercially. Many different varieties of hops are grown by farmers around the world, with different types used for particular styles of beer. The first documented use of hops in beer is from the 9th century, though Hildegard of Bingen, 300 years later, is often cited as the earliest documented source. Before this period, brewers used a "gruit", composed of a wide variety of bitter herbs and flowers, including dandelion, burdock root, marigold, horehound (the old German name for horehound, , means "mountain hops"), ground ivy, and heather. Early documents include mention of a hop garden in the will of Charlemagne's father, Pepin the Short. Hops are also used in brewing for their antibacterial effect over less desirable microorganisms and for purported benefits including balancing the sweetness of the malt with bitterness and a variety of flavours and aromas. It is believed that traditional herb combinations for beers were abandoned after it was noticed that beers made with hops were less prone to spoilage. History The first documented hop cultivation was in 736, in the Hallertau region of present-day Germany. In 768, hop gardens were left to the Cloister of Saint-Denis in a will of Pepin the Short, the father of Charlemagne. The first mention of hops being used in German brewing was in 1079. Not until the 13th century did hops begin to threaten the use of gruit for flavouring. Gruit was used when the nobility levied taxes on hops; whichever was taxed, brewers quickly switched to the other. In Britain, hopped beer was first imported from Holland around 1400, yet hops were condemned as late as 1519 as a "wicked and pernicious weed". In Germany, using hops was also a religious and political choice in the early 16th century. There was no tax on hops to be paid to the Catholic church, unlike on gruit. For this reason, the Protestants preferred hopped beer. Hops used in England were imported from France, Holland and Germany and were subject to import duty; it was not until 1524 that hops were first grown in the southeast of England (Kent), when they were introduced as an agricultural crop by Dutch farmers. Consequently, many words used in the hop industry derive from the Dutch language. Hops were then grown as far north as Aberdeen, near breweries for convenience of infrastructure. According to Thomas Tusser's 1557 Five Hundred Points of Good Husbandry: The hop for his profit I thus do exalt, It strengtheneth drink and it flavoureth malt; And being well-brewed long kept it will last, And drawing abide, if ye draw not too fast. In England there were many complaints over the quality of imported hops, the sacks of which were often contaminated by stalks, sand or straw to increase their weight. 
As a result, in 1603, King James I approved an Act of Parliament banning the practice by which "the Subjects of this Realm have been of late years abused &c. to the Value of £20,000 yearly, besides the Danger of their Healths". Hop cultivation was begun in the present-day United States in 1629 by English and Dutch farmers. Before Prohibition, cultivation was mainly centred around New York, California, Oregon, and Washington state. Problems with powdery mildew and downy mildew devastated New York's production by the 1920s, and California only produces hops on a small scale. World production Hops production is concentrated in moist temperate climates, with much of the world's production occurring near the 48th parallel north. Hop plants prefer the same soils as potatoes, and the leading potato-growing states in the United States are also major hops-producing areas. Not all potato-growing areas can produce good hops naturally, however: for example, soils in the Maritime Provinces of Canada lack the boron that hops prefer. Historically, hops were not grown in Ireland, but were imported from England. In 1752 more than 500 tons of English hops were imported through Dublin alone. Important production centres today are the Hallertau in Germany, the Žatec (Saaz) region in the Czech Republic, the Yakima (Washington) and Willamette (Oregon) valleys, and western Canyon County, Idaho (including the communities of Parma, Wilder, Greenleaf, and Notus). The principal production centres in the UK are in Kent (which produces Kent Goldings hops), Herefordshire, and Worcestershire. Essentially all of the harvested hops are used in beer making. Cultivation and harvest Although hops are grown in most of the continental United States and Canada, cultivation of hops for commercial production requires a particular environment. As hops are a climbing plant, they are trained to grow up trellises made from strings or wires that support the plants and allow them significantly greater growth with the same sunlight profile. In this way, energy that would have been required to build structural cells is also freed for crop growth. Male and female flowers of the hop plant usually develop on separate plants, although occasionally a fertile individual will develop that contains both male and female flowers. Because pollinated seeds are undesirable for brewing beer, only female plants are grown in hop fields, thus preventing pollination. Female plants are propagated vegetatively, and male plants are culled if plants are grown from seeds. Hop plants are planted in rows about apart. Each spring, the roots send forth new bines that are started up strings from the ground to an overhead trellis. The cones grow high on the bine, and in the past, these cones were picked by hand. Harvesting of hops became much more efficient with the invention of the mechanical hops separator, patented by Emil Clemens Horst in 1909. Hops are harvested at the end of summer. The bines are cut down, separated, and then dried in an oast house to reduce moisture content. To be dried, the hops are spread out on the upper floor of the oast house and heated by heating units on the lower floor. The dried hops are then compressed into bales by a baler. Hop cones contain different oils, such as lupulin, a yellowish, waxy oleoresin that imparts flavour and aroma to beer. Lupulin contains lupulone and humulone, which possess antibiotic properties, suppressing bacterial growth and favouring the growth of brewer's yeast. 
After lupulin has been extracted in the brewing process, the papery cones are discarded. Migrant labor and social impact The need for massed labor at harvest time meant hop-growing had a big social impact. Around the world, the labor-intensive harvesting work involved large numbers of migrant workers who would travel for the annual hop harvest. Whole families would participate and live in hoppers' huts, with even the smallest children helping in the fields. The final chapters of W. Somerset Maugham's Of Human Bondage and a large part of George Orwell's A Clergyman's Daughter contain a vivid description of London families participating in this annual hops harvest. In England, many of those picking hops in Kent were from eastern areas of London. This provided a break from urban conditions, spent in the countryside. People also came from Birmingham and other Midlands cities to pick hops in the Malvern area of Worcestershire. Some photographs have been preserved. The often-appalling living conditions endured by hop pickers during the harvest became a matter of scandal across Kent and other hop-growing counties. Eventually, the Rev. John Young Stratton, Rector of Ditton, Kent, began to gather support for reform, resulting in 1866 in the formation of the Society for the Employment and Improved Lodging of Hop Pickers. The hop-pickers were given very basic accommodation, with very poor sanitation. This led to contaminated water and the spread of infectious diseases. The 1897 Maidstone typhoid epidemic was partly a result of hop-pickers camping near the Farleigh Springs, which supplied Maidstone with water. Particularly in Kent, because of a shortage of small-denomination coin of the realm, many growers issued their own currency to those doing the labor. In some cases, the coins issued were adorned with fanciful hops images, making them quite beautiful. In the United States, Prohibition had a serious adverse effect on hops production, but remnants of this significant industry in the western states are still noticeable in the form of old hop kilns that survive throughout Sonoma County, California, among others. Florian Dauenhauer, of Santa Rosa in Sonoma County, became a manufacturer of hop-harvesting machines in 1940, in part because of the hop industry's importance to the county. This mechanization helped destroy the local industry by enabling large-scale mechanized production, which moved to larger farms in other areas. Dauenhauer Manufacturing Company remains a current producer of hop harvesting machines. Chemical composition In addition to water, cellulose, and various proteins, the chemical composition of hops consists of compounds important for imparting character to beer. Alpha acids Probably the most important chemical compounds within hops are the alpha acids, or humulones. During wort boiling, the humulones are thermally isomerized into iso-alpha acids or isohumulones, which are responsible for the bitter taste of beer. Beta acids Hops contain beta acids or lupulones. These are desirable for their aroma contributions to beer. Essential oils The main components of hops essential oils are terpene hydrocarbons consisting of myrcene, humulene and caryophyllene. Myrcene is responsible for the pungent smell of fresh hops. Humulene and its oxidative reaction products may give beer its prominent hop aroma. Together, myrcene, humulene, and caryophyllene represent 80 to 90% of the total hops essential oil. Flavonoids Xanthohumol is the principal flavonoid in hops. 
The other well-studied prenylflavonoids are 8-prenylnaringenin and isoxanthohumol. Xanthohumol is under basic research for its potential properties, while 8-prenylnaringenin is a potent phytoestrogen. Brewing Hops are usually dried in an oast house before they are used in the brewing process. Undried or "wet" hops have sometimes been used (since c. 1990). The wort (sugar-rich liquid produced from malt) is boiled with hops before it is cooled down and yeast is added, to start fermentation. The effect of hops on the finished beer varies by type and use, though there are two main hop types: bittering and aroma. Bittering hops have higher concentrations of alpha acids, and are responsible for the large majority of the bitter flavour of a beer. European (so-called "noble") hops typically average 5–9% alpha acids by weight (AABW), and the newer American cultivars typically range from 8–19% AABW. Aroma hops usually have a lower concentration of alpha acids (~5%) and are the primary contributors of hop aroma and (nonbitter) flavour. Bittering hops are boiled for a longer period of time, typically 60–90 minutes, and often have inferior aromatic properties, as the aromatic compounds evaporate during the boil. The degree of bitterness imparted by hops depends on the degree to which alpha acids are isomerized during the boil, and the impact of a given amount of hops is specified in International Bitterness Units. On the other hand, unboiled hops are only mildly bitter. Aroma hops are typically added to the wort later to prevent the evaporation of the essential oils, to impart "hop taste" (if during the final 30 minutes of boil) or "hop aroma" (if during the final 10 minutes, or less, of boil). Aroma hops are often added after the wort has cooled and while the beer ferments, a technique known as "dry hopping", which contributes to the hop aroma. Farnesene is a major component in some hops. The composition of hop essential oils can differ between varieties and between years in the same variety, having a significant influence on flavour and aroma. Today, a substantial amount of "dual-use" hops are used, as well. These have high concentrations of alpha acids and good aromatic properties. These can be added to the boil at any time, depending on the desired effect. Hop acids also contribute to and stabilize the foam qualities of beer. Flavours and aromas are described appreciatively using terms which include "grassy", "floral", "citrus", "spicy", "piney", "lemony", "grapefruit", and "earthy". Many pale lagers have fairly low hop influence, while lagers marketed as Pilsener or brewed in the Czech Republic may have noticeable noble hop aroma. Certain ales (particularly the highly hopped style known as India Pale Ale, or IPA) can have high levels of hop bitterness. Brewers may use software tools to control the bittering levels in the boil and adjust recipes to account for a change in the hop bill or for seasonal variations in the crop that may require compensating for a difference in alpha acid contribution. Data may be shared with other brewers via BeerXML, allowing a recipe to be reproduced while adjusting for differences in hop availability. Lately, dried pellets, pucks, and extracts have been replacing whole hops in brewing because of efficiency and cost. Varieties Breeding programmes There are many different varieties of hops used in brewing today. 
Historically, hops varieties were identified by geography, i.e., from the towns of Hallertau, Spalt, and Tettnang in Germany, or the region writ large like the Neomexicanus hops of New Mexico. Others were named for the farmer who is recognized as first cultivating them, including Goldings or Fuggles from England, or by their growing habit like the Oregon Cluster. Around 1900, a number of institutions began to experiment with breeding specific hop varieties. The breeding program at Wye College in Wye, Kent, was started in 1904 and rose to prominence through the work of Prof. E. S. Salmon. Salmon released Brewer's Gold and Brewer's Favorite for commercial cultivation in 1934, and went on to release more than two dozen new cultivars before his death in 1959. Brewer's Gold has become the ancestor of the bulk of new hop releases around the world since its release. Wye College continued its breeding program and again received attention in the 1970s, when Dr. Ray A. Neve released Wye Target, Wye Challenger, Wye Northdown, Wye Saxon and Wye Yeoman. More recently, Wye College and its successor institution Wye Hops Ltd., have focused on breeding the first dwarf hop varieties, which are easier to pick by machine and far more economical to grow. Wye College have also been responsible for breeding hop varieties that will grow with only 12 hours of daily light for the South African hop farmers. Wye College was closed in 2009 but the legacy of their hop breeding programs, particularly that of the dwarf varieties, is continuing as already the US private and public breeding programs are using their stock material. Particular hop varieties are associated with beer regions and styles, for example pale lagers are usually brewed with European (often German, Polish or Czech) noble hop varieties such as Saaz, Hallertau and Strissel Spalt. British ales use hop varieties such as Fuggles, Goldings and W.G.V. North American beers often use Cascade hops, Columbus hops, Centennial hops, Willamette, Amarillo hops and about forty more varieties as the US have lately been the more significant breeders of new hop varieties, including dwarf hop varieties. Hops from New Zealand, such as Pacific Gem, Motueka and Nelson Sauvin, are used in a "Pacific Pale Ale" style of beer with increasing production in 2014. Noble hops The term "noble hops" is a marketing term that traditionally refers to certain varieties of hops that became known for being low in bitterness and high in aroma. They are the European cultivars or races Hallertau, Tettnanger, Spalt, and Saaz. Some proponents assert that the English varieties Fuggle, East Kent Goldings and Goldings might qualify as "noble hops" due to the similar composition, but such terms are not applied to English varieties. Their low relative bitterness, but strong aroma, are often distinguishing characteristics of European-style lagers, such as Pilsener, Dunkel, and Oktoberfest/Märzen. In beer, they are considered aroma hops (as opposed to bittering hops); see Pilsner Urquell as a classic example of the Bohemian Pilsener style, which showcases noble hops. As with grapes, the location where hops are grown affects the hops' characteristics. Much as Dortmunder beer may within the EU be labelled "Dortmunder" only if it has been brewed in Dortmund, noble hops may officially be considered "noble" only if they were grown in the areas for which the hop varieties (races) were named. 
Hallertau or Hallertauer – The original German lager hop; named after Hallertau or Holledau region in central Bavaria. Due to susceptibility to crop disease, it was largely replaced by Hersbrucker in the 1970s and 1980s. (Alpha acid 3.5–5.5% / beta acid 3–4%) Spalt – Traditional German noble hop from the Spalter region south of Nuremberg. With a delicate, spicy aroma. (Alpha acid 4–5% / beta acid 4–5%) Tettnang – Comes from Tettnang, a small town in southern Baden-Württemberg in Germany. The region produces significant quantities of hops, and ships them to breweries throughout the world. Noble German dual-use hop used in European pale lagers, sometimes with Hallertau. Soft bitterness. (Alpha acid 3.5–5.5% / beta acid 3.5–5.5%) Žatec (Saaz) – Noble hop, named after Žatec town, used extensively in Bohemia to flavour pale Czech lagers such as Pilsner Urquell. Soft aroma and bitterness. (Alpha acid 3–4.5% /Beta acid 3–4.5%) Noble hops are characterized through analysis as having an aroma quality resulting from numerous factors in the essential oil, such as an alpha:beta ratio of 1:1, low alpha-acid levels (2–5%) with a low cohumulone content, low myrcene in the hop oil, high humulene in the oil, a ratio of humulene:caryophyllene above three, and poor storability resulting in them being more prone to oxidation. In reality, this means they have a relatively consistent bittering potential as they age, due to beta-acid oxidation, and a flavor that improves as they age during periods of poor storage. Other uses Hops are used in herbal teas, as well as soft drinks including julmust (a carbonated beverage similar to soda that is popular in Sweden during December), Malta (a Latin American soft drink), kvass and hop water. Additionally, both the young shoots and young flowers are edible and can be cooked like asparagus. Hops may be used in herbal medicine in a way similar to valerian, as a treatment for anxiety, restlessness, and insomnia. A pillow filled with hops is a popular folk remedy for sleeplessness, and animal research has shown a sedative effect. The relaxing effect of hops may be due, in part, to the specific degradation product from alpha acids, 2-methyl-3-buten-2-ol, as demonstrated from nighttime consumption of non-alcoholic beer. 2-methyl-3-buten-2-ol is structurally similar to tert-amyl alcohol which was historically used as an anesthetic. Hops tend to be unstable when exposed to light or air and lose their potency after a few months' storage. Hops are of interest for hormone replacement therapy and are under basic research for potential relief of menstruation-related problems. Toxicity Dermatitis sometimes results from harvesting hops. Although few cases require medical treatment, an estimated 3% of the workers suffer some type of skin lesions on the face, hands, and legs. Hops are toxic to dogs. Fiction Hops and hops picking form the milieu and atmosphere in the British detective novel, Death in the Hopfields (1937) by John Rhode. The novel was subsequently issued in the United States under the title The Harvest Murder. Hop-farming also provides the setting for the 1957 novel September Moon by the English novelist John Moore, set in Herefordshire.
Biology and health sciences
Rosales
Plants
14359
https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel%20principle
Huygens–Fresnel principle
The Huygens–Fresnel principle (named after Dutch physicist Christiaan Huygens and French physicist Augustin-Jean Fresnel) states that every point on a wavefront is itself the source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere. The sum of these spherical wavelets forms a new wavefront. As such, the Huygens–Fresnel principle is a method of analysis applied to problems of luminous wave propagation both in the far-field limit and in near-field diffraction as well as reflection. History In 1678, Huygens proposed that every point reached by a luminous disturbance becomes a source of a spherical wave. The sum of these secondary waves determines the form of the wave at any subsequent time; the overall procedure is referred to as Huygens' construction. He assumed that the secondary waves travelled only in the "forward" direction, and it is not explained in the theory why this is the case. He was able to provide a qualitative explanation of linear and spherical wave propagation, and to derive the laws of reflection and refraction using this principle, but could not explain the deviations from rectilinear propagation that occur when light encounters edges, apertures and screens, commonly known as diffraction effects. In 1818, Fresnel showed that Huygens's principle, together with his own principle of interference, could explain both the rectilinear propagation of light and also diffraction effects. To obtain agreement with experimental results, he had to include additional arbitrary assumptions about the phase and amplitude of the secondary waves, and also an obliquity factor. These assumptions have no obvious physical foundation, but led to predictions that agreed with many experimental observations, including the Poisson spot. Poisson was a member of the French Academy, which reviewed Fresnel's work. He used Fresnel's theory to predict that a bright spot ought to appear in the center of the shadow of a small disc, and deduced from this that the theory was incorrect. However, Arago, another member of the committee, performed the experiment and showed that the prediction was correct. This success was important evidence in favor of the wave theory of light over the then-predominant corpuscular theory. In 1882, Gustav Kirchhoff analyzed Fresnel's theory in a rigorous mathematical formulation, as an approximate form of an integral theorem. Very few rigorous solutions to diffraction problems are known, however, and most problems in optics are adequately treated using the Huygens–Fresnel principle. In 1939, Edward Copson extended Huygens' original principle to consider the polarization of light, which requires a vector potential, in contrast to the scalar potential of a simple ocean wave or sound wave. In antenna theory and engineering, the reformulation of the Huygens–Fresnel principle for radiating current sources is known as the surface equivalence principle. Issues in Huygens–Fresnel theory continue to be of interest. In 1991, David A. B. Miller suggested that treating the source as a dipole (not the monopole assumed by Huygens) would cancel waves propagating in the reverse direction, making Huygens' construction quantitatively correct. In 2021, Forrest L. Anderson showed that treating the wavelets as Dirac delta functions, then summing them and differentiating the summation, is sufficient to cancel reverse-propagating waves. 
Examples Refraction The apparent change in direction of a light ray as it enters a sheet of glass at an angle can be understood by the Huygens construction. Each point on the surface of the glass gives a secondary wavelet. These wavelets propagate at a slower velocity in the glass, making less forward progress than their counterparts in air. When the wavelets are summed, the resulting wavefront propagates at an angle to the direction of the wavefront in air. In an inhomogeneous medium with a variable index of refraction, different parts of the wavefront propagate at different speeds. Consequently, the wavefront bends in the direction of higher index. Diffraction Huygens' principle as a microscopic model The Huygens–Fresnel principle provides a reasonable basis for understanding and predicting the classical wave propagation of light. However, there are limitations to the principle, namely the same approximations made in deriving Kirchhoff's diffraction formula and the near-field approximations due to Fresnel. These can be summarized in the fact that the wavelength of light is much smaller than the dimensions of any optical components encountered. Kirchhoff's diffraction formula provides a rigorous mathematical foundation for diffraction, based on the wave equation. The arbitrary assumptions made by Fresnel to arrive at the Huygens–Fresnel equation emerge automatically from the mathematics in this derivation. A simple example of the operation of the principle can be seen when an open doorway connects two rooms and a sound is produced in a remote corner of one of them. A person in the other room will hear the sound as if it originated at the doorway. As far as the second room is concerned, the vibrating air in the doorway is the source of the sound. Mathematical expression of the principle Consider the case of a point source located at a point P0, vibrating at a frequency f. The disturbance may be described by a complex variable U0 known as the complex amplitude. It produces a spherical wave with wavelength λ and wavenumber k = 2π/λ. Within a constant of proportionality, the complex amplitude of the primary wave at the point Q located at a distance r0 from P0 is U(r_0) = U_0 e^{ikr_0}/r_0. Note that the magnitude decreases in inverse proportion to the distance traveled, and the phase changes as k times the distance traveled. Using Huygens's theory and the principle of superposition of waves, the complex amplitude at a further point P is found by summing the contribution from each point on the sphere of radius r0. In order to get an agreement with experimental results, Fresnel found that the individual contributions from the secondary waves on the sphere had to be multiplied by a constant, −i/λ, and by an additional inclination factor, K(χ). The first assumption means that the secondary waves oscillate at a quarter of a cycle out of phase with respect to the primary wave and that the magnitude of the secondary waves is in a ratio of 1:λ to the primary wave. He also assumed that K(χ) had a maximum value when χ = 0, and was equal to zero when χ = π/2, where χ is the angle between the normal of the primary wavefront and the normal of the secondary wavefront. The complex amplitude at P, due to the contribution of secondary waves, is then given by U(P) = −(i/λ) U(r_0) ∫_S (e^{iks}/s) K(χ) dS, where S describes the surface of the sphere and s is the distance between Q and P. Fresnel used a zone construction method to find approximate values of K for the different zones, which enabled him to make predictions that were in agreement with experimental results. 
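As an illustration of how this summation over secondary wavelets can be carried out in practice, the short Python sketch below numerically superposes wavelets of the form e^{iks}/s emitted from points across a single narrow slit and evaluates the resulting intensity on a distant screen. The sketch is not taken from this article: the wavelength, slit width, and screen geometry are arbitrary illustrative values, and the constant −i/λ and the inclination factor K(χ) are omitted because, at the small angles involved, they only rescale the overall intensity.

import numpy as np

# Illustrative parameters (assumed values, not from the article)
wavelength = 500e-9                 # 500 nm light
k = 2 * np.pi / wavelength          # wavenumber k = 2*pi/lambda
slit_width = 50e-6                  # 50 micrometre slit
screen_distance = 1.0               # screen 1 m beyond the slit

# Treat every point across the slit as a Huygens secondary source
sources = np.linspace(-slit_width / 2, slit_width / 2, 2000)

# Observation points on the screen
screen_x = np.linspace(-0.05, 0.05, 1001)

intensity = np.empty_like(screen_x)
for i, x in enumerate(screen_x):
    # Distance s from each secondary source to this screen point
    s = np.sqrt(screen_distance**2 + (x - sources) ** 2)
    # Superpose spherical wavelets exp(iks)/s (obliquity factor omitted)
    amplitude = np.sum(np.exp(1j * k * s) / s)
    intensity[i] = np.abs(amplitude) ** 2

intensity /= intensity.max()        # normalise to the central maximum

With these assumed values the computed minima fall near x = mλL/a (m = ±1, ±2, ...), where a is the slit width and L the screen distance, which is the familiar single-slit diffraction result and a simple check that the wavelet summation behaves as the principle predicts.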
The integral theorem of Kirchhoff includes the basic idea of the Huygens–Fresnel principle. Kirchhoff showed that in many cases, the theorem can be approximated to a simpler form that is equivalent to Fresnel's formulation. For an aperture illumination consisting of a single expanding spherical wave, if the radius of the curvature of the wave is sufficiently large, Kirchhoff gave the expression K(χ) = (1/2)(1 + cos χ) for the inclination factor. K has a maximum value at χ = 0 as in the Huygens–Fresnel principle; however, K is not equal to zero at χ = π/2, but at χ = π. The above derivation of K(χ) assumed that the diffracting aperture is illuminated by a single spherical wave with a sufficiently large radius of curvature. However, the principle holds for more general illuminations. An arbitrary illumination can be decomposed into a collection of point sources, and the linearity of the wave equation can be invoked to apply the principle to each point source individually. K(χ) can then be expressed in a more general form; in this case, K satisfies the conditions stated above (maximum value at χ = 0 and zero at χ = π/2). Generalized Huygens' principle Many books and references, e.g. (Greiner, 2002) and (Enders, 2009), refer to the Generalized Huygens' Principle using the definition in (Feynman, 1948). Feynman defines the generalized principle in the following way: This clarifies the fact that in this context the generalized principle reflects the linearity of quantum mechanics and the fact that the quantum-mechanical equations are first order in time. Only in this case does the superposition principle fully apply, i.e. the wave function at a point P can be expanded as a superposition of waves on a border surface enclosing P. Wave functions can be interpreted in the usual quantum mechanical sense as probability densities where the formalism of Green's functions and propagators applies. What is noteworthy is that this generalized principle applies to "matter waves" rather than to light waves. The phase factor is now clarified as being given by the action, and there is no longer confusion as to why the phases of the wavelets differ from that of the original wave and are modified by the additional Fresnel parameters. As per Greiner, the generalized principle can be expressed for the wave function ψ in the form of an integral over a Green function G, where G is the usual Green function that propagates the wave function ψ in time. This description resembles and generalizes Fresnel's initial formula of the classical model. Feynman's path integral and the modern photon wave function Huygens' theory served as a fundamental explanation of the wave nature of light interference and was further developed by Fresnel and Young, but did not fully resolve all observations, such as the low-intensity double-slit experiment first performed by G. I. Taylor in 1909. It was not until the quantum theory discussions of the early and mid-1900s, particularly the early discussions at the 1927 Brussels Solvay Conference, that such observations began to be addressed; there, Louis de Broglie proposed his hypothesis that the photon is guided by a wave function. The wave function presents a much different explanation of the observed light and dark bands in a double slit experiment. In this conception, the photon follows a path which is a probabilistic choice of one of many possible paths in the electromagnetic field. These probable paths form the pattern: in dark areas, no photons are landing, and in bright areas, many photons are landing. 
The set of possible photon paths is consistent with Richard Feynman's path integral theory, the paths determined by the surroundings: the photon's originating point (atom), the slit, and the screen and by tracking and summing phases. The wave function is a solution to this geometry. The wave function approach was further supported by additional double-slit experiments in Italy and Japan in the 1970s and 1980s with electrons. Quantum field theory Huygens' principle can be seen as a consequence of the homogeneity of space—space is uniform in all locations. Any disturbance created in a sufficiently small region of homogeneous space (or in a homogeneous medium) propagates from that region in all geodesic directions. The waves produced by this disturbance, in turn, create disturbances in other regions, and so on. The superposition of all the waves results in the observed pattern of wave propagation. Homogeneity of space is fundamental to quantum field theory (QFT) where the wave function of any object propagates along all available unobstructed paths. When integrated along all possible paths, with a phase factor proportional to the action, the interference of the wave-functions correctly predicts observable phenomena. Every point on the wavefront acts as the source of secondary wavelets that spread out in the light cone with the same speed as the wave. The new wavefront is found by constructing the surface tangent to the secondary wavelets. In other spatial dimensions In 1900, Jacques Hadamard observed that Huygens' principle was broken when the number of spatial dimensions is even. From this, he developed a set of conjectures that remain an active topic of research. In particular, it has been discovered that Huygens' principle holds on a large class of homogeneous spaces derived from the Coxeter group (so, for example, the Weyl groups of simple Lie algebras). The traditional statement of Huygens' principle for the D'Alembertian gives rise to the KdV hierarchy; analogously, the Dirac operator gives rise to the AKNS hierarchy.
Physical sciences
Waves
Physics
14376
https://en.wikipedia.org/wiki/Hamster
Hamster
Hamsters are rodents (order Rodentia) belonging to the subfamily Cricetinae, which contains 19 species classified in seven genera. They have become established as popular small pets. The best-known species of hamster is the golden or Syrian hamster (Mesocricetus auratus), which is the type most commonly kept as a pet. Other hamster species commonly kept as pets are the three species of dwarf hamster, Campbell's dwarf hamster (Phodopus campbelli), the winter white dwarf hamster (Phodopus sungorus) and the Roborovski hamster (Phodopus roborovskii). Hamsters feed primarily on seeds, fruits, vegetation, and occasionally burrowing insects. In the wild, they are crepuscular: they forage during the twilight hours. In captivity, however, they are known to live a conventionally nocturnal lifestyle, waking around sundown to feed and exercise.. Physically, they are stout-bodied with distinguishing features that include elongated cheek pouches extending to their shoulders, which they use to carry food back to their burrows, as well as a short tail and fur-covered feet. Classification Taxonomists generally disagree about the most appropriate placement of the subfamily Cricetinae within the superfamily Muroidea. Some place it in a family Cricetidae that also includes voles, lemmings, and New World rats and mice; others group all these into a large family called Muridae. Their evolutionary history is recorded by 15 extinct fossil genera and extends back 11.2 million to 16.4 million years to the Middle Miocene Epoch in Europe and North Africa; in Asia it extends 6 million to 11 million years. Four of the seven living genera include extinct species. One extinct hamster of Cricetus, for example, lived in North Africa during the Middle Miocene, but the only extant member of that genus is the European or common hamster of Eurasia. Subfamily Cricetinae Genus Allocricetulus Species A. curtatus—Mongolian hamster Species A. eversmanni—Eversmann's or Kazakh hamster Genus Cansumys Species C. canus—Gansu hamster Genus Cricetulus Species C. barabensis, including "C. pseudogriseus" and "C. obscurus"—Chinese striped hamster, also called Chinese hamster; striped dwarf hamster Species C. griseus—Chinese (dwarf) hamster, also called rat hamster, sometimes considered a synonym of C. barabensis Species C. longicaudatus—long-tailed dwarf hamster Species C. sokolovi—Sokolov's dwarf hamster Genus Cricetus Species C. cricetus—European hamster, also called common hamster or black-bellied field hamster Genus Mesocricetus—golden hamsters Species M. auratus—golden or Syrian hamster Species M. brandti—Turkish hamster, also called Brandt's hamster; Azerbaijani hamster Species M. newtoni—Romanian hamster Species M. raddei—Ciscaucasian hamster Genus Nothocricetulus - grey dwarf hamster Species N. migratorius—grey dwarf hamster, Armenian hamster, migratory grey hamster; grey hamster; migratory hamster Genus Phodopus—dwarf hamsters Species P. campbelli—Campbell's dwarf hamster Species P. roborovskii—Roborovski hamster Species P. sungorus—Djungarian hamster or winter-white Russian dwarf hamster Genus Tscherskia Species T. triton—greater long-tailed hamster, also called Korean hamster Genus Urocricetus Species U. alticola - Ladakh dwarf hamster Species U. kamensis - Kam dwarf hamster Relationships among hamster species Neumann et al. (2006) conducted a molecular phylogenetic analysis of 12 of the above 17 species using DNA sequence from three genes: 12S rRNA, cytochrome b, and von Willebrand factor. 
They uncovered the following relationships: Phodopus group The genus Phodopus was found to represent the earliest split among hamsters. Their analysis included both species. The results of another study suggest Urocricetus kamensis and the related U. alticola belong to either this Phodopus group or hold a similar basal position. Mesocricetus group The genus Mesocricetus also forms a clade. Their analysis included all four species, with M. auratus and M. raddei forming one subclade and M. brandti and M. newtoni another. Remaining genera The remaining genera of hamsters formed a third major clade. Two of the three sampled species within Cricetulus represent the earliest split. This clade contains C. barabensis (and presumably the related C. sokolovi) and C. longicaudatus. Miscellaneous The remaining clade contains members of Allocricetulus, Tscherskia, Cricetus, and C. migratorius. Allocricetulus and Cricetus were sister taxa. Cricetulus migratorius was their next closest relative, and Tscherskia was basal. History Although the Syrian hamster or golden hamster (Mesocricetus auratus) was first described scientifically by George Robert Waterhouse in 1839, researchers were not able to successfully breed and domesticate hamsters until 1939. The entire laboratory and pet populations of Syrian hamsters appear to be descendants of a single brother–sister pairing. These littermates were captured and imported in 1930 from Aleppo in Syria by Israel Aharoni, a zoologist of the University of Jerusalem. In Jerusalem, the hamsters bred very successfully. Years later, animals of this original breeding colony were exported to the United States, where Syrian hamsters became a common pet and laboratory animal. Comparative studies of domestic and wild Syrian hamsters have shown reduced genetic variability in the domestic strain. However, the differences in behavioral, chronobiological, morphometrical, hematological, and biochemical parameters are relatively small and fall into the expected range of interstrain variations in other laboratory animals. Etymology The name "hamster" is a loanword from the German, which itself derives from earlier Middle High German . It is possibly related to Old Church Slavonic , which is either a blend of the root of Russian () "hamster" and a Baltic word (cf. "hamster"); or of Persian origin (cf. Avestan: "oppressor"). The collective noun for a group of hamsters is "horde". In German, the verb is derived from . It means "to hoard". Description Hamsters are typically stout-bodied, with tails shorter than body length, and have small, furry ears, short, stocky legs, and wide feet. They have thick, silky fur, which can be long or short, colored black, grey, honey, white, brown, yellow, red, or a mix, depending on the species. Two species of hamster belonging to the genus Phodopus, Campbell's dwarf hamster (P. campbelli) and the Djungarian hamster (P. sungorus), and two of the genus Cricetulus, the Chinese striped hamster (C. barabensis) and the Chinese hamster (C. griseus) have a dark stripe down their heads to their tails. The species of genus Phodopus are the smallest, with bodies long; the largest is the European hamster (Cricetus cricetus), measuring up to long, not including a short tail of up to . The hamster tail can be difficult to see, as it is usually not very long (about the length of the body), with the exception of the Chinese hamster, which has a tail the same length as the body. 
One rodent characteristic that can be highly visible in hamsters is their sharp incisors; they have an upper pair and lower pair which grow continuously throughout life, so must be regularly worn down. Hamsters are very flexible, but their bones are somewhat fragile. They are extremely susceptible to rapid temperature changes and drafts, as well as extreme heat or cold. Senses Hamsters have poor eyesight; they are nearsighted and colorblind. Their eyesight leads to them not having a good sense of distance or knowing where they are, but that does not stop them from climbing in (and sometimes out of) their cages or from being adventurous. Hamsters can sense movement around at all times, which helps protect them from harm in the wild. In a household, this sense helps them know when their owner is near. Hamsters have scent glands on their flanks (and abdomens in Chinese and dwarf hamsters) which they rub against the surface beneath them, leaving a scent trail. Hamsters also use their sense of smell to distinguish between the sexes and to locate food. Mother hamsters can also use their sense of smell to find their own babies and identify which ones are not theirs. Their scent glands can also be used to mark their territories, their babies, or their mate. Hamsters catch sounds by having their ears upright. They tend to learn similar noises and begin to know the sound of their food and even their owner's voice. They are also particularly sensitive to high-pitched noises and can hear and communicate in the ultrasonic range. Diet Hamsters are omnivores, which means they can eat meat and plant matter. Hamsters that live in the wild eat seeds, grass, and even insects. Although pet hamsters can survive on a diet of exclusively commercial hamster food, other items, such as vegetables, fruits, seeds, and nuts, can be given. Although store-bought food is good for hamsters, it is best if fruits and vegetables are also in their diet because it keeps them healthier. Hamsters in the Middle East have been known to hunt in packs to find insects for food. Hamsters are hindgut fermenters and often eat their own feces (coprophagy) to recover nutrients digested in the hind-gut, but not absorbed. Behavior Feeding A behavioral characteristic of hamsters is food hoarding. They carry food in their spacious cheek pouches to their underground storage chambers. When full, the cheeks can make their heads double, or even triple in size. Hamsters lose weight during the autumn months in anticipation of winter. This occurs even when hamsters are kept as pets and is related to an increase in exercise. Social behavior Most hamsters are strictly solitary. If housed together, acute and chronic stress might occur, and they might fight fiercely, sometimes fatally. Dwarf hamster species might tolerate siblings or same-gender unrelated hamsters if introduced at an early enough age, but this cannot be guaranteed. Hamsters communicate through body language to one another and even to their owner. They communicate by sending a specific scent using their scent glands and also show body language to express how they are feeling. Chronobiology Hamsters can be described as nocturnal or crepuscular (active mostly at dawn and dusk). Khunen writes, "Hamsters are nocturnal rodents who are active during the night", but others have written that because hamsters live underground during most of the day, only leaving their burrows for about an hour before sundown and then returning when it gets dark, their behavior is primarily crepuscular. 
Fritzsche indicated although some species have been observed to show more nocturnal activity than others, they are all primarily crepuscular. In the wild Syrian hamsters can hibernate and allow their body temperature to fall close to ambient temperature. This kind of thermoregulation diminishes the metabolic rate to about 5% and helps the animal to considerably reduce the need for food during the winter. Hibernation can last up to one week but more commonly last 2–3 days. When kept as house pets the Syrian hamster does not hibernate. Burrowing behavior All hamsters are excellent diggers, constructing burrows with one or more entrances, with galleries connected to chambers for nesting, food storage, and other activities. They use their fore- and hindlegs, as well as their snouts and teeth, for digging. In the wild, the burrow buffers extreme ambient temperatures, offers relatively stable climatic conditions, and protects against predators. Syrian hamsters dig their burrows generally at a depth of . A burrow includes a steep entrance pipe ( in diameter), a nesting and a hoarding chamber and a blind-ending branch for urination. Laboratory hamsters have not lost their ability to dig burrows; in fact, they will do this with great vigor and skill if they are provided with the appropriate substrate. Wild hamsters will also appropriate tunnels made by other mammals; the Djungarian hamster, for instance, uses paths and burrows of the pika. Reproduction Fertility Hamsters become fertile at different ages depending on their species. Both Syrian and Russian hamsters mature quickly and can begin reproducing at a young age (4–5 weeks), whereas Chinese hamsters will usually begin reproducing at two to three months of age, and Roborovskis at three to four months of age. The female's reproductive life lasts about 18 months, but male hamsters remain fertile much longer. Females are in estrus about every four days, which is indicated by a reddening of genital areas, a musky smell, and a hissing, squeaking vocalisation she will emit if she believes a male is nearby. When seen from above, a sexually mature female hamster has a trim tail line; a male's tail line bulges on both sides. This might not be very visible in all species. Male hamsters typically have very large testes in relation to their body size. Before sexual maturity occurs, it is more difficult to determine a young hamster's sex. When examined, female hamsters have their anal and genital openings close together, whereas males have these two holes farther apart (the penis is usually withdrawn into the coat and thus appears as a hole or pink pimple). Gestation and fecundity Syrian hamsters are seasonal breeders and will produce several litters a year with several pups in each litter. The breeding season is from April to October in the Northern Hemisphere, with two to five litters of one to 13 young being born after a gestation period of 16 to 23 days. Dwarf hamsters breed all through the year. Gestation lasts 16 to 18 days for Syrian hamsters, 18 to 21 days for Russian hamsters, 21 to 23 days for Chinese hamsters and 23 to 30 for Roborovski hamsters. The average litter size for Syrian hamsters is about seven pups, but can be as great as 24, which is the maximum number of pups that can be contained in the uterus. Campbell's dwarf hamsters tend to have four to eight pups in a litter, but can have up to 13. Winter white hamsters tend to have slightly smaller litters, as do Chinese and Roborovski hamsters. 
Intersexual aggression and cannibalism Female Chinese and Syrian hamsters are known for being aggressive toward males if kept together for too long after mating. In some cases, male hamsters can die after being attacked by a female. If breeding hamsters, separation of the pair after mating is recommended, or they will attack each other. Female hamsters are also particularly sensitive to disturbances while giving birth, and may even eat their own young if they think they are in danger, although sometimes they are just carrying the pups in their cheek pouches. If captive female hamsters are left for extended periods (three weeks or more) with their litter, they may cannibalize the litter, so the litter must be removed by the time the young can feed and drink independently. Weaning Hamsters are born hairless and blind in a nest the mother will have prepared in advance. After one week, they begin to explore outside the nest. Hamsters are capable of producing litters every month. Hamsters can be bred after they are three weeks old. It may be hard for the babies to not rely on their mother for nursing during this time, so it is important that they are supplied with food to make the transition from nursing to eating on their own easier. After the hamsters reach three weeks of age they are considered mature. Longevity Syrian hamsters typically live no more than two to three years in captivity, and less in the wild. Russian hamsters (Campbell's and Djungarian) live about two to four years in captivity, and Chinese hamsters –3 years. The smaller Roborovski hamster often lives to three years in captivity. Society and culture Hamsters as pets The best-known species of hamster is the golden or Syrian hamster (Mesocricetus auratus), which is the type most commonly kept as pets. There are numerous Syrian hamster variations including long-haired varieties and different colors. British zoologist Leonard Goodwin claimed most hamsters kept in the United Kingdom were descended from the colony he introduced for medical research purposes during the Second World War. Hamsters were domesticated and kept as pets in the United States at least as early as 1942. Other hamsters commonly kept as pets are the three species in the genus Phodopus. Campbell's dwarf hamster (Phodopus campbelli) is the most common—they are also sometimes called "Russian dwarfs"; however, many hamsters are from Russia, so this ambiguous name does not distinguish them from other species appropriately. The coat of the winter white dwarf hamster (Phodopus sungorus) turns almost white during winter (when the hours of daylight decrease). The Roborovski hamster (Phodopus roborovskii) is extremely small and fast, making it difficult to keep as a pet. Hamster shows A hamster show is an event in which people gather hamsters to judge them against each other. Hamster shows are also places where people share their enthusiasm for hamsters among attendees. Hamster shows feature an exhibition of the hamsters participating in the judging. The judging of hamsters usually includes a goal of promoting hamsters which conform to natural or established varieties of hamsters. By awarding hamsters which match standard hamster types, hamster shows encourage planned and careful hamster breeding. 
Owner activism When the first reported case of animal-to-human transmission of SARS-CoV-2 in Hong Kong took place via imported pet hamsters, researchers expressed difficulty in identifying some of the viral mutations within a global genomic data bank, leading city authorities to announce a mass cull of all hamsters purchased after December 22, 2021, which would affect roughly 2,000 animals. After the government 'strongly encouraged' citizens to turn in their pets, approximately 3,000 people joined underground activities to promote the adoption of abandoned hamsters throughout the city and to maintain pet ownership via methods such as the forgery of pet store purchase receipts. Some activists attempted to intercept owners who were on their way to turn in pet hamsters and encourage them to choose adoption instead, which the government subsequently warned would be subject to police action. Hamsters as lab animals The extracted cells of babies' kidneys and adults' ovaries are used to study cholesterol synthesis. Similar animals Some similar rodents sometimes called "hamsters" are not currently classified in the hamster subfamily Cricetinae. These include the maned hamster, or crested hamster, which is really the maned rat (Lophiomys imhausi). Others are the mouse-like hamsters (Calomyscus spp.), and the white-tailed rat (Mystromys albicaudatus).
https://en.wikipedia.org/wiki/Helium-3
Helium-3
Helium-3 (3He see also helion) is a light, stable isotope of helium with two protons and one neutron. (In contrast, the most common isotope, helium-4, has two protons and two neutrons.) Helium-3 and protium (ordinary hydrogen) are the only stable nuclides with more protons than neutrons. It was discovered in 1939. Helium-3 occurs as a primordial nuclide, escaping from Earth's crust into its atmosphere and into outer space over millions of years. It is also thought to be a natural nucleogenic and cosmogenic nuclide, one produced when lithium is bombarded by natural neutrons, which can be released by spontaneous fission and by nuclear reactions with cosmic rays. Some found in the terrestrial atmosphere is a remnant of atmospheric and underwater nuclear weapons testing. Nuclear fusion using helium-3 has long been viewed as a desirable future energy source. The fusion of two of its atoms would be aneutronic, not release the dangerous radiation of traditional fusion or require much higher temperatures. The process may unavoidably create other reactions that themselves would cause the surrounding material to become radioactive. Helium-3 is thought to be more abundant on the Moon than on Earth, having been deposited in the upper layer of regolith by the solar wind over billions of years, though still lower in abundance than in the Solar System's gas giants. History The existence of helium-3 was first proposed in 1934 by the Australian nuclear physicist Mark Oliphant while he was working at the University of Cambridge Cavendish Laboratory. Oliphant had performed experiments in which fast deuterons collided with deuteron targets (incidentally, the first demonstration of nuclear fusion). Isolation of helium-3 was first accomplished by Luis Alvarez and Robert Cornog in 1939. Helium-3 was thought to be a radioactive isotope until it was also found in samples of natural helium, which is mostly helium-4, taken both from the terrestrial atmosphere and from natural gas wells. Physical properties Due to its low atomic mass of 3.016 u, helium-3 has some physical properties different from those of helium-4, with a mass of 4.0026 u. On account of the weak, induced dipole–dipole interaction between the helium atoms, their microscopic physical properties are mainly determined by their zero-point energy. Also, the microscopic properties of helium-3 cause it to have a higher zero-point energy than helium-4. This implies that helium-3 can overcome dipole–dipole interactions with less thermal energy than helium-4 can. The quantum mechanical effects on helium-3 and helium-4 are significantly different because with two protons, two neutrons, and two electrons, helium-4 has an overall spin of zero, making it a boson, but with one fewer neutron, helium-3 has an overall spin of one half, making it a fermion. Pure helium-3 gas boils at 3.19 K compared with helium-4 at 4.23 K, and its critical point is also lower at 3.35 K, compared with helium-4 at 5.2 K. Helium-3 has less than half the density of helium-4 when it is at its boiling point: 59 g/L compared to 125 g/L of helium-4 at a pressure of one atmosphere. Its latent heat of vaporization is also considerably lower at 0.026 kJ/mol compared with the 0.0829 kJ/mol of helium-4. Superfluidity An important property of helium-3, which distinguishes it from the more common helium-4, is that its nucleus is a fermion since it contains an odd number of spin particles. Helium-4 nuclei are bosons, containing an even number of spin particles. 
This is a direct result of the addition rules for quantized angular momentum. At low temperatures (about 2.17 K), helium-4 undergoes a phase transition: A fraction of it enters a superfluid phase that can be roughly understood as a type of Bose–Einstein condensate. Such a mechanism is not available for helium-3 atoms, which are fermions. Many speculated that helium-3 could also become a superfluid at much lower temperatures, if the atoms formed into pairs analogous to Cooper pairs in the BCS theory of superconductivity. Each Cooper pair, having integer spin, can be thought of as a boson. During the 1970s, David Lee, Douglas Osheroff and Robert Coleman Richardson discovered two phase transitions along the melting curve, which were soon realized to be the two superfluid phases of helium-3. The transition to a superfluid occurs at 2.491 millikelvins on the melting curve. They were awarded the 1996 Nobel Prize in Physics for their discovery. Alexei Abrikosov, Vitaly Ginzburg, and Tony Leggett won the 2003 Nobel Prize in Physics for their work on refining understanding of the superfluid phase of helium-3. In a zero magnetic field, there are two distinct superfluid phases of 3He, the A-phase and the B-phase. The B-phase is the low-temperature, low-pressure phase which has an isotropic energy gap. The A-phase is the higher temperature, higher pressure phase that is further stabilized by a magnetic field and has two point nodes in its gap. The presence of two phases is a clear indication that 3He is an unconventional superfluid (superconductor), since the presence of two phases requires an additional symmetry, other than gauge symmetry, to be broken. In fact, it is a p-wave superfluid, with spin one, S=1, and angular momentum one, L=1. The ground state corresponds to total angular momentum zero, J=S+L=0 (vector addition). Excited states are possible with non-zero total angular momentum, J>0, which are excited pair collective modes. These collective modes have been studied with much greater precision than in any other unconventional pairing system, because of the extreme purity of superfluid 3He. This purity is due to all 4He phase separating entirely and all other materials solidifying and sinking to the bottom of the liquid, making the A- and B-phases of 3He the most pure condensed matter state possible. Natural abundance Terrestrial abundance 3He is a primordial substance in the Earth's mantle, thought to have become entrapped in the Earth during planetary formation. The ratio of 3He to 4He within the Earth's crust and mantle is less than that of estimates of solar disk composition as obtained from meteorite and lunar samples, with terrestrial materials generally containing lower 3He/4He ratios due to production of 4He from radioactive decay. 3He has a cosmological ratio of 300 atoms per million atoms of 4He (at. ppm), leading to the assumption that the original ratio of these primordial gases in the mantle was around 200-300 ppm when Earth was formed. Over Earth's history alpha-particle decay of uranium, thorium and other radioactive isotopes has generated significant amounts of 4He, such that only around 7% of the helium now in the mantle is primordial helium, lowering the total 3He/4He ratio to around 20 ppm. Ratios of 3He/4He in excess of atmospheric are indicative of a contribution of 3He from the mantle. Crustal sources are dominated by the 4He produced by radioactive decay. The ratio of helium-3 to helium-4 in natural Earth-bound sources varies greatly. 
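The mantle figures above (an initial 3He/4He ratio of roughly 200–300 ppm, diluted to about 20 ppm once only ~7% of the mantle's helium-4 remains primordial) follow from one-line mixing arithmetic. A minimal sketch using those round numbers, which are taken from the text rather than measured values:

```python
# Rough check of the mantle helium dilution arithmetic quoted above.
# Radiogenic alpha decay adds 4He but essentially no 3He, so the 3He inventory
# tracks only the primordial 4He component.

primordial_ratio_ppm = 300.0   # 3He atoms per million 4He atoms at formation
primordial_fraction = 0.07     # share of today's mantle 4He that is primordial

present_ratio_ppm = primordial_ratio_ppm * primordial_fraction
print(f"present mantle 3He/4He ≈ {present_ratio_ppm:.0f} ppm")   # ≈ 21 ppm, i.e. "around 20 ppm"
```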
Samples of the lithium ore spodumene from Edison Mine, South Dakota were found to contain 12 parts of helium-3 to a million parts of helium-4. Samples from other mines showed 2 parts per million. Helium is also present as up to 7% of some natural gas sources, and large sources have over 0.5% (above 0.2% makes it viable to extract). The fraction of 3He in helium separated from natural gas in the U.S. was found to range from 70 to 242 parts per billion. Hence the US 2002 stockpile of 1 billion normal m3 would have contained about of helium-3. According to American physicist Richard Garwin, about or almost of 3He is available annually for separation from the US natural gas stream. If the process of separating out the 3He could employ as feedstock the liquefied helium typically used to transport and store bulk quantities, estimates for the incremental energy cost range from NTP, excluding the cost of infrastructure and equipment. Algeria's annual gas production is assumed to contain 100 million normal cubic metres and this would contain between of helium-3 (about ) assuming a similar 3He fraction. 3He is also present in the Earth's atmosphere. The natural abundance of 3He in naturally occurring helium gas is 1.38 (1.38 parts per million). The partial pressure of helium in the Earth's atmosphere is about , and thus helium accounts for 5.2 parts per million of the total pressure (101325 Pa) in the Earth's atmosphere, and 3He thus accounts for 7.2 parts per trillion of the atmosphere. Since the atmosphere of the Earth has a mass of about , the mass of 3He in the Earth's atmosphere is the product of these numbers, or about of 3He. (In fact the effective figure is ten times smaller, since the above ppm are ppmv and not ppmw. One must multiply by 3 (the molecular mass of helium-3) and divide by 29 (the mean molecular mass of the atmosphere), resulting in of helium-3 in the earth's atmosphere.) 3He is produced on Earth from three sources: lithium spallation, cosmic rays, and beta decay of tritium (3H). The contribution from cosmic rays is negligible within all except the oldest regolith materials, and lithium spallation reactions are a lesser contributor than the production of 4He by alpha particle emissions. The total amount of helium-3 in the mantle may be in the range of . Most mantle is not directly accessible. Some helium-3 leaks up through deep-sourced hotspot volcanoes such as those of the Hawaiian Islands, but only per year is emitted to the atmosphere. Mid-ocean ridges emit another . Around subduction zones, various sources produce helium-3 in natural gas deposits which possibly contain a thousand tonnes of helium-3 (although there may be 25 thousand tonnes if all ancient subduction zones have such deposits). Wittenberg estimated that United States crustal natural gas sources may have only half a tonne total. Wittenberg cited Anderson's estimate of another in interplanetary dust particles on the ocean floors. In the 1994 study, extracting helium-3 from these sources consumes more energy than fusion would release. Lunar surface See Extraterrestrial mining or Lunar resources Solar nebula (primordial) abundance One early estimate of the primordial ratio of 3He to 4He in the solar nebula has been the measurement of their ratio in the atmosphere of Jupiter, measured by the mass spectrometer of the Galileo atmospheric entry probe. This ratio is about 1:10,000, or 100 parts of 3He per million parts of 4He. 
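The tonnage estimate in the atmospheric paragraph above can be reproduced directly: take the 7.2 parts-per-trillion volume fraction of 3He, convert it to a mass fraction with the 3/29 molar-mass correction the text describes, and multiply by the mass of the atmosphere. A minimal sketch; the atmospheric mass of about 5.15 × 10^18 kg is an assumed standard value, not a figure from the text:

```python
# Reproduces the parts-per-trillion -> tonnes estimate sketched above.

he_vol_fraction = 5.2e-6          # helium: 5.2 ppm of air by volume (from text)
he3_of_he = 1.38e-6               # 3He: 1.38 ppm of natural helium (from text)
m_air = 29.0                      # mean molar mass of air, g/mol
m_he3 = 3.0                       # molar mass of helium-3, g/mol
atmosphere_mass_kg = 5.15e18      # total mass of the atmosphere (assumed value)

he3_vol_fraction = he_vol_fraction * he3_of_he        # ~7.2e-12, i.e. 7.2 ppt by volume
he3_mass_fraction = he3_vol_fraction * m_he3 / m_air  # ppmv -> ppmw correction
he3_mass_tonnes = he3_mass_fraction * atmosphere_mass_kg / 1000.0

print(f"volume fraction ≈ {he3_vol_fraction:.1e}")    # ≈ 7.2e-12
print(f"mass of 3He     ≈ {he3_mass_tonnes:,.0f} t")  # a few thousand tonnes
```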
This 1:10,000 figure is roughly the same isotope ratio as in lunar regolith, which contains 28 ppm helium-4 and 2.8 ppb helium-3 (which is at the lower end of actual sample measurements, which vary from about 1.4 to 15 ppb). Terrestrial ratios of the isotopes are lower by a factor of 100, mainly due to enrichment of helium-4 stocks in the mantle by billions of years of alpha decay from uranium and thorium, as well as their decay products and extinct radionuclides. Human production Tritium decay Virtually all helium-3 used in industry today is produced from the radioactive decay of tritium, given its very low natural abundance and its very high cost. Production, sales and distribution of helium-3 in the United States are managed by the US Department of Energy (DOE) Isotope Program. While tritium has several different experimentally determined values of its half-life, NIST lists (). It decays into helium-3 by beta decay as in this nuclear equation: 3H → 3He + e− + ν̄e. Among the total released energy of , the part taken by the electron's kinetic energy varies, with an average of , while the remaining energy is carried off by the nearly undetectable electron antineutrino. Beta particles from tritium can penetrate only about of air, and they are incapable of passing through the dead outermost layer of human skin. The unusually low energy released in the tritium beta decay makes the decay (along with that of rhenium-187) appropriate for absolute neutrino mass measurements in the laboratory (the most recent experiment being KATRIN). The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting. Tritium is a radioactive isotope of hydrogen and is typically produced by bombarding lithium-6 with neutrons in a nuclear reactor. The lithium nucleus absorbs a neutron and splits into helium-4 and tritium. Tritium decays into helium-3 with a half-life of , so helium-3 can be produced by simply storing the tritium until it undergoes radioactive decay. As tritium forms a stable compound with oxygen (tritiated water) while helium-3 does not, the storage and collection process could continuously collect the material that outgasses from the stored material. Tritium is a critical component of nuclear weapons and historically it was produced and stockpiled primarily for this application. The decay of tritium into helium-3 reduces the explosive power of the fusion warhead, so periodically the accumulated helium-3 must be removed from warhead reservoirs and from tritium in storage. Helium-3 removed during this process is marketed for other applications. For decades this has been, and remains, the principal source of the world's helium-3. Since the signing of the START I Treaty in 1991, the number of nuclear warheads that are kept ready for use has decreased. This has reduced the quantity of helium-3 available from this source. Helium-3 stockpiles have been further diminished by increased demand, primarily for use in neutron radiation detectors and medical diagnostic procedures. US industrial demand for helium-3 reached a peak of (approximately ) per year in 2008. Price at auction, historically about , reached as high as . Since then, demand for helium-3 has declined to about per year due to the high cost and efforts by the DOE to recycle it and find substitutes.
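Because helium-3 is recovered simply by letting stored tritium decay, the yield over a storage period follows the exponential decay law. A minimal sketch, assuming the commonly cited tritium half-life of roughly 12.3 years (the exact NIST figure is not reproduced in the text):

```python
import math

# Helium-3 recovered from a stored tritium inventory after a given number of years.
# Assumes a tritium half-life of ~12.3 years (commonly cited value).

T_HALF_YEARS = 12.3

def he3_yield(tritium_mass_g: float, years: float) -> float:
    """Approximate mass of 3He produced; tritium and helium-3 both have mass
    number 3, so the mass carried over is essentially unchanged."""
    decayed_fraction = 1.0 - math.exp(-math.log(2) * years / T_HALF_YEARS)
    return tritium_mass_g * decayed_fraction

# Example: after 5 years roughly a quarter of the stored tritium has become 3He.
print(f"{he3_yield(100.0, 5.0):.1f} g of 3He from 100 g of tritium after 5 years")
```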
Assuming a density of at $100/l helium-3 would be about a thirtieth as expensive as tritium (roughly vs roughly ) while at $2000/l helium-3 would be about half as expensive as tritium ( vs ). The DOE recognized the developing shortage of both tritium and helium-3, and began producing tritium by lithium irradiation at the Tennessee Valley Authority's Watts Bar Nuclear Generating Station in 2010. In this process tritium-producing burnable absorber rods (TPBARs) containing lithium in a ceramic form are inserted into the reactor in place of the normal boron control rods Periodically the TPBARs are replaced and the tritium extracted. Currently only two commercial nuclear reactors (Watts Bar Nuclear Plant Units 1 and 2) are being used for tritium production but the process could, if necessary, be vastly scaled up to meet any conceivable demand simply by utilizing more of the nation's power reactors. Substantial quantities of tritium and helium-3 could also be extracted from the heavy water moderator in CANDU nuclear reactors. India and Canada, the two countries with the largest heavy water reactor fleet, are both known to extract tritium from moderator/coolant heavy water, but those amounts are not nearly enough to satisfy global demand of either tritium or helium-3. As tritium is also produced inadvertently in various processes in light water reactors (see the article on tritium for details), extraction from those sources could be another source of helium-3. If the annual discharge of tritium (per 2018 figures) at La Hague reprocessing facility is taken as a basis, the amounts discharged ( at La Hague) are not nearly enough to satisfy demand, even if 100% recovery is achieved. Uses Helium-3 spin echo Helium-3 can be used to do spin echo experiments of surface dynamics, which are underway at the Surface Physics Group at the Cavendish Laboratory in Cambridge and in the Chemistry Department at Swansea University. Neutron detection Helium-3 is an important isotope in instrumentation for neutron detection. It has a high absorption cross section for thermal neutron beams and is used as a converter gas in neutron detectors. The neutron is converted through the nuclear reaction n + 3He → 3H + 1H + 0.764 MeV into charged particles tritium ions (T, 3H) and Hydrogen ions, or protons (p, 1H) which then are detected by creating a charge cloud in the stopping gas of a proportional counter or a Geiger–Müller tube. Furthermore, the absorption process is strongly spin-dependent, which allows a spin-polarized helium-3 volume to transmit neutrons with one spin component while absorbing the other. This effect is employed in neutron polarization analysis, a technique which probes for magnetic properties of matter. The United States Department of Homeland Security had hoped to deploy detectors to spot smuggled plutonium in shipping containers by their neutron emissions, but the worldwide shortage of helium-3 following the drawdown in nuclear weapons production since the Cold War has to some extent prevented this. As of 2012, DHS determined the commercial supply of boron-10 would support converting its neutron detection infrastructure to that technology. Cryogenics A helium-3 refrigerator uses helium-3 to achieve temperatures of 0.2 to 0.3 kelvin. A dilution refrigerator uses a mixture of helium-3 and helium-4 to reach cryogenic temperatures as low as a few thousandths of a kelvin. Medical imaging Helium-3 nuclei have an intrinsic nuclear spin of , and a relatively high magnetogyric ratio. 
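As a quick check of the neutron-detection reaction quoted earlier in this passage, n + 3He → 3H + 1H + 0.764 MeV, the Q-value follows from the atomic masses of the participants. A minimal sketch using standard tabulated masses, which are assumptions here rather than values from the text:

```python
# Q-value check for the capture reaction used in 3He neutron detectors:
#   n + 3He -> 3H + 1H + Q
# Atomic masses in unified mass units (standard tabulated values, assumed).

U_TO_MEV = 931.494   # energy equivalent of 1 u, MeV

m_n   = 1.008665     # neutron
m_he3 = 3.016029     # helium-3
m_h3  = 3.016049     # tritium
m_h1  = 1.007825     # protium

q_mev = (m_n + m_he3 - m_h3 - m_h1) * U_TO_MEV
print(f"Q ≈ {q_mev:.3f} MeV")   # ≈ 0.764 MeV, matching the figure quoted above
```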
Helium-3 can be hyperpolarized using non-equilibrium means such as spin-exchange optical pumping. During this process, circularly polarized infrared laser light, tuned to the appropriate wavelength, is used to excite electrons in an alkali metal, such as caesium or rubidium inside a sealed glass vessel. The angular momentum is transferred from the alkali metal electrons to the noble gas nuclei through collisions. In essence, this process effectively aligns the nuclear spins with the magnetic field in order to enhance the NMR signal. The hyperpolarized gas may then be stored at pressures of 10 atm, for up to 100 hours. Following inhalation, gas mixtures containing the hyperpolarized helium-3 gas can be imaged with an MRI scanner to produce anatomical and functional images of lung ventilation. This technique is also able to produce images of the airway tree, locate unventilated defects, measure the alveolar oxygen partial pressure, and measure the ventilation/perfusion ratio. This technique may be critical for the diagnosis and treatment management of chronic respiratory diseases such as chronic obstructive pulmonary disease (COPD), emphysema, cystic fibrosis, and asthma. Radio energy absorber for tokamak plasma experiments Both MIT's Alcator C-Mod tokamak and the Joint European Torus (JET) have experimented with adding a little helium-3 to a H–D plasma to increase the absorption of radio-frequency (RF) energy to heat the hydrogen and deuterium ions, a "three-ion" effect. Nuclear fuel can be produced by the low temperature fusion of → + γ + 4.98 MeV. If the fusion temperature is below that for the helium nuclei to fuse, the reaction produces a high energy alpha particle which quickly acquires an electron producing a stable light helium ion which can be utilized directly as a source of electricity without producing dangerous neutrons. can be used in fusion reactions by either of the reactions + 18.3 MeV, or + 12.86 MeV. The conventional deuterium + tritium ("D–T") fusion process produces energetic neutrons which render reactor components radioactive with activation products. The appeal of helium-3 fusion stems from the aneutronic nature of its reaction products. Helium-3 itself is non-radioactive. The lone high-energy by-product, the proton, can be contained by means of electric and magnetic fields. The momentum energy of this proton (created in the fusion process) will interact with the containing electromagnetic field, resulting in direct net electricity generation. Because of the higher Coulomb barrier, the temperatures required for fusion are much higher than those of conventional D–T fusion. Moreover, since both reactants need to be mixed together to fuse, reactions between nuclei of the same reactant will occur, and the D–D reaction () does produce a neutron. Reaction rates vary with temperature, but the D– reaction rate is never greater than 3.56 times the D–D reaction rate (see graph). Therefore, fusion using D– fuel at the right temperature and a D-lean fuel mixture, can produce a much lower neutron flux than D–T fusion, but is not clean, negating some of its main attraction. The second possibility, fusing with itself (), requires even higher temperatures (since now both reactants have a +2 charge), and thus is even more difficult than the D- reaction. It offers a theoretical reaction that produces no neutrons; the charged protons produced can be contained in electric and magnetic fields, which in turn directly generates electricity. 
fusion is feasible as demonstrated in the laboratory and has immense advantages, but commercial viability is many years in the future. The amounts of helium-3 needed as a replacement for conventional fuels are substantial by comparison to amounts currently available. The total amount of energy produced in the reaction is 18.4 MeV, which corresponds to some 493 megawatt-hours (4.93×108 W·h) per three grams (one mole) of . If the total amount of energy could be converted to electrical power with 100% efficiency (a physical impossibility), it would correspond to about 30 minutes of output of a gigawatt electrical plant per mole of . Thus, a year's production (at 6 grams for each operation hour) would require 52.5 kilograms of helium-3. The amount of fuel needed for large-scale applications can also be put in terms of total consumption: electricity consumption by 107 million U.S. households in 2001 totaled 1,140 billion kW·h (1.14×1015 W·h). Again assuming 100% conversion efficiency, 6.7 tonnes per year of helium-3 would be required for that segment of the energy demand of the United States, 15 to 20 tonnes per year given a more realistic end-to-end conversion efficiency. A second-generation approach to controlled fusion power involves combining helium-3 and deuterium, . This reaction produces an alpha particle and a high-energy proton. The most important potential advantage of this fusion reaction for power production as well as other applications lies in its compatibility with the use of electrostatic fields to control fuel ions and the fusion protons. High speed protons, as positively charged particles, can have their kinetic energy converted directly into electricity, through use of solid-state conversion materials as well as other techniques. Potential conversion efficiencies of 70% may be possible, as there is no need to convert proton energy to heat in order to drive a turbine-powered electrical generator. He-3 power plants There have been many claims about the capabilities of helium-3 power plants. According to proponents, fusion power plants operating on deuterium and helium-3 would offer lower capital and operating costs than their competitors due to less technical complexity, higher conversion efficiency, smaller size, the absence of radioactive fuel, no air or water pollution, and only low-level radioactive waste disposal requirements. Recent estimates suggest that about $6 billion in investment capital will be required to develop and construct the first helium-3 fusion power plant. Financial break even at today's wholesale electricity prices (5 US cents per kilowatt-hour) would occur after five 1-gigawatt plants were on line, replacing old conventional plants or meeting new demand. The reality is not so clear-cut. The most advanced fusion programs in the world are inertial confinement fusion (such as National Ignition Facility) and magnetic confinement fusion (such as ITER and Wendelstein 7-X). In the case of the former, there is no solid roadmap to power generation. In the case of the latter, commercial power generation is not expected until around 2050. In both cases, the type of fusion discussed is the simplest: D–T fusion. The reason for this is the very low Coulomb barrier for this reaction; for D+3He, the barrier is much higher, and it is even higher for 3He–3He. The immense cost of reactors like ITER and National Ignition Facility are largely due to their immense size, yet to scale up to higher plasma temperatures would require reactors far larger still. 
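The numbers in the paragraph above (roughly 493 MWh per three grams, i.e. one mole, of helium-3, about 6 g per operating hour and around 52.5 kg per year for a 1 GW plant at an assumed 100% conversion efficiency) are straightforward unit arithmetic. A minimal sketch of that arithmetic:

```python
# Energy released per mole of 3He burned in D-3He fusion, and the annual fuel
# mass for a 1 GW(e) plant at the idealized 100% conversion assumed in the text.

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
Q_MEV    = 18.4                       # MeV released per 3He nucleus consumed

joules_per_mole = Q_MEV * MEV_TO_J * AVOGADRO
mwh_per_mole = joules_per_mole / 3.6e9
print(f"{mwh_per_mole:.0f} MWh per mole (about 3 g) of 3He")     # ≈ 493 MWh

grams_per_hour = 3.0 * 1000.0 / mwh_per_mole    # a 1 GW plant delivers ~1000 MWh per hour
kg_per_year = grams_per_hour * 8760.0 / 1000.0
print(f"~{grams_per_hour:.1f} g per hour, ~{kg_per_year:.0f} kg per gigawatt-year")
# Rounding the hourly figure to 6 g reproduces the 52.5 kg/year quoted above.
```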
The 14.7 MeV proton and 3.6 MeV alpha particle from D–3He fusion, plus the higher conversion efficiency, means that more electricity is obtained per kilogram than with D–T fusion (17.6 MeV), but not that much more. As a further downside, the rates of reaction for helium-3 fusion reactions are not particularly high, requiring a reactor that is larger still or more reactors to produce the same amount of electricity. In 2022, Helion Energy claimed that their 7th fusion prototype (Polaris; fully funded and under construction as of September 2022) will demonstrate "net electricity from fusion", and will demonstrate "helium-3 production through deuterium–deuterium fusion" by means of a "patented high-efficiency closed-fuel cycle". Alternatives to He-3 To attempt to work around this problem of massively large power plants that may not even be economical with D–T fusion, let alone the far more challenging D–3He fusion, a number of other reactors have been proposed – the Fusor, Polywell, Focus fusion, and many more, though many of these concepts have fundamental problems with achieving a net energy gain, and generally attempt to achieve fusion in thermal disequilibrium, something that could potentially prove impossible, and consequently, these long-shot programs tend to have trouble garnering funding despite their low budgets. Unlike the "big" and "hot" fusion systems, if such systems worked, they could scale to the higher barrier aneutronic fuels, and so their proponents tend to promote p-B fusion, which requires no exotic fuel such as helium-3. Extraterrestrial Moon Materials on the Moon's surface contain helium-3 at concentrations between 1.4 and 15 ppb in sunlit areas, and may contain concentrations as much as 50 ppb in permanently shadowed regions. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the Moon, mine lunar regolith and use the helium-3 for fusion. Because of the low concentrations of helium-3, any mining equipment would need to process extremely large amounts of regolith (over 150 tonnes of regolith to obtain one gram of helium-3). The primary objective of Indian Space Research Organisation's first lunar probe called Chandrayaan-1, launched on October 22, 2008, was reported in some sources to be mapping the Moon's surface for helium-3-containing minerals. No such objective is mentioned in the project's official list of goals, though many of its scientific payloads have held helium-3-related applications. Cosmochemist and geochemist Ouyang Ziyuan from the Chinese Academy of Sciences who is now in charge of the Chinese Lunar Exploration Program has already stated on many occasions that one of the main goals of the program would be the mining of helium-3, from which operation "each year, three space shuttle missions could bring enough fuel for all human beings across the world". In January 2006, the Russian space company RKK Energiya announced that it considers lunar helium-3 a potential economic resource to be mined by 2020, if funding can be found. Not all writers feel the extraction of lunar helium-3 is feasible, or even that there will be a demand for it for fusion. Dwayne Day, writing in The Space Review in 2015, characterises helium-3 extraction from the Moon for use in fusion as magical thinking about an unproven technology, and questions the feasibility of lunar extraction, as compared to production on Earth. Gas giants Mining gas giants for helium-3 has also been proposed. 
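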
The British Interplanetary Society's hypothetical Project Daedalus interstellar probe design, for example, was to be fueled by helium-3 mined from the atmosphere of Jupiter.
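Returning to the lunar figures above, the "over 150 tonnes of regolith to obtain one gram of helium-3" estimate is just the reciprocal of a part-per-billion concentration. A quick check, assuming the ppb-by-mass concentrations quoted earlier and perfect extraction (an idealization):

```python
# Regolith that must be processed per gram of 3He at a given mass concentration.

def tonnes_per_gram_of_he3(concentration_ppb: float) -> float:
    grams_of_regolith = 1e9 / concentration_ppb   # grams of regolith holding 1 g of 3He
    return grams_of_regolith / 1e6                # grams -> tonnes

for ppb in (1.4, 4.0, 15.0, 50.0):
    print(f"{ppb:4.1f} ppb -> {tonnes_per_gram_of_he3(ppb):6.0f} t of regolith per gram of 3He")
# At a few ppb the requirement is on the order of the 150 t/g quoted above;
# 50 ppb corresponds to the permanently shadowed regions mentioned earlier.
```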
https://en.wikipedia.org/wiki/Hamiltonian%20%28quantum%20mechanics%29
Hamiltonian (quantum mechanics)
In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. Its spectrum, the system's energy spectrum or its set of energy eigenvalues, is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory. The Hamiltonian is named after William Rowan Hamilton, who developed a revolutionary reformulation of Newtonian mechanics, known as Hamiltonian mechanics, which was historically important to the development of quantum physics. Similar to vector notation, it is typically denoted by , where the hat indicates that it is an operator. It can also be written as or . Introduction The Hamiltonian of a system represents the total energy of the system; that is, the sum of the kinetic and potential energies of all particles associated with the system. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as single or several particles in the system, interaction between particles, kind of potential energy, time varying potential or time independent one. Schrödinger Hamiltonian One particle By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form where is the potential energy operator and is the kinetic energy operator in which is the mass of the particle, the dot denotes the dot product of vectors, and is the momentum operator where a is the del operator. The dot product of with itself is the Laplacian . In three dimensions using Cartesian coordinates the Laplace operator is Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes. Combining these yields the form used in the Schrödinger equation: which allows one to apply the Hamiltonian to systems described by a wave function . This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics. One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields. Expectation value It can be shown that the expectation value of the Hamiltonian which gives the energy expectation value will always be greater than or equal to the minimum potential of the system. Consider computing the expectation value of kinetic energy: Hence the expectation value of kinetic energy is always non-negative. This result can be used to calculate the expectation value of the total energy which is given for a normalized wavefunction as: which complete the proof. Similarly, the condition can be generalized to any higher dimensions using divergence theorem. Many particles The formalism can be extended to particles: where is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration) and is the kinetic energy operator of particle , is the gradient for particle , and is the Laplacian for particle : Combining these yields the Schrödinger Hamiltonian for the -particle case: However, complications can arise in the many-body problem. 
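Since the displayed formulas in this passage render only in words, a small numerical sketch may help fix ideas: discretize the one-particle Hamiltonian H = −(ħ²/2m) d²/dx² + V(x) on a grid with finite differences, diagonalize it, and confirm that every energy eigenvalue (and hence every energy expectation value) lies above the minimum of the potential, as claimed above. Units with ħ = m = 1 and a harmonic V(x) are assumptions made purely for illustration:

```python
import numpy as np

# One-particle Schrödinger Hamiltonian on a grid: H = -(1/2) d2/dx2 + V(x),
# in units with hbar = m = 1 (assumed for convenience).

n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

V = 0.5 * x**2                      # harmonic potential, chosen as an example

# Second-derivative operator by central finite differences.
laplacian = (np.diag(np.full(n - 1, 1.0), -1)
             - 2.0 * np.eye(n)
             + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

H = -0.5 * laplacian + np.diag(V)

energies = np.linalg.eigvalsh(H)
print(energies[:4])                 # ≈ 0.5, 1.5, 2.5, 3.5 for the harmonic oscillator
print(energies.min() >= V.min())    # True: the spectrum is bounded below by min V
```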
Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy. The motion due to any one particle will vary due to the motion of all the other particles in the system. For this reason cross terms for kinetic energy may appear in the Hamiltonian; a mix of the gradients for two particles: where denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as mass polarization terms, and appear in the Hamiltonian of many-electron atoms (see below). For interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle. For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energy for each particle, that is The general form of the Hamiltonian in this case is: where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation—in practice the particles are almost always influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below. Schrödinger equation The Hamiltonian generates the time evolution of quantum states. If is the state of the system at time , then This equation is the Schrödinger equation. It takes the same form as the Hamilton–Jacobi equation, which is one of the reasons is also called the Hamiltonian. Given the state at some initial time (), we can solve it to obtain the state at any subsequent time. In particular, if is independent of time, then The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in . One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient. By the *-homomorphism property of the functional calculus, the operator is a unitary operator. It is the time evolution operator or propagator of a closed quantum system. If the Hamiltonian is time-independent, form a one parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance. Dirac formalism However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way: The eigenkets of , denoted , provide an orthonormal basis for the Hilbert space. 
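Returning to the propagator U(t) = exp(−iHt/ħ) described above: for any Hermitian Hamiltonian it should be unitary and therefore conserve the norm of a state, which is easy to confirm numerically for a small matrix. A minimal sketch, with ħ = 1 and a randomly generated Hermitian matrix as illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

# Propagator U(t) = exp(-i H t) for a small Hermitian H (hbar = 1 assumed).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                         # Hermitian Hamiltonian

t = 1.7
U = expm(-1j * H * t)

print(np.allclose(U.conj().T @ U, np.eye(4)))    # True: U is unitary

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)
psi_t = U @ psi0
print(np.isclose(np.linalg.norm(psi_t), 1.0))    # True: the norm is conserved
```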
The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted , solving the equation: Since is a Hermitian operator, the energy is always a real number. From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation. Expressions for the Hamiltonian Following are expressions for the Hamiltonian in a number of situations. Typical ways to classify the expressions are the number of particles, number of dimensions, and the nature of the potential energy function—importantly space and time dependence. Masses are denoted by , and charges by . Free particle The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension: and in higher dimensions: Constant-potential well For a particle in a region of constant potential (no dependence on space or time), in one dimension, the Hamiltonian is: in three dimensions This applies to the elementary "particle in a box" problem, and step potentials. Simple harmonic oscillator For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to: where the angular frequency , effective spring constant , and mass of the oscillator satisfy: so the Hamiltonian is: For three dimensions, this becomes where the three-dimensional position vector using Cartesian coordinates is , its magnitude is Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction: Rigid rotor For a rigid rotor—i.e., system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is: where , , and are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and and are the total angular momentum operators (components), about the , , and axes respectively. Electrostatic (Coulomb) potential The Coulomb potential energy for two point charges and (i.e., those that have no spatial extent independently), in three dimensions, is (in SI units—rather than Gaussian units which are frequently used in electromagnetism): However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself). For charges, the potential energy of charge due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges): where is the electrostatic potential of charge at . 
The total potential of the system is then the sum over : so the Hamiltonian is: Electric dipole in an electric field For an electric dipole moment constituting charges of magnitude , in a uniform, electrostatic field (time-independent) , positioned in one place, the potential is: the dipole moment itself is the operator Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy: Magnetic dipole in a magnetic field For a magnetic dipole moment in a uniform, magnetostatic field (time-independent) , positioned in one place, the potential is: Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy: For a spin- particle, the corresponding spin magnetic moment is: where is the "spin g-factor" (not to be confused with the gyromagnetic ratio), is the electron charge, is the spin operator vector, whose components are the Pauli matrices, hence Charged particle in an electromagnetic field For a particle with mass and charge in an electromagnetic field, described by the scalar potential and vector potential , there are two parts to the Hamiltonian to substitute for. The canonical momentum operator , which includes a contribution from the field and fulfils the canonical commutation relation, must be quantized; where is the kinetic momentum. The quantization prescription reads so the corresponding kinetic energy operator is and the potential energy, which is due to the field, is given by Casting all of these into the Hamiltonian gives Energy eigenket degeneracy, symmetry, and conservation laws In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the direction is a different state from one propagating in the direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate. It turns out that degeneracy occurs whenever a nontrivial unitary operator commutes with the Hamiltonian. To see this, suppose that is an energy eigenket. Then is an energy eigenket with the same eigenvalue, since Since is nontrivial, at least one pair of and must represent distinct states. Therefore, has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape. The existence of a symmetry operator implies the existence of a conserved observable. Let be the Hermitian generator of : It is straightforward to show that if commutes with , then so does : Therefore, In obtaining this result, we have used the Schrödinger equation, as well as its dual, Thus, the expected value of the observable is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum. Hamilton's equations Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states , which need not necessarily be eigenstates of the energy. 
For simplicity, we assume that they are discrete, and that they are orthonormal, i.e., Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time. The instantaneous state of the system at time , , can be expanded in terms of these basis states: where The coefficients are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole. The expectation value of the Hamiltonian of this state, which is also the mean energy, is where the last step was obtained by expanding in terms of the basis states. Each actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use and its complex conjugate . With this choice of independent variables, we can calculate the partial derivative By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to Similarly, one can show that If we define "conjugate momentum" variables by then the above equations become which is precisely the form of Hamilton's equations, with the s as the generalized coordinates, the s as the conjugate momenta, and taking the place of the classical Hamiltonian.
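The displayed equations in this derivation did not survive extraction, so a reconstruction in standard textbook notation may help. The symbols a_n, ⟨H⟩ and the conjugate momenta π_n follow the surrounding prose; the exact form below is the conventional one rather than a quotation of the original markup:

```latex
% State expansion and mean energy:
%   |\psi(t)\rangle = \sum_n a_n(t)\,|n\rangle, \qquad
%   \langle H\rangle = \sum_{n,n'} a_{n'}^{*}\,\langle n'|H|n\rangle\, a_n
%
% Treating a_n and a_n^* as independent variables, the Schrödinger equation gives
\[
  i\hbar \frac{\partial a_n}{\partial t} = \frac{\partial \langle H\rangle}{\partial a_n^{*}},
  \qquad
  i\hbar \frac{\partial a_n^{*}}{\partial t} = -\,\frac{\partial \langle H\rangle}{\partial a_n}.
\]
% Defining conjugate momenta \pi_n = i\hbar\, a_n^{*} turns these into Hamilton's equations:
\[
  \frac{\partial a_n}{\partial t} = \frac{\partial \langle H\rangle}{\partial \pi_n},
  \qquad
  \frac{\partial \pi_n}{\partial t} = -\,\frac{\partial \langle H\rangle}{\partial a_n}.
\]
```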
https://en.wikipedia.org/wiki/Hydrolysis
Hydrolysis
Hydrolysis (; ) is any chemical reaction in which a molecule of water breaks one or more chemical bonds. The term is used broadly for substitution, elimination, and solvation reactions in which water is the nucleophile. Biological hydrolysis is the cleavage of biomolecules where a water molecule is consumed to effect the separation of a larger molecule into component parts. When a carbohydrate is broken into its component sugar molecules by hydrolysis (e.g., sucrose being broken down into glucose and fructose), this is recognized as saccharification. Hydrolysis reactions can be the reverse of a condensation reaction in which two molecules join into a larger one and eject a water molecule. Thus hydrolysis adds water to break down, whereas condensation builds up by removing water. Types Usually hydrolysis is a chemical process in which a molecule of water is added to a substance. Sometimes this addition causes both the substance and water molecule to split into two parts. In such reactions, one fragment of the target molecule (or parent molecule) gains a hydrogen ion. It breaks a chemical bond in the compound. Salts A common kind of hydrolysis occurs when a salt of a weak acid or weak base (or both) is dissolved in water. Water spontaneously ionizes into hydroxide anions and hydronium cations. The salt also dissociates into its constituent anions and cations. For example, sodium acetate dissociates in water into sodium and acetate ions. Sodium ions react very little with the hydroxide ions whereas the acetate ions combine with hydronium ions to produce acetic acid. In this case the net result is a relative excess of hydroxide ions, yielding a basic solution. Strong acids also undergo hydrolysis. For example, dissolving sulfuric acid () in water is accompanied by hydrolysis to give hydronium and bisulfate, the sulfuric acid's conjugate base. For a more technical discussion of what occurs during such a hydrolysis, see Brønsted–Lowry acid–base theory. Esters and amides Acid–base-catalysed hydrolyses are very common; one example is the hydrolysis of amides or esters. Their hydrolysis occurs when the nucleophile (a nucleus-seeking agent, e.g., water or hydroxyl ion) attacks the carbon of the carbonyl group of the ester or amide. In an aqueous base, hydroxyl ions are better nucleophiles than polar molecules such as water. In acids, the carbonyl group becomes protonated, and this leads to a much easier nucleophilic attack. The products for both hydrolyses are compounds with carboxylic acid groups. Perhaps the oldest commercially practiced example of ester hydrolysis is saponification (formation of soap). It is the hydrolysis of a triglyceride (fat) with an aqueous base such as sodium hydroxide (NaOH). During the process, glycerol is formed, and the fatty acids react with the base, converting them to salts. These salts are called soaps, commonly used in households. In addition, in living systems, most biochemical reactions (including ATP hydrolysis) take place during the catalysis of enzymes. The catalytic action of enzymes allows the hydrolysis of proteins, fats, oils, and carbohydrates. As an example, one may consider proteases (enzymes that aid digestion by causing hydrolysis of peptide bonds in proteins). They catalyze the hydrolysis of interior peptide bonds in peptide chains, as opposed to exopeptidases (another class of enzymes, that catalyze the hydrolysis of terminal peptide bonds, liberating one free amino acid at a time). 
However, proteases do not catalyze the hydrolysis of all kinds of proteins. Their action is stereo-selective: only proteins with a certain tertiary structure are targeted, as some kind of orienting force is needed to place the amide group in the proper position for catalysis. The necessary contacts between an enzyme and its substrates (proteins) are created because the enzyme folds in such a way as to form a crevice into which the substrate fits; the crevice also contains the catalytic groups. Therefore, proteins that do not fit into the crevice will not undergo hydrolysis. This specificity preserves the integrity of other proteins such as hormones, and therefore the biological system continues to function normally. Upon hydrolysis, an amide converts into a carboxylic acid and an amine or ammonia (which in the presence of acid are immediately converted to ammonium salts). One of the two oxygen groups on the carboxylic acid is derived from a water molecule, and the amine (or ammonia) gains the hydrogen ion. The hydrolysis of peptides gives amino acids. Many polyamide polymers such as nylon 6,6 hydrolyze in the presence of strong acids. The process leads to depolymerization. For this reason nylon products fail by fracturing when exposed to small amounts of acidic water. Polyesters are also susceptible to similar polymer degradation reactions. The problem is known as environmental stress cracking. ATP Hydrolysis is related to energy metabolism and storage. All living cells require a continual supply of energy for two main purposes: the biosynthesis of micro- and macromolecules, and the active transport of ions and molecules across cell membranes. The energy derived from the oxidation of nutrients is not used directly but, by means of a complex and long sequence of reactions, is channeled into a special energy-storage molecule, adenosine triphosphate (ATP). The ATP molecule contains pyrophosphate linkages (bonds formed when two phosphate units are combined) that release energy when needed. ATP can undergo hydrolysis in two ways: firstly, the removal of the terminal phosphate to form adenosine diphosphate (ADP) and inorganic phosphate (ATP + H2O → ADP + Pi); secondly, the removal of a terminal diphosphate to yield adenosine monophosphate (AMP) and pyrophosphate (ATP + H2O → AMP + PPi). The latter usually undergoes further cleavage into its two constituent phosphates. This matters because biosynthesis reactions, which usually occur in chains, can be driven in the direction of synthesis when coupled to the hydrolysis of these phosphate bonds. Polysaccharides Monosaccharides can be linked together by glycosidic bonds, which can be cleaved by hydrolysis. Two, three, several or many monosaccharides thus linked form disaccharides, trisaccharides, oligosaccharides, or polysaccharides, respectively. Enzymes that hydrolyze glycosidic bonds are called "glycoside hydrolases" or "glycosidases". The best-known disaccharide is sucrose (table sugar). Hydrolysis of sucrose yields glucose and fructose. Invertase is a sucrase used industrially for the hydrolysis of sucrose to so-called invert sugar. Lactase is essential for digestive hydrolysis of lactose in milk; many adult humans do not produce lactase and cannot digest the lactose in milk. The hydrolysis of polysaccharides to soluble sugars can be recognized as saccharification. Malt made from barley is used as a source of β-amylase to break down starch into the disaccharide maltose, which can be used by yeast to produce beer. 
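As a concrete illustration of the saccharification reactions described above, the hydrolysis of sucrose into glucose and fructose can be written as a balanced equation (standard textbook stoichiometry rather than a formula quoted from this article):

\mathrm{C_{12}H_{22}O_{11}}\;(\text{sucrose}) + \mathrm{H_2O} \longrightarrow \mathrm{C_6H_{12}O_6}\;(\text{glucose}) + \mathrm{C_6H_{12}O_6}\;(\text{fructose})

One water molecule is consumed for each glycosidic bond cleaved, so complete hydrolysis of a linear polysaccharide of n monomer units consumes n − 1 water molecules.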
Other amylase enzymes may convert starch to glucose or to oligosaccharides. Cellulose is first hydrolyzed to cellobiose by cellulase, and cellobiose is then further hydrolyzed to glucose by beta-glucosidase. Ruminants such as cows are able to hydrolyze cellulose into cellobiose and then glucose because of symbiotic bacteria that produce cellulases. DNA Hydrolysis of DNA occurs at a significant rate in vivo. For example, it is estimated that in each human cell 2,000 to 10,000 DNA purine bases turn over every day due to hydrolytic depurination, and that this is largely counteracted by specific rapid DNA repair processes. Hydrolytic DNA damage that fails to be accurately repaired may contribute to carcinogenesis and ageing. Metal aqua ions Metal ions are Lewis acids, and in aqueous solution they form metal aquo complexes of the general formula . The aqua ions undergo hydrolysis to a greater or lesser extent. The first hydrolysis step is given generically as [M(H2O)_n]^(m+) + H2O ⇌ [M(H2O)_(n−1)(OH)]^((m−1)+) + H3O+. Thus the aqua cations behave as acids in terms of Brønsted–Lowry acid–base theory. This effect is easily explained by considering the inductive effect of the positively charged metal ion, which weakens the O–H bond of an attached water molecule, making the liberation of a proton relatively easy. The dissociation constant, pKa, for this reaction is more or less linearly related to the charge-to-size ratio of the metal ion. Ions with low charges, such as , are very weak acids with almost imperceptible hydrolysis. Large divalent ions such as , , and have a pKa of 6 or more and would not normally be classed as acids, but small divalent ions such as undergo extensive hydrolysis. Trivalent ions like and are weak acids whose pKa is comparable to that of acetic acid. Solutions of salts such as or in water are noticeably acidic; the hydrolysis can be suppressed by adding an acid such as nitric acid, making the solution more acidic. Hydrolysis may proceed beyond the first step, often with the formation of polynuclear species via the process of olation. Some "exotic" species such as are well characterized. Hydrolysis tends to proceed as pH rises, leading in many cases to the precipitation of a hydroxide such as or . These substances, major constituents of bauxite, are known as laterites and are formed by leaching from rocks of most of the ions other than aluminium and iron, and subsequent hydrolysis of the remaining aluminium and iron. Mechanism strategies Acetals, imines, and enamines can be converted back into ketones by treatment with excess water under acid-catalyzed conditions. Catalysis Acidic hydrolysis Acid catalysis can be applied to hydrolyses, for example in the conversion of cellulose or starch to glucose. Carboxylic acids can be produced from the acid hydrolysis of esters. Acids catalyze the hydrolysis of nitriles to amides. Acid hydrolysis does not usually refer to the acid-catalyzed addition of the elements of water to double or triple bonds by electrophilic addition, as may originate from a hydration reaction. Acid hydrolysis is used to prepare monosaccharides, usually with the help of mineral acids, although formic acid and trifluoroacetic acid have also been used. Acid hydrolysis can be utilized in the pretreatment of cellulosic material, so as to cut the interchain linkages in hemicellulose and cellulose. Alkaline hydrolysis Alkaline hydrolysis usually refers to types of nucleophilic substitution reactions in which the attacking nucleophile is a hydroxide ion. 
The best known type is saponification: cleaving esters into carboxylate salts and alcohols. In ester hydrolysis, the hydroxide ion nucleophile attacks the carbonyl carbon. This mechanism is supported by isotope labeling experiments. For example, when ethyl propionate with an oxygen-18 labeled ethoxy group is treated with sodium hydroxide (NaOH), the oxygen-18 is completely absent from the sodium propionate product and is found exclusively in the ethanol formed. The reaction is often used to solubilize solid organic matter. Chemical drain cleaners take advantage of this method to dissolve hair and fat in pipes. The reaction is also used to dispose of human and other animal remains as an alternative to traditional burial or cremation.
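A minimal scheme for the ester hydrolysis and saponification chemistry discussed above, written with a generic ester RCOOR' (the R groups are placeholders for this sketch, not specific compounds from the text):

\mathrm{RCOOR' + H_2O \rightleftharpoons RCOOH + R'OH} \quad (\text{acid-catalysed hydrolysis})
\mathrm{RCOOR' + OH^- \longrightarrow RCOO^- + R'OH} \quad (\text{alkaline hydrolysis / saponification})

The alkaline reaction is effectively irreversible because the product carboxylate is deprotonated and unreactive toward the alcohol, which is one reason saponification runs to completion while the acid-catalysed reaction is an equilibrium.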
Physical sciences
Other reactions
Chemistry
14387
https://en.wikipedia.org/wiki/Warm-blooded
Warm-blooded
Warm-blooded is a term referring to animal species whose bodies maintain a temperature higher than that of their environment. In particular, homeothermic species (including birds and mammals) maintain a stable body temperature by regulating metabolic processes. Other species have various degrees of thermoregulation. Because there are more than two categories of temperature control utilized by animals, the terms warm-blooded and cold-blooded have been deprecated in the scientific field. Terminology In general, warm-bloodedness refers to three separate categories of thermoregulation. Endothermy is the ability of some creatures to control their body temperatures through internal means such as muscle shivering or increasing their metabolism. The opposite of endothermy is ectothermy. Homeothermy maintains a stable internal body temperature regardless of external influence and temperatures. The stable internal temperature is often higher than the immediate environment. The opposite is poikilothermy. The only known living homeotherms are mammals and birds, as well as one lizard, the Argentine black and white tegu. Some extinct reptiles such as ichthyosaurs, pterosaurs, plesiosaurs and some non-avian dinosaurs are believed to have been homeotherms. Tachymetabolism maintains a high "resting" metabolism. In essence, tachymetabolic creatures are "on" all the time. Though their resting metabolism is still many times slower than their active metabolism, the difference is often not as large as that seen in bradymetabolic creatures. Tachymetabolic creatures have greater difficulty dealing with a scarcity of food. Varieties of thermoregulation A significant proportion of creatures commonly referred to as "warm-blooded," like birds and mammals, exhibit all three of these categories (i.e., they are endothermic, homeothermic, and tachymetabolic). However, over the past three decades, investigations in the field of animal thermophysiology have unveiled numerous species within these two groups that do not meet all these criteria. For instance, many bats and small birds become poikilothermic and bradymetabolic during sleep (or, in nocturnal species, during the day). For such creatures, the term heterothermy was introduced. Further examinations of animals traditionally classified as cold-blooded have revealed that most creatures manifest varying combinations of the three aforementioned terms, along with their counterparts (ectothermy, poikilothermy, and bradymetabolism), thus creating a broad spectrum of body temperature types. Some fish have warm-blooded characteristics, such as the opah. Swordfish and some sharks have circulatory mechanisms that keep their brains and eyes above ambient temperatures and thus increase their ability to detect and react to prey. Tunas and some sharks have similar mechanisms in their muscles, improving their stamina when swimming at high speed. Heat generation Body heat is generated by metabolism. This relates to the chemical reaction in cells that break down glucose into water and carbon dioxide, thereby producing adenosine triphosphate (ATP), a high-energy compound used to power other cellular processes. Muscle contraction is one such metabolic process generating heat energy, and additional heat results from friction as blood circulates through the vascular system. All organisms metabolize food and other inputs, but some make better use of the output than others. 
Like all energy conversions, metabolism is rather inefficient, and around 60% of the available energy is converted to heat rather than to ATP. In most organisms, this heat dissipates into the surroundings. However, endothermic homeotherms (generally referred to as "warm-blooded" animals) not only produce more heat but also possess superior means of retaining and regulating it compared to other animals. They exhibit a higher basal metabolic rate and can further increase their metabolic rate during strenuous activity. They usually have well-developed insulation in order to retain body heat: fur and blubber in the case of mammals and feathers in birds. When this insulation is insufficient to maintain body temperature, they may resort to shivering—rapid muscle contractions that quickly use up ATP, thus stimulating cellular metabolism to replace it and consequently produce more heat. Additionally, almost all eutherian mammals (with the only known exception being swine) have brown adipose tissue whose mitochondria are capable of non-shivering thermogenesis. This process involves the direct dissipation of the mitochondrial gradient as heat via an uncoupling protein, thereby "uncoupling" the gradient from its usual function of driving ATP production via ATP synthase. In warm environments, these animals employ evaporative cooling to shed excess heat, either through sweating (some mammals) or by panting (many mammals and all birds)—mechanisms generally absent in poikilotherms. Defense against fungi It has been hypothesized that warm-bloodedness evolved in mammals and birds as a defense against fungal infections. Very few fungi can survive the body temperatures of warm-blooded animals. By comparison, insects, reptiles, and amphibians are plagued by fungal infections. Warm-blooded animals have a defense against pathogens contracted from the environment, since environmental pathogens are not adapted to their higher internal temperature.
Biology and health sciences
Basics
Biology
14392
https://en.wikipedia.org/wiki/Howitzer
Howitzer
The howitzer is an artillery weapon that falls between a cannon (or field gun) and a mortar. It is capable of both low-angle fire like a field gun and high-angle fire like a mortar, with the distinction between low- and high-angle fire conventionally drawn at 45 degrees, or 800 mils in the NATO system. With their long-range capabilities, howitzers can be used to great effect in a battery formation with other artillery pieces, such as long-barreled guns, mortars, and rocket artillery. The term "howitzer" derives from the Czech word houfnice, from houf, meaning "crowd", and was later adapted into various European languages. Developed in the late 16th century as a medium-trajectory weapon for siege warfare, howitzers were valued for their ability to fire explosive shells and incendiary materials into fortifications. Unlike mortars, which had fixed firing angles, howitzers could be fired at various angles, providing greater flexibility in combat. Throughout the 18th and 19th centuries, howitzers evolved to become more mobile and versatile. The introduction of rifling in the mid-19th century led to significant changes in howitzer design and usage. By the early 20th century, howitzers were classified into different categories based on their size and role, including field howitzers, siege howitzers, and super-heavy siege howitzers. During World War I and World War II, howitzers played significant roles in combat, particularly in trench warfare and artillery-heavy strategies such as the Soviet deep battle doctrine. In modern times, the distinctions between guns and howitzers have become less pronounced, with many artillery pieces combining characteristics of both. Contemporary howitzers are often self-propelled, mounted on tracked or wheeled vehicles, and capable of firing at high angles with adjustable propellant charges for increased range and accuracy. Etymology The English word howitzer comes from the Czech word , from , 'crowd', which is in turn a borrowing from the Middle High German word or (modern German ), meaning 'crowd, throng', plus the Czech nominal suffix . , sometimes in the compound , also designated a pike square formation in German. In the Hussite Wars of the 1420s and 1430s, the Hussites used short-barreled cannons to fire at short distances into crowds of infantry, or into charging heavy cavalry, to make horses shy away. The word was rendered into German as in the earliest attested use in a document dating from 1440; later German renderings include and, eventually , from which derive the Scandinavian , Polish and Croatian , Estonian , Finnish , Russian (), Serbian (), Ukrainian (), Italian , Spanish , Portuguese , French , Romanian and the Dutch word , which led to the English word howitzer. Since World War I, the word howitzer has increasingly been used to describe artillery pieces that previously would have belonged to the category of gun-howitzers – relatively long barrels and high muzzle velocities combined with multiple propelling charges and high maximum elevations. This is particularly true in the armed forces of the United States, where gun-howitzers have been officially described as howitzers since World War II. Because of this practice, the word howitzer is used in some armies as a generic term for any kind of artillery piece that is designed to attack targets using indirect fire. Thus, artillery pieces that bear little resemblance to howitzers of earlier eras are now described as howitzers, although the British call them guns. The British had a further method of nomenclature. 
In the 18th century, they adopted projectile weight for guns replacing an older naming system (such as culverin, saker, etc.) that had developed in the late 15th century. Mortars had been categorized by calibre in inches in the 17th century, and this was duplicated with howitzers. U.S. military doctrine defines howitzers as any cannon artillery capable of both high-angle fire (45° to 90° elevation) and low-angle fire (0° to 45° elevation); guns are defined as being only capable of low-angle fire (0° to 45° elevation); and mortars are defined as being only capable of high-angle fire (45° to 90° elevation). History Early modern period The first artillery identified as howitzers developed in the late 16th century as a medium-trajectory weapon between the low trajectory (direct fire) of cannons and the high trajectory (indirect fire) of mortars. Originally intended for use in siege warfare, they were particularly useful for delivering cast iron shells filled with gunpowder or incendiary materials into the interior of fortifications. In contrast to contemporaneous mortars, which were fired at a fixed angle and were entirely dependent on adjustments to the size of propellant charges to vary range, howitzers could be fired at a wide variety of angles. Thus, while howitzer gunnery was more complicated than the technique of employing mortars, the howitzer was an inherently more flexible weapon that could fire its projectiles along a wide range of trajectories. In the middle of the 18th century, a number of European armies began to introduce howitzers that were mobile enough to accompany armies in the field. Though usually fired at the relatively high angles of fire used by contemporary siege howitzers, these field howitzers were rarely defined by this capability. Rather, as the field guns of the day were usually restricted to inert projectiles (which relied entirely on momentum for their destructive effects), the field howitzers of the 18th century were chiefly valued for their ability to fire explosive shells. Many, for the sake of simplicity and rapidity of fire, dispensed with adjustable propellant charges. The Abus gun was an early form of howitzer in the Ottoman Empire. In 1758, the Russian Empire introduced a specific type of howitzer (or rather gun-howitzer), with a conical chamber, called a licorne, which remained in service for the next 100 years. In the mid-19th century, some armies attempted to simplify their artillery parks by introducing smoothbore artillery pieces that were designed to fire both explosive projectiles and cannonballs, thereby replacing both field howitzers and field guns. The most famous of these "gun-howitzers" was the Napoleon 12-pounder, a weapon of French design that was extensively used in the American Civil War. In 1859, the armies of Europe (including those that had recently adopted gun-howitzers) began to rearm field batteries with rifled field guns. These field pieces used cylindrical projectiles that, while smaller in caliber than the spherical shells of smoothbore field howitzers, could carry a comparable charge of gunpowder. Moreover, their greater range let them create many of the same effects (such as firing over low walls) that previously required the sharply curved trajectories of smoothbore field howitzers. Because of this, military authorities saw no point in obtaining rifled field howitzers to replace their smoothbore counterparts but instead used rifled field guns to replace both guns and howitzers. 
In siege warfare, the introduction of rifling had the opposite effect. In the 1860s, artillery officers discovered that rifled siege howitzers (substantially larger than field howitzers) were a more efficient means of destroying walls (particularly walls protected by certain kinds of intervening obstacles) than smoothbore siege guns or siege mortars. Thus, at the same time armies were taking howitzers of one sort out of their field batteries, they were introducing howitzers of another sort into their siege trains and fortresses. The lightest of these weapons (later known as "light siege howitzers") had calibers around and fired shells that weighed between . The heaviest (later called "medium siege howitzers") had calibers between and fired shells that weighed about . During the 1880s, a third type of siege howitzer was added to inventories of a number of European armies. With calibers that ranged between and shells that weighed more than , these soon came to be known as "heavy siege howitzers". A good example of a weapon of this class is provided by the 9.45-inch (240 mm) weapon that the British Army purchased from the Skoda works in 1899. 20th century In the early 20th century, the introduction of howitzers that were significantly larger than the heavy siege howitzers of the day made necessary the creation of a fourth category, that of "super-heavy siege howitzers". Weapons of this category include the famous Big Bertha of the German Army and the 15-inch (381 mm) howitzer of the British Royal Marine Artillery. These large howitzers were transported mechanically rather than by teams of horses. They were transported as several loads and had to be assembled at their firing position. These field howitzers introduced at the end of the 19th century could fire shells with high trajectories giving a steep angle of descent and, as a result, could strike targets that were protected by intervening obstacles. They could also fire shells that were about twice as large as shells fired by guns of the same size. Thus, while a field gun that weighed one ton or so was limited to shells that weigh around , a howitzer of the same weight could fire shells. This is a matter of fundamental mechanics affecting the stability and hence the weight of the carriage. As heavy field howitzers and light siege howitzers of the late 19th and early 20th centuries used ammunition of the same size and type, there was a marked tendency for the two types to merge. At first, this was largely a matter of the same basic weapon being employed on two different mountings. Later, as on-carriage recoil-absorbing systems eliminated many of the advantages that siege platforms had enjoyed over field carriages, the same combination of barrel assembly, recoil mechanism and carriage was used in both roles. By the early 20th century, the differences between guns and howitzers were relative, not absolute, and generally recognized as follows: Guns – higher velocity and longer range, single charge propellant, maximum elevation generally less than 45 degrees. Howitzers – lower velocity and shorter range, multi-charge propellant, maximum elevation typically more than 45 degrees. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers that gave a steep angle of descent, which were better suited than guns to the task of striking targets in a vertical plane (such as trenches), with large amounts of explosive and considerably less barrel wear. 
The German army was well equipped with howitzers, having far more at the beginning of the war than France. Many howitzers introduced in the course of World War I had longer barrels than pre-war howitzers. The standard German light field howitzer at the start of the war (the 10.5 cm leichte Feldhaubitze 98/09) had a barrel that was 16 calibers long, but the light field howitzer adopted by the German Army in 1916 (105 mm leichte Feldhaubitze 16) had a barrel that was 22 calibers long. At the same time, new models of field gun introduced during that conflict, such as the field gun adopted by the German Army in 1916 (7,7 cm Feldkanone 16) were often provided with carriages that allowed firing at comparatively high angles, and adjustable propellant cartridges. In the years after World War I, the tendency of guns and howitzers to acquire each other's characteristics led to the renaissance of the concept of the gun-howitzer. This was a product of technical advances such as the French invention of autofrettage just before World War I, which led to stronger and lighter barrels, the use of cut-off gear to control recoil length depending on firing elevation angle, and the invention of muzzle brakes to reduce recoil forces. Like the gun-howitzers of the 19th century, those of the 20th century replaced both guns and howitzers. Thus, the 25-pounder "gun-howitzer" of the British Army replaced both the 18-pounder field gun and the 4.5-inch howitzer. During World War II, the military doctrine of Soviet deep battle called for extensive use of heavy artillery to hold the formal line of front. Soviet doctrine was remarkably different from the German doctrine of Blitzkrieg and called for a far more extensive use of artillery. As a result, howitzers saw most of the action on the Eastern front. Most of the howitzers produced by the USSR at the time were not self-propelled. Notable examples of Soviet howitzers include the M-10, M-30 and D-1. Since World War II, most of the artillery pieces adopted by armies for attacking targets on land have combined the traditional characteristics of guns and howitzers – high muzzle velocity, long barrels, long range, multiple charges and maximum elevation angles greater than 45 degrees. The term "gun-howitzer" is sometimes used for these (e.g., in Russia); many nations use "howitzer", while the UK (and most members of The Commonwealth of Nations) calls them "guns", for example Gun, 105 mm, Field, L118. Types A self-propelled howitzer is mounted on a tracked or wheeled motor vehicle. In many cases, it is protected by some sort of armor so that it superficially resembles a tank. This armor is designed primarily to protect the crew from shrapnel and small arms fire, not anti-armor weapons. A pack howitzer is a relatively light howitzer that is designed to be easily broken down into several pieces, each of which is small enough to be carried by mule or pack-horse. A mountain howitzer is a relatively light howitzer designed for use in mountainous terrain. Most, but not all, mountain howitzers are also pack howitzers. A siege howitzer is a howitzer that is designed to be fired from a mounting on a fixed platform of some sort. A field howitzer is a howitzer that is mobile enough to accompany a field army on campaign. It is invariably provided with a wheeled carriage of some sort. Gallery
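The elevation-based distinctions summarized above can be restated as a small classification sketch. This is only an illustrative reading of the U.S. doctrinal definitions quoted earlier (guns: low-angle only; mortars: high-angle only; howitzers: both), and the function names and the 6400-mil NATO circle used for the conversion are assumptions of the example, not part of the article:

def degrees_to_nato_mils(deg: float) -> float:
    # NATO convention: 6400 mils per full circle, so 45 degrees = 800 mils.
    return deg * 6400.0 / 360.0

def classify_piece(min_elevation_deg: float, max_elevation_deg: float) -> str:
    # Guns fire only low-angle (below 45 degrees), mortars only high-angle
    # (above 45 degrees), and howitzers are capable of both.
    low_angle = min_elevation_deg < 45.0
    high_angle = max_elevation_deg > 45.0
    if low_angle and high_angle:
        return "howitzer"
    if low_angle:
        return "gun"
    if high_angle:
        return "mortar"
    return "undefined"

print(classify_piece(0, 70))     # howitzer
print(degrees_to_nato_mils(45))  # 800.0

Under this reading, a modern gun-howitzer with a 0–70 degree elevation range classifies as a howitzer, consistent with the post-World War II usage described above.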
Technology
Artillery
null
14403
https://en.wikipedia.org/wiki/Hydrogen%20peroxide
Hydrogen peroxide
Hydrogen peroxide is a chemical compound with the formula . In its pure form, it is a very pale blue liquid that is slightly more viscous than water. It is used as an oxidizer, bleaching agent, and antiseptic, usually as a dilute solution (3%–6% by weight) in water for consumer use and in higher concentrations for industrial use. Concentrated hydrogen peroxide, or "high-test peroxide", decomposes explosively when heated and has been used as both a monopropellant and an oxidizer in rocketry. Hydrogen peroxide is a reactive oxygen species and the simplest peroxide, a compound having an oxygen–oxygen single bond. It decomposes slowly into water and elemental oxygen when exposed to light, and rapidly in the presence of organic or reactive compounds. It is typically stored with a stabilizer in a weakly acidic solution in an opaque bottle. Hydrogen peroxide is found in biological systems including the human body. Enzymes that use or decompose hydrogen peroxide are classified as peroxidases. Properties The boiling point of has been extrapolated as being , approximately higher than water. In practice, hydrogen peroxide will undergo potentially explosive thermal decomposition if heated to this temperature. It may be safely distilled at lower temperatures under reduced pressure. Hydrogen peroxide forms stable adducts with urea (hydrogen peroxide–urea), sodium carbonate (sodium percarbonate) and other compounds. An acid-base adduct with triphenylphosphine oxide is a useful "carrier" for in some reactions. Structure Hydrogen peroxide () is a nonplanar molecule with (twisted) C2 symmetry; this was first shown by Paul-Antoine Giguère in 1950 using infrared spectroscopy. Although the O−O bond is a single bond, the molecule has a relatively high rotational barrier of 386 cm−1 (4.62 kJ/mol) for rotation between enantiomers via the trans configuration, and 2460 cm−1 (29.4 kJ/mol) via the cis configuration. These barriers are proposed to be due to repulsion between the lone pairs of the adjacent oxygen atoms and dipolar effects between the two O–H bonds. For comparison, the rotational barrier for ethane is 1040 cm−1 (12.4 kJ/mol). The approximately 100° dihedral angle between the two O–H bonds makes the molecule chiral. It is the smallest and simplest molecule to exhibit enantiomerism. It has been proposed that the enantiospecific interactions of one rather than the other may have led to amplification of one enantiomeric form of ribonucleic acids and therefore an origin of homochirality in an RNA world. The molecular structures of gaseous and crystalline are significantly different. This difference is attributed to the effects of hydrogen bonding, which is absent in the gaseous state. Crystals of are tetragonal with the space group D or P41212. Aqueous solutions In aqueous solutions, hydrogen peroxide forms a eutectic mixture, exhibiting freezing-point depression down as low as –56 °C; pure water has a freezing point of 0 °C and pure hydrogen peroxide of –0.43 °C. The boiling point of the same mixtures is also depressed in relation with the mean of both boiling points (125.1 °C). It occurs at 114 °C. This boiling point is 14 °C greater than that of pure water and 36.2 °C less than that of pure hydrogen peroxide. Hydrogen peroxide is most commonly available as a solution in water. For consumers, it is usually available from pharmacies at 3 and 6 wt% concentrations. 
The concentrations are sometimes described in terms of the volume of oxygen gas generated; one milliliter of a 20-volume solution generates twenty milliliters of oxygen gas when completely decomposed. For laboratory use, 30 wt% solutions are most common. Commercial grades from 70% to 98% are also available, but due to the potential of solutions of more than 68% hydrogen peroxide to be converted entirely to steam and oxygen (with the temperature of the steam increasing as the concentration increases above 68%) these grades are potentially far more hazardous and require special care in dedicated storage areas. Buyers must typically allow inspection by commercial manufacturers. Comparison with analogues Hydrogen peroxide has several structural analogues with bonding arrangements (water also shown for comparison). It has the highest (theoretical) boiling point of this series (X = O, S, N, P). Its melting point is also fairly high, being comparable to that of hydrazine and water, with only hydroxylamine crystallising significantly more readily, indicative of particularly strong hydrogen bonding. Diphosphane and hydrogen disulfide exhibit only weak hydrogen bonding and have little chemical similarity to hydrogen peroxide. Structurally, the analogues all adopt similar skewed structures, due to repulsion between adjacent lone pairs. Natural occurrence Hydrogen peroxide is produced by various biological processes mediated by enzymes. Hydrogen peroxide has been detected in surface water, in groundwater, and in the atmosphere. It can also form when water is exposed to UV light. Sea water contains 0.5 to 14 μg/L of hydrogen peroxide, and freshwater contains 1 to 30 μg/L. Concentrations in air are about 0.4 to 4 μg/m3, varying over several orders of magnitude depending in conditions such as season, altitude, daylight and water vapor content. In rural nighttime air it is less than 0.014 μg/m3, and in moderate photochemical smog it is 14 to 42 μg/m3. The amount of hydrogen peroxide in biological systems can be assayed using a fluorometric assay. Discovery Alexander von Humboldt is sometimes said to have been the first to report the first synthetic peroxide, barium peroxide, in 1799 as a by-product of his attempts to decompose air, although this is disputed due to von Humboldt's ambiguous wording. Nineteen years later Louis Jacques Thénard recognized that this compound could be used for the preparation of a previously unknown compound, which he described as ("oxygenated water") — subsequently known as hydrogen peroxide. An improved version of Thénard's process used hydrochloric acid, followed by addition of sulfuric acid to precipitate the barium sulfate byproduct. This process was used from the end of the 19th century until the middle of the 20th century. The bleaching effect of peroxides and their salts on natural dyes had been known since Thénard's experiments in the 1820s, but early attempts of industrial production of peroxides failed. The first plant producing hydrogen peroxide was built in 1873 in Berlin. The discovery of the synthesis of hydrogen peroxide by electrolysis with sulfuric acid introduced the more efficient electrochemical method. It was first commercialized in 1908 in Weißenstein, Carinthia, Austria. The anthraquinone process, which is still used, was developed during the 1930s by the German chemical manufacturer IG Farben in Ludwigshafen. 
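As a rough check of the "volume strength" labelling mentioned above, the conversion from weight-percent can be sketched as follows; this is a back-of-envelope estimate that assumes the idealized decomposition 2 H2O2 → 2 H2O + O2, a solution density of about 1 g/mL, and a molar gas volume near 22.4 L, none of which are figures taken from the article:

1\ \mathrm{L\ of\ 3\ wt\%\ solution} \approx 30\ \mathrm{g\ H_2O_2} = \frac{30}{34}\ \mathrm{mol} \approx 0.88\ \mathrm{mol\ H_2O_2} \;\Rightarrow\; 0.44\ \mathrm{mol\ O_2} \times 22.4\ \mathrm{L\,mol^{-1}} \approx 9.9\ \mathrm{L\ O_2}

So a 3 wt% pharmacy solution is roughly a "10-volume" solution, and 6 wt% is roughly "20-volume", consistent with the statement that one millilitre of a 20-volume solution yields about twenty millilitres of oxygen.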
The increased demand and improvements in the synthesis methods resulted in the rise of the annual production of hydrogen peroxide from 35,000 tonnes in 1950, to over 100,000 tonnes in 1960, to 300,000 tonnes by 1970; by 1998 it reached 2.7 million tonnes. Early attempts failed to produce neat hydrogen peroxide. Anhydrous hydrogen peroxide was first obtained by vacuum distillation. Determination of the molecular structure of hydrogen peroxide proved to be very difficult. In 1892, the Italian physical chemist Giacomo Carrara (1864–1925) determined its molecular mass by freezing-point depression, which confirmed that its molecular formula is . seemed to be just as possible as the modern structure, and as late as in the middle of the 20th century at least half a dozen hypothetical isomeric variants of two main options seemed to be consistent with the available evidence. In 1934, the English mathematical physicist William Penney and the Scottish physicist Gordon Sutherland proposed a molecular structure for hydrogen peroxide that was very similar to the presently accepted one. Production In 1994, world production of was around 1.9 million tonnes and grew to 2.2 million in 2006, most of which was at a concentration of 70% or less. In that year, bulk 30% sold for around 0.54 USD/kg, equivalent to US$1.50/kg (US$0.68/lb) on a "100% basis". Today, hydrogen peroxide is manufactured almost exclusively by the anthraquinone process, which was originally developed by BASF in 1939. It begins with the reduction of an anthraquinone (such as 2-ethylanthraquinone or the 2-amyl derivative) to the corresponding anthrahydroquinone, typically by hydrogenation on a palladium catalyst. In the presence of oxygen, the anthrahydroquinone then undergoes autoxidation: the labile hydrogen atoms of the hydroxy groups transfer to the oxygen molecule, to give hydrogen peroxide and regenerating the anthraquinone. Most commercial processes achieve oxidation by bubbling compressed air through a solution of the anthrahydroquinone, with the hydrogen peroxide then extracted from the solution and the anthraquinone recycled back for successive cycles of hydrogenation and oxidation. The net reaction for the anthraquinone-catalyzed process is: The economics of the process depend heavily on effective recycling of the extraction solvents, the hydrogenation catalyst and the expensive quinone. Historical methods Hydrogen peroxide was once prepared industrially by hydrolysis of ammonium persulfate: was itself obtained by the electrolysis of a solution of ammonium bisulfate () in sulfuric acid. Other routes Small amounts are formed by electrolysis, photochemistry, electric arc, and related methods. A commercially viable route for hydrogen peroxide via the reaction of hydrogen with oxygen favours production of water but can be stopped at the peroxide stage. One economic obstacle has been that direct processes give a dilute solution uneconomic for transportation. None of these has yet reached a point where it can be used for industrial-scale synthesis. Reactions Acid-base Hydrogen peroxide is about 1000 times stronger as an acid than water. (pK = 11.65) Disproportionation Hydrogen peroxide disproportionates to form water and oxygen with a ΔHo of –2884.5 kJ/kg and a ΔS of 70.5 J/(mol·K): The rate of decomposition increases with rise in temperature, concentration, and pH. is unstable under alkaline conditions. Decomposition is catalysed by various redox-active ions or compounds, including most transition metals and their compounds (e.g. 
manganese dioxide (), silver, and platinum). Oxidation reactions The redox properties of hydrogen peroxide depend on pH. In acidic solutions, is a powerful oxidizer. Sulfite () is oxidized to sulfate (). Reduction reactions Under alkaline conditions, hydrogen peroxide is a reductant. When acts as a reducing agent, oxygen gas is also produced. For example, hydrogen peroxide will reduce sodium hypochlorite and potassium permanganate, which is a convenient method for preparing oxygen in the laboratory: The oxygen produced from hydrogen peroxide and sodium hypochlorite is in the singlet state. Hydrogen peroxide also reduces silver oxide to silver: Although usually a reductant, alkaline hydrogen peroxide converts Mn(II) to the dioxide: In a related reaction, potassium permanganate is reduced to by acidic : Organic reactions Hydrogen peroxide is frequently used as an oxidizing agent. Illustrative is the oxidation of thioethers to sulfoxides, such as the conversion of thioanisole to methyl phenyl sulfoxide: Alkaline hydrogen peroxide is used for the epoxidation of electron-deficient alkenes such as acrylic acid derivatives, and for the oxidation of alkylboranes to alcohols, the second step of hydroboration-oxidation. It is also the principal reagent in the Dakin oxidation process. Precursor to other peroxide compounds Hydrogen peroxide is a weak acid, forming hydroperoxide or peroxide salts with many metals. It also converts metal oxides into the corresponding peroxides. For example, upon treatment with hydrogen peroxide, chromic acid ( and ) forms a blue peroxide . Biochemistry Production The aerobic oxidation of glucose in the presence of the enzyme glucose oxidase produces hydrogen peroxide. The conversion affords gluconolactone: Superoxide dismutases (SODs) are enzymes that promote the disproportionation of superoxide into oxygen and hydrogen peroxide. Peroxisomes are organelles found in virtually all eukaryotic cells. They are involved in the catabolism of very long chain fatty acids, branched chain fatty acids, D-amino acids, and polyamines, and in the biosynthesis of plasmalogens and ether phospholipids, which are found in mammalian brains and lungs. They produce hydrogen peroxide in a process catalyzed by flavin adenine dinucleotide (FAD). Hydrogen peroxide also arises from the degradation of adenosine monophosphate, which yields hypoxanthine. Hypoxanthine is then oxidatively catabolized first to xanthine and then to uric acid, in a reaction catalyzed by the enzyme xanthine oxidase: The degradation of guanosine monophosphate yields xanthine as an intermediate product, which is then converted in the same way to uric acid with the formation of hydrogen peroxide. Consumption Catalase, another peroxisomal enzyme, uses hydrogen peroxide to oxidize other substrates, including phenols, formic acid, formaldehyde, and alcohol, by means of a peroxidation reaction, thus eliminating the poisonous hydrogen peroxide in the process. This reaction is important in liver and kidney cells, where the peroxisomes neutralize various toxic substances that enter the blood. Some of the ethanol humans drink is oxidized to acetaldehyde in this way. In addition, when excess accumulates in the cell, catalase converts it to through this reaction: Glutathione peroxidase, a selenoenzyme, also catalyzes the disproportionation of hydrogen peroxide. 
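For clarity, the balanced disproportionation underlying both the decomposition discussed above and the catalase reaction is the standard textbook equation (supplied here because the article elides the formula):

\mathrm{2\,H_2O_2 \longrightarrow 2\,H_2O + O_2}

Each mole of peroxide that disproportionates therefore yields half a mole of oxygen gas, which is also the basis of the "volume strength" arithmetic sketched earlier.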
Fenton reaction The reaction of and hydrogen peroxide is the basis of the Fenton reaction, which generates hydroxyl radicals, which are of significance in biology: The Fenton reaction explains the toxicity of hydrogen peroxides because the hydroxyl radicals rapidly and irreversibly oxidize all organic compounds, including proteins, membrane lipids, and DNA. Hydrogen peroxide is a significant source of oxidative DNA damage in living cells. DNA damage includes formation of 8-Oxo-2'-deoxyguanosine among many other altered bases, as well as strand breaks, inter-strand crosslinks, and deoxyribose damage. By interacting with Cl¯, hydrogen peroxide also leads to chlorinated DNA bases. Hydroxyl radicals readily damage vital cellular components, especially those of the mitochondria. The compound is a major factor implicated in the free-radical theory of aging, based on its ready conversion into a hydroxyl radical. Function Eggs of sea urchin, shortly after fertilization by a sperm, produce hydrogen peroxide. It is then converted to hydroxyl radicals (HO•), which initiate radical polymerization, which surrounds the eggs with a protective layer of polymer. The bombardier beetle combines hydroquinone and hydrogen peroxide, leading to a violent exothermic chemical reaction to produce boiling, foul-smelling liquid that partially becomes a gas (flash evaporation) and is expelled through an outlet valve with a loud popping sound. As a proposed signaling molecule, hydrogen peroxide may regulate a wide variety of biological processes. At least one study has tried to link hydrogen peroxide production to cancer. Uses Bleaching About 60% of the world's production of hydrogen peroxide is used for pulp- and paper-bleaching. The second major industrial application is the manufacture of sodium percarbonate and sodium perborate, which are used as mild bleaches in laundry detergents. A representative conversion is: Sodium percarbonate, which is an adduct of sodium carbonate and hydrogen peroxide, is the active ingredient in such laundry products as OxiClean and Tide laundry detergent. When dissolved in water, it releases hydrogen peroxide and sodium carbonate. By themselves these bleaching agents are only effective at wash temperatures of or above and so, often are used in conjunction with bleach activators, which facilitate cleaning at lower temperatures. Hydrogen peroxide has also been used as a flour bleaching agent and a tooth and bone whitening agent. Production of organic peroxy compounds It is used in the production of various organic peroxides with dibenzoyl peroxide being a high volume example. Peroxy acids, such as peracetic acid and meta-chloroperoxybenzoic acid also are produced using hydrogen peroxide. Hydrogen peroxide has been used for creating organic peroxide-based explosives, such as acetone peroxide. It is used as an initiator in polymerizations. Hydrogen peroxide reacts with certain di-esters, such as phenyl oxalate ester (cyalume), to produce chemiluminescence; this application is most commonly encountered in the form of glow sticks. Production of inorganic peroxides The reaction with borax leads to sodium perborate, a bleach used in laundry detergents: Sewage treatment Hydrogen peroxide is used in certain waste-water treatment processes to remove organic impurities. In advanced oxidation processing, the Fenton reaction gives the highly reactive hydroxyl radical (•OH). This degrades organic compounds, including those that are ordinarily robust, such as aromatic or halogenated compounds. 
It can also oxidize sulfur-based compounds present in the waste; which is beneficial as it generally reduces their odour. Disinfectant Hydrogen peroxide may be used for the sterilization of various surfaces, including surgical instruments, and may be deployed as a vapour (VHP) for room sterilization. demonstrates broad-spectrum efficacy against viruses, bacteria, yeasts, and bacterial spores. In general, greater activity is seen against Gram-positive than Gram-negative bacteria; however, the presence of catalase or other peroxidases in these organisms may increase tolerance in the presence of lower concentrations. Lower levels of concentration (3%) will work against most spores; higher concentrations (7 to 30%) and longer contact times will improve sporicidal activity. Hydrogen peroxide is seen as an environmentally safe alternative to chlorine-based bleaches, as it degrades to form oxygen and water and it is generally recognized as safe as an antimicrobial agent by the U.S. Food and Drug Administration (FDA). Propellant High-concentration is referred to as "high-test peroxide" (HTP). It can be used as either a monopropellant (not mixed with fuel) or the oxidizer component of a bipropellant rocket. Use as a monopropellant takes advantage of the decomposition of 70–98% concentration hydrogen peroxide into steam and oxygen. The propellant is pumped into a reaction chamber, where a catalyst, usually a silver or platinum screen, triggers decomposition, producing steam at over , which is expelled through a nozzle, generating thrust. monopropellant produces a maximal specific impulse (Isp) of 161 s (1.6 kN·s/kg). Peroxide was the first major monopropellant adopted for use in rocket applications. Hydrazine eventually replaced hydrogen peroxide monopropellant thruster applications primarily because of a 25% increase in the vacuum specific impulse. Hydrazine (toxic) and hydrogen peroxide (less toxic [ACGIH TLV 0.01 and 1 ppm respectively]) are the only two monopropellants (other than cold gases) to have been widely adopted and utilized for propulsion and power applications. The Bell Rocket Belt, reaction control systems for X-1, X-15, Centaur, Mercury, Little Joe, as well as the turbo-pump gas generators for X-1, X-15, Jupiter, Redstone and Viking used hydrogen peroxide as a monopropellant. The RD-107 engines (used from 1957 to present) in the R-7 series of rockets decompose hydrogen peroxide to power the turbopumps. In bipropellant applications, is decomposed to oxidize a burning fuel. Specific impulses as high as 350 s (3.5 kN·s/kg) can be achieved, depending on the fuel. Peroxide used as an oxidizer gives a somewhat lower Isp than liquid oxygen but is dense, storable, and non-cryogenic and can be more easily used to drive gas turbines to give high pressures using an efficient closed cycle. It may also be used for regenerative cooling of rocket engines. Peroxide was used very successfully as an oxidizer in World War II German rocket motors (e.g., T-Stoff, containing oxyquinoline stabilizer, for both the Walter HWK 109-500 Starthilfe RATO externally podded monopropellant booster system and the Walter HWK 109-509 rocket motor series used for the Me 163B), most often used with C-Stoff in a self-igniting hypergolic combination, and for the low-cost British Black Knight and Black Arrow launchers. Presently, HTP is used on ILR-33 AMBER and Nucleus suborbital rockets. 
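To put the quoted specific-impulse figures in perspective, the effective exhaust velocity follows from the generic relation v_e = I_sp · g_0 (with g_0 ≈ 9.81 m/s²); this is standard rocketry bookkeeping rather than data from the article:

v_e = I_{sp}\,g_0 \approx 161\ \mathrm{s} \times 9.81\ \mathrm{m\,s^{-2}} \approx 1.6\ \mathrm{km\,s^{-1}} \quad (\text{monopropellant})
v_e \approx 350\ \mathrm{s} \times 9.81\ \mathrm{m\,s^{-2}} \approx 3.4\ \mathrm{km\,s^{-1}} \quad (\text{bipropellant, fuel-dependent})

Since 1 kN·s/kg equals 1 km/s, the article's figures of 1.6 kN·s/kg and 3.5 kN·s/kg correspond to the same quantities expressed as impulse per unit propellant mass (the small differences are rounding).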
In the 1940s and 1950s, the Hellmuth Walter KG–conceived turbine used hydrogen peroxide for use in submarines while submerged; it was found to be too noisy and require too much maintenance compared to diesel-electric power systems. Some torpedoes used hydrogen peroxide as oxidizer or propellant. Operator error in the use of hydrogen peroxide torpedoes was named as possible causes for the sinking of HMS Sidon and the Russian submarine Kursk. SAAB Underwater Systems is manufacturing the Torpedo 2000. This torpedo, used by the Swedish Navy, is powered by a piston engine propelled by HTP as an oxidizer and kerosene as a fuel in a bipropellant system. Household use Hydrogen peroxide has various domestic uses, primarily as a cleaning and disinfecting agent. Hair bleaching Diluted (between 1.9% and 12%) mixed with aqueous ammonia has been used to bleach human hair. The chemical's bleaching property lends its name to the phrase "peroxide blonde". Hydrogen peroxide is also used for tooth whitening. It may be found in most whitening toothpastes. Hydrogen peroxide has shown positive results involving teeth lightness and chroma shade parameters. It works by oxidizing colored pigments onto the enamel where the shade of the tooth may become lighter. Hydrogen peroxide may be mixed with baking soda and salt to make a homemade toothpaste. Removal of blood stains Hydrogen peroxide reacts with blood as a bleaching agent, and so if a blood stain is fresh, or not too old, liberal application of hydrogen peroxide, if necessary in more than single application, will bleach the stain fully out. After about two minutes of the application, the blood should be firmly blotted out. Acne treatment Hydrogen peroxide may be used to treat acne, although benzoyl peroxide is a more common treatment. Oral cleaning agent The use of dilute hydrogen peroxide as an oral cleansing agent has been reviewed academically to determine its usefulness in treating gingivitis and plaque. Although there is a positive effect when compared with a placebo, it was concluded that chlorhexidine is a much more effective treatment. Niche uses Horticulture Some horticulturists and users of hydroponics advocate the use of weak hydrogen peroxide solution in watering solutions. Its spontaneous decomposition releases oxygen that enhances a plant's root development and helps to treat root rot (cellular root death due to lack of oxygen) and a variety of other pests. For general watering concentrations, around 0.1% is in use. This can be increased up to one percent for antifungal actions. Tests show that plant foliage can safely tolerate concentrations up to 3%. Fishkeeping Hydrogen peroxide is used in aquaculture for controlling mortality caused by various microbes. In 2019, the U.S. FDA approved it for control of Saprolegniasis in all coldwater finfish and all fingerling and adult coolwater and warmwater finfish, for control of external columnaris disease in warm-water finfish, and for control of Gyrodactylus spp. in freshwater-reared salmonids. Laboratory tests conducted by fish culturists have demonstrated that common household hydrogen peroxide may be used safely to provide oxygen for small fish. The hydrogen peroxide releases oxygen by decomposition when it is exposed to catalysts such as manganese dioxide. Removing yellowing from aged plastics Hydrogen peroxide may be used in combination with a UV-light source to remove yellowing from white or light grey acrylonitrile butadiene styrene (ABS) plastics to partially or fully restore the original color. 
In the retrocomputing scene, this process is commonly referred to as retrobright. Safety Regulations vary, but low concentrations, such as 5%, are widely available and legal to buy for medical use. Most over-the-counter peroxide solutions are not suitable for ingestion. Higher concentrations may be considered hazardous and typically are accompanied by a safety data sheet (SDS). In high concentrations, hydrogen peroxide is an aggressive oxidizer and will corrode many materials, including human skin. In the presence of a reducing agent, high concentrations of will react violently. While concentrations up to 35% produce only "white" oxygen bubbles in the skin (and some biting pain) that disappear with the blood within 30–45 minutes, concentrations of 98% dissolve paper. However, concentrations as low as 3% can be dangerous for the eye because of oxygen evolution within the eye. High-concentration hydrogen peroxide streams, typically above 40%, should be considered hazardous due to concentrated hydrogen peroxide's meeting the definition of a DOT oxidizer according to U.S. regulations if released into the environment. The EPA Reportable Quantity (RQ) for D001 hazardous wastes is , or approximately , of concentrated hydrogen peroxide. Hydrogen peroxide should be stored in a cool, dry, well-ventilated area and away from any flammable or combustible substances. It should be stored in a container composed of non-reactive materials such as stainless steel or glass (other materials including some plastics and aluminium alloys may also be suitable). As it breaks down quickly when exposed to light, it should be stored in an opaque container, and pharmaceutical formulations typically come in brown bottles that block light. Hydrogen peroxide, either in pure or diluted form, may pose several risks, the main one being that it forms explosive mixtures upon contact with organic compounds. Distillation of hydrogen peroxide at normal pressures is highly dangerous. It is corrosive, especially when concentrated, but even domestic-strength solutions may cause irritation to the eyes, mucous membranes, and skin. Swallowing hydrogen peroxide solutions is particularly dangerous, as decomposition in the stomach releases large quantities of gas (ten times the volume of a 3% solution), leading to internal bloating. Inhaling over 10% can cause severe pulmonary irritation. With a significant vapour pressure (1.2 kPa at 50 °C), hydrogen peroxide vapour is potentially hazardous. According to U.S. NIOSH, the immediately dangerous to life and health (IDLH) limit is only 75 ppm. The U.S. Occupational Safety and Health Administration (OSHA) has established a permissible exposure limit of 1.0 ppm calculated as an 8-hour time-weighted average (29 CFR 1910.1000, Table Z-1). Hydrogen peroxide has been classified by the American Conference of Governmental Industrial Hygienists (ACGIH) as a "known animal carcinogen, with unknown relevance on humans". For workplaces where there is a risk of exposure to the hazardous concentrations of the vapours, continuous monitors for hydrogen peroxide should be used. Information on the hazards of hydrogen peroxide is available from OSHA and from the ATSDR. Wound healing Historically, hydrogen peroxide was used for disinfecting wounds, partly because of its low cost and prompt availability compared to other antiseptics. There is conflicting evidence on hydrogen peroxide's effect on wound healing. Some research finds benefit, while other research find delays and healing inhibition. 
Its use for home treatment of wounds is generally not recommended. 1.5–3% hydrogen peroxide is used as a disinfectant in dentistry, especially in endodontic treatments together with hypochlorite and chlorhexidine, and 1–1.5% solutions are also useful for the treatment of inflammation of third molars (wisdom teeth). Use in alternative medicine Practitioners of alternative medicine have advocated the use of hydrogen peroxide for various conditions, including emphysema, influenza, AIDS, and in particular cancer. There is no evidence of effectiveness, and in some cases it has proved fatal. Both the effectiveness and safety of hydrogen peroxide therapy are scientifically questionable. Hydrogen peroxide is produced by the immune system, but in a carefully controlled manner. Cells called phagocytes engulf pathogens and then use hydrogen peroxide to destroy them. The peroxide is toxic to both the cell and the pathogen and so is kept within a special compartment, called a phagosome. Free hydrogen peroxide will damage any tissue it encounters via oxidative stress, a process that also has been proposed as a cause of cancer. Claims that hydrogen peroxide therapy increases cellular levels of oxygen have not been supported. The quantities administered would be expected to provide very little additional oxygen compared to that available from normal respiration. It is also difficult to raise the level of oxygen around cancer cells within a tumour, as the blood supply tends to be poor, a situation known as tumor hypoxia. Large oral doses of hydrogen peroxide at a 3% concentration may cause irritation and blistering to the mouth, throat, and abdomen as well as abdominal pain, vomiting, and diarrhea. Ingestion of hydrogen peroxide at concentrations of 35% or higher has been implicated as the cause of numerous gas embolism events resulting in hospitalisation. In these cases, hyperbaric oxygen therapy was used to treat the embolisms. Intravenous injection of hydrogen peroxide has been linked to several deaths. The American Cancer Society states that "there is no scientific evidence that hydrogen peroxide is a safe, effective, or useful cancer treatment." Furthermore, the therapy is not approved by the U.S. FDA. Historical incidents On 16 July 1934, in Kummersdorf, Germany, a propellant tank containing an experimental monopropellant mixture consisting of hydrogen peroxide and ethanol exploded during a test, killing three people. During the Second World War, doctors in German concentration camps experimented with the use of hydrogen peroxide injections in the killing of human subjects. In December 1943, the pilot Josef Pöhs died after being exposed to the T-Stoff of his Messerschmitt Me 163. In June 1955, Royal Navy submarine HMS Sidon sank after high-test peroxide leaking from a torpedo caused it to explode in its tube, killing twelve crew members; a member of the rescue party also succumbed. In April 1992, an explosion occurred at the hydrogen peroxide plant at Jarrie in France, due to a technical failure of the computerised control system, resulting in one fatality and wide destruction of the plant. Several people received minor injuries after a hydrogen peroxide spill on board a Northwest Airlines flight from Orlando, Florida to Memphis, Tennessee on 28 October 1998. The Russian submarine K-141 Kursk sailed to perform an exercise of firing dummy torpedoes at the Pyotr Velikiy, a Kirov-class battlecruiser. On 12 August 2000, at 11:28 local time (07:28 UTC), there was an explosion while preparing to fire the torpedoes. 
The only credible report to date is that this was due to the failure and explosion of one of the Kursk's hydrogen peroxide-fueled torpedoes. It is believed that HTP, a form of highly concentrated hydrogen peroxide used as propellant for the torpedo, seeped through its container, which had been damaged either by rust or during the loading procedure on land, where an incident in which one of the torpedoes accidentally touched the ground went unreported. The vessel was lost with all hands. On 15 August 2010, a spill of about of cleaning fluid occurred on the 54th floor of 1515 Broadway, in Times Square, New York City. The spill, which a spokesperson for the New York City Fire Department said was of hydrogen peroxide, shut down Broadway between West 42nd and West 48th streets as fire engines responded to the hazmat situation. There were no reported injuries.
Physical sciences
Inorganic compounds
null
14413
https://en.wikipedia.org/wiki/Hydrocodone
Hydrocodone
Hydrocodone, also known as dihydrocodeinone, is a semi-synthetic opioid used to treat pain and as a cough suppressant. It is taken by mouth. Typically, it is dispensed as the combination acetaminophen/hydrocodone or ibuprofen/hydrocodone for pain severe enough to require an opioid and in combination with homatropine methylbromide to relieve cough. It is also available by itself in a long-acting form sold under the brand name Zohydro ER, among others, to treat severe pain of a prolonged duration. Hydrocodone is a controlled drug: in the United States, it is classified as a Schedule II Controlled Substance. Common side effects include dizziness, sleepiness, nausea, and constipation. Serious side effects may include low blood pressure, seizures, QT prolongation, respiratory depression, and serotonin syndrome. Rapidly decreasing the dose may result in opioid withdrawal. Use during pregnancy or breastfeeding is generally not recommended. Hydrocodone is believed to work by activating opioid receptors, mainly in the brain and spinal cord. Hydrocodone 10 mg is equivalent to about 10 mg of morphine by mouth. Hydrocodone was patented in 1923, while the long-acting formulation was approved for medical use in the United States in 2013. It is most commonly prescribed in the United States, which consumed 99% of the worldwide supply as of 2010. In 2018, it was the 402nd most commonly prescribed medication in the United States, with more than 400,000 prescriptions. Hydrocodone is a semisynthetic opioid, converted from codeine or less often from thebaine. Production using genetically engineered yeasts has been developed but is not used commercially. Medical uses Hydrocodone is used to treat moderate to severe pain. In liquid formulations, it is used to treat cough. In one study comparing the potency of hydrocodone to that of oxycodone, it was found that it took 50% more hydrocodone to achieve the same degree of miosis (pupillary contraction). The investigators interpreted this to mean that oxycodone is about 50% more potent than hydrocodone. However, in a study of emergency department patients with fractures, it was found that an equal amount of either drug provided about the same degree of pain relief, indicating that there is little practical difference between them when used for that purpose. Some references state that the analgesic action of hydrocodone begins in 20–30 minutes and lasts about 4–8 hours. The manufacturer's information says onset of action is about 10–30 minutes and duration is about 4–6 hours. Recommended dosing interval is 4–6 hours. Hydrocodone reaches peak serum levels after 1.3 hours. Available forms Hydrocodone is available in a variety of formulations for oral administration: The original oral form of hydrocodone alone, Dicodid, as immediate-release 5- and 10-mg tablets is available for prescription in Continental Europe per national drug control and prescription laws and Title 76 of the Schengen Treaty, but dihydrocodeine has been more widely used for the same indications since the beginning in the early 1920s, with hydrocodone being regulated the same way as morphine in the German Betäubungsmittelgesetz, the similarly named law in Switzerland and the Austrian Suchtmittelgesetz, whereas dihydrocodeine is regulated like codeine. For a number of decades, the liquid hydrocodone products available have been cough medicines. 
Hydrocodone plus homatropine (Hycodan) in the form of small tablets for coughing and especially neuropathic moderate pain (the homatropine, an anticholinergic, is useful in both of those cases and is a deterrent to intentional overdose) was more widely used than Dicodid and was labelled as a cough medicine in the United States whilst Vicodin and similar drugs were the choices for analgesia. Extended-release hydrocodone in a time-release syrup also containing chlorphenamine/chlorpheniramine is a cough medicine called Tussionex in North America. In Europe, similar time-release syrups containing codeine (numerous), dihydrocodeine (Paracodin Retard Hustensaft), nicocodeine (Tusscodin), thebacon, acetyldihydrocodeine, dionine, and nicodicodeine are used instead. Immediate-release hydrocodone with paracetamol (acetaminophen) (Vicodin, Lortab, Lorcet, Maxidone, Norco, Zydone) Immediate-release hydrocodone with ibuprofen (Vicoprofen, Ibudone, Reprexain) Immediate-release hydrocodone with aspirin (Alor 5/500, Azdone, Damason-P, Lortab ASA, Panasal 5/500) Controlled-release hydrocodone (Hysingla ER by Purdue Pharma, Zohydro ER) Hydrocodone is not available in parenteral or any other non-oral forms. Side effects Common side effects of hydrocodone are nausea, vomiting, constipation, drowsiness, dizziness, lightheadedness, anxiety, abnormally happy or sad mood, dry throat, difficulty urinating, rash, itching, and contraction of the pupils. Serious side effects include slowed or irregular breathing and chest tightness. Several cases of progressive bilateral hearing loss unresponsive to steroid therapy have been described as an infrequent adverse reaction to hydrocodone/paracetamol misuse. This adverse effect has been considered by some to be due to the ototoxicity of hydrocodone. Other researchers have suggested that paracetamol is the primary agent responsible for the ototoxicity. The U.S. Food and Drug Administration (FDA) assigns the drug to pregnancy category C, meaning that no adequate and well-controlled studies in humans have been conducted. A newborn of a mother taking opioid medications regularly prior to the birth will be physically dependent. The baby may also exhibit respiratory depression if the opioid dose was high. An epidemiological study indicated that opioid treatment during early pregnancy results in increased risk of various birth defects. Symptoms of hydrocodone overdose include narrowed or widened pupils; slow, shallow, or stopped breathing; slowed or stopped heartbeat; cold, clammy, or blue skin; excessive sleepiness; loss of consciousness; seizures; or death. Hydrocodone can be habit forming, causing physical and psychological dependence. Its abuse liability is similar to morphine and less than oxycodone. Interactions Hydrocodone is metabolized by the cytochrome P450 enzymes CYP2D6 and CYP3A4, and inhibitors and inducers of these enzymes can modify hydrocodone exposure. One study found that combination of paroxetine, a selective serotonin reuptake inhibitor (SSRI) and strong CYP2D6 inhibitor, with once-daily extended-release hydrocodone, did not modify exposure to hydrocodone or the incidence of adverse effects. These findings suggest that hydrocodone can be coadministered with CYP2D6 inhibitors without dosage modification. 
Conversely, combination of hydrocodone/acetaminophen with the antiviral regimen of ombitasvir, paritaprevir, ritonavir, and dasabuvir for treatment of hepatitis C increased peak concentrations of hydrocodone by 27%, total exposure by 90%, and elimination half-life from 5.1hours to 8.0hours. Ritonavir is a strong CYP3A4 inhibitor as well as inducer of CYP3A and other enzymes, and the other antivirals are known to inhibit drug transporters like organic anion transporting polypeptide (OATP) 1B1 and 1B3, P-glycoprotein, and breast cancer resistance protein (BCRP). The changes in hydrocodone levels are consistent with CYP3A4 inhibition by ritonavir. Based on these findings, a 50% lower dose of hydrocodone and closer clinical monitoring was recommended when hydrocodone is used in combination with this antiviral regimen. People consuming alcohol, other opioids, anticholinergic antihistamines, antipsychotics, anxiolytics, or other central nervous system (CNS) depressants together with hydrocodone may exhibit an additive CNS depression. Hydrocodone taken concomitantly with serotonergic medications like SSRI antidepressants may increase the risk of serotonin syndrome. Pharmacology Pharmacodynamics Hydrocodone is a highly selective full agonist of the μ-opioid receptor (MOR). This is the main biological target of the endogenous opioid neuropeptide β-endorphin. Hydrocodone has low affinity for the δ-opioid receptor (DOR) and the κ-opioid receptor (KOR), where it is an agonist similarly. Studies have shown hydrocodone is stronger than codeine but only one-tenth as potent as morphine at binding to receptors and reported to be only 59% as potent as morphine in analgesic properties. However, in tests conducted on rhesus monkeys, the analgesic potency of hydrocodone was actually higher than morphine. Oral hydrocodone has a mean equivalent daily dosage (MEDD) factor of 0.4, meaning that 1 mg of hydrocodone is equivalent to 0.4 mg of intravenous morphine. However, because of morphine's low oral bioavailability, there is a 1:1 correspondence between orally administered morphine and orally administered hydrocodone. Pharmacokinetics Absorption Hydrocodone is only pharmaceutically available as an oral medication. It is well-absorbed, but the oral bioavailability of hydrocodone is only approximately 25%. The onset of action of hydrocodone via this route is 10 to 20 minutes, with a peak effect (Tmax) occurring at 30 to 60 minutes, and it has a duration of 4 to 8 hours. The FDA label for immediate-release hydrocodone with acetaminophen does not include any information on the influence of food on its absorption or other pharmacokinetics. Conversely, coadministration with a high-fat meal increases peak concentrations of different formulations of extended-release hydrocodone by 14 to 54%, whereas area-under-the-curve levels are not notably affected. Distribution The volume of distribution of hydrocodone is 3.3 to 4.7 L/kg. The plasma protein binding of hydrocodone is 20 to 50%. Metabolism In the liver, hydrocodone is transformed into several metabolites, including norhydrocodone, hydromorphone, 6α-hydrocodol (dihydrocodeine), and 6β-hydrocodol. 6α- and 6β-hydromorphol are also formed, and the metabolites of hydrocodone are conjugated (via glucuronidation). Hydrocodone has a terminal half-life that averages 3.8 hours (range 3.3–4.4 hours). The hepatic cytochrome P450 enzyme CYP2D6 converts hydrocodone into hydromorphone, a more potent opioid (5-fold higher binding affinity to the MOR). 
However, extensive and poor cytochrome P450 CYP2D6 metabolizers had similar physiological and subjective responses to hydrocodone, and the CYP2D6 inhibitor quinidine did not change the responses of extensive metabolizers, suggesting that inhibition of CYP2D6 metabolism of hydrocodone has no practical importance. Ultra-rapid CYP2D6 metabolizers (1–2% of the population) may have an increased response to hydrocodone; however, hydrocodone metabolism in this population has not been studied. Norhydrocodone, the major metabolite of hydrocodone, is predominantly formed by CYP3A4-catalyzed oxidation. In contrast to hydromorphone, it is described as inactive. However, norhydrocodone is actually a MOR agonist with similar potency to hydrocodone, but has been found to produce only minimal analgesia when administered peripherally to animals (likely due to poor blood–brain barrier and thus central nervous system penetration). Inhibition of CYP3A4 in a child who was, in addition, a poor CYP2D6 metabolizer, resulted in a fatal overdose of hydrocodone. Approximately 40% of hydrocodone metabolism is attributed to non-cytochrome P450-catalyzed reactions. Elimination Hydrocodone is excreted in urine, mainly in the form of conjugates. Chemistry Detection in body fluids Hydrocodone concentrations are measured in blood, plasma, and urine to seek evidence of misuse, to confirm diagnoses of poisoning, and to assist in investigations into deaths. Many commercial opiate screening tests react indiscriminately with hydrocodone, other opiates, and their metabolites, but chromatographic techniques can readily distinguish hydrocodone specifically. Blood and plasma hydrocodone concentrations typically fall into the 5–30 μg/L range among people taking the drug therapeutically, 100–200 μg/L among recreational users, and 100–1,600 μg/L in cases of acute, fatal overdosage. Co-administration of the drug with food or alcohol can significantly increase the plasma hydrocodone concentrations that are subsequently achieved. Synthesis Hydrocodone is most commonly synthesized from thebaine, a constituent of opium latex from the dried poppy plant. Once thebaine is obtained, it undergoes hydrogenation using a palladium catalyst. Structure There are three important structures in hydrocodone: the amine group, which binds to the tertiary nitrogen binding site in the central nervous system's opioid receptor, the hydroxy group that binds to the anionic binding site, and the phenyl group which binds to the phenolic binding site. This triggers a G protein activation and subsequent release of dopamine. History Hydrocodone was first synthesized in Germany in 1920 by Carl Mannich and Helene Löwenheim. It was approved by the Food and Drug Administration on 23 March 1943 for sale in the United States and approved by Health Canada for sale in Canada under the brand name Hycodan. Hydrocodone was first marketed by Knoll as Dicodid, starting in February 1924 in Germany. This name is analogous to other products the company introduced or otherwise marketed: Dilaudid (hydromorphone, 1926), Dinarkon (oxycodone, 1917), Dihydrin (dihydrocodeine, 1911), and Dimorphan (dihydromorphine). Paramorfan is the trade name of dihydromorphine from another manufacturer, as is Paracodin for dihydrocodeine. Hydrocodone was patented in 1923, while the long-acting formulation was approved for medical use in the United States in 2013. It is most commonly prescribed in the United States, which consumed 99% of the worldwide supply as of 2010. 
In 2018, it was the 402nd most commonly prescribed medication in the United States, with more than 400,000 prescriptions. Society and culture Formulations Several common imprints for hydrocodone are M365, M366, M367. Combination products Most hydrocodone formulations include a second analgesic, such as paracetamol (acetaminophen) or ibuprofen. Examples of hydrocodone combinations include Norco, Vicodin, Vicoprofen and Riboxen. Legal status in the United States The US government imposed tougher prescribing rules for hydrocodone in 2014, changing the drug from Schedule III to Schedule II. In 2011, hydrocodone products were involved in around 100,000 abuse-related emergency department visits in the United States, more than double the number in 2004.
Biology and health sciences
Pain treatments
Health
14458
https://en.wikipedia.org/wiki/Hail
Hail
Hail is a form of solid precipitation. It is distinct from ice pellets (American English "sleet"), though the two are often confused. It consists of balls or irregular lumps of ice, each of which is called a hailstone. Ice pellets generally fall in cold weather, while hail growth is greatly inhibited during low surface temperatures. Unlike other forms of water ice precipitation, such as graupel (which is made of rime ice), ice pellets (which are smaller and translucent), and snow (which consists of tiny, delicately crystalline flakes or needles), hailstones usually measure between and in diameter. The METAR reporting code for hail or greater is GR, while smaller hailstones and graupel are coded GS. Hail is possible within most thunderstorms (as it is produced by cumulonimbus), as well as within of the parent storm. Hail formation requires environments of strong, upward motion of air within the parent thunderstorm (similar to tornadoes) and lowered heights of the freezing level. In the mid-latitudes, hail forms near the interiors of continents, while, in the tropics, it tends to be confined to high elevations. There are methods available to detect hail-producing thunderstorms using weather satellites and weather radar imagery. Hailstones generally fall at higher speeds as they grow in size, though complicating factors such as melting, friction with air, wind, and interaction with rain and other hailstones can slow their descent through Earth's atmosphere. Severe weather warnings are issued for hail when the stones reach a damaging size, as it can cause serious damage to human-made structures, and, most commonly, farmers' crops. Definition Any thunderstorm which produces hail that reaches the ground is known as a hailstorm. An ice crystal with a diameter of > is considered a hailstone. Hailstones can grow to and weigh more than . Unlike ice pellets, hailstones are often layered and can be irregular and clumped together. Hail is composed of transparent ice or alternating layers of transparent and translucent ice at least thick, which are deposited upon the hailstone as it travels through the cloud, suspended aloft by air with strong upward motion until its weight overcomes the updraft and falls to the ground. Although the diameter of hail is varied, in the United States, the average observation of damaging hail is between and golf-ball-sized . Stones larger than are usually considered large enough to cause damage. The Meteorological Service of Canada issues severe thunderstorm warnings when hail that size or above is expected. The US National Weather Service has a diameter threshold, effective January 2010, an increase over the previous threshold of hail. Other countries have different thresholds according to local sensitivity to hail; for instance, grape-growing areas could be adversely impacted by smaller hailstones. Hailstones can be very large or very small, depending on how strong the updraft is: weaker hailstorms produce smaller hailstones than stronger hailstorms (such as supercells), as the more powerful updrafts in a stronger storm can keep larger hailstones aloft. Formation Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid-water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing (). These types of strong updrafts can also indicate the presence of a tornado. The growth rate of hailstones is impacted by factors such as higher elevation, lower freezing zones, and wind shear. 
Layer nature of the hailstones Like other precipitation in cumulonimbus clouds, hail begins as water droplets. As the droplets rise and the temperature goes below freezing, they become supercooled water and will freeze on contact with condensation nuclei. A cross-section through a large hailstone shows an onion-like structure. This means that the hailstone is made of thick and translucent layers, alternating with layers that are thin, white and opaque. Former theory suggested that hailstones were subjected to multiple descents and ascents, falling into a zone of humidity and refreezing as they were uplifted. This up and down motion was thought to be responsible for the successive layers of the hailstone. New research, based on theory as well as field study, has shown this is not necessarily true. The storm's updraft, with upwardly directed wind speeds as high as , blows the forming hailstones up the cloud. As the hailstone ascends, it passes into areas of the cloud where the concentration of humidity and supercooled water droplets varies. The hailstone's growth rate changes depending on the variation in humidity and supercooled water droplets that it encounters. The accretion rate of these water droplets is another factor in the hailstone's growth. When the hailstone moves into an area with a high concentration of water droplets, it captures the latter and acquires a translucent layer. Should the hailstone move into an area where mostly water vapor is available, it acquires a layer of opaque white ice. Furthermore, the hailstone's speed depends on its position in the cloud's updraft and its mass. This determines the varying thicknesses of the layers of the hailstone. The accretion rate of supercooled water droplets onto the hailstone depends on the relative velocities between these water droplets and the hailstone itself. This means that generally the larger hailstones will form some distance from the stronger updraft, where they can pass more time growing. As the hailstone grows, it releases latent heat, which keeps its exterior in a liquid phase. Because it undergoes "wet growth", the outer layer is sticky (i.e. more adhesive), so a single hailstone may grow by collision with other smaller hailstones, forming a larger entity with an irregular shape. Hail can also undergo "dry growth", in which the latent heat release through freezing is not enough to keep the outer layer in a liquid state. Hail forming in this manner appears opaque due to small air bubbles that become trapped in the stone during rapid freezing. These bubbles coalesce and escape during the "wet growth" mode, and the hailstone is more clear. The mode of growth for a hailstone can change throughout its development, and this can result in distinct layers in a hailstone's cross-section. The hailstone will keep rising in the thunderstorm until its mass can no longer be supported by the updraft. This may take at least 30 minutes, based on the force of the updrafts in the hail-producing thunderstorm, whose top is usually greater than 10 km high. It then falls toward the ground while continuing to grow, based on the same processes, until it leaves the cloud. It will later begin to melt as it passes into air above freezing temperature. Thus, a unique trajectory in the thunderstorm is sufficient to explain the layer-like structure of the hailstone. 
The only case in which multiple trajectories can be discussed is in a multicellular thunderstorm, where the hailstone may be ejected from the top of the "mother" cell and captured in the updraft of a more intense "daughter" cell. This, however, is an exceptional case. Factors favoring hail Hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of . Movement of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporational cooling, which lowers the freezing level of thunderstorm clouds, giving hail a larger volume to grow in. Accordingly, hail is less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes because the atmosphere over the tropics tends to be warmer over a much greater altitude. Hail in the tropics occurs mainly at higher elevations. Hail growth becomes vanishingly small when air temperatures fall below , as supercooled water droplets become rare at these temperatures. Around thunderstorms, hail is most likely within the cloud at elevations above . Between and , 60% of hail is still within the thunderstorm, though 40% now lies within the clear air under the anvil. Below , hail is equally distributed in and around a thunderstorm to a distance of . Climatology Hail occurs most frequently within continental interiors at mid-latitudes and is less common in the tropics, despite a much higher frequency of thunderstorms than in the mid-latitudes. Hail is also much more common along mountain ranges because mountains force horizontal winds upwards (known as orographic lifting), thereby intensifying the updrafts within thunderstorms and making hail more likely. The higher elevations also result in there being less time available for hail to melt before reaching the ground. One of the more common regions for large hail is across mountainous northern India, which reported one of the highest hail-related death tolls on record in 1888. China also experiences significant hailstorms. Central Europe and southern Australia also experience a lot of hailstorms. Regions where hailstorms frequently occur are southern and western Germany, northern and eastern France, southern and eastern Benelux, and northern Italy. In southeastern Europe, Croatia and Serbia experience frequent occurrences of hail. Some mediterranean countries register the maximum frequency of hail during the Fall season. In North America, hail is most common in the area where Colorado, Nebraska, and Wyoming meet, known as "Hail Alley". Hail in this region occurs between the months of March and October during the afternoon and evening hours, with the bulk of the occurrences from May through September. Cheyenne, Wyoming is North America's most hail-prone city with an average of nine to ten hailstorms per season. To the north of this area and also just downwind of the Rocky Mountains is the Hailstorm Alley region of Alberta, which also experiences an increased incidence of significant hail events. Hailstorms are also common in several regions of South America, particularly in the temperate latitudes. The central region of Argentina, extending from the Mendoza region eastward towards Córdoba, experiences some of the most frequent hailstorms in the world, with 10-30 storms per year on average. 
The Patagonia region of southern Argentina also sees frequent hailstorms, though this may be partially due to graupel (small hail) being counted as hail in this colder region. The region of southern Brazil where the states of Paraná and Santa Catarina border Argentina is another area known for damaging hailstorms. Hailstorms are also common in parts of Paraguay, Uruguay, and Bolivia that border the high-frequency hail regions of northern Argentina. The high frequency of hailstorms in these areas of South America is attributed to the region's orographic forcing of convection, combined with moisture transport from the Amazon and instability created by temperature contrasts between the surface and upper atmosphere. In Colombia, the cities of Bogotá and Medellín also see frequent hailstorms due to their high elevation. Southern Chile also sees persistent hail from mid-April through October. Short-term detection Weather radar is a very useful tool to detect the presence of hail-producing thunderstorms. However, radar data must be complemented by knowledge of current atmospheric conditions, which allows one to determine whether the atmosphere is conducive to hail development. Modern radar scans many angles around the site. Reflectivity values at multiple angles above ground level in a storm are proportional to the precipitation rate at those levels. Summing reflectivities into the Vertically Integrated Liquid, or VIL, gives the liquid water content in the cloud. Research shows that hail development in the upper levels of the storm is related to the evolution of VIL. VIL divided by the vertical extent of the storm, called VIL density, has a relationship with hail size, although this varies with atmospheric conditions and therefore is not highly accurate. Traditionally, hail size and probability can be estimated from radar data by computer using algorithms based on this research. Some algorithms include the height of the freezing level to estimate the melting of the hailstone and what would be left on the ground. Certain patterns of reflectivity are important clues for the meteorologist as well. The three-body scatter spike is an example. This is the result of energy from the radar hitting hail and being deflected to the ground, where it is deflected back to the hail and then to the radar. Because this energy takes more time to travel from the hail to the ground and back than energy returning directly from the hail to the radar, the echo appears farther away from the radar than the actual location of the hail along the same radial path, forming a cone of weaker reflectivities. More recently, the polarization properties of weather radar returns have been analyzed to differentiate between hail and heavy rain. The use of differential reflectivity, in combination with horizontal reflectivity, has led to a variety of hail classification algorithms. Visible satellite imagery is beginning to be used to detect hail, but false alarm rates remain high using this method. Size and terminal velocity The size of hailstones is best determined by measuring their diameter with a ruler. In the absence of a ruler, hailstone size is often visually estimated by comparing its size to that of known objects, such as coins. Using objects such as hen's eggs, peas, and marbles for comparing hailstone sizes is imprecise, due to their varied dimensions. The UK organisation TORRO also has scales for both hailstones and hailstorms. 
When observed at an airport, METAR code is used within a surface weather observation which relates to the size of the hailstone. Within METAR code, GR is used to indicate larger hail, of a diameter of at least . GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding of GS, which is short for the French word grésil. Terminal velocity of hail, or the speed at which hail is falling when it strikes the ground, varies. It is estimated that a hailstone of in diameter falls at a rate of , while stones the size of in diameter fall at a rate of . Hailstone velocity is dependent on the size of the stone, its drag coefficient, the motion of wind it is falling through, collisions with raindrops or other hailstones, and melting as the stones fall through a warmer atmosphere. As hailstones are not perfect spheres, it is difficult to accurately calculate their drag coefficient - and, thus, their speed. Size comparisons to objects In the United States, the National Weather Service reports hail size as a comparison to everyday objects. Hailstones larger than 1 inch in diameter are denoted as "severe." Hail records Megacryometeors, large rocks of ice that are not associated with thunderstorms, are not officially recognized by the World Meteorological Organization as "hail", which are aggregations of ice associated with thunderstorms, and therefore records of extreme characteristics of megacryometeors are not given as hail records. Heaviest: ; Gopalganj District, Bangladesh, 14 April 1986. Largest diameter officially measured: diameter, circumference; Vivian, South Dakota, 23 July 2010. Largest circumference officially measured: circumference, diameter; Aurora, Nebraska, 22 June 2003. Greatest average hail precipitation: Kericho, Kenya experiences hailstorms, on average, 50 days annually. Kericho is close to the equator and the elevation of contributes to it being a hot spot for hail. Kericho reached the world record for 132 days of hail in one year. Hazards Hail can cause serious damage, notably to automobiles, aircraft, skylights, glass-roofed structures, livestock, and most commonly, crops. Hail damage to roofs often goes unnoticed until further structural damage is seen, such as leaks or cracks. It is hardest to recognize hail damage on shingled roofs and flat roofs, but all roofs have their own hail damage detection problems. Metal roofs are fairly resistant to hail damage, but may accumulate cosmetic damage in the form of dents and damaged coatings. Hail is one of the most significant thunderstorm hazards to aircraft. When hailstones exceed in diameter, planes can be seriously damaged within seconds. The hailstones accumulating on the ground can also be hazardous to landing aircraft. Hail is a common nuisance to drivers of automobiles, severely denting the vehicle and cracking or even shattering windshields and windows unless parked in a garage or covered with a shielding material. Wheat, corn, soybeans, and tobacco are the most sensitive crops to hail damage. Hail is one of Canada's most expensive hazards. Rarely, massive hailstones have been known to cause concussions or fatal head trauma. Hailstorms have been the cause of costly and deadly events throughout history. One of the earliest known incidents occurred around the 9th century in Roopkund, Uttarakhand, India, where 200 to 600 nomads seem to have died of injuries from hail the size of cricket balls. 
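Returning to the terminal-velocity relationship discussed above: a first-order estimate of a hailstone's fall speed can be obtained by balancing the weight of an idealized spherical stone against aerodynamic drag. The sketch below is only illustrative; it assumes solid-ice density (about 900 kg/m³), near-surface air density (about 1.2 kg/m³), and a fixed drag coefficient of roughly 0.5, and it ignores melting, tumbling, wind, and the lower density of small or spongy stones, all of which reduce real-world fall speeds.

```python
import math

def hail_terminal_velocity(diameter_m, drag_coeff=0.5, rho_ice=900.0, rho_air=1.2, g=9.81):
    """Rough terminal fall speed (m/s) of a spherical hailstone from a drag balance:
    weight = 0.5 * rho_air * Cd * A * v^2, with A = (pi/4) * d^2 and m = rho_ice * (pi/6) * d^3,
    which rearranges to v = sqrt(4 * rho_ice * g * d / (3 * rho_air * Cd))."""
    return math.sqrt(4.0 * rho_ice * g * diameter_m / (3.0 * rho_air * drag_coeff))

# Illustrative diameters; actual observed fall speeds are typically somewhat lower.
for d_cm in (1, 2.5, 5, 8):
    v = hail_terminal_velocity(d_cm / 100.0)
    print(f"{d_cm} cm stone: roughly {v:.0f} m/s ({v * 3.6:.0f} km/h)")
```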
Accumulations Narrow zones where hail accumulates on the ground in association with thunderstorm activity are known as hail streaks or hail swaths, which can be detectable by satellite after the storms pass by. Hailstorms normally last from a few minutes up to 15 minutes in duration. Accumulating hail storms can blanket the ground with over of hail, cause thousands to lose power, and bring down many trees. Flash flooding and mudslides within areas of steep terrain can be a concern with accumulating hail. Depths of up to have been reported. A landscape covered in accumulated hail generally resembles one covered in accumulated snow and any significant accumulation of hail has the same restrictive effects as snow accumulation, albeit over a smaller area, on transport and infrastructure. Accumulated hail can also cause flooding by blocking drains, and hail can be carried in the floodwater, turning into a snow-like slush which is deposited at lower elevations. On somewhat rare occasions, a thunderstorm can become stationary or nearly so while prolifically producing hail and significant depths of accumulation do occur; this tends to happen in mountainous areas, such as the July 29, 2010 case of a foot of hail accumulation in Boulder County, Colorado. On June 5, 2015, hail up to four feet deep fell on one city block in Denver, Colorado. The hailstones, described as between the size of bumble bees and ping pong balls, were accompanied by rain and high winds. The hail fell in only the one area, leaving the surrounding area untouched. It fell for one and a half hours between 10:00 pm and 11:30 pm. A meteorologist for the National Weather Service in Boulder said, "It's a very interesting phenomenon. We saw the storm stall. It produced copious amounts of hail in one small area. It's a meteorological thing." Tractors used to clear the area filled more than 30 dump truck loads of hail. Research focused on four individual days that accumulated more than of hail in 30 minutes on the Colorado front range has shown that these events share similar patterns in observed synoptic weather, radar, and lightning characteristics, suggesting the possibility of predicting these events prior to their occurrence. A fundamental problem in continuing research in this area is that, unlike hail diameter, hail depth is not commonly reported. The lack of data leaves researchers and forecasters in the dark when trying to verify operational methods. A cooperative effort between the University of Colorado and the National Weather Service is in progress. The joint project's goal is to enlist the help of the general public to develop a database of hail accumulation depths. Suppression and prevention During the Middle Ages, people in Europe used to ring church bells and fire cannons to try to prevent hail, and the subsequent damage to crops. Updated versions of this approach are available as modern hail cannons. Cloud seeding after World War II was done to eliminate the hail threat, particularly across the Soviet Union, where it was claimed a 70–98% reduction in crop damage from hail storms was achieved by deploying silver iodide in clouds using rockets and artillery shells. But these effects have not been replicated in randomized trials conducted in the West. Hail suppression programs have been undertaken by 15 countries between 1965 and 2005.
Physical sciences
Precipitation
null
14463
https://en.wikipedia.org/wiki/Harmonic%20mean
Harmonic mean
In mathematics, the harmonic mean is a kind of average, one of the Pythagorean means. It is the most appropriate average for ratios and rates such as speeds, and is normally only used for positive arguments. The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals of the numbers, that is, the generalized f-mean with . For example, the harmonic mean of 1, 4, and 4 is Definition The harmonic mean H of the positive real numbers is It is the reciprocal of the arithmetic mean of the reciprocals, and vice versa: where the arithmetic mean is The harmonic mean is a Schur-concave function, and is greater than or equal to the minimum of its arguments: for positive arguments, . Thus, the harmonic mean cannot be made arbitrarily large by changing some values to bigger ones (while having at least one value unchanged). The harmonic mean is also concave for positive arguments, an even stronger property than Schur-concavity. Relationship with other means For all positive data sets containing at least one pair of nonequal values, the harmonic mean is always the least of the three Pythagorean means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between. (If all values in a nonempty data set are equal, the three means are always equal.) It is the special case M−1 of the power mean: Since the harmonic mean of a list of numbers tends strongly toward the least elements of the list, it tends (compared to the arithmetic mean) to mitigate the impact of large outliers and aggravate the impact of small ones. The arithmetic mean is often mistakenly used in places calling for the harmonic mean. In the speed example below for instance, the arithmetic mean of 40 is incorrect, and too big. The harmonic mean is related to the other Pythagorean means, as seen in the equation below. This can be seen by interpreting the denominator to be the arithmetic mean of the product of numbers n times but each time omitting the j-th term. That is, for the first term, we multiply all n numbers except the first; for the second, we multiply all n numbers except the second; and so on. The numerator, excluding the n, which goes with the arithmetic mean, is the geometric mean to the power n. Thus the n-th harmonic mean is related to the n-th geometric and arithmetic means. The general formula is If a set of non-identical numbers is subjected to a mean-preserving spread — that is, two or more elements of the set are "spread apart" from each other while leaving the arithmetic mean unchanged — then the harmonic mean always decreases. Harmonic mean of two or three numbers Two numbers For the special case of just two numbers, and , the harmonic mean can be written as: or (Note that the harmonic mean is undefined if , i.e. .) In this special case, the harmonic mean is related to the arithmetic mean and the geometric mean by Since by the inequality of arithmetic and geometric means, this shows for the n = 2 case that H ≤ G (a property that in fact holds for all n). It also follows that , meaning the two numbers' geometric mean equals the geometric mean of their arithmetic and harmonic means. 
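For reference, the relationships described above can be stated compactly. These are standard textbook identities, and the worked example uses the same numbers, 1, 4, and 4, as above:

```latex
% Harmonic mean of n positive numbers x_1, ..., x_n
H(x_1,\dots,x_n) = \frac{n}{\frac{1}{x_1} + \frac{1}{x_2} + \cdots + \frac{1}{x_n}},
\qquad\text{e.g. } H(1,4,4) = \frac{3}{1 + \frac{1}{4} + \frac{1}{4}} = \frac{3}{1.5} = 2.

% Two-number case and its relation to the arithmetic mean A and geometric mean G
H(x,y) = \frac{2xy}{x+y}, \qquad A = \frac{x+y}{2}, \qquad G = \sqrt{xy},
\qquad H = \frac{G^2}{A}, \qquad H \le G \le A.
```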
Three numbers For the special case of three numbers, , and , the harmonic mean can be written as: Three positive numbers H, G, and A are respectively the harmonic, geometric, and arithmetic means of three positive numbers if and only if the following inequality holds Weighted harmonic mean If a set of weights , ..., is associated to the data set , ..., , the weighted harmonic mean is defined by The unweighted harmonic mean can be regarded as the special case where all of the weights are equal. Examples In analytic number theory Prime number theory The prime number theorem states that the number of primes less than or equal to is asymptotically equal to the harmonic mean of the first natural numbers. In physics Average speed In many situations involving rates and ratios, the harmonic mean provides the correct average. For instance, if a vehicle travels a certain distance d outbound at a speed x (e.g. 60 km/h) and returns the same distance at a speed y (e.g. 20 km/h), then its average speed is the harmonic mean of x and y (30 km/h), not the arithmetic mean (40 km/h). The total travel time is the same as if it had traveled the whole distance at that average speed. This can be proven as follows: Average speed for the entire journey = However, if the vehicle travels for a certain amount of time at a speed x and then the same amount of time at a speed y, then its average speed is the arithmetic mean of x and y, which in the above example is 40 km/h. Average speed for the entire journey The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds; and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. (If neither is the case, then a weighted harmonic mean or weighted arithmetic mean is needed. For the arithmetic mean, the speed of each portion of the trip is weighted by the duration of that portion, while for the harmonic mean, the corresponding weight is the distance. In both cases, the resulting formula reduces to dividing the total distance by the total time.) However, one may avoid the use of the harmonic mean for the case of "weighting by distance". Pose the problem as finding "slowness" of the trip where "slowness" (in hours per kilometre) is the inverse of speed. When trip slowness is found, invert it so as to find the "true" average trip speed. For each trip segment i, the slowness si = 1/speedi. Then take the weighted arithmetic mean of the si's weighted by their respective distances (optionally with the weights normalized so they sum to 1 by dividing them by trip length). This gives the true average slowness (in time per kilometre). It turns out that this procedure, which can be done with no knowledge of the harmonic mean, amounts to the same mathematical operations as one would use in solving this problem by using the harmonic mean. Thus it illustrates why the harmonic mean works in this case. Density Similarly, if one wishes to estimate the density of an alloy given the densities of its constituent elements and their mass fractions (or, equivalently, percentages by mass), then the predicted density of the alloy (exclusive of typically minor volume changes due to atom packing effects) is the weighted harmonic mean of the individual densities, weighted by mass, rather than the weighted arithmetic mean as one might at first expect. 
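Both the round-trip speed example and the alloy-density example above reduce to the same weighted-reciprocal computation. The following minimal sketch makes this explicit; the alloy composition and densities are purely illustrative, not data for any particular material:

```python
def weighted_harmonic_mean(values, weights):
    """Total weight divided by the sum of weight-per-value terms."""
    return sum(weights) / sum(w / v for v, w in zip(values, weights))

# Round trip: equal distances driven at 60 km/h out and 20 km/h back.
print(weighted_harmonic_mean([60.0, 20.0], [1.0, 1.0]))   # 30.0 km/h, not the arithmetic 40

# Alloy density: mass fractions weight the constituent densities (g/cm^3).
# Hypothetical 70/30 mixture by mass of components with densities 8.9 and 2.7.
print(weighted_harmonic_mean([8.9, 2.7], [0.70, 0.30]))   # about 5.27 g/cm^3
```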
To use the weighted arithmetic mean, the densities would have to be weighted by volume. Applying dimensional analysis to the problem while labeling the mass units by element and making sure that only like element-masses cancel makes this clear. Electricity If one connects two electrical resistors in parallel, one having resistance x (e.g., 60 Ω) and one having resistance y (e.g., 40 Ω), then the effect is the same as if one had used two resistors with the same resistance, both equal to the harmonic mean of x and y (48 Ω): the equivalent resistance, in either case, is 24 Ω (one-half of the harmonic mean). This same principle applies to capacitors in series or to inductors in parallel. However, if one connects the resistors in series, then the average resistance is the arithmetic mean of x and y (50 Ω), with total resistance equal to twice this, the sum of x and y (100 Ω). This principle applies to capacitors in parallel or to inductors in series. As with the previous example, the same principle applies when more than two resistors, capacitors or inductors are connected, provided that all are in parallel or all are in series. The "conductivity effective mass" of a semiconductor is also defined as the harmonic mean of the effective masses along the three crystallographic directions. Optics As for other optic equations, the thin lens equation = + can be rewritten such that the focal length f is one-half of the harmonic mean of the distances of the subject u and object v from the lens. Two thin lenses of focal length f1 and f2 in series is equivalent to two thin lenses of focal length fhm, their harmonic mean, in series. Expressed as optical power, two thin lenses of optical powers P1 and P2 in series is equivalent to two thin lenses of optical power Pam, their arithmetic mean, in series. In finance The weighted harmonic mean is the preferable method for averaging multiples, such as the price–earnings ratio (P/E). If these ratios are averaged using a weighted arithmetic mean, high data points are given greater weights than low data points. The weighted harmonic mean, on the other hand, correctly weights each data point. The simple weighted arithmetic mean when applied to non-price normalized ratios such as the P/E is biased upwards and cannot be numerically justified, since it is based on equalized earnings; just as vehicles speeds cannot be averaged for a roundtrip journey (see above). In geometry In any triangle, the radius of the incircle is one-third of the harmonic mean of the altitudes. For any point P on the minor arc BC of the circumcircle of an equilateral triangle ABC, with distances q and t from B and C respectively, and with the intersection of PA and BC being at a distance y from point P, we have that y is half the harmonic mean of q and t. In a right triangle with legs a and b and altitude h from the hypotenuse to the right angle, is half the harmonic mean of and . Let t and s (t > s) be the sides of the two inscribed squares in a right triangle with hypotenuse c. Then equals half the harmonic mean of and . Let a trapezoid have vertices A, B, C, and D in sequence and have parallel sides AB and CD. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC. (This is provable using similar triangles.) 
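The trapezoid statement can be checked numerically with arbitrary coordinates. The sketch below places the parallel sides AB and DC on horizontal lines, locates the intersection E of the diagonals, and compares the length FG with the harmonic mean of AB and DC; the dimensions chosen are illustrative only:

```python
def diagonal_intersection_height(a, b, h):
    """Height of the intersection E of the diagonals of a trapezoid whose
    parallel sides have lengths a (bottom) and b (top) and whose height is h."""
    return h * a / (a + b)

# Trapezoid A(0,0), B(a,0), C(q+b,h), D(q,h) with AB parallel to DC (illustrative numbers).
a, b, q, h = 6.0, 3.0, 1.0, 3.0
ey = diagonal_intersection_height(a, b, h)   # height of E above AB
fx = q * ey / h                              # F on leg DA, which runs from A(0,0) to D(q,h)
gx = a + (q + b - a) * ey / h                # G on leg BC, which runs from B(a,0) to C(q+b,h)
print(gx - fx)                               # about 4.0
print(2 * a * b / (a + b))                   # 4.0, the harmonic mean of AB and DC
```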
One application of this trapezoid result is in the crossed ladders problem, where two ladders lie oppositely across an alley, each with feet at the base of one sidewall, with one leaning against a wall at height A and the other leaning against the opposite wall at height B, as shown. The ladders cross at a height of h above the alley floor. Then h is half the harmonic mean of A and B. This result still holds if the walls are slanted but still parallel and the "heights" A, B, and h are measured as distances from the floor along lines parallel to the walls. This can be proved easily using the area formula of a trapezoid and area addition formula. In an ellipse, the semi-latus rectum (the distance from a focus to the ellipse along a line parallel to the minor axis) is the harmonic mean of the maximum and minimum distances of the ellipse from a focus. In other sciences In computer science, specifically information retrieval and machine learning, the harmonic mean of the precision (true positives per predicted positive) and the recall (true positives per real positive) is often used as an aggregated performance score for the evaluation of algorithms and systems: the F-score (or F-measure). This is used in information retrieval because only the positive class is of relevance, while number of negatives, in general, is large and unknown. It is thus a trade-off as to whether the correct positive predictions should be measured in relation to the number of predicted positives or the number of real positives, so it is measured versus a putative number of positives that is an arithmetic mean of the two possible denominators. A consequence arises from basic algebra in problems where people or systems work together. As an example, if a gas-powered pump can drain a pool in 4 hours and a battery-powered pump can drain the same pool in 6 hours, then it will take both pumps , which is equal to 2.4 hours, to drain the pool together. This is one-half of the harmonic mean of 6 and 4: . That is, the appropriate average for the two types of pump is the harmonic mean, and with one pair of pumps (two pumps), it takes half this harmonic mean time, while with two pairs of pumps (four pumps) it would take a quarter of this harmonic mean time. In hydrology, the harmonic mean is similarly used to average hydraulic conductivity values for a flow that is perpendicular to layers (e.g., geologic or soil) - flow parallel to layers uses the arithmetic mean. This apparent difference in averaging is explained by the fact that hydrology uses conductivity, which is the inverse of resistivity. In sabermetrics, a baseball player's Power–speed number is the harmonic mean of their home run and stolen base totals. In population genetics, the harmonic mean is used when calculating the effects of fluctuations in the census population size on the effective population size. The harmonic mean takes into account the fact that events such as population bottleneck increase the rate genetic drift and reduce the amount of genetic variation in the population. This is a result of the fact that following a bottleneck very few individuals contribute to the gene pool limiting the genetic variation present in the population for many generations to come. When considering fuel economy in automobiles two measures are commonly used – miles per gallon (mpg), and litres per 100 km. 
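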
As the dimensions of these quantities are the inverse of each other (one is distance per volume, the other volume per distance) when taking the mean value of the fuel economy of a range of cars one measure will produce the harmonic mean of the other – i.e., converting the mean value of fuel economy expressed in litres per 100 km to miles per gallon will produce the harmonic mean of the fuel economy expressed in miles per gallon. For calculating the average fuel consumption of a fleet of vehicles from the individual fuel consumptions, the harmonic mean should be used if the fleet uses miles per gallon, whereas the arithmetic mean should be used if the fleet uses litres per 100 km. In the USA the CAFE standards (the federal automobile fuel consumption standards) make use of the harmonic mean. In chemistry and nuclear physics the average mass per particle of a mixture consisting of different species (e.g., molecules or isotopes) is given by the harmonic mean of the individual species' masses weighted by their respective mass fraction. Beta distribution The harmonic mean of a beta distribution with shape parameters α and β is: The harmonic mean with α < 1 is undefined because its defining expression is not bounded in [0, 1]. Letting α = β showing that for α = β the harmonic mean ranges from 0 for α = β = 1, to 1/2 for α = β → ∞. The following are the limits with one parameter finite (non-zero) and the other parameter approaching these limits: With the geometric mean the harmonic mean may be useful in maximum likelihood estimation in the four parameter case. A second harmonic mean (H1 − X) also exists for this distribution This harmonic mean with β < 1 is undefined because its defining expression is not bounded in [ 0, 1 ]. Letting α = β in the above expression showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞. The following are the limits with one parameter finite (non zero) and the other approaching these limits: Although both harmonic means are asymmetric, when α = β the two means are equal. Lognormal distribution The harmonic mean ( H ) of the lognormal distribution of a random variable X is where μ and σ2 are the parameters of the distribution, i.e. the mean and variance of the distribution of the natural logarithm of X. The harmonic and arithmetic means of the distribution are related by where Cv and μ* are the coefficient of variation and the mean of the distribution respectively.. The geometric (G), arithmetic and harmonic means of the distribution are related by Pareto distribution The harmonic mean of type 1 Pareto distribution is where k is the scale parameter and α is the shape parameter. Statistics For a random sample, the harmonic mean is calculated as above. Both the mean and the variance may be infinite (if it includes at least one term of the form 1/0). Sample distributions of mean and variance The mean of the sample m is asymptotically distributed normally with variance s2. The variance of the mean itself is where m is the arithmetic mean of the reciprocals, x are the variates, n is the population size and E is the expectation operator. Delta method Assuming that the variance is not infinite and that the central limit theorem applies to the sample then using the delta method, the variance is where H is the harmonic mean, m is the arithmetic mean of the reciprocals s2 is the variance of the reciprocals of the data and n is the number of data points in the sample. 
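As a concrete sketch of the delta-method result just stated, the following computes the sample harmonic mean and its approximate standard error from the variance of the reciprocals, using Var(H) ≈ H⁴·s²/n with m the arithmetic mean of the reciprocals and H = 1/m; the data are invented for illustration:

```python
import statistics

def harmonic_mean_with_se(sample):
    """Harmonic mean of a positive sample and its delta-method standard error.
    With m the arithmetic mean of the reciprocals and s2 their sample variance,
    H = 1/m and Var(H) is approximately s2 / (n * m**4) = H**4 * s2 / n."""
    n = len(sample)
    recips = [1.0 / x for x in sample]
    m = statistics.fmean(recips)
    s2 = statistics.variance(recips)
    h = 1.0 / m
    se = (h ** 4 * s2 / n) ** 0.5
    return h, se

data = [1.2, 0.9, 2.3, 1.7, 0.8, 1.1, 2.0, 1.4]   # illustrative positive observations
print(harmonic_mean_with_se(data))
```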
Jackknife method A jackknife method of estimating the variance is possible if the mean is known. This method is the usual 'delete 1' rather than the 'delete m' version. This method first requires the computation of the mean of the sample (m) where x are the sample values. A series of values wi is then computed, where The mean (h) of the wi is then taken: The variance of the mean is Significance testing and confidence intervals for the mean can then be estimated with the t test. Size biased sampling Assume a random variate has a distribution f( x ). Assume also that the likelihood of a variate being chosen is proportional to its value. This is known as length-based or size-biased sampling. Let μ be the mean of the population. Then the probability density function f*( x ) of the size-biased population is The expectation of this length-biased distribution E*( x ) is where σ2 is the variance. The expectation of the harmonic mean is the same as the non-length-biased version E( x ). The problem of length-biased sampling arises in a number of areas, including textile manufacture, pedigree analysis, and survival analysis. Akman et al. have developed a test for the detection of length-based bias in samples. Shifted variables If X is a positive random variable and q > 0 then for all ε > 0 Moments Assuming that X and E(X) are > 0 then This follows from Jensen's inequality. Gurland has shown that for a distribution that takes only positive values, for any n > 0 Under some conditions where ~ means approximately equal to. Sampling properties Assuming that the variates (x) are drawn from a lognormal distribution, there are several possible estimators for H: where Of these, H3 is probably the best estimator for samples of 25 or more. Bias and variance estimators A first-order approximation to the bias and variance of H1 are where Cv is the coefficient of variation. Similarly, a first-order approximation to the bias and variance of H3 are In numerical experiments, H3 is generally a better estimator of the harmonic mean than H1. H2 produces estimates that are largely similar to H1.
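A generic 'delete 1' jackknife for the variance of the sample harmonic mean can be sketched as follows. This uses the standard leave-one-out replicate formulation rather than the specific wi construction described above, so it is an illustration of the approach rather than a reproduction of it; the data are invented for illustration:

```python
def jackknife_harmonic_mean_variance(sample):
    """Delete-1 jackknife estimate of the variance of the sample harmonic mean."""
    n = len(sample)

    def hmean(xs):
        return len(xs) / sum(1.0 / x for x in xs)

    # Leave-one-out replicates of the harmonic mean.
    replicates = [hmean(sample[:i] + sample[i + 1:]) for i in range(n)]
    mean_rep = sum(replicates) / n
    # Standard jackknife variance: (n - 1)/n times the sum of squared deviations.
    return (n - 1) / n * sum((r - mean_rep) ** 2 for r in replicates)

data = [1.2, 0.9, 2.3, 1.7, 0.8, 1.1, 2.0, 1.4]
print(jackknife_harmonic_mean_variance(data))
```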
Mathematics
Statistics
null
14539
https://en.wikipedia.org/wiki/Internet
Internet
The Internet (or internet) is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, internet telephony, and file sharing. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching in the 1960s and the design of computer networks for data communication. The set of rules (communication protocols) to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, encouraged worldwide participation in the development of new networking technologies and the merger of many networks using DARPA's Internet protocol suite. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet, and generated sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the internetwork. Although the Internet was widely used by academia in the 1980s, the subsequent commercialization of the Internet in the 1990s and beyond incorporated its services and technologies into virtually every aspect of modern life. Most traditional communication media, including telephone, radio, television, paper mail, and newspapers, are reshaped, redefined, or even bypassed by the Internet, giving birth to new services such as email, Internet telephone, Internet television, online music, digital newspapers, and video streaming websites. Newspapers, books, and other print publishing have adapted to website technology or have been reshaped into blogging, web feeds, and online news aggregators. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has grown exponentially for major retailers, small businesses, and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or even sell goods and services entirely online. Business-to-business and financial services on the Internet affect supply chains across entire industries. The Internet has no single centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. 
The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. When it came into common use, most publications treated the word Internet as a capitalized proper noun; this has become less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services, a collection of documents (web pages) and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense (DoD). Research into packet switching, one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory (NPL) in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network and routing concepts proposed by Baran were incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles (UCLA) and the Stanford Research Institute (now SRI International) on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. In a sign of future growth, 15 sites were connected to the young ARPANET by the end of 1971. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. 
Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and NDRE), and to Peter Kirstein's research group at University College London (UCL), which provided a gateway to British academic networks, forming the first internetwork for resource sharing. ARPA projects, the International Network Working Group, and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network or "a network of networks". In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". They used the term internet as a shorthand for internetwork, a usage that later RFCs repeated. Cerf and Kahn credit Louis Pouzin and others with important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. 
Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking services, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. The estimated total number of Internet users reached 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. ICANN coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. 
Regional Internet registries (RIRs) were established for five regions of the world. The African Network Information Center (AfriNIC) for Africa, the American Registry for Internet Numbers (ARIN) for North America, the Asia–Pacific Network Information Centre (APNIC) for Asia and the Pacific region, the Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and the Caribbean region, and the Réseaux IP Européens – Network Coordination Centre (RIPE NCC) for Europe, the Middle East, and Central Asia were delegated to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the IETF, Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues. Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, modems etc. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. The internet packets are carried by other full-fledged networking protocols with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers. Service tiers Internet service providers (ISPs) establish the worldwide connectivity between individual networks at various levels of scope. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and are governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. 
Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. Access Common methods of Internet access by users include dial-up with a computer modem via telephone circuits, broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology (e.g. 3G, 4G). The Internet may often be accessed from computers in libraries and Internet cafés. Internet access points exist in many public places such as airport halls and coffee shops. Various terms are used, such as public Internet kiosk, public access terminal, and Web payphone. Many hotels also have public terminals that are usually fee-based. These terminals are widely accessed for various usages, such as ticket booking, bank deposit, or online payment. Wi-Fi provides wireless access to the Internet via local computer networks. Hotspots providing such access include Wi-Fi cafés, where users need to bring their own wireless devices, such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh, where the Internet can then be accessed from places such as a park bench. Experiments have also been conducted with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular networks, and fixed wireless services. Modern smartphones can also access the Internet through the cellular carrier network. For Web browsing, these devices provide applications such as Google Chrome, Safari, and Firefox, and a wide variety of other Internet software may be installed from app stores. Internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. Mobile communication The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The number of subscriptions was predicted to rise to 5.7 billion users in 2020. Around that time, 80% of the world's population were covered by a 4G network. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. Zero-rating, the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost, has offered opportunities to surmount economic hurdles but has also been accused by its critics of creating a two-tiered Internet. 
To address the issues with zero-rating, an alternative model has emerged in the concept of 'equal rating' and is being tested in experiments by Mozilla and Orange in Africa. Equal rating prevents prioritization of one type of content and zero-rates all content up to a specified data cap. In a study published by Chatham House, 15 out of 19 countries researched in Latin America had some kind of hybrid or zero-rated product offered. Some countries in the region had a handful of plans to choose from (across all mobile network operators) while others, such as Colombia, offered as many as 30 pre-paid and 34 post-paid plans. A study of eight countries in the Global South found that zero-rated data plans exist in every country, although there is a great range in the frequency with which they are offered and actually used in each. The study looked at the top three to five carriers by market share in Bangladesh, Colombia, Ghana, India, Kenya, Nigeria, Peru and the Philippines. Across the 181 plans examined, 13 percent were offering zero-rated services. Another study, covering Ghana, Kenya, Nigeria and South Africa, found Facebook's Free Basics and Wikipedia Zero to be the most commonly zero-rated content. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in the IETF's foundational RFCs. At the top is the application layer, where communication is described in terms of the objects or data structures most appropriate for each application. For example, a web browser operates in a client–server application model and exchanges information with the HyperText Transfer Protocol (HTTP) and an application-germane data structure, such as the HyperText Markup Language (HTML). Below this top layer, the transport layer connects applications on different hosts with a logical channel through the network. It provides this service with a variety of possible characteristics, such as ordered, reliable delivery (TCP), and an unreliable datagram service (UDP). Underlying these layers are the networking technologies that interconnect networks at their borders and exchange traffic across them. The Internet layer implements the Internet Protocol (IP), which enables computers to identify and locate each other by IP address and route their traffic via intermediate (transit) networks. The Internet Protocol layer code is independent of the type of network that it is physically running over. At the bottom of the architecture is the link layer, which connects nodes on the same physical link, and contains protocols that do not require routers for traversal to other links. The protocol suite does not explicitly specify hardware methods to transfer bits, or protocols to manage such hardware, but assumes that appropriate technology is available. Examples of that technology include Wi-Fi, Ethernet, and DSL. Internet protocol The most prominent component of the Internet model is the Internet Protocol (IP). IP enables internetworking and, in essence, establishes the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6. IP Addresses For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. 
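As a brief, concrete illustration of the layering just described, the following Python sketch resolves a hostname to IP addresses and then sends an application-layer HTTP request over a transport-layer TCP connection using only the standard library. The host name example.org, the port, and the exact request are illustrative assumptions rather than details taken from the text.

    # Sketch: application layer (HTTP) carried over the transport layer (TCP),
    # with the hostname first resolved to IP addresses. example.org is illustrative.
    import socket

    host = "example.org"
    # Name resolution: map the hostname to the IP addresses the lower layers use.
    addresses = {info[4][0] for info in socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)}
    print(host, "resolves to", addresses)

    # Transport layer: open a TCP connection; application layer: speak HTTP over it.
    with socket.create_connection((host, 80)) as tcp:
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        tcp.sendall(request.encode("ascii"))
        reply = b""
        while chunk := tcp.recv(4096):      # read until the server closes the connection
            reply += chunk
    print(reply.split(b"\r\n", 1)[0])       # status line, e.g. b'HTTP/1.1 200 OK'

Everything below the TCP connection, such as IP routing and the physical link, is handled by the operating system and the network, which is exactly the division of labor the layered model describes.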
IP addresses are generally assigned to equipment either automatically via DHCP, or are configured manually. However, the network also supports other addressing systems. Users generally enter domain names (e.g. "en.wikipedia.org") instead of IP addresses because they are easier to remember; they are converted by the Domain Name System (DNS) into IP addresses, which are more efficient for routing purposes. IPv4 Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. IPv6 Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion. IPv6 is not directly interoperable by design with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities must exist for internetworking or nodes must have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has been lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts, e.g., peering agreements, and by technical specifications or protocols that describe the exchange of data over the network. Indeed, the Internet is defined by its interconnections and routing policies. Subnetwork A subnetwork or subnet is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface. The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range to belong to this network. The IPv6 address specification is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix. For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. 
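The CIDR prefix, netmask, and network/host split described above can be made concrete with Python's standard ipaddress module. The specific prefix 198.51.100.0/24 used below is an illustrative documentation range consistent with the /24 example discussed in the text, and 2001:db8::/32 is the standard IPv6 documentation prefix; neither value is quoted from the article.

    # Sketch of CIDR prefix and netmask arithmetic; prefixes are illustrative.
    import ipaddress

    net = ipaddress.ip_network("198.51.100.0/24")
    print(net.netmask)            # 255.255.255.0, the /24 mask in dot-decimal notation
    print(net.prefixlen)          # 24 bits form the network number (routing prefix)
    print(net.num_addresses)      # 256 addresses, since 8 bits remain for hosts
    print(net.broadcast_address)  # last address in the range, 198.51.100.255

    # Membership test: the basis of deciding whether an address lies in a prefix.
    print(ipaddress.ip_address("198.51.100.42") in net)   # True
    print(ipaddress.ip_address("198.51.101.1") in net)    # False

    # Subnetting: split the /24 into four /26 subnets.
    print(list(net.subnets(prefixlen_diff=2)))

    # An IPv6 /32 block leaves 128 - 32 = 96 host bits, i.e. 2**96 addresses.
    v6 = ipaddress.ip_network("2001:db8::/32")
    print(v6.num_addresses == 2 ** 96)                    # True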
For example, is the subnet mask for the prefix . Traffic is exchanged between subnetworks through routers when the routing prefixes of the source address and the destination address differ. A router serves as a logical or physical boundary between the subnets. The benefits of subnetting an existing network vary with each deployment scenario. In the address allocation architecture of the Internet using CIDR and in large organizations, it is necessary to allocate address space efficiently. Subnetting may also enhance routing efficiency or have advantages in network management when subnetworks are administratively controlled by different entities in a larger organization. Subnets may be arranged logically in a hierarchical architecture, partitioning an organization's network address space into a tree-like routing structure. Routing Computers and routers use routing tables in their operating system to direct IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. IETF While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the Internet Engineering Task Force (IETF). The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies. Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. Most servers that provide these services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. World Wide Web The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. 
Web services also use HTTP for communication between software systems, for information transfer and for sharing and exchanging business data and logistics; HTTP is one of many languages or protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft's Internet Explorer/Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo!, Bing and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale. The Web has enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page or a blog, or building a website, involves little initial cost and many cost-free services are available. However, publishing and maintaining large, professional websites with attractive, diverse and up-to-date information is still a difficult and expensive proposition. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information and be attracted to the corporation as a result. Advertising on popular web pages can be lucrative, and e-commerce, which is the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. When the Web developed in the 1990s, a typical web page was stored in completed form on a web server, formatted in HTML, ready for transmission to a web browser in response to a request. Over time, the process of creating and serving web pages has become dynamic, creating a flexible design, layout, and content. Websites are often created using content management software with, initially, very little content. Contributors to these systems, who may be paid staff, members of an organization or the public, fill underlying databases with content using editing pages designed for that purpose while casual visitors view and read this content in HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors. 
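The retrieval of a web resource by its URL over HTTP, as a browser or a web service client would perform it, can be sketched in a few lines with Python's standard urllib module. The URL below is an illustrative placeholder rather than a reference drawn from the article.

    # Sketch: fetching a web page identified by a URL over HTTP, as described
    # above, using only the standard library. The URL is illustrative.
    import urllib.request

    url = "http://example.org/"
    with urllib.request.urlopen(url) as response:
        status = response.status                          # e.g. 200
        content_type = response.headers.get("Content-Type")
        html = response.read().decode("utf-8", errors="replace")

    print(status, content_type)
    print(html[:120])                                     # start of the HTML document

The returned HTML can then be parsed for hyperlinks, which is essentially how a browser or crawler moves from one page to another.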
Communication Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Pictures, documents, and other files are sent as email attachments. Email messages can be cc-ed to multiple email addresses. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. VoIP systems now dominate many markets and are as easy to use and as convenient as a traditional telephone. The benefit has been substantial cost savings over traditional telephone calls, especially over long distances. Cable, ADSL, and mobile data networks provide Internet access in customer premises, and inexpensive VoIP network adapters provide the connection for traditional analog telephone sets. The voice quality of VoIP often exceeds that of traditional calls. Remaining problems for VoIP include the fact that emergency services may not be universally available and that devices rely on a local power supply, while older traditional phones are powered from the local loop and typically operate during a power failure. Data transfer File sharing is an example of transferring large amounts of data across the Internet. A computer file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or File Transfer Protocol (FTP) server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, on a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening, such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access online media in much the same way as was previously possible only with a television or radio receiver. 
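The attachments and carbon-copy recipients mentioned above can be illustrated with Python's standard email and smtplib modules, which assemble a message and hand it to a mail server for delivery. The addresses, server name, credentials, and file path below are illustrative assumptions, not values taken from the text.

    # Sketch: composing an email with a CC recipient and a file attachment and
    # submitting it to an SMTP server. All names and credentials are placeholders.
    import smtplib
    from email.message import EmailMessage
    from pathlib import Path

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.org"
    msg["Cc"] = "carol@example.org"            # additional recipient via carbon copy
    msg["Subject"] = "Report attached"
    msg.set_content("The report is attached.")

    data = Path("report.pdf").read_bytes()     # the file sent alongside the text
    msg.add_attachment(data, maintype="application", subtype="pdf", filename="report.pdf")

    with smtplib.SMTP("mail.example.org", 587) as smtp:   # mail submission port
        smtp.starttls()                        # encrypt the session before sending
        smtp.login("alice", "app-password")    # placeholder credentials
        smtp.send_message(msg)                 # delivered to the To and Cc recipients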
The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where—usually audio—material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide. Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p. Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture either is usually small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses an HTML5 based web player by default to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions, and upload hundreds of thousands of videos daily. Social impact The Internet has enabled new forms of social interaction, activities, and social associations. This phenomenon has given rise to the scholarly study of the sociology of the Internet. The early Internet left an impact on some writers who used symbolism to write about it, such as describing the Internet as a "means to connect individuals in a vast invisible net over all the earth." Users Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022 China had a 70% penetration rate compared to India's 60% and the United States's 90%. In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. 
As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. The prevalent language for communication via the Internet has always been English. This may be a result of the origin of the Internet, as well as the language's role as a lingua franca and as a world language. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. In a US study in 2005, the percentage of men using the Internet was very slightly ahead of the percentage of women, although this difference reversed in those under 30. Men logged on more often, spent more time online, and were more likely to be broadband users, whereas women tended to make more use of opportunities to communicate (such as email). Men were more likely to use the Internet to pay bills, participate in auctions, and for recreation such as downloading music and videos. Men and women were equally likely to use the Internet for shopping and banking. In 2008, women significantly outnumbered men on most social networking services, such as Facebook and Myspace, although the ratios varied with age. Women watched more streaming content, whereas men downloaded more. Men were more likely to blog. Among those who blog, men were more likely to have a professional blog, whereas women were more likely to have a personal blog. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech, Internaut refers to operators or technically highly capable users of the Internet, digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. Usage The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Service providers may restrict the services offered and mobile data charges may be significantly higher than other access methods. Educational material at all levels from pre-school to post-doctoral is available from websites. Examples range from CBeebies, through school and high-school revision guides and virtual universities, to access to top-end scholarly literature through the likes of Google Scholar. 
For distance education, help with homework and other assignments, self-guided learning, whiling away spare time or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of both formal and informal education. Further, the Internet allows researchers (especially those from the social and behavioral sciences) to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas but the wide reach of the Internet allows such groups more easily to form. An example of this is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Internet chat, whether using an IRC chat room, an instant messaging system, or a social networking service, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members. Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy spread. The Internet allows computer users to remotely access other computers and information stores easily from any access point. Access may be with computer security; i.e., authentication and encryption technologies, depending on the requirements. This is encouraging new ways of remote work, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information emailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure virtual private network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. 
It has been referred to among system administrators as the Virtual Private Nightmare, because it extends the secure perimeter of a corporate network into remote locations and its employees' homes. By the late 2010s the Internet had been described as "the main source of scientific information for the majority of the global North population". Social networking and entertainment Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations and to pursue their personal interests. People use chat, messaging and email to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking services such as Facebook have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, pursue common interests, and connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs. Social networking services are also widely used by businesses and other organizations to promote their brands, to market to their customers and to encourage posts to "go viral". "Black hat" social media techniques are also employed by some organizations, such as spam accounts and astroturfing. A risk for both individuals and organizations writing posts (especially public posts) on social networking services is that especially foolish or controversial posts occasionally lead to an unexpected and possibly large-scale backlash on social media from other Internet users. This is also a risk in relation to controversial offline behavior, if it is widely made known. The nature of this backlash can range widely from counter-arguments and public mockery, through insults and hate speech, to, in extreme cases, rape and death threats. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment in response to posts they have made on social media, and Twitter in particular has been criticized in the past for not doing enough to aid victims of online abuse. For organizations, such a backlash can cause overall brand damage, especially if reported by the media. However, this is not always the case, as any brand damage in the eyes of people with an opposing opinion to that presented by the organization could sometimes be outweighed by strengthening the brand in the eyes of others. Furthermore, if an organization or individual gives in to demands that others perceive as wrong-headed, that can then provoke a counter-backlash. Some websites, such as Reddit, have rules forbidding the posting of personal information of individuals (also known as doxxing), due to concerns about such postings leading to mobs of large numbers of Internet users directing harassment at the specific individuals thereby identified. In particular, the Reddit rule forbidding the posting of personal information is widely understood to imply that all identifying photos and names must be censored in Facebook screenshots posted to Reddit. 
However, the interpretation of this rule in relation to public Twitter posts is less clear, and in any case, like-minded people online have many other ways they can use to direct each other's attention to public social media posts they disagree with. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Children may also encounter material that they may find upsetting, or material that their parents consider to be not age-appropriate. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from inappropriate material on the Internet. The most popular social networking services, such as Facebook and Twitter, commonly forbid users under the age of 13. However, these policies are typically trivial to circumvent by registering an account with a false birth date, and a significant number of children aged under 13 join such sites anyway. Social networking services for younger children, which claim to provide better levels of protection for children, also exist. The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. The Internet pornography and online gambling industries have taken advantage of the World Wide Web. Although many governments have attempted to restrict both industries' use of the Internet, in general, this has failed to stop their widespread popularity. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread. A 2017 book claimed that the Internet consolidates most aspects of human endeavor into singular arenas of which all of humanity are potential members and competitors, with fundamentally negative impacts on mental health as a result. While successes in each field of activity are pervasively visible and trumpeted, they are reserved for an extremely thin sliver of the world's most exceptional, leaving everyone else behind. 
Whereas, before the Internet, expectations of success in any field were supported by reasonable probabilities of achievement at the village, suburb, city or even state level, the same expectations in the Internet world are virtually certain to bring disappointment today: there is always someone else, somewhere on the planet, who can do better and take the now one-and-only top spot. Cybersectarianism is a new organizational form that involves, "highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards." In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq. Cyberslacking can become a drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business Electronic business (e-business) encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationship. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equate to $16 trillion for 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality. Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has focused on the economic effects of consolidation from Internet businesses. Keen cites a 2013 Institute for Local Self-Reliance report saying brick-and-mortar retailers employ 47 people for every $10 million in sales while Amazon employs only 14. 
Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Remote work Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, most conveniently the worker's home. It can be efficient and useful for companies as it allows workers to communicate over long distances, saving significant amounts of travel time and cost. More workers have adequate bandwidth at home to use these tools to link their home to their corporate intranet and internal communication networks. Collaborative publishing Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. Politics and political revolutions The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donation via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, having given rise to Internet activism. The New York Times suggested that social media websites, such as Facebook and Twitter, helped people organize the political revolutions in Egypt, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Philanthropy The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. 
Kiva raises funds for local intermediary microfinance organizations that post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders and borrowers do not communicate with lenders themselves. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of hackers using cyber warfare using similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), malware variants number has increased to 669,947,865 in 2017, which is twice as many malware variants as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. Surveillance The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Packet capture is the monitoring of data traffic on a computer network. Computers communicate over the Internet by breaking up messages (emails, images, videos, web pages, files, etc.) into small chunks called "packets", which are routed through a network of computers, until they reach their destination, where they are assembled back into a complete "message" again. Packet Capture Appliance intercepts these packets as they are traveling through the network, in order to examine their contents using other programs. A packet capture is an information gathering tool, but not an analysis tool. That is it gathers "messages" but it does not analyze them and figure out what they mean. Other programs are needed to perform traffic analysis and sift through intercepted data looking for important/useful information. Under the Communications Assistance For Law Enforcement Act all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic. 
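As an illustration of the packet-capture process described above, the following is a minimal sketch using the third-party Python library Scapy; the packet count is an arbitrary assumption, capturing normally requires administrator privileges, and real monitoring systems would hand the captured packets to separate analysis tools.

```python
# Minimal packet-capture sketch using Scapy (a third-party library);
# run with sufficient privileges to open the network interface.
from scapy.all import sniff

def show(packet):
    # Print a one-line summary of each intercepted packet.
    print(packet.summary())

# Capture five packets from the default interface, then stop.
# Analysis of the captured traffic would be done by other programs.
sniff(count=5, prn=show)
```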
The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Censorship Some governments, such as those of Burma, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software are available to users to block offensive websites on individual computers or networks in order to limit access by children to pornographic material or depiction of violence. Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. Traffic volume The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for. Outages An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Energy use Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. 
whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
Technology
Media and communication
null
14554
https://en.wikipedia.org/wiki/Imaginary%20number
Imaginary number
An imaginary number is the product of a real number and the imaginary unit i, which is defined by its property i² = −1. The square of an imaginary number bi is −b². For example, 5i is an imaginary number, and its square is −25. The number zero is considered to be both real and imaginary. Originally coined in the 17th century by René Descartes as a derogatory term and regarded as fictitious or useless, the concept gained wide acceptance following the work of Leonhard Euler (in the 18th century) and Augustin-Louis Cauchy and Carl Friedrich Gauss (in the early 19th century). An imaginary number bi can be added to a real number a to form a complex number of the form a + bi, where the real numbers a and b are called, respectively, the real part and the imaginary part of the complex number. History Although the Greek mathematician and engineer Heron of Alexandria is noted as the first to present a calculation involving the square root of a negative number, it was Rafael Bombelli who first set down the rules for multiplication of complex numbers in 1572. The concept had appeared in print earlier, such as in work by Gerolamo Cardano. At the time, imaginary numbers and negative numbers were poorly understood and were regarded by some as fictitious or useless, much as zero once was. Many other mathematicians were slow to adopt the use of imaginary numbers, including René Descartes, who wrote about them in his La Géométrie in which he coined the term imaginary and meant it to be derogatory. The use of imaginary numbers was not widely accepted until the work of Leonhard Euler (1707–1783) and Carl Friedrich Gauss (1777–1855). The geometric significance of complex numbers as points in a plane was first described by Caspar Wessel (1745–1818). In 1843, William Rowan Hamilton extended the idea of an axis of imaginary numbers in the plane to a four-dimensional space of quaternion imaginaries in which three of the dimensions are analogous to the imaginary numbers in the complex field. Geometric interpretation Geometrically, imaginary numbers are found on the vertical axis of the complex number plane, which allows them to be presented perpendicular to the real axis. One way of viewing imaginary numbers is to consider a standard number line positively increasing in magnitude to the right and negatively increasing in magnitude to the left. At 0 on the x-axis, a y-axis can be drawn with "positive" direction going up; "positive" imaginary numbers then increase in magnitude upwards, and "negative" imaginary numbers increase in magnitude downwards. This vertical axis is often called the "imaginary axis" and is denoted iℝ. In this representation, multiplication by i corresponds to a counterclockwise rotation of 90 degrees about the origin, which is a quarter of a circle. Multiplication by −i corresponds to a clockwise rotation of 90 degrees about the origin. Similarly, multiplying by a purely imaginary number bi, with b a real number, both causes a counterclockwise rotation about the origin by 90 degrees and scales the answer by a factor of b. When b < 0, this can instead be described as a clockwise rotation by 90 degrees and a scaling by |b|. Square roots of negative numbers Care must be used when working with imaginary numbers that are expressed as the principal values of the square roots of negative numbers. For example, if x and y are both positive real numbers, the following chain of equalities appears reasonable at first glance: √(−x) · √(−y) = √((−x)(−y)) = √(xy). But the result is clearly nonsense, since the left-hand side equals (i√x)(i√y) = −√(xy). The step where the square root was broken apart was illegitimate. (See Mathematical fallacy.)
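Both the rotation-by-90-degrees picture and the principal-square-root caveat described above can be checked numerically. The following is a minimal sketch in Python, whose built-in complex type writes the imaginary unit as j; the particular point 3 + 2i is an arbitrary illustrative choice.

```python
import cmath

z = 3 + 2j                               # Python spells the imaginary unit "j"

print(1j * 1j)                           # (-1+0j): i squared is -1
print(1j * z)                            # (-2+3j): (3, 2) rotated 90 degrees counterclockwise
print(-1j * z)                           # (2-3j):  (3, 2) rotated 90 degrees clockwise

# The fallacy from the last section: the principal square root does not
# distribute over a product of negative numbers.
print(cmath.sqrt(-1) * cmath.sqrt(-1))   # (-1+0j)
print(cmath.sqrt((-1) * (-1)))           # (1+0j), a different value
```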
Mathematics
Basics
null
14563
https://en.wikipedia.org/wiki/Integer
Integer
An integer is the number zero (0), a positive natural number (1, 2, 3, . . .), or the negation of a positive natural number (−1, −2, −3, . . .). The negations or additive inverses of the positive natural numbers are referred to as negative integers. The set of all integers is often denoted by the boldface or blackboard bold The set of natural numbers is a subset of , which in turn is a subset of the set of all rational numbers , itself a subset of the real numbers . Like the set of natural numbers, the set of integers is countably infinite. An integer may be regarded as a real number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, , 5/4, and are not. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes qualified as rational integers to distinguish them from the more general algebraic integers. In fact, (rational) integers are algebraic integers that are also rational numbers. History The word integer comes from the Latin integer meaning "whole" or (literally) "untouched", from in ("not") plus tangere ("to touch"). "Entire" derives from the same origin via the French word entier, which means both entire and integer. Historically the term was used for a number that was a multiple of 1, or to the whole part of a mixed number. Only positive integers were considered, making the term synonymous with the natural numbers. The definition of integer expanded over time to include negative numbers as their usefulness was recognized. For example Leonhard Euler in his 1765 Elements of Algebra defined integers to include both positive and negative numbers. The phrase the set of the integers was not used before the end of the 19th century, when Georg Cantor introduced the concept of infinite sets and set theory. The use of the letter Z to denote the set of integers comes from the German word Zahlen ("numbers") and has been attributed to David Hilbert. The earliest known use of the notation in a textbook occurs in Algèbre written by the collective Nicolas Bourbaki, dating to 1947. The notation was not adopted immediately. For example, another textbook used the letter J, and a 1960 paper used Z to denote the non-negative integers. But by 1961, Z was generally used by modern algebra texts to denote the positive and negative integers. The symbol is often annotated to denote various sets, with varying usage amongst different authors: , , or for the positive integers, or for non-negative integers, and for non-zero integers. Some authors use for non-zero integers, while others use it for non-negative integers, or for {–1,1} (the group of units of ). Additionally, is used to denote either the set of integers modulo (i.e., the set of congruence classes of integers), or the set of -adic integers. The whole numbers were synonymous with the integers up until the early 1950s. In the late 1950s, as part of the New Math movement, American elementary school teachers began teaching that whole numbers referred to the natural numbers, excluding negative numbers, while integer included the negative numbers. The whole numbers remain ambiguous to the present day. Algebraic properties Like the natural numbers, is closed under the operations of addition and multiplication, that is, the sum and product of any two integers is an integer. 
However, with the inclusion of the negative natural numbers (and importantly, 0), ℤ, unlike the natural numbers, is also closed under subtraction. The integers form a ring which is the most basic one, in the following sense: for any ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring ℤ. ℤ is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not (since the result can be a fraction when the exponent is negative). The following table lists some of the basic properties of addition and multiplication for any integers a, b, and c: The first five properties listed above for addition say that ℤ, under addition, is an abelian group. It is also a cyclic group, since every non-zero integer can be written as a finite sum 1 + 1 + ... + 1 or (−1) + (−1) + ... + (−1). In fact, ℤ under addition is the only infinite cyclic group—in the sense that any infinite cyclic group is isomorphic to ℤ. The first four properties listed above for multiplication say that ℤ under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse (as is the case of the number 2), which means that ℤ under multiplication is not a group. All the rules from the above property table (except for the last), when taken together, say that ℤ together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. The equalities of expressions that are true in ℤ for all values of variables are exactly those that are true in any unital commutative ring. Certain non-zero integers map to zero in certain rings. The lack of zero divisors in the integers (last property in the table) means that the commutative ring ℤ is an integral domain. The lack of multiplicative inverses, which is equivalent to the fact that ℤ is not closed under division, means that ℤ is not a field. The smallest field containing the integers as a subring is the field of rational numbers. The process of constructing the rationals from the integers can be mimicked to form the field of fractions of any integral domain. Conversely, starting from an algebraic number field (an extension of rational numbers), its ring of integers can be extracted, which includes ℤ as its subring. Although ordinary division is not defined on ℤ, the division "with remainder" is defined on it. It is called Euclidean division, and possesses the following important property: given two integers a and b with b ≠ 0, there exist unique integers q and r such that a = q × b + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b. The integer q is called the quotient and r is called the remainder of the division of a by b. The Euclidean algorithm for computing greatest common divisors works by a sequence of Euclidean divisions. The above says that ℤ is a Euclidean domain. This implies that ℤ is a principal ideal domain, and any positive integer can be written as a product of primes in an essentially unique way. This is the fundamental theorem of arithmetic. Order-theoretic properties ℤ is a totally ordered set without upper or lower bound. The ordering of ℤ is given by: ... < −2 < −1 < 0 < 1 < 2 < ... An integer is positive if it is greater than zero, and negative if it is less than zero. Zero is defined as neither negative nor positive. 
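As a small illustration of the Euclidean division property stated above, the sketch below computes the quotient q and remainder r with 0 ≤ r < |b| in Python. Note that Python's built-in divmod floors toward negative infinity and gives the remainder the sign of the divisor, so a small adjustment is needed when b is negative.

```python
def euclidean_division(a, b):
    """Return (q, r) with a == q * b + r and 0 <= r < abs(b)."""
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    q, r = divmod(a, b)      # Python's remainder has the sign of b
    if r < 0:                # only possible when b < 0
        q, r = q + 1, r - b  # shift into the range 0 <= r < |b|
    return q, r

print(euclidean_division(7, -3))   # (-2, 1):  7 == -2 * -3 + 1
print(euclidean_division(-7, 3))   # (-3, 2): -7 == -3 *  3 + 2
```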
The ordering of integers is compatible with the algebraic operations in the following way: If and , then If and , then Thus it follows that together with the above ordering is an ordered ring. The integers are the only nontrivial totally ordered abelian group whose positive elements are well-ordered. This is equivalent to the statement that any Noetherian valuation ring is either a field—or a discrete valuation ring. Construction Traditional development In elementary school teaching, integers are often intuitively defined as the union of the (positive) natural numbers, zero, and the negations of the natural numbers. This can be formalized as follows. First construct the set of natural numbers according to the Peano axioms, call this . Then construct a set which is disjoint from and in one-to-one correspondence with via a function . For example, take to be the ordered pairs with the mapping . Finally let 0 be some object not in or , for example the ordered pair (0,0). Then the integers are defined to be the union . The traditional arithmetic operations can then be defined on the integers in a piecewise fashion, for each of positive numbers, negative numbers, and zero. For example negation is defined as follows: The traditional style of definition leads to many different cases (each arithmetic operation needs to be defined on each combination of types of integer) and makes it tedious to prove that integers obey the various laws of arithmetic. Equivalence classes of ordered pairs In modern set-theoretic mathematics, a more abstract construction allowing one to define arithmetical operations without any case distinction is often used instead. The integers can thus be formally constructed as the equivalence classes of ordered pairs of natural numbers . The intuition is that stands for the result of subtracting from . To confirm our expectation that and denote the same number, we define an equivalence relation on these pairs with the following rule: precisely when . Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers; by using to denote the equivalence class having as a member, one has: . . The negation (or additive inverse) of an integer is obtained by reversing the order of the pair: . Hence subtraction can be defined as the addition of the additive inverse: . The standard ordering on the integers is given by: if and only if . It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes. Every equivalence class has a unique member that is of the form or (or both at once). The natural number is identified with the class (i.e., the natural numbers are embedded into the integers by map sending to ), and the class is denoted (this covers all remaining classes, and gives the class a second time since –0 = 0. Thus, is denoted by If the natural numbers are identified with the corresponding integers (using the embedding mentioned above), this convention creates no ambiguity. This notation recovers the familiar representation of the integers as . Some examples are: Other approaches In theoretical computer science, other approaches for the construction of integers are used by automated theorem provers and term rewrite engines. Integers are represented as algebraic terms built using a few basic operations (e.g., zero, succ, pred) and using natural numbers, which are assumed to be already constructed (using the Peano approach). 
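The ordered-pair construction described above is straightforward to mimic in code. In this minimal sketch, a pair (a, b) of natural numbers stands for the integer a − b; two pairs are equivalent when they represent the same integer, and addition, multiplication, and negation are defined purely in terms of natural-number arithmetic. This pair encoding is essentially the non-free "pair" construction referred to in the discussion of computer-based approaches that follows.

```python
# A pair (a, b) of natural numbers stands for the integer a - b.

def equivalent(p, q):
    """(a, b) ~ (c, d) precisely when a + d == b + c."""
    return p[0] + q[1] == p[1] + q[0]

def add(p, q):
    """[(a, b)] + [(c, d)] = [(a + c, b + d)]."""
    return (p[0] + q[0], p[1] + q[1])

def multiply(p, q):
    """[(a, b)] * [(c, d)] = [(a*c + b*d, a*d + b*c)]."""
    (a, b), (c, d) = p, q
    return (a * c + b * d, a * d + b * c)

def negate(p):
    """-(a, b) = (b, a), i.e. reverse the pair."""
    return (p[1], p[0])

# (1, 3) and (0, 2) both stand for -2; their product stands for 4.
print(equivalent((1, 3), (0, 2)))      # True
print(multiply((1, 3), (0, 2)))        # (6, 2), which stands for 4
```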
There exist at least ten such constructions of signed integers. These constructions differ in several ways: the number of basic operations used for the construction, the number (usually, between 0 and 2), and the types of arguments accepted by these operations; the presence or absence of natural numbers as arguments of some of these operations, and the fact that these operations are free constructors or not, i.e., that the same integer can be represented using only one or many algebraic terms. The technique for the construction of integers presented in the previous section corresponds to the particular case where there is a single basic operation pair that takes as arguments two natural numbers and , and returns an integer (equal to ). This operation is not free since the integer 0 can be written pair(0,0), or pair(1,1), or pair(2,2), etc.. This technique of construction is used by the proof assistant Isabelle; however, many other tools use alternative construction techniques, notable those based upon free constructors, which are simpler and can be implemented more efficiently in computers. Computer science An integer is often a primitive data type in computer languages. However, integer data types can only represent a subset of all integers, since practical computers are of finite capacity. Also, in the common two's complement representation, the inherent definition of sign distinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Fixed length integer approximation data types (or subsets) are denoted int or Integer in several programming languages (such as Algol68, C, Java, Delphi, etc.). Variable-length representations of integers, such as bignums, can store any integer that fits in the computer's memory. Other integer data types are implemented with a fixed size, usually a number of bits which is a power of 2 (4, 8, 16, etc.) or a memorable number of decimal digits (e.g., 9 or 10). Cardinality The set of integers is countably infinite, meaning it is possible to pair each integer with a unique natural number. An example of such a pairing is More technically, the cardinality of is said to equal (aleph-null). The pairing between elements of and is called a bijection.
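A concrete pairing of the kind described in the Cardinality section can be written down directly. The sketch below sends the natural numbers 0, 1, 2, 3, 4, ... to the integers 0, 1, −1, 2, −2, ..., one of many possible bijections.

```python
def nat_to_int(n):
    """Map 0, 1, 2, 3, 4, ... to 0, 1, -1, 2, -2, ..."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

def int_to_nat(k):
    """Inverse map: 0, 1, -1, 2, -2, ... back to 0, 1, 2, 3, 4, ..."""
    return 2 * k - 1 if k > 0 else -2 * k

print([nat_to_int(n) for n in range(7)])                         # [0, 1, -1, 2, -2, 3, -3]
print(all(int_to_nat(nat_to_int(n)) == n for n in range(100)))   # True
```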
Mathematics
Counting and numbers
null
14569
https://en.wikipedia.org/wiki/Interpolation
Interpolation
In the mathematical field of numerical analysis, interpolation is a type of estimation, a method of constructing (finding) new data points based on the range of a discrete set of known data points. In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate; that is, estimate the value of that function for an intermediate value of the independent variable. A closely related problem is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but too complicated to evaluate efficiently. A few data points from the original function can be interpolated to produce a simpler function which is still fairly close to the original. The resulting gain in simplicity may outweigh the loss from interpolation error and give better performance in the calculation process. Example This table gives some values of an unknown function f(x). Interpolation provides a means of estimating the function at intermediate points, such as x = 2.5. We describe some methods of interpolation, differing in such properties as: accuracy, cost, number of data points needed, and smoothness of the resulting interpolant function. Piecewise constant interpolation The simplest interpolation method is to locate the nearest data value, and assign the same value. In simple problems, this method is unlikely to be used, as linear interpolation (see below) is almost as easy, but in higher-dimensional multivariate interpolation, this could be a favourable choice for its speed and simplicity. Linear interpolation One of the simplest methods is linear interpolation (sometimes known as lerp). Consider the above example of estimating f(2.5). Since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252. Generally, linear interpolation takes two data points, say (xa, ya) and (xb, yb), and the interpolant at a point x between xa and xb is given by: y = ya + (yb − ya) · (x − xa) / (xb − xa). This equation states that the slope of the new line between (xa, ya) and (x, y) is the same as the slope of the line between (xa, ya) and (xb, yb). Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the interpolant is not differentiable at the data points xk. The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by g, and suppose that x lies between xa and xb and that g is twice continuously differentiable. Then the linear interpolation error is |f(x) − g(x)| ≤ C (xb − xa)², where C = (1/8) max |g″(r)| over r in [xa, xb]. In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation (described below), is proportional to higher powers of the distance between the data points. These methods also produce smoother interpolants. Polynomial interpolation Polynomial interpolation is a generalization of linear interpolation. Note that the linear interpolant is a linear function. We now replace this interpolant with a polynomial of higher degree. Consider again the problem given above. A sixth-degree polynomial can be passed through all seven data points; substituting x = 2.5 into it gives f(2.5) ≈ 0.59678. Generally, if we have n data points, there is exactly one polynomial of degree at most n−1 going through all the data points. 
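The two estimates quoted above can be reproduced with a minimal sketch: straight-line interpolation between the tabulated points at x = 2 and x = 3, and a degree-6 polynomial fitted through seven points. The seven data values are an assumption here (samples of sin x at x = 0, 1, ..., 6, consistent with the quoted values f(2) = 0.9093 and f(3) = 0.1411), so the printed numbers should only roughly match the figures cited in the text.

```python
import numpy as np

def lerp(xa, ya, xb, yb, x):
    """Linear interpolation between (xa, ya) and (xb, yb)."""
    return ya + (yb - ya) * (x - xa) / (xb - xa)

# Linear estimate using only the two neighbouring table values.
print(lerp(2, 0.9093, 3, 0.1411, 2.5))        # 0.5252

# Assumed table: x = 0..6, f(x) = sin(x).
xs = np.arange(7)
ys = np.sin(xs)

# Degree-6 polynomial through all seven points, evaluated at x = 2.5.
coeffs = np.polyfit(xs, ys, 6)
print(np.polyval(coeffs, 2.5))                # close to 0.597
```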
The interpolation error is proportional to the distance between the data points to the power n. Furthermore, the interpolant is a polynomial and thus infinitely differentiable. So, we see that polynomial interpolation overcomes most of the problems of linear interpolation. However, polynomial interpolation also has some disadvantages. Calculating the interpolating polynomial is computationally expensive (see computational complexity) compared to linear interpolation. Furthermore, polynomial interpolation may exhibit oscillatory artifacts, especially at the end points (see Runge's phenomenon). Polynomial interpolation can estimate local maxima and minima that are outside the range of the samples, unlike linear interpolation. For example, the interpolant above has a local maximum at x ≈ 1.566, f(x) ≈ 1.003 and a local minimum at x ≈ 4.708, f(x) ≈ −1.003. However, these maxima and minima may exceed the theoretical range of the function; for example, a function that is always positive may have an interpolant with negative values, and whose inverse therefore contains false vertical asymptotes. More generally, the shape of the resulting curve, especially for very high or low values of the independent variable, may be contrary to commonsense; that is, to what is known about the experimental system which has generated the data points. These disadvantages can be reduced by using spline interpolation or restricting attention to Chebyshev polynomials. Spline interpolation Linear interpolation uses a linear function for each of intervals [xk,xk+1]. Spline interpolation uses low-degree polynomials in each of the intervals, and chooses the polynomial pieces such that they fit smoothly together. The resulting function is called a spline. For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable. Furthermore, its second derivative is zero at the end points. The natural cubic spline interpolating the points in the table above is given by In this case we get f(2.5) = 0.5972. Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation, while the interpolant is smoother and easier to evaluate than the high-degree polynomials used in polynomial interpolation. However, the global nature of the basis functions leads to ill-conditioning. This is completely mitigated by using splines of compact support, such as are implemented in Boost.Math and discussed in Kress. Mimetic interpolation Depending on the underlying discretisation of fields, different interpolants may be required. In contrast to other interpolation methods, which estimate functions on target points, mimetic interpolation evaluates the integral of fields on target lines, areas or volumes, depending on the type of field (scalar, vector, pseudo-vector or pseudo-scalar). A key feature of mimetic interpolation is that vector calculus identities are satisfied, including Stokes' theorem and the divergence theorem. As a result, mimetic interpolation conserves line, area and volume integrals. Conservation of line integrals might be desirable when interpolating the electric field, for instance, since the line integral gives the electric potential difference at the endpoints of the integration path. Mimetic interpolation ensures that the error of estimating the line integral of an electric field is the same as the error obtained by interpolating the potential at the end points of the integration path, regardless of the length of the integration path. 
Linear, bilinear and trilinear interpolation are also considered mimetic, even if it is the field values that are conserved (not the integral of the field). Apart from linear interpolation, area weighted interpolation can be considered one of the first mimetic interpolation methods to have been developed. Functional interpolation The Theory of Functional Connections (TFC) is a mathematical framework specifically developed for functional interpolation. Given any interpolant that satisfies a set of constraints, TFC derives a functional that represents the entire family of interpolants satisfying those constraints, including those that are discontinuous or partially defined. These functionals identify the subspace of functions where the solution to a constrained optimization problem resides. Consequently, TFC transforms constrained optimization problems into equivalent unconstrained formulations. This transformation has proven highly effective in the solution of differential equations. TFC achieves this by constructing a constrained functional (a function of a free function), that inherently satisfies given constraints regardless of the expression of the free function. This simplifies solving various types of equations and significantly improves the efficiency and accuracy of methods like Physics-Informed Neural Networks (PINNs). TFC offers advantages over traditional methods like Lagrange multipliers and spectral methods by directly addressing constraints analytically and avoiding iterative procedures, although it cannot currently handle inequality constraints. Function approximation Interpolation is a common way to approximate functions. Given a function with a set of points one can form a function such that for (that is, that interpolates at these points). In general, an interpolant need not be a good approximation, but there are well known and often reasonable conditions where it will. For example, if (four times continuously differentiable) then cubic spline interpolation has an error bound given by where and is a constant. Via Gaussian processes Gaussian process is a powerful non-linear interpolation tool. Many popular interpolation tools are actually equivalent to particular Gaussian processes. Gaussian processes can be used not only for fitting an interpolant that passes exactly through the given data points but also for regression; that is, for fitting a curve through noisy data. In the geostatistics community Gaussian process regression is also known as Kriging. Other forms Other forms of interpolation can be constructed by picking a different class of interpolants. For instance, rational interpolation is interpolation by rational functions using Padé approximant, and trigonometric interpolation is interpolation by trigonometric polynomials using Fourier series. Another possibility is to use wavelets. The Whittaker–Shannon interpolation formula can be used if the number of data points is infinite or if the function to be interpolated has compact support. Sometimes, we know not only the value of the function that we want to interpolate, at some points, but also its derivative. This leads to Hermite interpolation problems. When each data point is itself a function, it can be useful to see the interpolation problem as a partial advection problem between each data point. This idea leads to the displacement interpolation problem used in transportation theory. In higher dimensions Multivariate interpolation is the interpolation of functions of more than one variable. 
Methods include nearest-neighbor interpolation, bilinear interpolation and bicubic interpolation in two dimensions, and trilinear interpolation in three dimensions. They can be applied to gridded or scattered data. Mimetic interpolation generalizes to dimensional spaces where . In digital signal processing In the domain of digital signal processing, the term interpolation refers to the process of converting a sampled digital signal (such as a sampled audio signal) to that of a higher sampling rate (Upsampling) using various digital filtering techniques (for example, convolution with a frequency-limited impulse signal). In this application there is a specific requirement that the harmonic content of the original signal be preserved without creating aliased harmonic content of the original signal above the original Nyquist limit of the signal (that is, above fs/2 of the original signal sample rate). An early and fairly elementary discussion on this subject can be found in Rabiner and Crochiere's book Multirate Digital Signal Processing. Related concepts The term extrapolation is used to find data points outside the range of known data points. In curve fitting problems, the constraint that the interpolant has to go exactly through the data points is relaxed. It is only required to approach the data points as closely as possible (within some other constraints). This requires parameterizing the potential interpolants and having some way of measuring the error. In the simplest case this leads to least squares approximation. Approximation theory studies how to find the best approximation to a given function by another function from some predetermined class, and how good this approximation is. This clearly yields a bound on how well the interpolant can approximate the unknown function. Generalization If we consider as a variable in a topological space, and the function mapping to a Banach space, then the problem is treated as "interpolation of operators". The classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem. There are also many other subsequent results.
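To illustrate the signal-processing sense of interpolation mentioned above, the sketch below upsamples a short tone using SciPy's polyphase resampler, which inserts new samples while filtering out the spectral images that naive upsampling would create; the sampling rates and tone frequency are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 8_000                                  # assumed original sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)              # 10 ms of signal
x = np.sin(2 * np.pi * 440 * t)             # a 440 Hz tone

# Upsample by a factor of 6 (8 kHz -> 48 kHz) using polyphase filtering.
y = resample_poly(x, up=6, down=1)

print(len(x), len(y))                       # 80 480
```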
Mathematics
Mathematical analysis
null
14587
https://en.wikipedia.org/wiki/Island
Island
An island or isle is a piece of land, distinct from a continent, completely surrounded by water. There are continental islands, which were formed by being split from a continent by plate tectonics, and oceanic islands, which have never been part of a continent. Oceanic islands can be formed from volcanic activity, grow into atolls from coral reefs, and form from sediment along shorelines, creating barrier islands. River islands can also form from sediment and debris in rivers. Artificial islands are those made by humans, including small rocky outcroppings built out of lagoons and large-scale land reclamation projects used for development. Islands are host to diverse plant and animal life. Oceanic islands have the sea as a natural barrier to the introduction of new species, causing the species that do reach the island to evolve in isolation. Continental islands share animal and plant life with the continent they split from. Depending on how long ago the continental island formed, the life on that island may have diverged greatly from the mainland due to natural selection. Humans have lived on and traveled between islands for thousands of years at a minimum. Some islands became host to humans due to a land bridge or a continental island splitting from the mainland. Today, up to 10% of the world's population lives on islands. Islands are popular targets for tourism due to their perceived natural beauty, isolation, and unique cultures. Islands became the target of colonization by Europeans, resulting in the majority of islands in the Pacific being put under European control. Decolonization has resulted in some but not all island nations becoming self-governing, with lasting effects related to industrialisation, invasive species, nuclear weapons testing, and tourism. Islands and island countries are threatened by climate change. Sea level rise threatens to submerge nations such as Maldives, the Marshall Islands, and Tuvalu completely. Increases in the frequency and intensity of tropical cyclones can cause widespread destruction of infrastructure and animal habitats. Species that live exclusively on islands are some of those most threatened by extinction. Definition An island is an area of land surrounded by water on all sides that is distinct from a continent. There is no standard of size that distinguishes islands and continents. Continents have an accepted geological definition – they are the largest landmass of a particular tectonic plate. Islands can occur in any body of water, including lakes, rivers, seas. Low-tide elevations, areas of land that are not above the surface during a high tide, are generally not considered islands. Islands that have been bridged or otherwise joined to a mainland with land reclamation are sometimes considered "de-islanded", but not in every case. Etymology The word island derives from Middle English , from Old English igland (from ig or ieg, similarly meaning 'island' when used independently, and -land carrying its contemporary meaning. Old English ieg is actually a cognate of Swedish ö and German Aue, and more distantly related to Latin (water). The spelling of the word was modified in the 15th century because of a false etymology caused by an association with the Old French loanword isle, which itself comes from the Latin word insula. Geology Formation in oceans Islands often are found in archipelagos or island chains, which are collections of islands. 
These chains are thought to form from volcanic hotspots, areas of the lithosphere where the mantle is hotter than the surrounding area. These hotspots would give rise to volcanoes whose lava would form the rock the islands are made of. For some islands, the movement of tectonic plates above stationary hotspots would form islands in a linear chain, with the islands further away from the hotspot being progressively older and more eroded, before disappearing under the sea entirely. An example is the Hawaiian Islands, with the oldest island being 25 million years old, and the youngest, Hawaii, still being an active volcano. However, not all island chains are formed this way. Some may be formed all at once by fractures in the tectonic plates themselves, simultaneously creating multiple islands. One supporting piece of evidence is that of the Line Islands, which are all estimated to be 8 million years old, rather than being different ages. Other island chains form due to being separated from existing continents. The Japanese archipelago may have been separated from Eurasia due to seafloor spreading, a phenomenon where new oceanic crust is formed, pushing away older crust. Islands sitting on the continental shelf may be called continental islands. Other islands, like those that make up New Zealand, are what remains of continents that shrank and sunk beneath the sea. It was estimated that Zealandia, the continent-like area of crust that New Zealand sits on, has had 93% of its original surface area submerged. Some islands are formed when coral reefs grow on volcanic islands that have submerged beneath the surface. When these coral islands encircle a central lagoon, the island is known as an atoll. The formation of reefs and islands related to those reefs is aided by the buildup of sediment in shallow patches of water. In some cases, tectonic movements lifting a reef out of the water by as little as 1 meter can cause sediment to accumulate and an island to form. Barrier islands are long, sandy bars that form along shorelines due to the deposition of sediment by waves. These islands erode and grow as the wind and waves shift. Barrier islands have the effect of protecting coastal areas from severe weather because they absorb some of the energy of large waves before they can reach the shore. Formation in freshwater A fluvial island is an island that forms from the erosion and sedimentation of debris in rivers; almost all rivers have some form of fluvial islands. These islands may only be a few meters high, and are usually temporary. Changes in the flow speed, water level, and sediment content of the river may effect the rate of fluvial island formation and depletion. Permanent river islands also exist, the largest of which (that is completely inland) is Bananal Island in the Tocantins of Brazil, which has a maximum width of 55 kilometers. Lakes form for a variety of reasons, including glaciers, plate tectonics, and volcanism. Lake islands can form as part of these processes. Life on islands The field of insular biogeography studies the ecological processes that take place on islands, with a focus on what factors effect the evolution, extinction, and richness of species. Scientists often study islands as an isolated model of how the process of natural selection takes place. Island ecology studies organisms on islands and their environment. It has yielded important insights for its parent field of ecology since the time of Charles Darwin. 
Endemism In biology, endemism is defined as the phenomenon where a species or genus is found only in a certain geographical area. Islands isolate land organisms from others with water, and isolate aquatic organisms living on them with land. Island ecosystems have the highest rates of endemism globally. This means that islands contribute heavily to global biodiversity. Areas with high levels of biodiversity are a priority target of conservation efforts, to prevent the extinction of these species. Despite high levels of endemism, the total species richness, the total number of unique species in a region, is lower on islands than on mainlands. The level of species richness on islands is proportional to the area of that island, a phenomenon known as the species-area relationship. This is because larger areas have more resources and thus can support more organisms. Populations with a higher carrying capacity also have more genetic diversity, which promotes speciation. Dispersal Oceanic islands, ones that have never been connected to shore, are only populated by life that can cross the sea. This means that any animals present on the island had to have flown there, in the case of birds or bats, were carried by such animals, or were carried in a sea current in what is known as a "rafting event". This phenomenon is known as oceanic dispersal. Tropical cyclones have the capacity to transport species over great distances. Animals like tortoises can live for weeks without food or water, and are able to survive floating on debris in the sea. One case study showed that in 1995, fifteen iguanas survived a 300 km journey to Anguilla in the Caribbean, an island on which no iguana had lived previously. They survived floating on a mass of uprooted trees from a storm. Plant species are thought to be able to travel great distances across the ocean. New Zealand and Australia share 200 native plant species, despite being separated by 1500 km. Continental islands, islands that were at one point connected to a continent, are expected to share a common history of plant and animal life up until the point that the island broke away from the continent. For example, the presence of freshwater fish on an island surrounded by ocean would indicate that it once was attached to a continent, since these fish cannot traverse the ocean on their own. Over the course of time, evolution and extinction change the nature of animal life on a continental island, but only once it splits from the mainland. An example is that of the southern beech, a tree that is present in Australia, New Zealand, parts of South America, and New Guinea, places that today are geographically distant. A possible explanation for this phenomenon is that these landmasses were once all part of the continent Gondwana and separated by tectonic drift. However, there are competing theories that suggest this species may have reached faraway places by way of oceanic dispersal. Evolution on island groups Species that colonize island archipelagos exhibit a specific property known as adaptive radiation. In this process, a species that arrives on a group of islands rapidly becomes more diverse over time, splitting off into new species or subspecies. A species that reaches an island ecosystem may face little competition for resources, or may find that the resources found in its previous habitat are not available. These factors together result in individual evolutionary branches with different means of survival. 
The classical example of this is Darwin's finches, a group of up to fifteen tanager species that are endemic to the Galápagos Islands. These birds evolved different beaks in order to eat different kinds of food available on the islands. The large ground finch has a large bill used to crack seeds and eat fruit. The Genovesa cactus finch prefers cacti as a food source, and has a beak adapted for removing pulp and flowers from cacti. The green warbler-finch (in the habit of true warbler species) consumes spiders and insects that live on plants. Other examples of this phenomenon exist worldwide, including in Hawaii and Madagascar, and are not limited to island ecosystems. The island rule Species endemic to islands show a common evolutionary trajectory. Foster's rule (also known as the island rule), states that small mammals such as rodents evolve to become larger, known as island gigantism. One such example is the giant tortoise of the Seychelles, though it is unknown if it grew in size before or after reaching the island. Larger animals such as the hippopotamus tend to become smaller, such as in the case of the pygmy hippopotamus. This is known as insular dwarfism. In the case of smaller animals, it has been hypothesized that animals on islands may have fewer predators and competitors, resulting in selection pressure towards larger animals. Larger animals may exhaust food resources quickly due to their size, causing malnutrition in their young, resulting in a selection pressure for smaller animals that require less food. Having fewer predators would mean these animals did not need not be large to survive. Darwin, the Galápagos, and natural selection Charles Darwin formulated the theory of natural selection through the study of island ecology. The species he observed on the Galápagos Islands, including tanager birds, contributed to his understanding of how evolution works. He first traveled to the islands as a naturalist on HMS Beagle in 1835, as part of a five-year circumnavigation of Earth. He wrote that "the different islands to a considerable extent are inhabited by a different set of beings". Through the study of the finches and other animals he realized that organisms survive by changing to adapt to their habitat. It would be over twenty years before he published his theories in On the Origin of Species. Humans and islands History of exploration The first evidence of humans colonizing islands probably occurred in the Paleolithic era, 100,000 to 200,000 years ago. Reaching the Indonesian islands of Flores and Timor would have required crossing distances of water of at least . Some islands, such as Honshu, were probably connected to the mainland with a land bridge that allowed humans to colonize it before it became an island. The first people to colonize distant oceanic islands were the Polynesians. Many of the previous island settlements required traveling distances of less than , whereas Polynesians may have traveled to settle islands such as Tahiti. They would send navigators to sail the ocean without the aid of navigational instruments to discover new islands for settlement. Between 1100 and 800 BC, Polynesians sailed East from New Guinea and the Solomon Islands and reached the islands that make up the modern-day Fiji and Samoa. The furthest extent of this migration would be Easter Island in the East, and New Zealand in the South, with New Zealand's first settlements between 1250 and 1300. 
Historians have sought to understand why some remote islands have always been uninhabited, while others, especially in the Pacific Ocean, have long been populated by humans. Generally, larger islands are more likely to be able to sustain humans and thus are more likely to have been settled. Small islands that cannot sustain populations on their own can still be habitable if they are within a "commuting" distance to an island that has enough resources to be sustainable. The presence of an island is marked by seabirds, differences in cloud and weather patterns, as well as changes in the direction of waves. It is also possible for human populations to have gone extinct on islands, evidenced by explorers finding islands that show evidence of habitation but no life. Not all islands were or are inhabited by maritime cultures. In the past, some societies were found to have lost their seafaring ability over time, such as the case of the Canary Islands, which were occupied by an indigenous people since the island's first discovery in the first century until being conquered by the Spanish Empire in 1496. It has been hypothesized that since the inhabitants had little incentive for trade and had little to any contact with the mainland, they had no need for boats. The motivation for island exploration has been the subject of research and debate. Some early historians previously argued that early island colonization was unintentional, perhaps by a raft being swept out to sea. Others compare the motivations of Polynesian and similar explorers with those of Christopher Columbus, the explorer who sailed westward over the Atlantic Ocean in search of an alternate route to the East Indies. These historians theorize that successful explorers were rewarded with recognition and wealth, leading others to attempt possibly dangerous expeditions to discover more islands, usually with poor results. Lifestyle About 10% of the world's population lives on islands. The study of the culture of islands is known as island studies. The interest in the study of islands is due to their unique cultures and natural environments that differ from mainland cultures. This is for a few reasons: First, the obvious political and geographic isolation from mainland cultures. Second, unique restraints on resources and ecology creating marine-focused cultures with a focus on fishing and sailing. Third, a lasting historical and political significance of islands. Diet The Polynesian diet got most of its protein from fishing. Polynesians were known to fish close to shore, as well as in deep water. It was reported that Rapa Nui people were known to fish as far as from shore at coral reefs. Spear, line, and net fishing were all used, to catch tuna as well as sharks and stingrays. Island cultures also cultivate native and non-native crops. Polynesians grew the native yam, taro, breadfruit, banana, coconut and other fruits and vegetables. Different island climates made different resources more important, such as the Hawaiian islands being home to irrigated fields of taro, whereas in some islands, like Tahiti, breadfruit was more widely cultivated and fermented in order to preserve it. There is archeological evidence that Canary Islanders would chew the roots of ferns for sustenance, a practice that wore heavily on their molars. These islanders would also grow barley and raised livestock such as goats. Island nations and territories Many island nations have little land and a restricted set of natural resources. 
However, these nations control some of the largest fisheries in the world, deposits of copper, gold, and nickel, as well as oil deposits. The natural beauty of island nations also makes them a magnet for tourism. Islands also have geopolitical value for naval bases, weapons testing, and general territorial control. One such example is French Polynesia, a territory that receives substantial military expenditure and aid from France. Colonization Since the first discoveries of Polynesian, Micronesian, and other islands by Westerners, these nations have been the subject of colonization. Islands were the target of Christian missionaries. These missionaries faced resistance, but found success when some local chiefs used European support to centralize power. Beginning in the 16th century, European states placed most of Oceania in under colonial administration. Pohnpei was colonized by Spain as early as 1526. It changed hands from Germany to Japan to the United States before joining the Federated States of Micronesia in 1982, maintaining a "free association" status with the U.S. Guam was a Spanish territory until 1898, and now is a unincorporated territory of the U.S. The decolonization era saw many island states achieve independence or some form of self-governance. Nuclear weapons testing on the Marshall Islands left many atolls destroyed or uninhabitable, causing the forced displacement of people from their home islands as well as increases in cancer rates due to radiation. Colonization has resulted in a decline of observance of traditional cultural practices in places such as Hawaii, where Native Hawaiians are now a minority. Cultural attitudes related to communal ownership of land as well as a lack of individualistic decision-making may make some island cultures less compatible with the global capitalist economy, causing these nations to experience less economic growth. Tourism Islands have long been a popular target for tourism, thanks to their unique climates, cultures, and natural beauty. However, islands may suffer from poor transportation connectivity from airplanes and boats and strains on infrastructure from tourist activity. Islands in colder climates often rely on seasonal tourists seeking to enjoy nature or local cultures, and may only be one aspect of an island's economy. In contrast, tourism on tropical islands can often make up the majority of the local economy and built environment. These islands sometimes also require consistent foreign aid on top of tourism in order to ensure economic growth. This reliance can result in social inequality and environmental degradation. During tourism downturns, these economies struggle to make up the lost inflow of cash with other industries. Threats to islands Climate change threatens human development on islands due to sea level rise, more dangerous tropical cyclones, coral bleaching, and an increase in invasive species. For example, in 2017 Hurricane Maria caused a loss of almost all the infrastructure in Dominica. Sea level rise and other climate changes can reduce freshwater reserves, resulting in droughts. These risks are expected to decrease the habitability of islands, especially small ones. Beyond risks to human life, plant and animal life are threatened. It has been estimated that almost 50 percent of land species threatened by extinction live on islands. In 2017, a detailed review of 1,288 islands found that they were home to 1,189 highly-threatened vertebrate species, which was 41 percent of the global figure. 
Coral bleaching is expected to occur with greater frequency, threatening marine ecosystems, some of which island economies are dependent on. Some low-lying islands may cease to exist entirely given sufficient sea level rise. Tuvalu received media attention for a press conference publicizing the ongoing submerging of the island country. Tuvalu signed a cooperation agreement under which Australia will allow 280 Tuvaluan citizens per year to become permanent residents of Australia. The Marshall Islands, a country of 1,156 islands, has also been identified as existentially threatened by rising seas. Increasing intensity of tropical storms also increases the distances and frequency with which invasive species may be transported to islands. Floodwaters from these storms may also wash plants further inland than they would travel on their own, introducing them to new habitats. Agriculture and trade have also introduced non-native life to islands. These processes introduce invasive species into ecosystems that are especially small and fragile. One example is the apple snail, initially introduced to the U.S. by aquarium owners. It has since been transported by hurricanes across the Gulf Coast and neighboring islands. These species compete for resources with native animals, and some may grow so densely that they displace other forms of existing life. Artificial islands For hundreds of years, islands have been created through land reclamation. One of the first recorded instances of this was when people of the Solomon Islands created eighty such islands by piling coral and rock in the Lau Lagoon. One traditional way of constructing islands uses a revetment: sandbags or stones are dropped from a barge into the sea to bring the land level slightly out of the water, the enclosed area is then filled with sand or gravel, and the revetment is built up to hold the fill together. Islands have also been constructed with a permanent caisson, a steel or concrete structure built in a closed loop and then filled with sand. Some modern islands have been constructed by pouring millions of tons of sand into the sea, such as with Pearl Island in Qatar or the Palm Islands in Dubai. These islands are usually created for real estate development, and are sold for private ownership or construction of housing. Offshore oil platforms have also been described as a type of island. Some atolls have been covered in concrete to create artificial islands for military purposes, such as those created by China in the South China Sea. These atolls were previously low-tide elevations, landmasses that are only above water during low tide. The United Nations Convention on the Law of the Sea indicates that these islands may not have the same legal status as a naturally occurring island, and as such may not confer the same legal rights.
Physical sciences
Terrestrial features
null
14624
https://en.wikipedia.org/wiki/Inorganic%20chemistry
Inorganic chemistry
Inorganic chemistry deals with synthesis and behavior of inorganic and organometallic compounds. This field covers chemical compounds that are not carbon-based; carbon-based compounds are the subject of organic chemistry. The distinction between the two disciplines is far from absolute, as there is much overlap in the subdiscipline of organometallic chemistry. It has applications in every aspect of the chemical industry, including catalysis, materials science, pigments, surfactants, coatings, medications, fuels, and agriculture. Occurrence Many inorganic compounds are found in nature as minerals. Soil may contain iron sulfide as pyrite or calcium sulfate as gypsum. Inorganic compounds also serve multiple roles as biomolecules: as electrolytes (sodium chloride), in energy storage (ATP) or in construction (the polyphosphate backbone in DNA). Bonding Inorganic compounds exhibit a range of bonding properties. Some are ionic compounds, consisting of very simple cations and anions joined by ionic bonding. Examples of salts (which are ionic compounds) are magnesium chloride MgCl2, which consists of magnesium cations Mg2+ and chloride anions Cl−; or sodium hydroxide NaOH, which consists of sodium cations Na+ and hydroxide anions OH−. Some inorganic compounds are highly covalent, such as sulfur dioxide and iron pentacarbonyl. Many inorganic compounds feature polar covalent bonding, which is a form of bonding intermediate between covalent and ionic bonding. This description applies to many oxides, carbonates, and halides. Many inorganic compounds are characterized by high melting points. Some salts (e.g., NaCl) are very soluble in water. When one reactant contains hydrogen atoms, a reaction can take place by exchanging protons in acid-base chemistry. In a more general definition, any chemical species capable of binding to electron pairs is called a Lewis acid; conversely, any molecule that tends to donate an electron pair is referred to as a Lewis base. As a refinement of acid-base interactions, the HSAB theory takes into account the polarizability and size of ions. Subdivisions of inorganic chemistry Subdivisions of inorganic chemistry are numerous, but include: organometallic chemistry, compounds with metal-carbon bonds. This area touches on organic synthesis, which employs many organometallic catalysts and reagents. cluster chemistry, compounds with several metals bound together with metal–metal bonds or bridging ligands. bioinorganic chemistry, biomolecules that contain metals. This area touches on medicinal chemistry. materials chemistry and solid state chemistry, extended (i.e. polymeric) solids exhibiting properties not seen for simple molecules. Many practical themes are associated with these areas, including ceramics. Industrial inorganic chemistry Inorganic chemistry is a highly practical area of science. Traditionally, the scale of a nation's economy could be evaluated by its production of sulfuric acid. An important man-made inorganic compound is ammonium nitrate, used for fertilization. The ammonia is produced through the Haber process. Nitric acid is prepared from the ammonia by oxidation. Another large-scale inorganic material is portland cement. Inorganic compounds are used as catalysts such as vanadium(V) oxide for the oxidation of sulfur dioxide and titanium(III) chloride for the polymerization of alkenes. Many inorganic compounds are used as reagents in organic chemistry such as lithium aluminium hydride. 
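The industrial chain from ammonia to nitrate fertilizer mentioned above can be summarized by its overall stoichiometry. The equations below are the textbook net reactions for the Haber synthesis and the subsequent oxidation steps; they are given only as an illustrative summary, with catalysts and operating conditions omitted, not as process details drawn from this article.

```latex
% Overall stoichiometry (illustrative summary; catalysts and conditions omitted)
\begin{align*}
\mathrm{N_2 + 3\,H_2} &\rightarrow \mathrm{2\,NH_3} && \text{(Haber process, Fe catalyst)}\\
\mathrm{4\,NH_3 + 5\,O_2} &\rightarrow \mathrm{4\,NO + 6\,H_2O} && \text{(ammonia oxidation)}\\
\mathrm{2\,NO + O_2} &\rightarrow \mathrm{2\,NO_2}\\
\mathrm{3\,NO_2 + H_2O} &\rightarrow \mathrm{2\,HNO_3 + NO} && \text{(to nitric acid)}\\
\mathrm{NH_3 + HNO_3} &\rightarrow \mathrm{NH_4NO_3} && \text{(ammonium nitrate)}
\end{align*}
```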
Descriptive inorganic chemistry Descriptive inorganic chemistry focuses on the classification of compounds based on their properties. Partly the classification focuses on the position in the periodic table of the heaviest element (the element with the highest atomic weight) in the compound, partly by grouping compounds by their structural similarities Coordination compounds Classical coordination compounds feature metals bound to "lone pairs" of electrons residing on the main group atoms of ligands such as H2O, NH3, Cl−, and CN−. In modern coordination compounds almost all organic and inorganic compounds can be used as ligands. The "metal" usually is a metal from the groups 3–13, as well as the trans-lanthanides and trans-actinides, but from a certain perspective, all chemical compounds can be described as coordination complexes. The stereochemistry of coordination complexes can be quite rich, as hinted at by Werner's separation of two enantiomers of [Co((OH)2Co(NH3)4)3]6+, an early demonstration that chirality is not inherent to organic compounds. A topical theme within this specialization is supramolecular coordination chemistry. Examples: [Co(EDTA)]−, [Co(NH3)6]3+, TiCl4(THF)2. Coordination compounds show a rich diversity of structures, varying from tetrahedral for titanium (e.g., TiCl4) to square planar for some nickel complexes to octahedral for coordination complexes of cobalt. A range of transition metals can be found in biologically important compounds, such as iron in hemoglobin. Examples: iron pentacarbonyl, titanium tetrachloride, cisplatin Main group compounds These species feature elements from groups I, II, III, IV, V, VI, VII, 0 (excluding hydrogen) of the periodic table. Due to their often similar reactivity, the elements in group 3 (Sc, Y, and La) and group 12 (Zn, Cd, and Hg) are also generally included, and the lanthanides and actinides are sometimes included as well. Main group compounds have been known since the beginnings of chemistry, e.g., elemental sulfur and the distillable white phosphorus. Experiments on oxygen, O2, by Lavoisier and Priestley not only identified an important diatomic gas, but opened the way for describing compounds and reactions according to stoichiometric ratios. The discovery of a practical synthesis of ammonia using iron catalysts by Carl Bosch and Fritz Haber in the early 1900s deeply impacted mankind, demonstrating the significance of inorganic chemical synthesis. Typical main group compounds are SiO2, SnCl4, and N2O. Many main group compounds can also be classed as "organometallic", as they contain organic groups, e.g., B(CH3)3). Main group compounds also occur in nature, e.g., phosphate in DNA, and therefore may be classed as bioinorganic. Conversely, organic compounds lacking (many) hydrogen ligands can be classed as "inorganic", such as the fullerenes, buckytubes and binary carbon oxides. Examples: tetrasulfur tetranitride S4N4, diborane B2H6, silicones, buckminsterfullerene C60. Noble gas compounds include several derivatives of xenon and krypton. Examples: xenon hexafluoride XeF6, xenon trioxide XeO3, and krypton difluoride KrF2 Organometallic compounds Usually, organometallic compounds are considered to contain the M-C-H group. The metal (M) in these species can either be a main group element or a transition metal. Operationally, the definition of an organometallic compound is more relaxed to include also highly lipophilic complexes such as metal carbonyls and even metal alkoxides. 
Organometallic compounds are mainly considered a special category because organic ligands are often sensitive to hydrolysis or oxidation, necessitating that organometallic chemistry employs more specialized preparative methods than were traditional for Werner-type complexes. Synthetic methodology, especially the ability to manipulate complexes in solvents of low coordinating power, enabled the exploration of very weakly coordinating ligands such as hydrocarbons, H2, and N2. Because the ligands are petrochemicals in some sense, the area of organometallic chemistry has greatly benefited from its relevance to industry. Examples: Cyclopentadienyliron dicarbonyl dimer (C5H5)Fe(CO)2CH3, ferrocene Fe(C5H5)2, molybdenum hexacarbonyl Mo(CO)6, triethylborane Et3B, Tris(dibenzylideneacetone)dipalladium(0) Pd2(dba)3 Cluster compounds Clusters can be found in all classes of chemical compounds. According to the commonly accepted definition, a cluster consists minimally of a triangular set of atoms that are directly bonded to each other. But metal–metal bonded dimetallic complexes are highly relevant to the area. Clusters occur in "pure" inorganic systems, organometallic chemistry, main group chemistry, and bioinorganic chemistry. The distinction between very large clusters and bulk solids is increasingly blurred. This interface is the chemical basis of nanoscience or nanotechnology and specifically arises from the study of quantum size effects in cadmium selenide clusters. Thus, large clusters can be described as an array of bound atoms intermediate in character between a molecule and a solid. Examples: Fe3(CO)12, B10H14, [Mo6Cl14]2−, 4Fe-4S Bioinorganic compounds By definition, these compounds occur in nature, but the subfield includes anthropogenic species, such as pollutants (e.g., methylmercury) and drugs (e.g., Cisplatin). The field, which incorporates many aspects of biochemistry, includes many kinds of compounds, e.g., the phosphates in DNA, and also metal complexes containing ligands that range from biological macromolecules, commonly peptides, to ill-defined species such as humic acid, and to water (e.g., coordinated to gadolinium complexes employed for MRI). Traditionally, bioinorganic chemistry focuses on electron- and energy-transfer in proteins relevant to respiration. Medicinal inorganic chemistry includes the study of both non-essential and essential elements with applications to diagnosis and therapies. Examples: hemoglobin, methylmercury, carboxypeptidase Solid state compounds This important area focuses on structure, bonding, and the physical properties of materials. In practice, solid state inorganic chemistry uses techniques such as crystallography to gain an understanding of the properties that result from collective interactions between the subunits of the solid. Included in solid state chemistry are metals and their alloys or intermetallic derivatives. Related fields are condensed matter physics, mineralogy, and materials science. Examples: silicon chips, zeolites, YBa2Cu3O7 Spectroscopy and magnetism In contrast to most organic compounds, many inorganic compounds are magnetic and/or colored. These properties provide information on the bonding and structure. The magnetism of inorganic compounds can be complex. For example, most copper(II) compounds are paramagnetic but CuII2(OAc)4(H2O)2 is almost diamagnetic below room temperature. The explanation lies in magnetic coupling between pairs of Cu(II) sites in the acetate. 
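A rough quantitative handle on such behavior is the spin-only estimate of the effective magnetic moment, which depends only on the number of unpaired electrons n and neglects orbital contributions; the worked numbers below are a standard textbook illustration rather than data from this article.

```latex
\mu_{\mathrm{eff}} \approx \sqrt{n(n+2)}\;\mu_{\mathrm{B}},
\qquad \text{e.g. an isolated Cu(II) centre } (d^{9},\ n=1):\ \sqrt{3}\approx 1.73\;\mu_{\mathrm{B}}
```

On this estimate an isolated Cu(II) ion should show a moment near 1.73 Bohr magnetons; the near-diamagnetism of copper(II) acetate then reflects the pairing of the two Cu(II) spins within the dimer, which removes most of that moment at low temperature.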
Qualitative theories Inorganic chemistry has greatly benefited from qualitative theories. Such theories are easier to learn as they require little background in quantum theory. Within main group compounds, VSEPR theory powerfully predicts, or at least rationalizes, the structures of main group compounds, such as an explanation for why NH3 is pyramidal whereas ClF3 is T-shaped. For the transition metals, crystal field theory allows one to understand the magnetism of many simple complexes, such as why [FeIII(CN)6]3− has only one unpaired electron, whereas [FeIII(H2O)6]3+ has five. A particularly powerful qualitative approach to assessing structure and reactivity begins with classifying molecules according to electron counting, focusing on the numbers of valence electrons, usually at the central atom in a molecule. Molecular symmetry group theory A central construct in chemistry is molecular symmetry, as embodied in group theory. Inorganic compounds display particularly diverse symmetries, so it is logical that group theory is intimately associated with inorganic chemistry. Group theory provides the language to describe the shapes of molecules according to their point group symmetry. Group theory also enables factoring and simplification of theoretical calculations. Spectroscopic features are analyzed and described with respect to the symmetry properties of, inter alia, the vibrational or electronic states. Knowledge of the symmetry properties of the ground and excited states allows one to predict the numbers and intensities of absorptions in vibrational and electronic spectra. A classic application of group theory is the prediction of the number of C–O vibrations in substituted metal carbonyl complexes. The most common applications of symmetry to spectroscopy involve vibrational and electronic spectra. Group theory highlights commonalities and differences in the bonding of otherwise disparate species. For example, the metal-based orbitals transform identically for WF6 and W(CO)6, but the energies and populations of these orbitals differ significantly. A similar relationship exists between CO2 and molecular beryllium difluoride. Thermodynamics and inorganic chemistry An alternative quantitative approach to inorganic chemistry focuses on energies of reactions. This approach is highly traditional and empirical, but it is also useful. Broad concepts that are couched in thermodynamic terms include redox potential, acidity, and phase changes. A classic concept in inorganic thermodynamics is the Born–Haber cycle, which is used for assessing the energies of elementary processes such as electron affinity, some of which cannot be observed directly. Mechanistic inorganic chemistry An important aspect of inorganic chemistry focuses on reaction pathways, i.e. reaction mechanisms. Main group elements and lanthanides The mechanisms of main group compounds of groups 13–18 are usually discussed in the context of organic chemistry (organic compounds are main group compounds, after all). Elements heavier than C, N, O, and F often form compounds with more electrons than predicted by the octet rule, as explained in the article on hypervalent molecules. The mechanisms of their reactions differ from those of organic compounds for this reason. Elements lighter than carbon (B, Be, Li) as well as Al and Mg often form electron-deficient structures that are electronically akin to carbocations. Such electron-deficient species tend to react via associative pathways. 
The chemistry of the lanthanides mirrors many aspects of chemistry seen for aluminium. Transition metal complexes Transition metal and main group compounds often react differently. The important role of d-orbitals in bonding strongly influences the pathways and rates of ligand substitution and dissociation. These themes are covered in articles on coordination chemistry and ligand. Both associative and dissociative pathways are observed. An overarching aspect of mechanistic transition metal chemistry is the kinetic lability of the complex illustrated by the exchange of free and bound water in the prototypical complexes [M(H2O)6]n+: [M(H2O)6]n+ + 6 H2O* → [M(H2O*)6]n+ + 6 H2O where H2O* denotes isotopically enriched water, e.g., H217O The rates of water exchange varies by 20 orders of magnitude across the periodic table, with lanthanide complexes at one extreme and Ir(III) species being the slowest. Redox reactions Redox reactions are prevalent for the transition elements. Two classes of redox reaction are considered: atom-transfer reactions, such as oxidative addition/reductive elimination, and electron-transfer. A fundamental redox reaction is "self-exchange", which involves the degenerate reaction between an oxidant and a reductant. For example, permanganate and its one-electron reduced relative manganate exchange one electron: [MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]− Reactions at ligands Coordinated ligands display reactivity distinct from the free ligands. For example, the acidity of the ammonia ligands in [Co(NH3)6]3+ is elevated relative to NH3 itself. Alkenes bound to metal cations are reactive toward nucleophiles whereas alkenes normally are not. The large and industrially important area of catalysis hinges on the ability of metals to modify the reactivity of organic ligands. Homogeneous catalysis occurs in solution and heterogeneous catalysis occurs when gaseous or dissolved substrates interact with surfaces of solids. Traditionally homogeneous catalysis is considered part of organometallic chemistry and heterogeneous catalysis is discussed in the context of surface science, a subfield of solid state chemistry. But the basic inorganic chemical principles are the same. Transition metals, almost uniquely, react with small molecules such as CO, H2, O2, and C2H4. The industrial significance of these feedstocks drives the active area of catalysis. Ligands can also undergo ligand transfer reactions such as transmetalation. Characterization of inorganic compounds Because of the diverse range of elements and the correspondingly diverse properties of the resulting derivatives, inorganic chemistry is closely associated with many methods of analysis. Older methods tended to examine bulk properties such as the electrical conductivity of solutions, melting points, solubility, and acidity. With the advent of quantum theory and the corresponding expansion of electronic apparatus, new tools have been introduced to probe the electronic properties of inorganic molecules and solids. Often these measurements provide insights relevant to theoretical models. Commonly encountered techniques are: X-ray crystallography: This technique allows for the 3D determination of molecular structures. 
Various forms of spectroscopy: Ultraviolet-visible spectroscopy: Historically, this has been an important tool, since many inorganic compounds are strongly colored NMR spectroscopy: Besides 1H and 13C many other NMR-active nuclei (e.g., 11B, 19F, 31P, and 195Pt) can give important information on compound properties and structure. The NMR of paramagnetic species can provide important structural information. Proton (1H) NMR is also important because the light hydrogen nucleus is not easily detected by X-ray crystallography. Infrared spectroscopy: Mostly for absorptions from carbonyl ligands Electron nuclear double resonance (ENDOR) spectroscopy Mössbauer spectroscopy Electron-spin resonance: ESR (or EPR) allows for the measurement of the environment of paramagnetic metal centres. Electrochemistry: Cyclic voltammetry and related techniques probe the redox characteristics of compounds. Synthetic inorganic chemistry Although some inorganic species can be obtained in pure form from nature, most are synthesized in chemical plants and in the laboratory. Inorganic synthetic methods can be classified roughly according to the volatility or solubility of the component reactants. Soluble inorganic compounds are prepared using methods of organic synthesis. For metal-containing compounds that are reactive toward air, Schlenk line and glove box techniques are followed. Volatile compounds and gases are manipulated in "vacuum manifolds" consisting of glass piping interconnected through valves, the entirety of which can be evacuated to 0.001 mm Hg or less. Compounds are condensed using liquid nitrogen (b.p. 78K) or other cryogens. Solids are typically prepared using tube furnaces, the reactants and products being sealed in containers, often made of fused silica (amorphous SiO2) but sometimes more specialized materials such as welded Ta tubes or Pt "boats". Products and reactants are transported between temperature zones to drive reactions.
Physical sciences
Chemistry
null
14730
https://en.wikipedia.org/wiki/IRC
IRC
IRC (Internet Relay Chat) is a text-based chat system for instant messaging. IRC is designed for group communication in discussion forums, called channels, but also allows one-on-one communication via private messages as well as chat and data transfer, including file sharing. Internet Relay Chat is implemented as an application layer protocol to facilitate communication in the form of text. The chat process works on a client–server networking model. Users connect, using a client (which may be a web app, a standalone desktop program, or embedded into part of a larger program), to an IRC server, which may be part of a larger IRC network. Examples of programs used to connect include Mibbit, IRCCloud, KiwiIRC, and mIRC. IRC usage has been declining steadily since 2003, losing 60 percent of its users. In April 2011, the top 100 IRC networks served more than 200,000 users at a time. History IRC was created by Jarkko Oikarinen in August 1988 to replace a program called MUT (MultiUser Talk) on a BBS called OuluBox at the University of Oulu in Finland, where he was working at the Department of Information Processing Science. Jarkko intended to extend the BBS software he administered, to allow news in the Usenet style, real time discussions and similar BBS features. The first part he implemented was the chat part, which he did with borrowed parts written by his friends Jyrki Kuoppala and Jukka Pihl. The first IRC network was running on a single server named tolsun.oulu.fi. Oikarinen found inspiration in a chat system known as Bitnet Relay, which operated on the BITNET. Jyrki Kuoppala pushed Oikarinen to ask Oulu University to free the IRC code so that it also could be run outside of Oulu, and after they finally got it released, Jyrki Kuoppala immediately installed another server. This was the first "IRC network". Oikarinen got some friends at the Helsinki University of Technology and Tampere University of Technology to start running IRC servers when his number of users increased and other universities soon followed. At this time Oikarinen realized that the rest of the BBS features probably would not fit in his program. Oikarinen contacted people at the University of Denver and Oregon State University. They had their own IRC network running and wanted to connect to the Finnish network. They had obtained the program from one of Oikarinen's friends, Vijay Subramaniam—the first non-Finnish person to use IRC. IRC then grew larger and got used on the entire Finnish national network—FUNET—and then connected to Nordunet, the Scandinavian branch of the Internet. In November 1988, IRC had spread across the Internet and in the middle of 1989, there were some 40 servers worldwide. EFnet In August 1990, the first major disagreement took place in the IRC world. The "A-net" (Anarchy net) included a server named eris.berkeley.edu. It was all open, required no passwords and had no limit on the number of connects. As Greg "wumpus" Lindahl explains: "it had a wildcard server line, so people were hooking up servers and nick-colliding everyone". The "Eris Free Network", EFnet, made the eris machine the first to be Q-lined (Q for quarantine) from IRC. In wumpus' words again: "Eris refused to remove that line, so I formed EFnet. It wasn't much of a fight; I got all the hubs to join, and almost everyone else got carried along." A-net was formed with the eris servers, while EFnet was formed with the non-eris servers. History showed most servers and users went with EFnet. 
Once A-net disbanded, the name EFnet became meaningless, and once again it was the one and only IRC network. Around that time IRC was used to report on the 1991 Soviet coup d'état attempt throughout a media blackout. It was previously used in a similar fashion during the Gulf War. Chat logs of these and other events are kept in the ibiblio archive. Undernet fork Another fork effort, the first that made a lasting difference, was initiated by "Wildthang" in the United States in October 1992. (It forked off the EFnet ircd version 2.8.10). It was meant to be just a test network to develop bots on but it quickly grew to a network "for friends and their friends". In Europe and Canada a separate new network was being worked on and in December the French servers connected to the Canadian ones, and by the end of the month, the French and Canadian network was connected to the US one, forming the network that later came to be called "The Undernet". The "undernetters" wanted to take ircd further in an attempt to make it use less bandwidth and to try to sort out the channel chaos (netsplits and takeovers) that EFnet started to suffer from. For the latter purpose, the Undernet implemented timestamps, new routing and offered the CService—a program that allowed users to register channels and then attempted to protect them from troublemakers. The first server list presented, from 15 February 1993, includes servers from the U.S., Canada, France, Croatia and Japan. On 15 August, the new user count record was set to 57 users. In May 1993, RFC 1459 was published and details a simple protocol for client/server operation, channels, one-to-one and one-to-many conversations. A significant number of extensions like CTCP, colors and formats are not included in the protocol specifications, nor is character encoding, which led various implementations of servers and clients to diverge. Software implementation varied significantly from one network to the other, each network implementing their own policies and standards in their own code bases. DALnet fork During the summer of 1994, the Undernet was itself forked. The new network was called DALnet (named after its founder: dalvenjah), formed for better user service and more user and channel protections. One of the more significant changes in DALnet was use of longer nicknames (the original ircd limit being 9 letters). DALnet ircd modifications were made by Alexei "Lefler" Kosut. DALnet was thus based on the Undernet ircd server, although the DALnet pioneers were EFnet abandoners. According to James Ng, the initial DALnet people were "ops in #StarTrek sick from the constant splits/lags/takeovers/etc". DALnet quickly offered global WallOps (IRCop messages that can be seen by users who are +w (/mode NickName +w)), longer nicknames, Q:Lined nicknames (nicknames that cannot be used i.e. ChanServ, IRCop, NickServ, etc.), global K:Lines (ban of one person or an entire domain from a server or the entire network), IRCop only communications: GlobOps, +H mode showing that an IRCop is a "helpop" etc. Much of DALnet's new functions were written in early 1995 by Brian "Morpher" Smith and allow users to own nicknames, control channels, send memos, and more. IRCnet fork In July 1996, after months of flame wars and discussions on the mailing list, there was yet another split due to disagreement in how the development of the ircd should evolve. 
Most notably, the "European" (most of those servers were in Europe) side that later named itself IRCnet argued for nick and channel delays whereas the EFnet side argued for timestamps. There were also disagreements about policies: the European side had started to establish a set of rules directing what IRCops could and could not do, a point of view opposed by the US side. Most (not all) of the IRCnet servers were in Europe, while most of the EFnet servers were in the US. This event is also known as "The Great Split" in many IRC societies. EFnet has since (as of August 1998) grown and passed the number of users it had then. In the (northern) autumn of the year 2000, EFnet had some 50,000 users and IRCnet 70,000. Modern IRC IRC has changed much over its life on the Internet. New server software has added a multitude of new features. Services: Network-operated bots to facilitate registration of nicknames and channels, sending messages for offline users and network operator functions. Extra modes: While the original IRC system used a set of standard user and channel modes, new servers add many new modes for features such as removing color codes from text, or obscuring a user's hostmask ("cloaking") to protect from denial-of-service attacks. Proxy detection: Most modern servers support detection of users attempting to connect through an insecure (misconfigured or exploited) proxy server, which can then be denied a connection. This proxy detection software is used by several networks, although the real-time list of proxies it relied on has been defunct since early 2006. Additional commands: New commands range from shorthand commands for issuing commands to Services, to network-operator-only commands for manipulating a user's hostmask. Encryption: For the client-to-server leg of the connection, TLS might be used (messages cease to be secure once they are relayed to other users on standard connections, but it makes eavesdropping on or wiretapping an individual's IRC sessions difficult). For client-to-client communication, SDCC (Secure DCC) can be used. Connection protocol: IRC can be connected to via IPv4, the old version of the Internet Protocol, or by IPv6, the current standard of the protocol. A new standardization effort is under way under a working group called IRCv3, which focuses on more advanced client features such as instant notifications, better history support and improved security. No major IRC networks have yet fully adopted the proposed standard. There are 481 different IRC networks known to be operating, of which the open source Libera Chat, founded in May 2021, has the most users, with 20,374 channels on 26 servers; between them, the top 100 IRC networks share over 100 thousand channels operating on about one thousand servers. After its golden era during the 1990s and early 2000s (240,000 users on QuakeNet in 2004), IRC has seen a significant decline, losing around 60% of users between 2003 and 2012, with users moving to social media platforms such as Facebook or Twitter, but also to open platforms such as XMPP which was developed in 1999. Certain networks such as Freenode have not followed the overall trend and have more than quadrupled in size during the same period. However, Freenode, which in 2016 had around 90,000 users, has since declined to about 9,300 users. The largest IRC networks have traditionally been grouped as the "Big Four"—a designation for networks that top the statistics. 
The Big Four networks change periodically, but due to the community nature of IRC there are a large number of other networks for users to choose from. Historically the "Big Four" were: EFnet IRCnet Undernet DALnet IRC reached 6 million simultaneous users in 2001 and 10 million users in 2004–2005, dropping to around 350k in 2021. The top 100 IRC networks have around 230k users connected at peak hours. Timeline Timeline of major networks: EFnet, 1990 to present Undernet, 1992 to present DALnet, 1994 to present freenode, 1995 to present IRCnet, 1996 to present QuakeNet, 1997 to present Open and Free Technology Community, 2001 to present Rizon, 2002 to present Libera Chat, 2021 to present Technical information IRC is an open protocol that uses TCP and, optionally, TLS. An IRC server can connect to other IRC servers to expand the IRC network. Users access IRC networks by connecting a client to a server. There are many client implementations, such as mIRC, HexChat and irssi, and server implementations, e.g. the original IRCd. Most IRC servers do not require users to register an account but a nickname is required before being connected. IRC was originally a plain text protocol (although later extended), which on request was assigned port 194/TCP by IANA. However, the de facto standard has always been to run IRC on 6667/TCP and nearby port numbers (for example TCP ports 6660–6669, 7000) to avoid having to run the IRCd software with root privileges. The protocol specified that characters were 8-bit but did not specify the character encoding the text was supposed to use. This can cause problems when users using different clients and/or different platforms want to converse. All client-to-server IRC protocols in use today are descended from the protocol implemented in the irc2.4.0 version of the IRC2 server, and documented in RFC 1459. Since RFC 1459 was published, the new features in the irc2.10 implementation led to the publication of several revised protocol documents (RFC 2810, RFC 2811, RFC 2812 and RFC 2813); however, these protocol changes have not been widely adopted among other implementations. Although many specifications on the IRC protocol have been published, there is no official specification, as the protocol remains dynamic. Virtually no clients and very few servers rely strictly on the above RFCs as a reference. Microsoft made an extension for IRC in 1998 via the proprietary IRCX. They later stopped distributing software supporting IRCX, instead developing the proprietary MSNP. The standard structure of a network of IRC servers is a tree. Messages are routed along only necessary branches of the tree but network state is sent to every server and there is generally a high degree of implicit trust between servers. However, this architecture has a number of problems. A misbehaving or malicious server can cause major damage to the network and any changes in structure, whether intentional or a result of conditions on the underlying network, require a net-split and net-join. This results in a lot of network traffic and spurious quit/join messages to users and temporary loss of communication to users on the splitting servers. Adding a server to a large network means a large background bandwidth load on the network and a large memory load on the server. Once established, however, each message to multiple recipients is delivered in a fashion similar to multicast, meaning each message travels a network link exactly once. 
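The registration handshake and line-based exchange described here can be sketched with the Python standard library alone. The server address, nickname, and channel below are placeholders rather than values from the article, and a real client would add TLS, error handling, and flood protection; this is only a minimal illustration of sending NICK/USER, joining a channel, and answering server PINGs.

```python
# Minimal IRC session sketch (plain-text connection on the de facto port 6667).
import socket

HOST, PORT = "irc.example.net", 6667      # hypothetical server
NICK, CHANNEL = "demo_user", "#demo"      # placeholder nickname and channel

sock = socket.create_connection((HOST, PORT))

def send(line: str) -> None:
    # IRC is line-based: every client message is terminated by CR LF.
    sock.sendall((line + "\r\n").encode("utf-8"))

# Registration per RFC 1459/2812: a nickname plus user information.
send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :Demo client")
send(f"JOIN {CHANNEL}")
send(f"PRIVMSG {CHANNEL} :Hello from a minimal client")

buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    buffer += data
    while b"\r\n" in buffer:
        raw, buffer = buffer.split(b"\r\n", 1)
        text = raw.decode("utf-8", errors="replace")
        print(text)
        # Servers probe liveness with PING; the client must answer with PONG.
        if text.startswith("PING"):
            send("PONG" + text[4:])
```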
This multicast-style delivery is a strength in comparison to non-multicasting protocols such as Simple Mail Transfer Protocol (SMTP) or Extensible Messaging and Presence Protocol (XMPP). An IRC daemon can be used on a local area network (LAN). IRC can thus be used to facilitate communication between people within the local area network (internal communication). Commands and replies IRC has a line-based structure. Clients send single-line messages to the server, receive replies to those messages and receive copies of some messages sent by other clients. In most clients, users can enter commands by prefixing them with a '/'. Depending on the command, these may either be handled entirely by the client, or (generally for commands the client does not recognize) passed directly to the server, possibly with some modification. Due to the nature of the protocol, automated systems cannot always reliably pair a sent command with its reply and may have to guess. Channels The basic means of communicating to a group of users in an established IRC session is through a channel. Channels on a network can be displayed using the IRC command LIST, which lists all currently available channels that do not have the modes +s or +p set, on that particular network. Users can join a channel using the JOIN command, in most clients available as /join #channelname. Messages sent to a joined channel are then relayed to all of its other users. Channels that are available across an entire IRC network are prefixed with a '#', while those local to a server use '&'. Other less common channel types include '+' channels—'modeless' channels without operators—and '!' channels, a form of timestamped channel on normally non-timestamped networks. Modes Users and channels may have modes that are represented by individual case-sensitive letters and are set using the MODE command. User modes and channel modes are separate and can use the same letter to mean different things (e.g. user mode "i" is invisible mode while channel mode "i" is invite only.) Modes are usually set and unset using the mode command that takes a target (user or channel), a set of modes to set (+) or unset (-) and any parameters the modes need. Some channel modes take parameters and other channel modes apply to a user on a channel or add or remove a mask (e.g. a ban mask) from a list associated with the channel rather than applying to the channel as a whole. Modes that apply to users on a channel have an associated symbol that is used to represent the mode in names replies (sent to clients on first joining a channel and use of the names command) and in many clients also used to represent it in the client's displayed list of users in a channel or to display an indicator for the user's own modes. In order to correctly parse incoming mode messages and track channel state, the client must know which mode is of which type and, for the modes that apply to a user on a channel, which symbol goes with which letter. In early implementations of IRC this had to be hard-coded in the client but there is now a de facto standard extension to the protocol called ISUPPORT that sends this information to the client at connect time using numeric 005. There is a small design fault in IRC regarding modes that apply to users on channels: the names message used to establish initial channel state can only send one such mode per user on the channel, but multiple such modes can be set on a single user. 
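The single-line message format lends itself to simple mechanical parsing. The helper below is a rough sketch, not a complete RFC 1459 parser: it splits a raw line into its optional prefix, command, and parameters (including the trailing parameter introduced by " :"), and it ignores the message tags later added by IRCv3.

```python
def parse_irc_line(line: str):
    """Split a raw IRC line into (prefix, command, params).

    Simplified grammar (after RFC 1459):
      [":" prefix SPACE] command {SPACE param} [SPACE ":" trailing]
    """
    prefix = None
    if line.startswith(":"):
        prefix, _, line = line[1:].partition(" ")
    # A trailing parameter, marked by " :", may itself contain spaces.
    line, sep, trailing = line.partition(" :")
    parts = line.split()
    command, params = parts[0], parts[1:]
    if sep:
        params.append(trailing)
    return prefix, command, params

# Example: a channel message as delivered by the server.
print(parse_irc_line(":alice!u@host PRIVMSG #demo :hello there"))
# -> ('alice!u@host', 'PRIVMSG', ['#demo', 'hello there'])
```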
As an example of this design fault, if a user holds both operator status (+o) and voice status (+v) on a channel, a new client will be unable to see the mode with less priority (i.e. voice). Workarounds for this are possible on both the client and server side; a common solution is to use the IRCv3 "multi-prefix" extension. Standard (RFC 1459) modes Many daemons and networks have added extra modes or modified the behavior of modes in the above list. Channel operators A channel operator is a client on an IRC channel that manages the channel. IRC channel operators can be easily seen by the symbol or icon next to their name (varies by client implementation, commonly a "@" symbol prefix, a green circle, or a Latin letter "+o"/"o"). On most networks, an operator can: Kick a user. Ban a user. Give another user IRC Channel Operator Status or IRC Channel Voice Status. Change the IRC Channel topic while channel mode +t is set. Change the IRC Channel Mode locks. Operators There are also users who maintain elevated rights on their local server, or the entire network; these are called IRC operators, sometimes shortened to IRCops or Opers (not to be confused with channel operators). As the implementation of the IRCd varies, so do the privileges of the IRC operator on the given IRCd. RFC 1459 claims that IRC operators are "a necessary evil" to keep a clean state of the network, and as such they need to be able to disconnect and reconnect servers. Additionally, to prevent malicious users or even harmful automated programs from entering IRC, IRC operators are usually allowed to disconnect clients and completely ban IP addresses or complete subnets. Networks that carry services (NickServ et al.) usually allow their IRC operators also to handle basic "ownership" matters. Further privileged rights may include overriding channel bans (being able to join channels they would not be allowed to join, if they were not opered), being able to op themselves on channels where they would not be able without being opered, being always auto-opped on channels, and so forth. Hostmasks A hostmask is a unique identifier of an IRC client connected to an IRC server. IRC servers, services, and other clients, including bots, can use it to identify a specific IRC session. The format of a hostmask is nick!user@host. The hostmask looks similar to, but should not be confused with, an e-mail address. The nick part is the nickname chosen by the user and may be changed while connected. The user part is the username reported by ident on the client. If ident is not available on the client, the username specified when the client connected is used after being prefixed with a tilde. The host part is the hostname the client is connecting from. If the IP address of the client cannot be resolved to a valid hostname by the server, it is used instead of the hostname. Because of the privacy implications of exposing the IP address or hostname of a client, some IRC daemons also provide privacy features, such as InspIRCd or UnrealIRCd's "+x" mode. This hashes a client IP address or masks part of a client's hostname, making it unreadable to users other than IRCops. Users may also have the option of requesting a "virtual host" (or "vhost"), to be displayed in the hostmask to allow further anonymity. Some IRC networks, such as Libera Chat or Freenode, use these as "cloaks" to indicate that a user is affiliated with a group or project. URI scheme There are three provisionally recognized uniform resource identifier (URI) schemes for Internet Relay Chat: irc, ircs, and irc6. 
When supported, they allow hyperlinks of various forms, including irc://<host>[:<port>]/[<channel>[?<channel_keyword>]] ircs://<host>[:<port>]/[<channel>[?<channel_keyword>]] irc6://<host>[:<port>]/[<channel>[?<channel_keyword>]] (where items enclosed within brackets ([,]) are optional) to be used to (if necessary) connect to the specified host (or network, if known to the IRC client) and join the specified channel. (This can be used within the client itself, or from another application such as a Web browser). irc is the default URI scheme, irc6 specifies a connection to be made using IPv6, and ircs specifies a secure connection. Per the specification, the usual hash symbol (#) will be prepended to channel names that begin with an alphanumeric character—allowing it to be omitted. Some implementations (for example, mIRC) will do so unconditionally, resulting in a (usually unintended) extra # (for example, ##channel) if one is already included in the URL. Some implementations allow multiple channels to be specified, separated by commas. Challenges Issues in the original design of IRC were the amount of shared state data being a limitation on its scalability, the absence of unique user identifications leading to the nickname collision problem, lack of protection from netsplits by means of cyclic routing, the trade-off in scalability for the sake of real-time user presence information, protocol weaknesses providing a platform for abuse, no transparent and optimizable message passing, and no encryption. Some of these issues have been addressed in Modern IRC. Attacks Because IRC connections may be unencrypted and typically span long time periods, they are an attractive target for DoS/DDoS attackers and hackers. Because of this, careful security policy is necessary to ensure that an IRC network is not susceptible to an attack such as a takeover war. IRC networks may also K-line or G-line users or servers that have a harming effect. Some IRC servers support SSL/TLS connections for security purposes. This helps stop the use of packet sniffer programs to obtain the passwords of IRC users, but has little use beyond this scope due to the public nature of IRC channels. SSL connections require both client and server support (which may require the user to install SSL binaries and IRC client specific patches or modules on their computers). Some networks also use SSL for server-to-server connections, and provide a special channel flag (such as +S) to only allow SSL-connected users on the channel, while disallowing operator identification in clear text, to better utilize the advantages that SSL provides. IRC served as an early laboratory for many kinds of Internet attacks, such as using fake ICMP unreachable messages to break TCP-based IRC connections (nuking) to annoy users or facilitate takeovers. Abuse prevention One of the most contentious technical issues surrounding IRC implementations, which survives to this day, is the merit of "Nick/Channel Delay" vs. "Timestamp" protocols. Both methods exist to solve the problem of denial-of-service attacks, but take very different approaches. The problem with the original IRC protocol as implemented was that when two servers split and rejoined, the two sides of the network would simply merge their channels. 
If a user could join on a "split" server, where a channel that existed on the other side of the network was empty, and gain operator status, they would become a channel operator of the "combined" channel after the netsplit ended; if a user took a nickname that existed on the other side of the network, the server would kill both users when rejoining (a "nick collision"). This was often abused to "mass-kill" all users on a channel, thus creating "opless" channels where no operators were present to deal with abuse. Apart from causing problems within IRC, this encouraged people to conduct denial-of-service attacks against IRC servers in order to cause netsplits, which they would then abuse. The nick delay (ND) and channel delay (CD) strategies aim to prevent abuse by delaying reconnections and renames. After a user signs off and the nickname becomes available, or a channel ceases to exist because all its users parted (as often happens during a netsplit), the server will not allow any user to use that nickname or join that channel, until a certain period of time (the delay) has passed. The idea behind this is that even if a netsplit occurs, it is useless to an abuser because they cannot take the nickname or gain operator status on a channel, and thus no collision of a nickname or "merging" of a channel can occur. To some extent, this inconveniences legitimate users, who might be forced to briefly use a different name after rejoining (appending an underscore is popular). The timestamp protocol is an alternative to nick/channel delays which resolves collisions using timestamped priority. Every nickname and channel on the network is assigned a timestamp: the date and time when it was created. When a netsplit occurs, two users on each side are free to use the same nickname or channel, but when the two sides are joined, only one can survive. In the case of nicknames, the newer user, according to their TS, is killed; when a channel collides, the members (users on the channel) are merged, but the channel operators on the "losing" side of the split lose their channel operator status. TS is a much more complicated protocol than ND/CD, both in design and implementation, and despite having gone through several revisions, some implementations still have problems with "desyncs" (where two servers on the same network disagree about the current state of the network), and allowing too much leniency in what was allowed by the "losing" side. Under the original TS protocols, for example, there was no protection against users setting bans or other modes in the losing channel that would then be merged when the split rejoined, even though the users who had set those modes lost their channel operator status. Some modern TS-based IRC servers have also incorporated some form of ND and/or CD in addition to timestamping in an attempt to further curb abuse. Most networks today use the timestamping approach. The timestamp versus ND/CD disagreements caused several servers to split away from EFnet and form the newer IRCnet. After the split, EFnet moved to a TS protocol, while IRCnet used ND/CD. In recent versions of the IRCnet ircd, as well as ircds using the TS6 protocol (including Charybdis), ND has been extended/replaced by a mechanism called SAVE. This mechanism assigns every client a UID upon connecting to an IRC server. This ID starts with a number, which is forbidden in nicks (although some ircds, namely IRCnet and InspIRCd, allow clients to switch to their own UID as the nickname). 
If two clients with the same nickname join from different sides of a netsplit ("nick collision"), the first server to see this collision will force both clients to change their nick to their UID, thus saving both clients from being disconnected. On IRCnet, the nickname will also be locked for some time (ND) to prevent both clients from changing back to the original nickname, thus colliding again. Clients Client software Client software exists for various operating systems and software packages, as well as web-based clients and clients embedded inside games. Many different clients are available for the various operating systems, including Windows, Unix and Linux, macOS and mobile operating systems (such as iOS and Android). On Windows, mIRC is one of the most popular clients. Some Linux distributions come with an IRC client preinstalled, such as Linux Mint which comes with HexChat preinstalled. Some programs which are extensible through plug-ins also serve as platforms for IRC clients. For instance, a client called ERC, written entirely in Emacs Lisp, is included in v.22.3 of Emacs. Therefore, any platform that can run Emacs can run ERC. A number of web browsers have built-in IRC clients, such as: Opera, which used to have a client but no longer supports IRC; and the ChatZilla add-on for Mozilla Firefox (for Firefox 56 and earlier; included as a built-in component of SeaMonkey). Web-based clients, such as Mibbit and open source KiwiIRC, can run in most browsers. Games such as War§ow, Unreal Tournament (up to Unreal Tournament 2004), Uplink, Spring Engine-based games, 0 A.D. and ZDaemon have included IRC. Ustream's chat interface is IRC with custom authentication as well as Twitch's (formerly Justin.tv). Bots A typical use of bots in IRC is to provide IRC services or specific functionality within a channel such as to host a chat-based game or provide notifications of external events. However, some IRC bots are used to launch malicious attacks such as denial of service, spamming, or exploitation. Bouncer A program that runs as a daemon on a server and functions as a persistent proxy is known as a BNC or bouncer. The purpose is to maintain a connection to an IRC server, acting as a relay between the server and client, or simply to act as a proxy. Should the client lose network connectivity, the BNC may stay connected and archive all traffic for later delivery, allowing the user to resume their IRC session without disrupting their connection to the server. Furthermore, as a way of obtaining a bouncer-like effect, an IRC client (typically text-based, for example Irssi) may be run on an always-on server to which the user connects via ssh. This also allows devices that only have ssh functionality, but no actual IRC client installed themselves, to connect to IRC, and it allows sharing of IRC sessions. To keep the IRC client from quitting when the ssh connection closes, the client can be run inside a terminal multiplexer such as GNU Screen or tmux, thus staying connected to the IRC network(s) constantly and able to log conversation in channels that the user is interested in, or to maintain a channel's presence on the network. Modelled after this setup, an IRC client following the client–server model, called Smuxi, was launched in 2004. Search engines There are numerous search engines available to aid the user in finding what they are looking for on IRC. Generally the search engine consists of two parts, a "back-end" (or "spider/crawler") and a front-end "search engine". The back-end (spider/webcrawler) is the work horse of the search engine. 
It is responsible for crawling IRC servers to index the information being sent across them. The information that is indexed usually consists solely of channel text (text that is publicly displayed in public channels). The storage method is usually some sort of relational database, like MySQL or Oracle. The front-end "search engine" is the user interface to the database. It supplies users with a way to search the database of indexed information to retrieve the data they are looking for. These front-end search engines can also be coded in numerous programming languages. Most search engines have their own spider that is a single application responsible for crawling IRC and indexing data itself; however, others are "user based" indexers. The latter rely on users to install their "add-on" to their IRC client; the add-on is what sends the database the channel information of whatever channels the user happens to be on. Many users have implemented their own ad hoc search engines using the logging features built into many IRC clients. These search engines are usually implemented as bots and dedicated to a particular channel or group of associated channels. Character encoding IRC still lacks a single globally accepted standard convention for how to transmit characters outside the 7-bit ASCII repertoire. IRC servers normally transfer messages from a client to another client just as byte sequences, without any interpretation or recoding of characters. The IRC protocol (unlike e.g. MIME or HTTP) lacks mechanisms for announcing and negotiating character encoding options. This has put the responsibility for choosing the appropriate character codec on the client. In practice, IRC channels have largely used the same character encodings that were also used by operating systems (in particular Unix derivatives) in the respective language communities: 7-bit era: In the early days of IRC, especially among Scandinavian and Finnish language users, national variants of ISO 646 were the dominant character encodings. These encode non-ASCII characters like Ä Ö Å ä ö å at code positions 0x5B 0x5C 0x5D 0x7B 0x7C 0x7D (US-ASCII: [ \ ] { | }). That is why these codes are always allowed in nicknames. According to RFC 1459, { | } in nicknames should be treated as lowercase equivalents of [ \ ] respectively. By the late 1990s, the use of 7-bit encodings had disappeared in favour of ISO 8859-1, and such equivalence mappings were dropped from some IRC daemons. 8-bit era: Since the early 1990s, 8-bit encodings such as ISO 8859-1 have become commonly used for European languages. Russian users had a choice of KOI8-R, ISO 8859-5 and CP1251, and since about 2000, modern Russian IRC networks convert between these different commonly used encodings of the Cyrillic script. Multi-byte era: For a long time, East Asian IRC channels with logographic scripts in China, Japan, and Korea have been using multi-byte encodings such as EUC or ISO-2022-JP. With the common migration from ISO 8859 to UTF-8 on Linux and Unix platforms since about 2002, UTF-8 has become an increasingly popular substitute for many of the previously used 8-bit encodings in European channels. Some IRC clients are now capable of reading messages both in ISO 8859-1 or UTF-8 in the same channel, heuristically autodetecting which encoding is used. The shift to UTF-8 began in particular on Finnish-speaking IRC (Merkistö (Finnish)). 
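Because the protocol itself never announces an encoding, the guessing falls to the client. A common heuristic, sketched below, is to attempt UTF-8 first, since invalid byte sequences make it fail loudly, and then fall back to a legacy single-byte encoding; the choice of ISO 8859-1 as the fallback here is an illustrative assumption, not a rule from the protocol.

```python
def decode_irc_message(raw: bytes) -> str:
    """Heuristically decode an IRC message of unknown encoding.

    UTF-8 is tried first because invalid byte sequences raise an error,
    whereas ISO 8859-1 maps every byte to some character and therefore
    never fails; it serves as the catch-all fallback in this sketch.
    """
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("iso-8859-1")

# Example: the byte 0xE4 is invalid as UTF-8 here but is "ä" in ISO 8859-1.
print(decode_irc_message(b"hyv\xe4\xe4 p\xe4iv\xe4\xe4"))  # -> hyvää päivää
```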
Today, the UTF-8 encoding of Unicode/ISO 10646 would be the most likely contender for a single future standard character encoding for all IRC communication, if such a standard ever relaxed the 510-byte message size restriction. UTF-8 is ASCII compatible and covers a superset of all other commonly used coded character set standards. File sharing Much like conventional P2P file sharing, users can create file servers that allow them to share files with each other by using customised IRC bots or scripts for their IRC client. Often users will group together to distribute warez via a network of IRC bots. Technically, IRC provides no file transfer mechanisms itself; file sharing is implemented by IRC clients, typically using the Direct Client-to-Client (DCC) protocol, in which file transfers are negotiated through the exchange of private messages between clients. The vast majority of IRC clients feature support for DCC file transfers, hence the view that file sharing is an integral feature of IRC. The commonplace usage of this protocol, however, sometimes also causes DCC spam. DCC commands have also been used to exploit vulnerable clients into performing an action such as disconnecting from the server or exiting the client.
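A DCC transfer of this kind begins with a CTCP offer embedded in an ordinary private message. The sketch below builds such a DCC SEND offer in the conventional format, with the IPv4 address encoded as an unsigned 32-bit integer; the nickname, file name, address, and port are placeholders chosen for illustration.

```python
import ipaddress

def dcc_send_offer(target_nick: str, filename: str, host: str, port: int, size: int) -> str:
    """Build a CTCP DCC SEND offer carried inside a PRIVMSG.

    DCC conventionally transmits the sender's IPv4 address as an unsigned
    32-bit integer in the CTCP payload, delimited by 0x01 characters.
    """
    ip_as_int = int(ipaddress.IPv4Address(host))
    ctcp = f"\x01DCC SEND {filename} {ip_as_int} {port} {size}\x01"
    return f"PRIVMSG {target_nick} :{ctcp}"

# Hypothetical offer of a 2048-byte file to the user "friend".
print(dcc_send_offer("friend", "notes.txt", "192.0.2.10", 5000, 2048))
```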
Technology
Internet
null
14734
https://en.wikipedia.org/wiki/Iron
Iron
Iron is a chemical element; it has the symbol Fe () and atomic number 26. It is a metal that belongs to the first transition series and group 8 of the periodic table. It is, by mass, the most common element on Earth, forming much of Earth's outer and inner core. It is the fourth most abundant element in the Earth's crust, being mainly deposited by meteorites in its metallic state. Extracting usable metal from iron ores requires kilns or furnaces capable of reaching , about higher than that required to smelt copper. Humans started to master that process in Eurasia during the 2nd millennium BC and the use of iron tools and weapons began to displace copper alloys – in some regions, only around 1200 BC. That event is considered the transition from the Bronze Age to the Iron Age. In the modern world, iron alloys, such as steel, stainless steel, cast iron and special steels, are by far the most common industrial metals, due to their mechanical properties and low cost. The iron and steel industry is thus very important economically, and iron is the cheapest metal, with a price of a few dollars per kilogram or pound. Pristine and smooth pure iron surfaces are a mirror-like silvery-gray. Iron reacts readily with oxygen and water to produce brown-to-black hydrated iron oxides, commonly known as rust. Unlike the oxides of some other metals that form passivating layers, rust occupies more volume than the metal and thus flakes off, exposing more fresh surfaces for corrosion. Chemically, the most common oxidation states of iron are iron(II) and iron(III). Iron shares many properties of other transition metals, including the other group 8 elements, ruthenium and osmium. Iron forms compounds in a wide range of oxidation states, −4 to +7. Iron also forms many coordination complexs; some of them, such as ferrocene, ferrioxalate, and Prussian blue have substantial industrial, medical, or research applications. The body of an adult human contains about 4 grams (0.005% body weight) of iron, mostly in hemoglobin and myoglobin. These two proteins play essential roles in oxygen transport by blood and oxygen storage in muscles. To maintain the necessary levels, human iron metabolism requires a minimum of iron in the diet. Iron is also the metal at the active site of many important redox enzymes dealing with cellular respiration and oxidation and reduction in plants and animals. Characteristics Allotropes At least four allotropes of iron (differing atom arrangements in the solid) are known, conventionally denoted α, γ, δ, and ε. The first three forms are observed at ordinary pressures. As molten iron cools past its freezing point of 1538 °C, it crystallizes into its δ allotrope, which has a body-centered cubic (bcc) crystal structure. As it cools further to 1394 °C, it changes to its γ-iron allotrope, a face-centered cubic (fcc) crystal structure, or austenite. At 912 °C and below, the crystal structure again becomes the bcc α-iron allotrope. The physical properties of iron at very high pressures and temperatures have also been studied extensively, because of their relevance to theories about the cores of the Earth and other planets. Above approximately 10 GPa and temperatures of a few hundred kelvin or less, α-iron changes into another hexagonal close-packed (hcp) structure, which is also known as ε-iron. The higher-temperature γ-phase also changes into ε-iron, but does so at higher pressure. 
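As a compact illustration of the ambient-pressure phase sequence just described, the sketch below maps a temperature to the corresponding allotrope using the transition temperatures quoted above (912 °C, 1394 °C, and the 1538 °C melting point). It is a deliberate simplification that ignores pressure and therefore the ε phase:

```python
def iron_allotrope(temp_c: float) -> str:
    """Stable form of pure iron at roughly 1 atm for a given temperature in Celsius."""
    if temp_c >= 1538:
        return "liquid"
    if temp_c >= 1394:
        return "delta iron (bcc)"
    if temp_c >= 912:
        return "gamma iron / austenite (fcc)"
    return "alpha iron (bcc)"

for t in (25, 1000, 1450, 1600):
    print(f"{t} °C -> {iron_allotrope(t)}")
```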
Some controversial experimental evidence exists for a stable β phase at pressures above 50 GPa and temperatures of at least 1500 K. It is supposed to have an orthorhombic or a double hcp structure. (Confusingly, the term "β-iron" is sometimes also used to refer to α-iron above its Curie point, when it changes from being ferromagnetic to paramagnetic, even though its crystal structure has not changed.) The Earth's inner core is generally presumed to consist of an iron-nickel alloy with ε (or β) structure. Melting and boiling points The melting and boiling points of iron, along with its enthalpy of atomization, are lower than those of the earlier 3d elements from scandium to chromium, showing the lessened contribution of the 3d electrons to metallic bonding as they are attracted more and more into the inert core by the nucleus; however, they are higher than the values for the previous element manganese because that element has a half-filled 3d sub-shell and consequently its d-electrons are not easily delocalized. This same trend appears for ruthenium but not osmium. The melting point of iron is experimentally well defined for pressures less than 50 GPa. For greater pressures, published data (as of 2007) still varies by tens of gigapascals and over a thousand kelvin. Magnetic properties Below its Curie point of , α-iron changes from paramagnetic to ferromagnetic: the spins of the two unpaired electrons in each atom generally align with the spins of its neighbors, creating an overall magnetic field. This happens because the orbitals of those two electrons (dz2 and dx2 − y2) do not point toward neighboring atoms in the lattice, and therefore are not involved in metallic bonding. In the absence of an external source of magnetic field, the atoms get spontaneously partitioned into magnetic domains, about 10 micrometers across, such that the atoms in each domain have parallel spins, but some domains have other orientations. Thus a macroscopic piece of iron will have a nearly zero overall magnetic field. Application of an external magnetic field causes the domains that are magnetized in the same general direction to grow at the expense of adjacent ones that point in other directions, reinforcing the external field. This effect is exploited in devices that need to channel magnetic fields to fulfill design function, such as electrical transformers, magnetic recording heads, and electric motors. Impurities, lattice defects, or grain and particle boundaries can "pin" the domains in the new positions, so that the effect persists even after the external field is removed – thus turning the iron object into a (permanent) magnet. Similar behavior is exhibited by some iron compounds, such as the ferrites including the mineral magnetite, a crystalline form of the mixed iron(II,III) oxide (although the atomic-scale mechanism, ferrimagnetism, is somewhat different). Pieces of magnetite with natural permanent magnetization (lodestones) provided the earliest compasses for navigation. Particles of magnetite were extensively used in magnetic recording media such as core memories, magnetic tapes, floppies, and disks, until they were replaced by cobalt-based materials. Isotopes Iron has four stable isotopes: 54Fe (5.845% of natural iron), 56Fe (91.754%), 57Fe (2.119%) and 58Fe (0.282%). Twenty-four artificial isotopes have also been created. Of these stable isotopes, only 57Fe has a nuclear spin (−). 
The nuclide 54Fe theoretically can undergo double electron capture to 54Cr, but the process has never been observed and only a lower limit on the half-life of 4.4×10^20 years has been established. 60Fe is an extinct radionuclide of long half-life (2.6 million years). It is not found on Earth, but its ultimate decay product is its granddaughter, the stable nuclide 60Ni. Much of the past work on the isotopic composition of iron has focused on the nucleosynthesis of 60Fe through studies of meteorites and ore formation. In the last decade, advances in mass spectrometry have allowed the detection and quantification of minute, naturally occurring variations in the ratios of the stable isotopes of iron. Much of this work is driven by the Earth and planetary science communities, although applications to biological and industrial systems are emerging. In phases of the meteorites Semarkona and Chervony Kut, a correlation between the concentration of 60Ni, the granddaughter of 60Fe, and the abundance of the stable iron isotopes provided evidence for the existence of 60Fe at the time of formation of the Solar System. Possibly the energy released by the decay of 60Fe, along with that released by 26Al, contributed to the remelting and differentiation of asteroids after their formation 4.6 billion years ago. The abundance of 60Ni present in extraterrestrial material may bring further insight into the origin and early history of the Solar System. The most abundant iron isotope, 56Fe, is of particular interest to nuclear scientists because it represents the most common endpoint of nucleosynthesis. Since 56Ni (14 alpha particles) is easily produced from lighter nuclei in the alpha process in nuclear reactions in supernovae (see silicon burning process), it is the endpoint of fusion chains inside extremely massive stars. Although more alpha particles could in principle be added, the sequence effectively ends at 56Ni because conditions in stellar interiors cause the competition between photodisintegration and the alpha process to favor photodisintegration around 56Ni. This 56Ni, which has a half-life of about 6 days, is created in quantity in these stars, but soon decays by two successive positron emissions within the supernova remnant gas cloud, first to radioactive 56Co, and then to stable 56Fe. As such, iron is the most abundant element in the core of red giants, and is the most abundant metal in iron meteorites and in the dense metal cores of planets such as Earth. It is also very common in the universe, relative to other stable metals of approximately the same atomic weight. Iron is the sixth most abundant element in the universe, and the most common refractory element. Although a further tiny energy gain could be extracted by synthesizing 62Ni, which has a marginally higher binding energy than 56Fe, conditions in stars are unsuitable for this process. Element production in supernovae greatly favors iron over nickel, and in any case, 56Fe still has a lower mass per nucleon than 62Ni due to its higher fraction of lighter protons. Hence, elements heavier than iron require a supernova for their formation, involving rapid neutron capture by starting 56Fe nuclei. In the far future of the universe, assuming that proton decay does not occur, cold fusion occurring via quantum tunnelling would cause the light nuclei in ordinary matter to fuse into 56Fe nuclei. 
Fission and alpha-particle emission would then make heavy nuclei decay into iron, converting all stellar-mass objects to cold spheres of pure iron. Origin and occurrence in nature Cosmogenesis Iron's abundance in rocky planets like Earth is due to its abundant production during the runaway fusion and explosion of type Ia supernovae, which scatters the iron into space. Metallic iron Metallic or native iron is rarely found on the surface of the Earth because it tends to oxidize. However, both the Earth's inner and outer core, which together account for 35% of the mass of the whole Earth, are believed to consist largely of an iron alloy, possibly with nickel. Electric currents in the liquid outer core are believed to be the origin of the Earth's magnetic field. The other terrestrial planets (Mercury, Venus, and Mars) as well as the Moon are believed to have a metallic core consisting mostly of iron. The M-type asteroids are also believed to be partly or mostly made of metallic iron alloy. The rare iron meteorites are the main form of natural metallic iron on the Earth's surface. Items made of cold-worked meteoritic iron have been found in various archaeological sites dating from a time when iron smelting had not yet been developed; and the Inuit in Greenland have been reported to use iron from the Cape York meteorite for tools and hunting weapons. About 1 in 20 meteorites consist of the unique iron-nickel minerals taenite (35–80% iron) and kamacite (90–95% iron). Native iron is also rarely found in basalts that have formed from magmas that have come into contact with carbon-rich sedimentary rocks, which have reduced the oxygen fugacity sufficiently for iron to crystallize. This is known as telluric iron and is described from a few localities, such as Disko Island in West Greenland, Yakutia in Russia and Bühl in Germany. Mantle minerals Ferropericlase , a solid solution of periclase (MgO) and wüstite (FeO), makes up about 20% of the volume of the lower mantle of the Earth, which makes it the second most abundant mineral phase in that region after silicate perovskite ; it also is the major host for iron in the lower mantle. At the bottom of the transition zone of the mantle, the reaction γ- transforms γ-olivine into a mixture of silicate perovskite and ferropericlase and vice versa. In the literature, this mineral phase of the lower mantle is also often called magnesiowüstite. Silicate perovskite may form up to 93% of the lower mantle, and the magnesium iron form, , is considered to be the most abundant mineral in the Earth, making up 38% of its volume. Earth's crust While iron is the most abundant element on Earth, most of this iron is concentrated in the inner and outer cores. The fraction of iron that is in Earth's crust only amounts to about 5% of the overall mass of the crust and is thus only the fourth most abundant element in that layer (after oxygen, silicon, and aluminium). Most of the iron in the crust is combined with various other elements to form many iron minerals. An important class is the iron oxide minerals such as hematite (Fe2O3), magnetite (Fe3O4), and siderite (FeCO3), which are the major ores of iron. Many igneous rocks also contain the sulfide minerals pyrrhotite and pentlandite. During weathering, iron tends to leach from sulfide deposits as the sulfate and from silicate deposits as the bicarbonate. Both of these are oxidized in aqueous solution and precipitate in even mildly elevated pH as iron(III) oxide. 
Large deposits of iron are banded iron formations, a type of rock consisting of repeated thin layers of iron oxides alternating with bands of iron-poor shale and chert. The banded iron formations were laid down in the time between about 3,700 million and 1,800 million years ago. Materials containing finely ground iron(III) oxides or oxide-hydroxides, such as ochre, have been used as yellow, red, and brown pigments since pre-historical times. They contribute as well to the color of various rocks and clays, including entire geological formations like the Painted Hills in Oregon and the Buntsandstein ("colored sandstone", British Bunter). Through Eisensandstein (a Jurassic 'iron sandstone', e.g. from Donzdorf in Germany) and Bath stone in the UK, iron compounds are responsible for the yellowish color of many historical buildings and sculptures. The proverbial red color of the surface of Mars is derived from an iron oxide-rich regolith. Significant amounts of iron occur in the iron sulfide mineral pyrite (FeS2), but it is difficult to extract iron from it and it is therefore not exploited. In fact, iron is so common that production generally focuses only on ores with very high quantities of it. According to the International Resource Panel's Metal Stocks in Society report, the global stock of iron in use in society is 2,200 kg per capita. More-developed countries differ in this respect from less-developed countries (7,000–14,000 vs 2,000 kg per capita). Oceans Ocean science has demonstrated the role of iron in the ancient seas in both marine biota and climate. Chemistry and compounds Iron shows the characteristic chemical properties of the transition metals, namely the ability to form variable oxidation states differing by steps of one and a very large coordination and organometallic chemistry: indeed, it was the discovery of an iron compound, ferrocene, that revolutionized the latter field in the 1950s. Iron is sometimes considered a prototype for the entire block of transition metals, due to its abundance and the immense role it has played in the technological progress of humanity. Its 26 electrons are arranged in the configuration [Ar]3d64s2, of which the 3d and 4s electrons are relatively close in energy, and thus a number of electrons can be ionized. Iron forms compounds mainly in the oxidation states +2 (iron(II), "ferrous") and +3 (iron(III), "ferric"). Iron also occurs in higher oxidation states, e.g., the purple potassium ferrate (K2FeO4), which contains iron in its +6 oxidation state. The anion [FeO4]− with iron in its +7 oxidation state, along with an iron(V)-peroxo isomer, has been detected by infrared spectroscopy at 4 K after cocondensation of laser-ablated Fe atoms with a mixture of O2/Ar. Iron(IV) is a common intermediate in many biochemical oxidation reactions. Numerous organoiron compounds contain formal oxidation states of +1, 0, −1, or even −2. The oxidation states and other bonding properties are often assessed using the technique of Mössbauer spectroscopy. Many mixed valence compounds contain both iron(II) and iron(III) centers, such as magnetite and Prussian blue (Fe4[Fe(CN)6]3). The latter is used as the traditional "blue" in blueprints. Iron is the first of the transition metals that cannot reach its group oxidation state of +8, although its heavier congeners ruthenium and osmium can, with ruthenium having more difficulty than osmium. 
Ruthenium exhibits an aqueous cationic chemistry in its low oxidation states similar to that of iron, but osmium does not, favoring high oxidation states in which it forms anionic complexes. In the second half of the 3d transition series, vertical similarities down the groups compete with the horizontal similarities of iron with its neighbors cobalt and nickel in the periodic table, which are also ferromagnetic at room temperature and share similar chemistry. As such, iron, cobalt, and nickel are sometimes grouped together as the iron triad. Unlike many other metals, iron does not form amalgams with mercury. As a result, mercury is traded in standardized 76 pound flasks (34 kg) made of iron. Iron is by far the most reactive element in its group; it is pyrophoric when finely divided and dissolves easily in dilute acids, giving Fe2+. However, it does not react with concentrated nitric acid and other oxidizing acids due to the formation of an impervious oxide layer, which can nevertheless react with hydrochloric acid. High-purity iron, called electrolytic iron, is considered to be resistant to rust, due to its oxide layer. Binary compounds Oxides and sulfides Iron forms various oxide and hydroxide compounds; the most common are iron(II,III) oxide (Fe3O4), and iron(III) oxide (Fe2O3). Iron(II) oxide also exists, though it is unstable at room temperature. Despite their names, they are actually all non-stoichiometric compounds whose compositions may vary. These oxides are the principal ores for the production of iron (see bloomery and blast furnace). They are also used in the production of ferrites, useful magnetic storage media in computers, and pigments. The best known sulfide is iron pyrite (FeS2), also known as fool's gold owing to its golden luster. It is not an iron(IV) compound, but is actually an iron(II) polysulfide containing Fe2+ and ions in a distorted sodium chloride structure. Halides The binary ferrous and ferric halides are well-known. The ferrous halides typically arise from treating iron metal with the corresponding hydrohalic acid to give the corresponding hydrated salts. Fe + 2 HX → FeX2 + H2 (X = F, Cl, Br, I) Iron reacts with fluorine, chlorine, and bromine to give the corresponding ferric halides, ferric chloride being the most common. 2 Fe + 3 X2 → 2 FeX3 (X = F, Cl, Br) Ferric iodide is an exception, being thermodynamically unstable due to the oxidizing power of Fe3+ and the high reducing power of I−: 2 I− + 2 Fe3+ → I2 + 2 Fe2+ (E0 = +0.23 V) Ferric iodide, a black solid, is not stable in ordinary conditions, but can be prepared through the reaction of iron pentacarbonyl with iodine and carbon monoxide in the presence of hexane and light at the temperature of −20 °C, with oxygen and water excluded. Complexes of ferric iodide with some soft bases are known to be stable compounds. 
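The +0.23 V quoted above for the reduction of Fe3+ by iodide can be recovered from tabulated standard electrode potentials; the value for the iodine couple (about +0.54 V) is a standard literature figure supplied here only for illustration and is not given in the text:

\[
E^\circ_{\text{cell}} = E^\circ(\mathrm{Fe^{3+}/Fe^{2+}}) - E^\circ(\mathrm{I_2/2\,I^-}) \approx (+0.77\ \mathrm{V}) - (+0.54\ \mathrm{V}) = +0.23\ \mathrm{V}
\]

Because the cell potential is positive, the reaction 2 I− + 2 Fe3+ → I2 + 2 Fe2+ is spontaneous under standard conditions, which is why ferric iodide is thermodynamically unstable with respect to iron(II) iodide and iodine.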
Solution chemistry The standard reduction potentials in acidic aqueous solution for some common iron ions are given below:
[Fe(H2O)6]2+ + 2 e− ⇌ Fe, E0 = −0.447 V
[Fe(H2O)6]3+ + e− ⇌ [Fe(H2O)6]2+, E0 = +0.77 V
FeO42− + 8 H3O+ + 3 e− ⇌ [Fe(H2O)6]3+ + 6 H2O, E0 = +2.20 V
The red-purple tetrahedral ferrate(VI) anion is such a strong oxidizing agent that it oxidizes ammonia to nitrogen (N2) and water to oxygen: 4 FeO42− + 34 H2O → 4 [Fe(H2O)6]3+ + 20 OH− + 3 O2. The pale-violet hexaquo complex [Fe(H2O)6]3+ is an acid such that above pH 0 it is fully hydrolyzed:
[Fe(H2O)6]3+ ⇌ [Fe(H2O)5(OH)]2+ + H+, K = 10−3.05 mol dm−3
[Fe(H2O)5(OH)]2+ ⇌ [Fe(H2O)4(OH)2]+ + H+, K = 10−3.26 mol dm−3
2 [Fe(H2O)6]3+ ⇌ [Fe(H2O)4(OH)]2(4+) + 2 H+ + 2 H2O, K = 10−2.91 mol dm−3
As pH rises above 0 the above yellow hydrolyzed species form and as it rises above 2–3, reddish-brown hydrous iron(III) oxide precipitates out of solution. Although Fe3+ has a d5 configuration, its absorption spectrum is not like that of Mn2+ with its weak, spin-forbidden d–d bands, because Fe3+ has higher positive charge and is more polarizing, lowering the energy of its ligand-to-metal charge transfer absorptions. Thus, all the above complexes are rather strongly colored, with the single exception of the hexaquo ion – and even that has a spectrum dominated by charge transfer in the near ultraviolet region. On the other hand, the pale green iron(II) hexaquo ion does not undergo appreciable hydrolysis. Carbon dioxide is not evolved when carbonate anions are added, which instead results in white iron(II) carbonate being precipitated out. In excess carbon dioxide this forms the slightly soluble bicarbonate, which occurs commonly in groundwater, but it oxidises quickly in air to form iron(III) oxide that accounts for the brown deposits present in a sizeable number of streams. Coordination compounds Due to its electronic structure, iron has a very large coordination and organometallic chemistry. Many coordination compounds of iron are known. A typical six-coordinate anion is hexachloroferrate(III), [FeCl6]3−, found in the mixed salt tetrakis(methylammonium) hexachloroferrate(III) chloride. Complexes with multiple bidentate ligands have geometric isomers. For example, the trans-chlorohydridobis(bis-1,2-(diphenylphosphino)ethane)iron(II) complex is used as a starting material for compounds with the Fe(dppe)2 moiety. The ferrioxalate ion with three oxalate ligands displays helical chirality with its two non-superposable geometries labelled Λ (lambda) for the left-handed screw axis and Δ (delta) for the right-handed screw axis, in line with IUPAC conventions. Potassium ferrioxalate is used in chemical actinometry and along with its sodium salt undergoes photoreduction applied in old-style photographic processes. The dihydrate of iron(II) oxalate has a polymeric structure with co-planar oxalate ions bridging between iron centres, with the water of crystallisation forming the caps of each octahedron. Iron(III) complexes are quite similar to those of chromium(III) with the exception of iron(III)'s preference for O-donor instead of N-donor ligands. The latter tend to be rather more unstable than iron(II) complexes and often dissociate in water. Many Fe–O complexes show intense colors and are used as tests for phenols or enols. 
For example, in the ferric chloride test, used to determine the presence of phenols, iron(III) chloride reacts with a phenol to form a deep violet complex: 3 ArOH + FeCl3 → Fe(OAr)3 + 3 HCl (Ar = aryl) Among the halide and pseudohalide complexes, fluoro complexes of iron(III) are the most stable, with the colorless [FeF5(H2O)]2− being the most stable in aqueous solution. Chloro complexes are less stable and favor tetrahedral coordination as in [FeCl4]−; [FeBr4]− and [FeI4]− are reduced easily to iron(II). Thiocyanate is a common test for the presence of iron(III) as it forms the blood-red [Fe(SCN)(H2O)5]2+. Like manganese(II), most iron(III) complexes are high-spin, the exceptions being those with ligands that are high in the spectrochemical series such as cyanide. An example of a low-spin iron(III) complex is [Fe(CN)6]3−. Iron shows a great variety of electronic spin states, including every possible spin quantum number value for a d-block element from 0 (diamagnetic) to (5 unpaired electrons). This value is always half the number of unpaired electrons. Complexes with zero to two unpaired electrons are considered low-spin and those with four or five are considered high-spin. Iron(II) complexes are less stable than iron(III) complexes but the preference for O-donor ligands is less marked, so that for example is known while is not. They have a tendency to be oxidized to iron(III) but this can be moderated by low pH and the specific ligands used. Organometallic compounds Organoiron chemistry is the study of organometallic compounds of iron, where carbon atoms are covalently bound to the metal atom. They are many and varied, including cyanide complexes, carbonyl complexes, sandwich and half-sandwich compounds. Prussian blue or "ferric ferrocyanide", Fe4[Fe(CN)6]3, is an old and well-known iron-cyanide complex, extensively used as pigment and in several other applications. Its formation can be used as a simple wet chemistry test to distinguish between aqueous solutions of Fe2+ and Fe3+ as they react (respectively) with potassium ferricyanide and potassium ferrocyanide to form Prussian blue. Another old example of an organoiron compound is iron pentacarbonyl, Fe(CO)5, in which a neutral iron atom is bound to the carbon atoms of five carbon monoxide molecules. The compound can be used to make carbonyl iron powder, a highly reactive form of metallic iron. Thermolysis of iron pentacarbonyl gives triiron dodecacarbonyl, , a complex with a cluster of three iron atoms at its core. Collman's reagent, disodium tetracarbonylferrate, is a useful reagent for organic chemistry; it contains iron in the −2 oxidation state. Cyclopentadienyliron dicarbonyl dimer contains iron in the rare +1 oxidation state. A landmark in this field was the discovery in 1951 of the remarkably stable sandwich compound ferrocene , by Pauson and Kealy and independently by Miller and colleagues, whose surprising molecular structure was determined only a year later by Woodward and Wilkinson and Fischer. Ferrocene is still one of the most important tools and models in this class. Iron-centered organometallic species are used as catalysts. The Knölker complex, for example, is a transfer hydrogenation catalyst for ketones. Industrial uses The iron compounds produced on the largest scale in industry are iron(II) sulfate (FeSO4·7H2O) and iron(III) chloride (FeCl3). The former is one of the most readily available sources of iron(II), but is less stable to aerial oxidation than Mohr's salt (). 
Iron(II) compounds tend to be oxidized to iron(III) compounds in the air. History Development of iron metallurgy Iron is one of the elements undoubtedly known to the ancient world. It has been worked, or wrought, for millennia. However, iron artefacts of great age are much rarer than objects made of gold or silver due to the ease with which iron corrodes. The technology developed slowly, and even after the discovery of smelting it took many centuries for iron to replace bronze as the metal of choice for tools and weapons. Meteoritic iron Beads made from meteoric iron in 3500 BC or earlier were found in Gerzeh, Egypt by G. A. Wainwright. The beads contain 7.5% nickel, which is a signature of meteoric origin since iron found in the Earth's crust generally has only minuscule nickel impurities. Meteoric iron was highly regarded due to its origin in the heavens and was often used to forge weapons and tools. For example, a dagger made of meteoric iron was found in the tomb of Tutankhamun, containing similar proportions of iron, cobalt, and nickel to a meteorite discovered in the area, deposited by an ancient meteor shower. Items that were likely made of iron by Egyptians date from 3000 to 2500 BC. Meteoritic iron is comparably soft and ductile and easily cold forged but may get brittle when heated because of the nickel content. Wrought iron The first iron production started in the Middle Bronze Age, but it took several centuries before iron displaced bronze. Samples of smelted iron from Asmar, Mesopotamia and Tall Chagar Bazaar in northern Syria were made sometime between 3000 and 2700 BC. The Hittites established an empire in north-central Anatolia around 1600 BC. They appear to be the first to understand the production of iron from its ores and regard it highly in their society. The Hittites began to smelt iron between 1500 and 1200 BC and the practice spread to the rest of the Near East after their empire fell in 1180 BC. The subsequent period is called the Iron Age. Artifacts of smelted iron are found in India dating from 1800 to 1200 BC, and in the Levant from about 1500 BC (suggesting smelting in Anatolia or the Caucasus). Alleged references (compare history of metallurgy in South Asia) to iron in the Indian Vedas have been used for claims of a very early usage of iron in India respectively to date the texts as such. The rigveda term ayas (metal) refers to copper, while iron which is called as śyāma ayas, literally "black copper", first is mentioned in the post-rigvedic Atharvaveda. Some archaeological evidence suggests iron was smelted in Zimbabwe and southeast Africa as early as the eighth century BC. Iron working was introduced to Greece in the late 11th century BC, from which it spread quickly throughout Europe. The spread of ironworking in Central and Western Europe is associated with Celtic expansion. According to Pliny the Elder, iron use was common in the Roman era. In the lands of what is now considered China, iron appears approximately 700–500 BC. Iron smelting may have been introduced into China through Central Asia. The earliest evidence of the use of a blast furnace in China dates to the 1st century AD, and cupola furnaces were used as early as the Warring States period (403–221 BC). Usage of the blast and cupola furnace remained widespread during the Tang and Song dynasties. During the Industrial Revolution in Britain, Henry Cort began refining iron from pig iron to wrought iron (or bar iron) using innovative production systems. 
In 1783 he patented the puddling process for refining iron ore. It was later improved by others, including Joseph Hall. Cast iron Cast iron was first produced in China during 5th century BC, but was hardly in Europe until the medieval period. The earliest cast iron artifacts were discovered by archaeologists in what is now modern Luhe County, Jiangsu in China. Cast iron was used in ancient China for warfare, agriculture, and architecture. During the medieval period, means were found in Europe of producing wrought iron from cast iron (in this context known as pig iron) using finery forges. For all these processes, charcoal was required as fuel. Medieval blast furnaces were about tall and made of fireproof brick; forced air was usually provided by hand-operated bellows. Modern blast furnaces have grown much bigger, with hearths fourteen meters in diameter that allow them to produce thousands of tons of iron each day, but essentially operate in much the same way as they did during medieval times. In 1709, Abraham Darby I established a coke-fired blast furnace to produce cast iron, replacing charcoal, although continuing to use blast furnaces. The ensuing availability of inexpensive iron was one of the factors leading to the Industrial Revolution. Toward the end of the 18th century, cast iron began to replace wrought iron for certain purposes, because it was cheaper. Carbon content in iron was not implicated as the reason for the differences in properties of wrought iron, cast iron, and steel until the 18th century. Since iron was becoming cheaper and more plentiful, it also became a major structural material following the building of the innovative first iron bridge in 1778. This bridge still stands today as a monument to the role iron played in the Industrial Revolution. Following this, iron was used in rails, boats, ships, aqueducts, and buildings, as well as in iron cylinders in steam engines. Railways have been central to the formation of modernity and ideas of progress and various languages refer to railways as iron road (e.g. French , German , Turkish , Russian , Chinese, Japanese, and Korean 鐵道, Vietnamese ). Steel Steel (with smaller carbon content than pig iron but more than wrought iron) was first produced in antiquity by using a bloomery. Blacksmiths in Luristan in western Persia were making good steel by 1000 BC. Then improved versions, Wootz steel by India and Damascus steel were developed around 300 BC and AD 500 respectively. These methods were specialized, and so steel did not become a major commodity until the 1850s. New methods of producing it by carburizing bars of iron in the cementation process were devised in the 17th century. In the Industrial Revolution, new methods of producing bar iron without charcoal were devised and these were later applied to produce steel. In the late 1850s, Henry Bessemer invented a new steelmaking process, involving blowing air through molten pig iron, to produce mild steel. This made steel much more economical, thereby leading to wrought iron no longer being produced in large quantities. Foundations of modern chemistry In 1774, Antoine Lavoisier used the reaction of water steam with metallic iron inside an incandescent iron tube to produce hydrogen in his experiments leading to the demonstration of the conservation of mass, which was instrumental in changing chemistry from a qualitative science to a quantitative one. Symbolic role Iron plays a certain role in mythology and has found various usage as a metaphor and in folklore. 
The Greek poet Hesiod's Works and Days (lines 109–201) lists different ages of man named after metals like gold, silver, bronze and iron to account for successive ages of humanity. The Iron Age was closely associated with Rome, and in Ovid's Metamorphoses the age of iron likewise appears as the last and worst of the ages of man. An example of the importance of iron's symbolic role may be found in the German Campaign of 1813. Frederick William III then commissioned the first Iron Cross as a military decoration. Berlin iron jewellery reached its peak production between 1813 and 1815, when the Prussian royal family urged citizens to donate gold and silver jewellery for military funding. The inscription Ich gab Gold für Eisen (I gave gold for iron) was used as well in later war efforts. Laboratory routes For a few limited purposes when it is needed, pure iron is produced in the laboratory in small quantities by reducing the pure oxide or hydroxide with hydrogen, or forming iron pentacarbonyl and heating it to 250 °C so that it decomposes to form pure iron powder. Another method is electrolysis of ferrous chloride onto an iron cathode. Main industrial route Nowadays, the industrial production of iron or steel consists of two main stages. In the first stage, iron ore is reduced with coke in a blast furnace, and the molten metal is separated from gross impurities such as silicate minerals. This stage yields an alloy – pig iron – that contains relatively large amounts of carbon. In the second stage, the amount of carbon in the pig iron is lowered by oxidation to yield wrought iron, steel, or cast iron. Other metals can be added at this stage to form alloy steels. Blast furnace processing The blast furnace is loaded with iron ores, usually hematite (Fe2O3) or magnetite (Fe3O4), along with coke (coal that has been separately baked to remove volatile components) and flux (limestone or dolomite). "Blasts" of air pre-heated to 900 °C (sometimes with oxygen enrichment) are blown through the mixture, in sufficient amount to turn the carbon into carbon monoxide: 2 C + O2 → 2 CO. This reaction raises the temperature to about 2000 °C. The carbon monoxide reduces the iron ore to metallic iron: Fe2O3 + 3 CO → 2 Fe + 3 CO2. Some iron in the high-temperature lower region of the furnace reacts directly with the coke: 2 Fe2O3 + 3 C → 4 Fe + 3 CO2. The flux removes silicaceous minerals in the ore, which would otherwise clog the furnace: the heat of the furnace decomposes the carbonates to calcium oxide (CaCO3 → CaO + CO2), which reacts with any excess silica to form a slag composed of calcium silicate (CaO + SiO2 → CaSiO3) or other products. At the furnace's temperature, the metal and the slag are both molten. They collect at the bottom as two immiscible liquid layers (with the slag on top), that are then easily separated. The slag can be used as a material in road construction or to improve mineral-poor soils for agriculture. Because the reduction of the ore with coke releases large amounts of carbon dioxide, steelmaking remains one of the largest industrial contributors of CO2 emissions in the world. Steelmaking The pig iron produced by the blast furnace process contains up to 4–5% carbon (by mass), with small amounts of other impurities like sulfur, magnesium, phosphorus, and manganese. This high level of carbon makes it relatively weak and brittle. Reducing the amount of carbon to 0.002–2.1% produces steel, which may be up to 1000 times harder than pure iron. A great variety of steel articles can then be made by cold working, hot rolling, forging, machining, etc. Removing the impurities from pig iron, but leaving 2–4% carbon, results in cast iron, which is cast by foundries into articles such as stoves, pipes, radiators, lamp-posts, and rails. 
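To make the carbon balance of the reduction step above concrete, the rough Python sketch below estimates the ore, carbon, and carbon dioxide involved per tonne of iron, using the simplified overall reaction 2 Fe2O3 + 3 C → 4 Fe + 3 CO2. It deliberately ignores the flux, the coke burned for heat, and ore impurities, so real furnace figures are considerably higher; it is an illustration, not plant data:

```python
# Molar masses in g/mol
M_FE, M_O, M_C = 55.845, 15.999, 12.011
M_FE2O3 = 2 * M_FE + 3 * M_O
M_CO2 = M_C + 2 * M_O

# Simplified overall reduction: 2 Fe2O3 + 3 C -> 4 Fe + 3 CO2
fe_out_kg = 1000.0                        # one tonne of iron
mol_fe = fe_out_kg * 1000 / M_FE          # moles of Fe produced
ore_kg = (mol_fe / 4) * 2 * M_FE2O3 / 1000
carbon_kg = (mol_fe / 4) * 3 * M_C / 1000
co2_kg = (mol_fe / 4) * 3 * M_CO2 / 1000

print(f"per tonne Fe: ~{ore_kg:.0f} kg Fe2O3, ~{carbon_kg:.0f} kg C, ~{co2_kg:.0f} kg CO2")
# -> roughly 1430 kg Fe2O3, 160 kg C and 590 kg CO2 as the stoichiometric minimum
```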
Steel products often undergo various heat treatments after they are forged to shape. Annealing consists of heating them to 700–800 °C for several hours and then cooling them gradually. It makes the steel softer and more workable. Direct iron reduction Owing to environmental concerns, alternative methods of processing iron have been developed. "Direct iron reduction" reduces iron ore to a ferrous lump called "sponge" iron or "direct" iron that is suitable for steelmaking. Two main reactions comprise the direct reduction process: Natural gas is partially oxidized (with heat and a catalyst): 2 CH4 + O2 → 2 CO + 4 H2. Iron ore is then treated with these gases in a furnace, producing solid sponge iron: Fe2O3 + CO + 2 H2 → 2 Fe + CO2 + 2 H2O. Silica is removed by adding a limestone flux as described above. Thermite process Ignition of a mixture of aluminium powder and iron oxide yields metallic iron via the thermite reaction: Fe2O3 + 2 Al → 2 Fe + Al2O3. Alternatively, pig iron may be made into steel (with up to about 2% carbon) or wrought iron (commercially pure iron). Various processes have been used for this, including finery forges, puddling furnaces, Bessemer converters, open hearth furnaces, basic oxygen furnaces, and electric arc furnaces. In all cases, the objective is to oxidize some or all of the carbon, together with other impurities. On the other hand, other metals may be added to make alloy steels. Molten oxide electrolysis Molten oxide electrolysis (MOE) uses electrolysis of molten iron oxide to yield metallic iron. It has been studied in laboratory-scale experiments and is proposed as a method for industrial iron production with no direct emissions of carbon dioxide. It uses a liquid iron cathode, an anode formed from an alloy of chromium, aluminium and iron, and an electrolyte consisting of a mixture of molten metal oxides into which iron ore is dissolved. The current keeps the electrolyte molten and reduces the iron oxide. Oxygen gas is produced in addition to liquid iron. The only carbon dioxide emissions come from any fossil fuel-generated electricity used to heat and reduce the metal. Applications As structural material Iron is the most widely used of all the metals, accounting for over 90% of worldwide metal production. Its low cost and high strength often make it the material of choice to withstand stress or transmit forces, such as in the construction of machinery and machine tools, rails, automobiles, ship hulls, concrete reinforcing bars, and the load-carrying framework of buildings. Since pure iron is quite soft, it is most commonly combined with alloying elements to make steel. Mechanical properties The mechanical properties of iron and its alloys are extremely relevant to their structural applications. Those properties can be evaluated in various ways, including the Brinell test, the Rockwell test and the Vickers hardness test. The properties of pure iron are often used to calibrate measurements or to compare tests. However, the mechanical properties of iron are significantly affected by the sample's purity: pure, single crystals of iron are actually softer than aluminium, and the purest industrially produced iron (99.99%) has a hardness of 20–30 Brinell. Pure iron (99.9%–99.999%), in particular so-called electrolytic iron, is produced industrially by electrolytic refining. An increase in the carbon content will cause a significant increase in the hardness and tensile strength of iron. Maximum hardness of 65 Rc is achieved with a 0.6% carbon content, although the alloy has low tensile strength. 
Because of the softness of iron, it is much easier to work with than its heavier congeners ruthenium and osmium. Types of steels and alloys α-Iron is a fairly soft metal that can dissolve only a small concentration of carbon (no more than 0.021% by mass at 910 °C). Austenite (γ-iron) is similarly soft and metallic but can dissolve considerably more carbon (as much as 2.04% by mass at 1146 °C). This form of iron is used in the type of stainless steel used for making cutlery, and hospital and food-service equipment. Commercially available iron is classified based on purity and the abundance of additives. Pig iron has 3.5–4.5% carbon and contains varying amounts of contaminants such as sulfur, silicon and phosphorus. Pig iron is not a saleable product, but rather an intermediate step in the production of cast iron and steel. The reduction of contaminants in pig iron that negatively affect material properties, such as sulfur and phosphorus, yields cast iron containing 2–4% carbon, 1–6% silicon, and small amounts of manganese. Pig iron has a melting point in the range of 1420–1470 K, which is lower than either of its two main components, and makes it the first product to be melted when carbon and iron are heated together. Its mechanical properties vary greatly and depend on the form the carbon takes in the alloy. "White" cast irons contain their carbon in the form of cementite, or iron carbide (Fe3C). This hard, brittle compound dominates the mechanical properties of white cast irons, rendering them hard, but unresistant to shock. The broken surface of a white cast iron is full of fine facets of the broken iron carbide, a very pale, silvery, shiny material, hence the appellation. Cooling a mixture of iron with 0.8% carbon slowly below 723 °C to room temperature results in separate, alternating layers of cementite and α-iron, which is soft and malleable and is called pearlite for its appearance. Rapid cooling, on the other hand, does not allow time for this separation and creates hard and brittle martensite. The steel can then be tempered by reheating to a temperature in between, changing the proportions of pearlite and martensite. The end product below 0.8% carbon content is a pearlite-αFe mixture, and that above 0.8% carbon content is a pearlite-cementite mixture. In gray iron the carbon exists as separate, fine flakes of graphite, and also renders the material brittle due to the sharp edged flakes of graphite that produce stress concentration sites within the material. A newer variant of gray iron, referred to as ductile iron, is specially treated with trace amounts of magnesium to alter the shape of graphite to spheroids, or nodules, reducing the stress concentrations and vastly increasing the toughness and strength of the material. Wrought iron contains less than 0.25% carbon but large amounts of slag that give it a fibrous characteristic. Wrought iron is more corrosion resistant than steel. It has been almost completely replaced by mild steel, which corrodes more readily than wrought iron, but is cheaper and more widely available. Carbon steel contains 2.0% carbon or less, with small amounts of manganese, sulfur, phosphorus, and silicon. Alloy steels contain varying amounts of carbon as well as other metals, such as chromium, vanadium, molybdenum, nickel, tungsten, etc. Their alloy content raises their cost, and so they are usually only employed for specialist uses. One common alloy steel, though, is stainless steel. 
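The carbon-content ranges given above for the common commercial grades can be summarised in a small lookup. The Python sketch below is a deliberate simplification (real classification also depends on silicon, slag content, and processing, and the cast-iron and pig-iron ranges overlap); the thresholds are simply the figures quoted in the preceding paragraphs:

```python
def classify_by_carbon(pct_c: float) -> str:
    """Very rough classification of an iron alloy by carbon content (mass %)."""
    if pct_c < 0.25:
        return "wrought iron (with slag) or low-carbon steel"
    if pct_c <= 2.0:
        return "carbon steel"
    if pct_c <= 4.5:
        return "cast iron or pig iron (2-4% cast, 3.5-4.5% pig)"
    return "outside the usual commercial range"

for c in (0.1, 0.8, 3.0, 4.2):
    print(f"{c}% C -> {classify_by_carbon(c)}")
```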
Recent developments in ferrous metallurgy have produced a growing range of microalloyed steels, also termed 'HSLA' or high-strength, low alloy steels, containing tiny additions to produce high strengths and often spectacular toughness at minimal cost. Alloys with high purity elemental makeups (such as alloys of electrolytic iron) have specifically enhanced properties such as ductility, tensile strength, toughness, fatigue strength, heat resistance, and corrosion resistance. Apart from traditional applications, iron is also used for protection from ionizing radiation. Although it is lighter than another traditional protection material, lead, it is much stronger mechanically. The main disadvantage of iron and steel is that pure iron, and most of its alloys, suffer badly from rust if not protected in some way, a cost amounting to over 1% of the world's economy. Painting, galvanization, passivation, plastic coating and bluing are all used to protect iron from rust by excluding water and oxygen or by cathodic protection. The mechanism of the rusting of iron is as follows: Cathode: 3 O2 + 6 H2O + 12 e− → 12 OH− Anode: 4 Fe → 4 Fe2+ + 8 e−; 4 Fe2+ → 4 Fe3+ + 4 e− Overall: 4 Fe + 3 O2 + 6 H2O → 4 Fe3+ + 12 OH− → 4 Fe(OH)3 or 4 FeO(OH) + 4 H2O The electrolyte is usually iron(II) sulfate in urban areas (formed when atmospheric sulfur dioxide attacks iron), and salt particles in the atmosphere in seaside areas. Catalysts and reagents Because Fe is inexpensive and nontoxic, much effort has been devoted to the development of Fe-based catalysts and reagents. Iron is however less common as a catalyst in commercial processes than more expensive metals. In biology, Fe-containing enzymes are pervasive. Iron catalysts are traditionally used in the Haber–Bosch process for the production of ammonia and the Fischer–Tropsch process for conversion of carbon monoxide to hydrocarbons for fuels and lubricants. Powdered iron in an acidic medium is used in the Bechamp reduction, the conversion of nitrobenzene to aniline. Iron compounds Iron(III) oxide mixed with aluminium powder can be ignited to create a thermite reaction, used in welding large iron parts (like rails) and purifying ores. Iron(III) oxide and oxyhydroxide are used as reddish and ocher pigments. Iron(III) chloride finds use in water purification and sewage treatment, in the dyeing of cloth, as a coloring agent in paints, as an additive in animal feed, and as an etchant for copper in the manufacture of printed circuit boards. It can also be dissolved in alcohol to form tincture of iron, which is used as a medicine to stop bleeding in canaries. Iron(II) sulfate is used as a precursor to other iron compounds. It is also used to reduce chromate in cement. It is used to fortify foods and treat iron deficiency anemia. Iron(III) sulfate is used in settling minute sewage particles in tank water. Iron(II) chloride is used as a reducing flocculating agent, in the formation of iron complexes and magnetic iron oxides, and as a reducing agent in organic synthesis. Sodium nitroprusside is a drug used as a vasodilator. It is on the World Health Organization's List of Essential Medicines. Biological and pathological role Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. Iron-containing proteins participate in transport, storage and use of oxygen. Iron proteins are involved in electron transfer. 
Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin—a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content. Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron(III). Biochemistry Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores. After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries it in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable complexes. At the bone marrow, transferrin is reduced from Fe3+ to Fe2+ and stored as ferritin to be incorporated into hemoglobin. The most commonly known and studied bioinorganic iron compounds (biological iron molecules) are the heme proteins: examples are hemoglobin, myoglobin, and cytochrome P450. These compounds participate in transporting gases, building enzymes, and transferring electrons. Metalloproteins are a group of proteins with metal ion cofactors. Some examples of iron metalloproteins are ferritin and rubredoxin. Many enzymes vital to life contain iron, such as catalase, lipoxygenases, and IRE-BP. Hemoglobin is an oxygen carrier that occurs in red blood cells and contributes their color, transporting oxygen in the arteries from the lungs to the muscles where it is transferred to myoglobin, which stores it until it is needed for the metabolic oxidation of glucose, generating energy. Here the hemoglobin binds to carbon dioxide, produced when glucose is oxidized, which is transported through the veins by hemoglobin (predominantly as bicarbonate anions) back to the lungs where it is exhaled. In hemoglobin, the iron is in one of four heme groups and has six possible coordination sites; four are occupied by nitrogen atoms in a porphyrin ring, the fifth by an imidazole nitrogen in a histidine residue of one of the protein chains attached to the heme group, and the sixth is reserved for the oxygen molecule it can reversibly bind to. When hemoglobin is not attached to oxygen (and is then called deoxyhemoglobin), the Fe2+ ion at the center of the heme group (in the hydrophobic protein interior) is in a high-spin configuration. It is thus too large to fit inside the porphyrin ring, which bends instead into a dome with the Fe2+ ion about 55 picometers above it. In this configuration, the sixth coordination site reserved for the oxygen is blocked by another histidine residue. When deoxyhemoglobin picks up an oxygen molecule, this histidine residue moves away and returns once the oxygen is securely attached to form a hydrogen bond with it. 
This results in the Fe2+ ion switching to a low-spin configuration, resulting in a 20% decrease in ionic radius so that now it can fit into the porphyrin ring, which becomes planar. Additionally, this hydrogen bonding results in the tilting of the oxygen molecule, resulting in a Fe–O–O bond angle of around 120° that avoids the formation of Fe–O–Fe or Fe–O2–Fe bridges that would lead to electron transfer, the oxidation of Fe2+ to Fe3+, and the destruction of hemoglobin. This results in a movement of all the protein chains that leads to the other subunits of hemoglobin changing shape to a form with larger oxygen affinity. Thus, when deoxyhemoglobin takes up oxygen, its affinity for more oxygen increases, and vice versa. Myoglobin, on the other hand, contains only one heme group and hence this cooperative effect cannot occur. Thus, while hemoglobin is almost saturated with oxygen in the high partial pressures of oxygen found in the lungs, its affinity for oxygen is much lower than that of myoglobin, which oxygenates even at low partial pressures of oxygen found in muscle tissue. As described by the Bohr effect (named after Christian Bohr, the father of Niels Bohr), the oxygen affinity of hemoglobin diminishes in the presence of carbon dioxide. Carbon monoxide and phosphorus trifluoride are poisonous to humans because they bind to hemoglobin similarly to oxygen, but with much more strength, so that oxygen can no longer be transported throughout the body. Hemoglobin bound to carbon monoxide is known as carboxyhemoglobin. This effect also plays a minor role in the toxicity of cyanide, but there the major effect is by far its interference with the proper functioning of the electron transport protein cytochrome a. The cytochrome proteins also involve heme groups and are involved in the metabolic oxidation of glucose by oxygen. The sixth coordination site is then occupied by either another imidazole nitrogen or a methionine sulfur, so that these proteins are largely inert to oxygen—with the exception of cytochrome a, which bonds directly to oxygen and thus is very easily poisoned by cyanide. Here, the electron transfer takes place as the iron remains in low spin but changes between the +2 and +3 oxidation states. Since the reduction potential of each step is slightly greater than the previous one, the energy is released step-by-step and can thus be stored in adenosine triphosphate. Cytochrome a is slightly distinct, as it occurs at the mitochondrial membrane, binds directly to oxygen, and transports protons as well as electrons, as follows: 4 Cyt c2+ + O2 + 8 H+ (inside) → 4 Cyt c3+ + 2 H2O + 4 H+ (outside). Although the heme proteins are the most important class of iron-containing proteins, the iron–sulfur proteins are also very important, being involved in electron transfer, which is possible since iron can exist stably in either the +2 or +3 oxidation states. These have one, two, four, or eight iron atoms that are each approximately tetrahedrally coordinated to four sulfur atoms; because of this tetrahedral coordination, they always have high-spin iron. The simplest of such compounds is rubredoxin, which has only one iron atom coordinated to four sulfur atoms from cysteine residues in the surrounding peptide chains. Another important class of iron–sulfur proteins is the ferredoxins, which have multiple iron atoms. Transferrin does not belong to either of these classes. 
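The cooperative uptake described above is commonly summarised quantitatively with the Hill equation; the equation and the constants below are standard textbook values quoted only for illustration and are not taken from this text:

\[
Y = \frac{(p\mathrm{O_2})^{n}}{(P_{50})^{n} + (p\mathrm{O_2})^{n}}
\]

where Y is the fractional saturation, P50 is the oxygen partial pressure at half-saturation, and n is the Hill coefficient. Hemoglobin behaves with n of roughly 2.8–3 and P50 of about 26 mmHg, giving the sigmoidal curve that loads nearly fully at lung oxygen pressures and unloads readily in the tissues, whereas non-cooperative myoglobin has n = 1 and a much lower P50 (a few mmHg), consistent with its role as an oxygen store in muscle.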
The ability of sea mussels to maintain their grip on rocks in the ocean is facilitated by their use of organometallic iron-based bonds in their protein-rich cuticles. Based on synthetic replicas, the presence of iron in these structures increased elastic modulus 770 times, tensile strength 58 times, and toughness 92 times. The amount of stress required to permanently damage them increased 76 times. Nutrition Diet Iron is pervasive, but particularly rich sources of dietary iron include red meat, oysters, beans, poultry, fish, leaf vegetables, watercress, tofu, and blackstrap molasses. Bread and breakfast cereals are sometimes specifically fortified with iron. Iron provided by dietary supplements is often found as iron(II) fumarate, although iron(II) sulfate is cheaper and is absorbed equally well. Elemental iron, or reduced iron, despite being absorbed at only one-third to two-thirds the efficiency (relative to iron sulfate), is often added to foods such as breakfast cereals or enriched wheat flour. Iron is most available to the body when chelated to amino acids and is also available for use as a common iron supplement. Glycine, the least expensive amino acid, is most often used to produce iron glycinate supplements. Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for iron in 2001. The current EAR for iron for women ages 14–18 is 7.9 mg/day, 8.1 mg/day for ages 19–50 and 5.0 mg/day thereafter (postmenopause). For men, the EAR is 6.0 mg/day for ages 19 and up. The RDA is 15.0 mg/day for women ages 15–18, 18.0 mg/day for ages 19–50 and 8.0 mg/day thereafter. For men, the RDA is 8.0 mg/day for ages 19 and up. RDAs are higher than EARs so as to identify amounts that will cover people with higher-than-average requirements. The RDA for pregnancy is 27 mg/day and, for lactation, 9 mg/day. For children, the RDA is 7 mg/day for ages 1–3 years, 10 mg/day for ages 4–8 and 8 mg/day for ages 9–13. As for safety, the IOM also sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of iron, the UL is set at 45 mg/day. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women, the PRI is 13 mg/day for ages 15–17 years, 16 mg/day for women ages 18 and up who are premenopausal and 11 mg/day postmenopausal. For pregnancy and lactation, it is 16 mg/day. For men, the PRI is 11 mg/day for ages 15 and older. For children ages 1 to 14, the PRI increases from 7 to 11 mg/day. The PRIs are higher than the U.S. RDAs, with the exception of pregnancy. The EFSA reviewed the same safety question but did not establish a UL. Infants may require iron supplements if they are bottle-fed cow's milk. Frequent blood donors are at risk of low iron levels and are often advised to supplement their iron intake. For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For iron labeling purposes, 100% of the Daily Value is 18 mg, a figure that was left unchanged in the most recent revision of the labeling rules. A table of the old and new adult daily values is provided at Reference Daily Intake. Deficiency Iron deficiency is the most common nutritional deficiency in the world. 
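As a small worked example of the labeling arithmetic described above (the 8 mg serving figure is invented purely for illustration):

```python
DAILY_VALUE_IRON_MG = 18.0  # 100% of the Daily Value for iron on U.S. labels

def percent_dv(iron_mg_per_serving: float) -> float:
    """Percent of the iron Daily Value supplied by one serving."""
    return 100.0 * iron_mg_per_serving / DAILY_VALUE_IRON_MG

# A hypothetical fortified cereal serving containing 8 mg of iron:
print(f"{percent_dv(8.0):.0f}% DV")   # -> 44% DV
```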
When loss of iron is not adequately compensated by adequate dietary iron intake, a state of latent iron deficiency occurs, which over time leads to iron-deficiency anemia if left untreated, which is characterised by an insufficient number of red blood cells and an insufficient amount of hemoglobin. Children, pre-menopausal women (women of child-bearing age), and people with poor diet are most susceptible to the disease. Most cases of iron-deficiency anemia are mild, but if not treated can cause problems like fast or irregular heartbeat, complications during pregnancy, and delayed growth in infants and children. The brain is resistant to acute iron deficiency due to the slow transport of iron through the blood brain barrier. Acute fluctuations in iron status (marked by serum ferritin levels) do not reflect brain iron status, but prolonged nutritional iron deficiency is suspected to reduce brain iron concentrations over time. In the brain, iron plays a role in oxygen transport, myelin synthesis, mitochondrial respiration, and as a cofactor for neurotransmitter synthesis and metabolism. Animal models of nutritional iron deficiency report biomolecular changes resembling those seen in Parkinson's and Huntington's disease. However, age-related accumulation of iron in the brain has also been linked to the development of Parkinson's. Excess Iron uptake is tightly regulated by the human body, which has no regulated physiological means of excreting iron. Only small amounts of iron are lost daily due to mucosal and skin epithelial cell sloughing, so control of iron levels is primarily accomplished by regulating uptake. Regulation of iron uptake is impaired in some people as a result of a genetic defect that maps to the HLA-H gene region on chromosome 6 and leads to abnormally low levels of hepcidin, a key regulator of the entry of iron into the circulatory system in mammals. In these people, excessive iron intake can result in iron overload disorders, known medically as hemochromatosis. Many people have an undiagnosed genetic susceptibility to iron overload, and are not aware of a family history of the problem. For this reason, people should not take iron supplements unless they suffer from iron deficiency and have consulted a doctor. Hemochromatosis is estimated to be the cause of 0.3–0.8% of all metabolic diseases of Caucasians. Overdoses of ingested iron can cause excessive levels of free iron in the blood. High blood levels of free ferrous iron react with peroxides to produce highly reactive free radicals that can damage DNA, proteins, lipids, and other cellular components. Iron toxicity occurs when the cell contains free iron, which generally occurs when iron levels exceed the availability of transferrin to bind the iron. Damage to the cells of the gastrointestinal tract can also prevent them from regulating iron absorption, leading to further increases in blood levels. Iron typically damages cells in the heart, liver and elsewhere, causing adverse effects that include coma, metabolic acidosis, shock, liver failure, coagulopathy, long-term organ damage, and even death. Humans experience iron toxicity when the iron exceeds 20 milligrams for every kilogram of body mass; 60 milligrams per kilogram is considered a lethal dose. Overconsumption of iron, often the result of children eating large quantities of ferrous sulfate tablets intended for adult consumption, is one of the most common toxicological causes of death in children under six. 
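To make the thresholds just quoted concrete, the following sketch scales them to a body mass; the 15 kg body mass and the figure of roughly 65 mg of elemental iron per typical adult ferrous sulfate tablet are assumptions for illustration, not data from this article.

# Scale the quoted toxicity thresholds (20 mg/kg toxic, 60 mg/kg lethal)
# to an assumed body mass. Tablet iron content is also an assumption.
TOXIC_MG_PER_KG = 20
LETHAL_MG_PER_KG = 60
body_mass_kg = 15            # e.g. a small child (assumed)
iron_per_tablet_mg = 65      # elemental iron per adult tablet (assumed)

toxic_mg = TOXIC_MG_PER_KG * body_mass_kg     # 300 mg
lethal_mg = LETHAL_MG_PER_KG * body_mass_kg   # 900 mg
print(f"toxic from ~{toxic_mg} mg (~{toxic_mg / iron_per_tablet_mg:.0f} tablets), "
      f"potentially lethal from ~{lethal_mg} mg (~{lethal_mg / iron_per_tablet_mg:.0f} tablets)")

Under these assumptions, only a handful of adult-strength tablets reach the toxic range for a small child, which is why such overdoses are a leading toxicological cause of death in young children.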
The Dietary Reference Intake (DRI) sets the Tolerable Upper Intake Level (UL) for adults at 45 mg/day. For children under fourteen years old the UL is 40 mg/day. The medical management of iron toxicity is complicated, and can include use of a specific chelating agent called deferoxamine to bind and expel excess iron from the body. ADHD Some research has suggested that low thalamic iron levels may play a role in the pathophysiology of ADHD. Some researchers have found that iron supplementation can be effective especially in the inattentive subtype of the disorder. Some researchers in the 2000s suggested a link between low levels of iron in the blood and ADHD. A 2012 study found no such correlation. Cancer The role of iron in cancer defense can be described as a "double-edged sword" because of its pervasive presence in non-pathological processes. People having chemotherapy may develop iron deficiency and anemia, for which intravenous iron therapy is used to restore iron levels. Iron overload, which may occur from high consumption of red meat, may initiate tumor growth and increase susceptibility to cancer onset, particularly for colorectal cancer. Marine systems Iron plays an essential role in marine systems and can act as a limiting nutrient for planktonic activity. Because of this, too much of a decrease in iron may lead to a decrease in growth rates in phytoplanktonic organisms such as diatoms. Iron can also be oxidized by marine microbes under conditions that are high in iron and low in oxygen. Iron can enter marine systems through adjoining rivers and directly from the atmosphere. Once iron enters the ocean, it can be distributed throughout the water column through ocean mixing and through recycling on the cellular level. In the arctic, sea ice plays a major role in the store and distribution of iron in the ocean, depleting oceanic iron as it freezes in the winter and releasing it back into the water when thawing occurs in the summer. The iron cycle can fluctuate the forms of iron from aqueous to particle forms altering the availability of iron to primary producers. Increased light and warmth increases the amount of iron that is in forms that are usable by primary producers.
Indium
Indium is a chemical element; it has symbol In and atomic number 49. It is a silvery-white post-transition metal and one of the softest elements. Chemically, indium is similar to gallium and thallium, and its properties are largely intermediate between the two. It was discovered in 1863 by Ferdinand Reich and Hieronymous Theodor Richter by spectroscopic methods and named for the indigo blue line in its spectrum. Indium is a technology-critical element used primarily in the production of flat-panel displays as indium tin oxide (ITO), a transparent and conductive coating applied to glass. Indium is also used in the semiconductor industry, in low-melting-point metal alloys such as solders and soft-metal high-vacuum seals. It is produced exclusively as a by-product during the processing of the ores of other metals, chiefly from sphalerite and other zinc sulfide ores. Indium has no biological role and its compounds are toxic when inhaled or injected into the bloodstream, although they are poorly absorbed following ingestion. Etymology The name comes from the Latin word indicum meaning violet or indigo. The word indicum means "Indian", as the naturally based dye indigo was originally exported to Europe from India. Properties Physical Indium is a shiny silvery-white, highly ductile post-transition metal with a bright luster. It is so soft (Mohs hardness 1.2) that it can be cut with a knife and leaves a visible line like a pencil when rubbed on paper. It is a member of group 13 on the periodic table and its properties are mostly intermediate between its vertical neighbors gallium and thallium. As with tin, a high-pitched cry is heard when indium is bent – a crackling sound due to crystal twinning. Like gallium, indium is able to wet glass. Like both, indium has a low melting point, 156.60 °C (313.88 °F); higher than its lighter homologue, gallium, but lower than its heavier homologue, thallium, and lower than tin. The boiling point is 2072 °C (3762 °F), higher than that of thallium, but lower than gallium, conversely to the general trend of melting points, but similarly to the trends down the other post-transition metal groups because of the weakness of the metallic bonding with few electrons delocalized. The density of indium, 7.31 g/cm3, is also greater than gallium, but lower than thallium. Below the critical temperature, 3.41 K, indium becomes a superconductor. Indium crystallizes in the body-centered tetragonal crystal system in the space group I4/mmm (lattice parameters: a = 325 pm, c = 495 pm): this is a slightly distorted face-centered cubic structure, where each indium atom has four neighbours at 324 pm distance and eight neighbours slightly further (336 pm). Indium has greater solubility in liquid mercury than any other metal (more than 50 mass percent of indium at 0 °C). Indium displays a ductile viscoplastic response, found to be size-independent in tension and compression. However it does have a size effect in bending and indentation, associated to a length-scale of order 50–100 μm, significantly large when compared with other metals. Chemical Indium has 49 electrons, with an electronic configuration of [Kr]4d5s5p. In compounds, indium most commonly donates the three outermost electrons to become indium(III), In. In some cases, the pair of 5s-electrons are not donated, resulting in indium(I), In. The stabilization of the monovalent state is attributed to the inert pair effect, in which relativistic effects stabilize the 5s-orbital, observed in heavier elements. 
Thallium (indium's heavier homolog) shows an even stronger effect, causing oxidation to thallium(I) to be more probable than to thallium(III), whereas gallium (indium's lighter homolog) commonly shows only the +3 oxidation state. Thus, although thallium(III) is a moderately strong oxidizing agent, indium(III) is not, and many indium(I) compounds are powerful reducing agents. While the energy required to include the s-electrons in chemical bonding is lowest for indium among the group 13 metals, bond energies decrease down the group so that by indium, the energy released in forming two additional bonds and attaining the +3 state is not always enough to outweigh the energy needed to involve the 5s-electrons. Indium(I) oxide and hydroxide are more basic and indium(III) oxide and hydroxide are more acidic. A number of standard electrode potentials, depending on the reaction under study, are reported for indium, reflecting the decreased stability of the +3 oxidation state:
In2+ + e− ⇌ In+ (E° = −0.40 V)
In3+ + e− ⇌ In2+ (E° = −0.49 V)
In3+ + 2 e− ⇌ In+ (E° = −0.443 V)
In3+ + 3 e− ⇌ In (E° = −0.3382 V)
In+ + e− ⇌ In (E° = −0.14 V)
Indium metal does not react with water, but it is oxidized by stronger oxidizing agents such as halogens to give indium(III) compounds. It does not form a boride, silicide, or carbide, and the hydride InH3 has at best a transitory existence in ethereal solutions at low temperatures, being unstable enough to spontaneously polymerize without coordination. Indium is rather basic in aqueous solution, showing only slight amphoteric characteristics, and unlike its lighter homologs aluminium and gallium, it is insoluble in aqueous alkaline solutions. Isotopes Indium has 39 known isotopes, ranging in mass number from 97 to 135. Only two isotopes occur naturally as primordial nuclides: indium-113, the only stable isotope, and indium-115, which has a half-life of 4.41 × 10^14 years, four orders of magnitude greater than the age of the Universe and nearly 30,000 times greater than the half-life of thorium-232. The half-life of 115In is very long because the beta decay to 115Sn is spin-forbidden. Indium-115 makes up 95.7% of all indium. Indium is one of three known elements (the others being tellurium and rhenium) of which the stable isotope is less abundant in nature than the long-lived primordial radioisotopes. The most stable artificial isotope is indium-111, with a half-life of approximately 2.8 days. All other isotopes have half-lives shorter than 5 hours. Indium also has 47 meta states, among which indium-114m1 (half-life about 49.51 days) is the most stable, more stable than the ground state of any indium isotope other than the primordial; all decay by isomeric transition. The indium isotopes lighter than 113In predominantly decay through electron capture or positron emission to form cadmium isotopes, while the indium isotopes heavier than 113In predominantly decay through beta-minus decay to form tin isotopes. Compounds Indium(III) Indium(III) oxide, In2O3, forms when indium metal is burned in air or when the hydroxide or nitrate is heated. In2O3 adopts a structure like alumina and is amphoteric, that is, able to react with both acids and bases. The corresponding indium(III) hydroxide is also amphoteric, reacting with alkalis to produce indates(III) and with acids to produce indium(III) salts: In(OH)3 + 3 HCl → InCl3 + 3 H2O The analogous sesqui-chalcogenides with sulfur, selenium, and tellurium are also known.
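Because standard potentials combine through Gibbs energies (ΔG° = −nFE°) rather than adding directly, the one- and two-electron reductions tabulated above can be cross-checked against each other. The short calculation below is only illustrative arithmetic on the quoted values; the small difference from the tabulated −0.443 V reflects rounding.

# Cross-check: E(In3+/In+) should be the Gibbs-energy-weighted combination
# of the two one-electron steps, i.e. their average here since n = 1 for each.
E_In3_In2 = -0.49   # In3+ + e-  -> In2+  (V)
E_In2_In1 = -0.40   # In2+ + e-  -> In+   (V)

E_In3_In1 = (E_In3_In2 + E_In2_In1) / 2   # two electrons overall
print(f"derived E(In3+/In+) = {E_In3_In1:.3f} V (tabulated: -0.443 V)")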
Indium forms the expected trihalides. Chlorination, bromination, and iodination of In produce colorless InCl3, InBr3, and yellow InI3. The compounds are Lewis acids, somewhat akin to the better known aluminium trihalides. Again like the related aluminium compound, InF3 is polymeric. Direct reaction of indium with the pnictogens produces the gray or semimetallic III–V semiconductors. Many of them slowly decompose in moist air, necessitating careful storage of semiconductor compounds to prevent contact with the atmosphere. Indium nitride is readily attacked by acids and alkalis. Indium(I) Indium(I) compounds are not common. The chloride, bromide, and iodide are deeply colored, unlike the parent trihalides from which they are prepared. The fluoride is known only as an unstable gas. Indium(I) oxide black powder is produced when indium(III) oxide decomposes upon heating to 700 °C. Other oxidation states Less frequently, indium forms compounds in oxidation state +2 and even fractional oxidation states. Usually such materials feature In–In bonding, most notably in the halides In2X4 and [In2X6]2−, and various subchalcogenides such as In4Se3. Several other compounds are known to combine indium(I) and indium(III), such as InI6(InIIICl6)Cl3, InI5(InIIIBr4)2(InIIIBr6), and InIInIIIBr4. Organoindium compounds Organoindium compounds feature In–C bonds. Most are In(III) derivatives, but cyclopentadienylindium(I) is an exception. It was the first known organoindium(I) compound, and is polymeric, consisting of zigzag chains of alternating indium atoms and cyclopentadienyl complexes. Perhaps the best-known organoindium compound is trimethylindium, In(CH3)3, used to prepare certain semiconducting materials. History In 1863, German chemists Ferdinand Reich and Hieronymus Theodor Richter were testing ores from the mines around Freiberg, Saxony. They dissolved the minerals pyrite, arsenopyrite, galena and sphalerite in hydrochloric acid and distilled raw zinc chloride. Reich, who was color-blind, employed Richter as an assistant for detecting the colored spectral lines. Knowing that ores from that region sometimes contain thallium, they searched for the green thallium emission spectrum lines. Instead, they found a bright blue line. Because that blue line did not match any known element, they hypothesized a new element was present in the minerals. They named the element indium, from the indigo color seen in its spectrum, after the Latin indicum, meaning 'of India'. Richter went on to isolate the metal in 1864. An ingot of was presented at the World Fair 1867. Reich and Richter later fell out when the latter claimed to be the sole discoverer. Occurrence Indium is created by the long-lasting (up to thousands of years) s-process (slow neutron capture) in low-to-medium-mass stars (range in mass between 0.6 and 10 solar masses). When a silver-109 atom captures a neutron, it transmutes into silver-110, which then undergoes beta decay to become cadmium-110. Capturing further neutrons, it becomes cadmium-115, which decays to indium-115 by another beta decay. This explains why the radioactive isotope is more abundant than the stable one. The stable indium isotope, indium-113, is one of the p-nuclei, the origin of which is not fully understood; although indium-113 is known to be made directly in the s- and r-processes (rapid neutron capture), and also as the daughter of very long-lived cadmium-113, which has a half-life of about eight quadrillion years, this cannot account for all indium-113. 
Indium is the 68th most abundant element in Earth's crust at approximately 50 ppb. This is similar to the crustal abundance of silver, bismuth and mercury. It very rarely forms its own minerals, or occurs in elemental form. Fewer than 10 indium minerals such as roquesite (CuInS2) are known, and none occur at sufficient concentrations for economic extraction. Instead, indium is usually a trace constituent of more common ore minerals, such as sphalerite and chalcopyrite. From these, it can be extracted as a by-product during smelting. While the enrichment of indium in these deposits is high relative to its crustal abundance, it is insufficient, at current prices, to support extraction of indium as the main product. Different estimates exist of the amounts of indium contained within the ores of other metals. However, these amounts are not extractable without mining of the host materials (see Production and availability). Thus, the availability of indium is fundamentally determined by the rate at which these ores are extracted, and not their absolute amount. This is an aspect that is often forgotten in the current debate, e.g. by the Graedel group at Yale in their criticality assessments, explaining the paradoxically low depletion times some studies cite. Production and availability Indium is produced exclusively as a by-product during the processing of the ores of other metals. Its main source material are sulfidic zinc ores, where it is mostly hosted by sphalerite. Minor amounts are also extracted from sulfidic copper ores. During the roast-leach-electrowinning process of zinc smelting, indium accumulates in the iron-rich residues. From these, it can be extracted in different ways. It may also be recovered directly from the process solutions. Further purification is done by electrolysis. The exact process varies with the mode of operation of the smelter. Its by-product status means that indium production is constrained by the amount of sulfidic zinc (and copper) ores extracted each year. Therefore, its availability needs to be discussed in terms of supply potential. The supply potential of a by-product is defined as that amount which is economically extractable from its host materials per year under current market conditions (i.e. technology and price). Reserves and resources are not relevant for by-products, since they cannot be extracted independently from the main-products. Recent estimates put the supply potential of indium at a minimum of 1,300 t/yr from sulfidic zinc ores and 20 t/yr from sulfidic copper ores. These figures are significantly greater than current production (655 t in 2016). Thus, major future increases in the by-product production of indium will be possible without significant increases in production costs or price. The average indium price in 2016 was 240/kg, down from 705/kg in 2014. China is a leading producer of indium (290 tonnes in 2016), followed by South Korea (195 t), Japan (70 t) and Canada (65 t). The Teck Resources refinery in Trail, British Columbia, is a large single-source indium producer, with an output of 32.5 tonnes in 2005, 41.8 tonnes in 2004 and 36.1 tonnes in 2003. The primary consumption of indium worldwide is LCD production. Demand rose rapidly from the late 1990s to 2010 with the popularity of LCD computer monitors and television sets, which now account for 50% of indium consumption. Increased manufacturing efficiency and recycling (especially in Japan) maintain a balance between demand and supply. 
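A quick comparison of the supply-potential estimates above with the 2016 production figure shows the headroom referred to; the arithmetic below simply restates the numbers already quoted.

# Compare estimated indium supply potential with 2016 production (figures above).
supply_potential_t_per_yr = 1300 + 20   # from sulfidic zinc and copper ores
production_2016_t = 655

ratio = supply_potential_t_per_yr / production_2016_t
print(f"supply potential ~{supply_potential_t_per_yr} t/yr, about {ratio:.1f}x 2016 output")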
According to the UNEP, indium's end-of-life recycling rate is less than 1%. Applications Industrial uses In 1924, indium was found to have a valued property of stabilizing non-ferrous metals, and that became the first significant use for the element. The first large-scale application for indium was coating bearings in high-performance aircraft engines during World War II, to protect against damage and corrosion; this is no longer a major use of the element. New uses were found in fusible alloys, solders, and electronics. In the 1950s, tiny beads of indium were used for the emitters and collectors of PNP alloy-junction transistors. In the middle and late 1980s, the development of indium phosphide semiconductors and indium tin oxide thin films for liquid-crystal displays (LCD) aroused much interest. By 1992, the thin-film application had become the largest end use. Indium(III) oxide and indium tin oxide (ITO) are used as a transparent conductive coating on glass substrates in electroluminescent panels. Indium tin oxide is used as a light filter in low-pressure sodium-vapor lamps. The infrared radiation is reflected back into the lamp, which increases the temperature within the tube and improves the performance of the lamp. Indium has many semiconductor-related applications. Some indium compounds, such as indium antimonide and indium phosphide, are semiconductors with useful properties: one precursor is usually trimethylindium (TMI), which is also used as the semiconductor dopant in II–VI compound semiconductors. InAs and InSb are used for low-temperature transistors and InP for high-temperature transistors. The compound semiconductors InGaN and InGaP are used in light-emitting diodes (LEDs) and laser diodes. Indium is used in photovoltaics as the semiconductor copper indium gallium selenide (CIGS), also called CIGS solar cells, a type of second-generation thin-film solar cell. Indium is used in PNP bipolar junction transistors with germanium: when soldered at low temperature, indium does not stress the germanium. Indium wire is used as a vacuum seal and a thermal conductor in cryogenics and ultra-high-vacuum applications, in such manufacturing applications as gaskets that deform to fill gaps. Owing to its great plasticity and adhesion to metals, Indium sheets are sometimes used for cold-soldering in microwave circuits and waveguide joints, where direct soldering is complicated. Indium is an ingredient in the gallium–indium–tin alloy galinstan, which is liquid at room temperature and replaces mercury in some thermometers. Other alloys of indium with bismuth, cadmium, lead, and tin, which have higher but still low melting points (between 50 and 100 °C), are used in fire sprinkler systems and heat regulators. Indium is one of many substitutes for mercury in alkaline batteries to prevent the zinc from corroding and releasing hydrogen gas. Indium is added to some dental amalgam alloys to decrease the surface tension of the mercury and allow for less mercury and easier amalgamation. Indium's high neutron-capture cross-section for thermal neutrons makes it suitable for use in control rods for nuclear reactors, typically in an alloy of 80% silver, 15% indium, and 5% cadmium. In nuclear engineering, the (n,n') reactions of 113In and 115In are used to determine magnitudes of neutron fluxes. 
In 2009, Professor Mas Subramanian and former graduate student Andrew Smith at Oregon State University discovered that indium can be combined with yttrium and manganese to form an intensely blue, non-toxic, inert, fade-resistant pigment, YInMn blue, the first new inorganic blue pigment discovered in 200 years. Medical applications Radioactive indium-111 (in very small amounts) is used in nuclear medicine tests, as a radiotracer to follow the movement of labeled proteins and white blood cells to diagnose different types of infection. Indium compounds are mostly not absorbed upon ingestion and are only moderately absorbed on inhalation; they tend to be stored temporarily in the muscles, skin, and bones before being excreted, and the biological half-life of indium is about two weeks in humans. Indium-111 is also attached to somatostatin analogues such as octreotide to locate somatostatin receptors in neuroendocrine tumors. Biological role and precautions Indium has no metabolic role in any organism. In a similar way to aluminium salts, indium(III) ions can be toxic to the kidney when given by injection. Indium tin oxide and indium phosphide harm the pulmonary and immune systems, predominantly through ionic indium, though hydrated indium oxide is more than forty times as toxic when injected, measured by the quantity of indium introduced. People can be exposed to indium in the workplace by inhalation, ingestion, skin contact, and eye contact. Indium lung is a lung disease characterized by pulmonary alveolar proteinosis and pulmonary fibrosis, first described by Japanese researchers in 2003. Ten cases had been described, though more than 100 indium workers had documented respiratory abnormalities. The National Institute for Occupational Safety and Health has set a recommended exposure limit (REL) of 0.1 mg/m³ over an eight-hour workday.
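To put the recommended exposure limit above into perspective, the sketch below estimates the maximum mass of indium inhaled in one shift at the REL; the breathing volume of roughly 10 m³ of air per eight-hour shift is a conventional occupational-hygiene assumption, not a figure from this article.

# Upper-bound indium intake for one shift at the NIOSH REL,
# assuming ~10 m^3 of air breathed over eight hours (assumed figure).
rel_mg_per_m3 = 0.1
air_breathed_m3 = 10.0

max_inhaled_mg = rel_mg_per_m3 * air_breathed_m3
print(f"at the REL, at most ~{max_inhaled_mg:.1f} mg of indium is inhaled per shift")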
Iodine
Iodine is a chemical element; it has symbol I and atomic number 53. The heaviest of the stable halogens, it exists at standard conditions as a semi-lustrous, non-metallic solid that melts to form a deep violet liquid at , and boils to a violet gas at . The element was discovered by the French chemist Bernard Courtois in 1811 and was named two years later by Joseph Louis Gay-Lussac, after the Ancient Greek , meaning 'violet'. Iodine occurs in many oxidation states, including iodide (I−), iodate (), and the various periodate anions. As the heaviest essential mineral nutrient, iodine is required for the synthesis of thyroid hormones. Iodine deficiency affects about two billion people and is the leading preventable cause of intellectual disabilities. The dominant producers of iodine today are Chile and Japan. Due to its high atomic number and ease of attachment to organic compounds, it has also found favour as a non-toxic radiocontrast material. Because of the specificity of its uptake by the human body, radioactive isotopes of iodine can also be used to treat thyroid cancer. Iodine is also used as a catalyst in the industrial production of acetic acid and some polymers. It is on the World Health Organization's List of Essential Medicines. History In 1811, iodine was discovered by French chemist Bernard Courtois, who was born to a family of manufacturers of saltpetre (an essential component of gunpowder). At the time of the Napoleonic Wars, saltpetre was in great demand in France. Saltpetre produced from French nitre beds required sodium carbonate, which could be isolated from seaweed collected on the coasts of Normandy and Brittany. To isolate the sodium carbonate, seaweed was burned and the ash washed with water. The remaining waste was destroyed by adding sulfuric acid. Courtois once added excessive sulfuric acid and a cloud of violet vapour rose. He noted that the vapour crystallised on cold surfaces, making dark black crystals. Courtois suspected that this material was a new element but lacked funding to pursue it further. Courtois gave samples to his friends, Charles Bernard Desormes (1777–1838) and Nicolas Clément (1779–1841), to continue research. He also gave some of the substance to chemist Joseph Louis Gay-Lussac (1778–1850), and to physicist André-Marie Ampère (1775–1836). On 29 November 1813, Desormes and Clément made Courtois' discovery public by describing the substance to a meeting of the Imperial Institute of France. On 6 December 1813, Gay-Lussac announced that the new substance was either an element or a compound of oxygen; he subsequently concluded that it was a new element. Gay-Lussac suggested the name "iode" (anglicised as "iodine"), from the Ancient Greek (, "violet"), because of the colour of iodine vapour. Ampère had given some of his sample to British chemist Humphry Davy (1778–1829), who experimented on the substance, noted its similarity to chlorine, and likewise concluded that it was an element. Davy sent a letter dated 10 December to the Royal Society of London stating that he had identified a new element called iodine. Arguments later erupted between Davy and Gay-Lussac over who had identified iodine first, but both scientists acknowledged that Courtois had been the first to isolate the element. In 1873, the French medical researcher Casimir Davaine (1812–1882) discovered the antiseptic action of iodine. Antonio Grossich (1849–1926), an Istrian-born surgeon, was among the first to use sterilisation of the operative field.
In 1908, he introduced tincture of iodine as a way to rapidly sterilise the human skin in the surgical field. In early periodic tables, iodine was often given the symbol J, for Jod, its name in German; in German texts, J is still frequently used in place of I. Properties Iodine is the fourth halogen, being a member of group 17 in the periodic table, below fluorine, chlorine, and bromine; since astatine and tennessine are radioactive, iodine is the heaviest stable halogen. Iodine has an electron configuration of [Kr]5s24d105p5, with the seven electrons in the fifth and outermost shell being its valence electrons. Like the other halogens, it is one electron short of a full octet and is hence an oxidising agent, reacting with many elements in order to complete its outer shell, although in keeping with periodic trends, it is the weakest oxidising agent among the stable halogens: it has the lowest electronegativity among them, just 2.66 on the Pauling scale (compare fluorine, chlorine, and bromine at 3.98, 3.16, and 2.96 respectively; astatine continues the trend with an electronegativity of 2.2). Elemental iodine hence forms diatomic molecules with chemical formula I2, where two iodine atoms share a pair of electrons in order to each achieve a stable octet for themselves; at high temperatures, these diatomic molecules reversibly dissociate a pair of iodine atoms. Similarly, the iodide anion, I−, is the strongest reducing agent among the stable halogens, being the most easily oxidised back to diatomic I2. (Astatine goes further, being indeed unstable as At− and readily oxidised to At0 or At+.) The halogens darken in colour as the group is descended: fluorine is a very pale yellow, chlorine is greenish-yellow, bromine is reddish-brown, and iodine is violet. Elemental iodine is slightly soluble in water, with one gram dissolving in 3450 mL at 20 °C and 1280 mL at 50 °C; potassium iodide may be added to increase solubility via formation of triiodide ions, among other polyiodides. Nonpolar solvents such as hexane and carbon tetrachloride provide a higher solubility. Polar solutions, such as aqueous solutions, are brown, reflecting the role of these solvents as Lewis bases; on the other hand, nonpolar solutions are violet, the color of iodine vapour. Charge-transfer complexes form when iodine is dissolved in polar solvents, hence changing the colour. Iodine is violet when dissolved in carbon tetrachloride and saturated hydrocarbons but deep brown in alcohols and amines, solvents that form charge-transfer adducts. The melting and boiling points of iodine are the highest among the halogens, conforming to the increasing trend down the group, since iodine has the largest electron cloud among them that is the most easily polarised, resulting in its molecules having the strongest Van der Waals interactions among the halogens. Similarly, iodine is the least volatile of the halogens, though the solid still can be observed to give off purple vapour. Due to this property iodine is commonly used to demonstrate sublimation directly from solid to gas, which gives rise to a misconception that it does not melt in atmospheric pressure. Because it has the largest atomic radius among the halogens, iodine has the lowest first ionisation energy, lowest electron affinity, lowest electronegativity and lowest reactivity of the halogens. The interhalogen bond in diiodine is the weakest of all the halogens. As such, 1% of a sample of gaseous iodine at atmospheric pressure is dissociated into iodine atoms at 575 °C. 
Temperatures greater than 750 °C are required for fluorine, chlorine, and bromine to dissociate to a similar extent. Most bonds to iodine are weaker than the analogous bonds to the lighter halogens. Gaseous iodine is composed of I2 molecules with an I–I bond length of 266.6 pm. The I–I bond is one of the longest single bonds known. It is even longer (271.5 pm) in solid orthorhombic crystalline iodine, which has the same crystal structure as chlorine and bromine. (The record is held by iodine's neighbour xenon: the Xe–Xe bond length is 308.71 pm.) As such, within the iodine molecule, significant electronic interactions occur with the two next-nearest neighbours of each atom, and these interactions give rise, in bulk iodine, to a shiny appearance and semiconducting properties. Iodine is a two-dimensional semiconductor with a band gap of 1.3 eV (125 kJ/mol): it is a semiconductor in the plane of its crystalline layers and an insulator in the perpendicular direction. Isotopes Of the forty known isotopes of iodine, only one occurs in nature, iodine-127. The others are radioactive and have half-lives too short to be primordial. As such, iodine is both monoisotopic and mononuclidic and its atomic weight is known to great precision, as it is a constant of nature. The longest-lived of the radioactive isotopes of iodine is iodine-129, which has a half-life of 15.7 million years, decaying via beta decay to stable xenon-129. Some iodine-129 was formed along with iodine-127 before the formation of the Solar System, but it has by now completely decayed away, making it an extinct radionuclide. Its former presence may be determined from an excess of its daughter xenon-129, but early attempts to use this characteristic to date the supernova source for elements in the Solar System are made difficult by alternative nuclear processes giving iodine-129 and by iodine's volatility at higher temperatures. Due to its mobility in the environment iodine-129 has been used to date very old groundwaters. Traces of iodine-129 still exist today, as it is also a cosmogenic nuclide, formed from cosmic ray spallation of atmospheric xenon: these traces make up 10−14 to 10−10 of all terrestrial iodine. It also occurs from open-air nuclear testing, and is not hazardous because of its very long half-life, the longest of all fission products. At the peak of thermonuclear testing in the 1960s and 1970s, iodine-129 still made up only about 10−7 of all terrestrial iodine. Excited states of iodine-127 and iodine-129 are often used in Mössbauer spectroscopy. The other iodine radioisotopes have much shorter half-lives, no longer than days. Some of them have medical applications involving the thyroid gland, where the iodine that enters the body is stored and concentrated. Iodine-123 has a half-life of thirteen hours and decays by electron capture to tellurium-123, emitting gamma radiation; it is used in nuclear medicine imaging, including single photon emission computed tomography (SPECT) and X-ray computed tomography (X-Ray CT) scans. Iodine-125 has a half-life of fifty-nine days, decaying by electron capture to tellurium-125 and emitting low-energy gamma radiation; the second-longest-lived iodine radioisotope, it has uses in biological assays, nuclear medicine imaging and in radiation therapy as brachytherapy to treat a number of conditions, including prostate cancer, uveal melanomas, and brain tumours. 
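Before the discussion turns to iodine-131, a short decay calculation shows why the primordial iodine-129 described above is considered extinct; the age of roughly 4.6 billion years assumed for the Solar System is supplied here for illustration.

# Fraction of primordial iodine-129 (half-life 15.7 million years) surviving
# over the assumed ~4.6-billion-year age of the Solar System.
half_life_yr = 15.7e6
age_yr = 4.6e9   # assumed

half_lives = age_yr / half_life_yr          # ~293 half-lives
fraction_left = 0.5 ** half_lives
print(f"{half_lives:.0f} half-lives elapsed -> fraction remaining ~ {fraction_left:.1e}")

The surviving fraction is on the order of 10^-88, i.e. effectively nothing, which is why only the cosmogenic and anthropogenic traces mentioned above are found today.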
Finally, iodine-131, with a half-life of eight days, beta decays to an excited state of stable xenon-131 that then converts to the ground state by emitting gamma radiation. It is a common fission product and thus is present in high levels in radioactive fallout. It may then be absorbed through contaminated food, and will also accumulate in the thyroid. As it decays, it may cause damage to the thyroid. The primary risk from exposure to high levels of iodine-131 is the chance occurrence of radiogenic thyroid cancer in later life. Other risks include the possibility of non-cancerous growths and thyroiditis. Protection usually used against the negative effects of iodine-131 is by saturating the thyroid gland with stable iodine-127 in the form of potassium iodide tablets, taken daily for optimal prophylaxis. However, iodine-131 may also be used for medicinal purposes in radiation therapy for this very reason, when tissue destruction is desired after iodine uptake by the tissue. Iodine-131 is also used as a radioactive tracer. Chemistry and compounds Iodine is quite reactive, but it is less so than the lighter halogens, and it is a weaker oxidant. For example, it does not halogenate carbon monoxide, nitric oxide, and sulfur dioxide, which chlorine does. Many metals react with iodine. By the same token, however, since iodine has the lowest ionisation energy among the halogens and is the most easily oxidised of them, it has a more significant cationic chemistry and its higher oxidation states are rather more stable than those of bromine and chlorine, for example in iodine heptafluoride. Charge-transfer complexes The iodine molecule, I2, dissolves in CCl4 and aliphatic hydrocarbons to give bright violet solutions. In these solvents the absorption band maximum occurs in the 520 – 540 nm region and is assigned to a * to σ* transition. When I2 reacts with Lewis bases in these solvents a blue shift in I2 peak is seen and the new peak (230 – 330 nm) arises that is due to the formation of adducts, which are referred to as charge-transfer complexes. Hydrogen iodide The simplest compound of iodine is hydrogen iodide, HI. It is a colourless gas that reacts with oxygen to give water and iodine. Although it is useful in iodination reactions in the laboratory, it does not have large-scale industrial uses, unlike the other hydrogen halides. Commercially, it is usually made by reacting iodine with hydrogen sulfide or hydrazine: 2 I2 + N2H4 4 HI + N2 At room temperature, it is a colourless gas, like all of the hydrogen halides except hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative iodine atom. It melts at and boils at . It is an endothermic compound that can exothermically dissociate at room temperature, although the process is very slow unless a catalyst is present: the reaction between hydrogen and iodine at room temperature to give hydrogen iodide does not proceed to completion. The H–I bond dissociation energy is likewise the smallest of the hydrogen halides, at 295 kJ/mol. Aqueous hydrogen iodide is known as hydroiodic acid, which is a strong acid. Hydrogen iodide is exceptionally soluble in water: one litre of water will dissolve 425 litres of hydrogen iodide, and the saturated solution has only four water molecules per molecule of hydrogen iodide. Commercial so-called "concentrated" hydroiodic acid usually contains 48–57% HI by mass; the solution forms an azeotrope with boiling point at 56.7 g HI per 100 g solution. 
Hence hydroiodic acid cannot be concentrated past this point by evaporation of water. Unlike gaseous hydrogen iodide, hydroiodic acid has major industrial use in the manufacture of acetic acid by the Cativa process. Other binary iodine compounds With the exception of the noble gases, nearly all elements on the periodic table up to einsteinium (EsI3 is known) are known to form binary compounds with iodine. Until 1990, nitrogen triiodide was only known as an ammonia adduct. Ammonia-free NI3 was found to be isolable at –196 °C but spontaneously decomposes at 0 °C. For thermodynamic reasons related to electronegativity of the elements, neutral sulfur and selenium iodides that are stable at room temperature are also nonexistent, although S2I2 and SI2 are stable up to 183 and 9 K, respectively. As of 2022, no neutral binary selenium iodide has been unambiguously identified (at any temperature). Sulfur- and selenium-iodine polyatomic cations (e.g., [S2I42+][AsF6–]2 and [Se2I42+][Sb2F11–]2) have been prepared and characterised crystallographically. Given the large size of the iodide anion and iodine's weak oxidising power, high oxidation states are difficult to achieve in binary iodides, the maximum known being in the pentaiodides of niobium, tantalum, and protactinium. Iodides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydroiodic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen iodide gas. These methods work best when the iodide product is stable to hydrolysis. Other syntheses include high-temperature oxidative iodination of the element with iodine or hydrogen iodide, high-temperature iodination of a metal oxide or other halide by iodine, a volatile metal halide, carbon tetraiodide, or an organic iodide. For example, molybdenum(IV) oxide reacts with aluminium(III) iodide at 230 °C to give molybdenum(II) iodide. An example involving halogen exchange is given below, involving the reaction of tantalum(V) chloride with excess aluminium(III) iodide at 400 °C to give tantalum(V) iodide: 3 TaCl5 + 5 AlI3 (excess) → 3 TaI5 + 5 AlCl3 Lower iodides may be produced either through thermal decomposition or disproportionation, or by reducing the higher iodide with hydrogen or a metal, for example: TaI5 + Ta → Ta6I14 (in a thermal gradient, 630 °C → 575 °C) Most metal iodides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular iodides, as do metals in high oxidation states from +3 and above. Both ionic and covalent iodides are known for metals in oxidation state +3 (e.g. scandium iodide is mostly ionic, but aluminium iodide is not). Ionic iodides MIn tend to have the lowest melting and boiling points among the halides MXn of the same element, because the electrostatic forces of attraction between the cations and anions are weakest for the large iodide anion. In contrast, covalent iodides tend to instead have the highest melting and boiling points among the halides of the same element, since iodine is the most polarisable of the halogens and, having the most electrons among them, can contribute the most to van der Waals forces. Naturally, exceptions abound in intermediate iodides where one trend gives way to the other. Similarly, solubilities in water of predominantly ionic iodides (e.g. potassium and calcium) are the greatest among ionic halides of that element, while those of covalent iodides (e.g.
silver) are the lowest of that element. In particular, silver iodide is very insoluble in water and its formation is often used as a qualitative test for iodine. Iodine halides The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and iodine is no exception. Iodine forms all three possible diatomic interhalogens, a trifluoride and trichloride, as well as a pentafluoride and, exceptionally among the halogens, a heptafluoride. Numerous cationic and anionic derivatives are also characterised, such as the wine-red or bright orange compounds of and the dark brown or purplish black compounds of I2Cl+. Apart from these, some pseudohalides are also known, such as cyanogen iodide (ICN), iodine thiocyanate (ISCN), and iodine azide (IN3). Iodine monofluoride (IF) is unstable at room temperature and disproportionates very readily and irreversibly to iodine and iodine pentafluoride, and thus cannot be obtained pure. It can be synthesised from the reaction of iodine with fluorine gas in trichlorofluoromethane at −45 °C, with iodine trifluoride in trichlorofluoromethane at −78 °C, or with silver(I) fluoride at 0 °C. Iodine monochloride (ICl) and iodine monobromide (IBr), on the other hand, are moderately stable. The former, a volatile red-brown compound, was discovered independently by Joseph Louis Gay-Lussac and Humphry Davy in 1813–1814 not long after the discoveries of chlorine and iodine, and it mimics the intermediate halogen bromine so well that Justus von Liebig was misled into mistaking bromine (which he had found) for iodine monochloride. Iodine monochloride and iodine monobromide may be prepared simply by reacting iodine with chlorine or bromine at room temperature and purified by fractional crystallisation. Both are quite reactive and attack even platinum and gold, though not boron, carbon, cadmium, lead, zirconium, niobium, molybdenum, and tungsten. Their reaction with organic compounds depends on conditions. Iodine chloride vapour tends to chlorinate phenol and salicylic acid, since when iodine chloride undergoes homolytic fission, chlorine and iodine are produced and the former is more reactive. However, iodine chloride in carbon tetrachloride solution results in iodination being the main reaction, since now heterolytic fission of the I–Cl bond occurs and I+ attacks phenol as an electrophile. However, iodine monobromide tends to brominate phenol even in carbon tetrachloride solution because it tends to dissociate into its elements in solution, and bromine is more reactive than iodine. When liquid, iodine monochloride and iodine monobromide dissociate into and ions (X = Cl, Br); thus they are significant conductors of electricity and can be used as ionising solvents. Iodine trifluoride (IF3) is an unstable yellow solid that decomposes above −28 °C. It is thus little-known. It is difficult to produce because fluorine gas would tend to oxidise iodine all the way to the pentafluoride; reaction at low temperature with xenon difluoride is necessary. Iodine trichloride, which exists in the solid state as the planar dimer I2Cl6, is a bright yellow solid, synthesised by reacting iodine with liquid chlorine at −80 °C; caution is necessary during purification because it easily dissociates to iodine monochloride and chlorine and hence can act as a strong chlorinating agent. Liquid iodine trichloride conducts electricity, possibly indicating dissociation to and ions. 
Iodine pentafluoride (IF5), a colourless, volatile liquid, is the most thermodynamically stable iodine fluoride, and can be made by reacting iodine with fluorine gas at room temperature. It is a fluorinating agent, but is mild enough to store in glass apparatus. Again, slight electrical conductivity is present in the liquid state because of dissociation to and . The pentagonal bipyramidal iodine heptafluoride (IF7) is an extremely powerful fluorinating agent, behind only chlorine trifluoride, chlorine pentafluoride, and bromine pentafluoride among the interhalogens: it reacts with almost all the elements even at low temperatures, fluorinates Pyrex glass to form iodine(VII) oxyfluoride (IOF5), and sets carbon monoxide on fire. Iodine oxides and oxoacids Iodine oxides are the most stable of all the halogen oxides, because of the strong I–O bonds resulting from the large electronegativity difference between iodine and oxygen, and they have been known for the longest time. The stable, white, hygroscopic iodine pentoxide (I2O5) has been known since its formation in 1813 by Gay-Lussac and Davy. It is most easily made by the dehydration of iodic acid (HIO3), of which it is the anhydride. It will quickly oxidise carbon monoxide completely to carbon dioxide at room temperature, and is thus a useful reagent in determining carbon monoxide concentration. It also oxidises nitrogen oxide, ethylene, and hydrogen sulfide. It reacts with sulfur trioxide and peroxydisulfuryl difluoride (S2O6F2) to form salts of the iodyl cation, [IO2]+, and is reduced by concentrated sulfuric acid to iodosyl salts involving [IO]+. It may be fluorinated by fluorine, bromine trifluoride, sulfur tetrafluoride, or chloryl fluoride, resulting iodine pentafluoride, which also reacts with iodine pentoxide, giving iodine(V) oxyfluoride, IOF3. A few other less stable oxides are known, notably I4O9 and I2O4; their structures have not been determined, but reasonable guesses are IIII(IVO3)3 and [IO]+[IO3]− respectively. More important are the four oxoacids: hypoiodous acid (HIO), iodous acid (HIO2), iodic acid (HIO3), and periodic acid (HIO4 or H5IO6). When iodine dissolves in aqueous solution, the following reactions occur: Hypoiodous acid is unstable to disproportionation. The hypoiodite ions thus formed disproportionate immediately to give iodide and iodate: Iodous acid and iodite are even less stable and exist only as a fleeting intermediate in the oxidation of iodide to iodate, if at all. Iodates are by far the most important of these compounds, which can be made by oxidising alkali metal iodides with oxygen at 600 °C and high pressure, or by oxidising iodine with chlorates. Unlike chlorates, which disproportionate very slowly to form chloride and perchlorate, iodates are stable to disproportionation in both acidic and alkaline solutions. From these, salts of most metals can be obtained. Iodic acid is most easily made by oxidation of an aqueous iodine suspension by electrolysis or fuming nitric acid. Iodate has the weakest oxidising power of the halates, but reacts the quickest. Many periodates are known, including not only the expected tetrahedral , but also square-pyramidal , octahedral orthoperiodate , [IO3(OH)3]2−, [I2O8(OH2)]4−, and . 
They are usually made by oxidising alkaline sodium iodate electrochemically (with lead(IV) oxide as the anode) or by chlorine gas: They are thermodymically and kinetically powerful oxidising agents, quickly oxidising Mn2+ to , and cleaving glycols, α-diketones, α-ketols, α-aminoalcohols, and α-diamines. Orthoperiodate especially stabilises high oxidation states among metals because of its very high negative charge of −5. Orthoperiodic acid, H5IO6, is stable, and dehydrates at 100 °C in a vacuum to Metaperiodic acid, HIO4. Attempting to go further does not result in the nonexistent iodine heptoxide (I2O7), but rather iodine pentoxide and oxygen. Periodic acid may be protonated by sulfuric acid to give the cation, isoelectronic to Te(OH)6 and , and giving salts with bisulfate and sulfate. Polyiodine compounds When iodine dissolves in strong acids, such as fuming sulfuric acid, a bright blue paramagnetic solution including cations is formed. A solid salt of the diiodine cation may be obtained by oxidising iodine with antimony pentafluoride: The salt I2Sb2F11 is dark blue, and the blue tantalum analogue I2Ta2F11 is also known. Whereas the I–I bond length in I2 is 267 pm, that in is only 256 pm as the missing electron in the latter has been removed from an antibonding orbital, making the bond stronger and hence shorter. In fluorosulfuric acid solution, deep-blue reversibly dimerises below −60 °C, forming red rectangular diamagnetic . Other polyiodine cations are not as well-characterised, including bent dark-brown or black and centrosymmetric C2h green or black , known in the and salts among others. The only important polyiodide anion in aqueous solution is linear triiodide, . Its formation explains why the solubility of iodine in water may be increased by the addition of potassium iodide solution: Many other polyiodides may be found when solutions containing iodine and iodide crystallise, such as , , , and , whose salts with large, weakly polarising cations such as Cs+ may be isolated. Organoiodine compounds Organoiodine compounds have been fundamental in the development of organic synthesis, such as in the Hofmann elimination of amines, the Williamson ether synthesis, the Wurtz coupling reaction, and in Grignard reagents. The carbon–iodine bond is a common functional group that forms part of core organic chemistry; formally, these compounds may be thought of as organic derivatives of the iodide anion. The simplest organoiodine compounds, alkyl iodides, may be synthesised by the reaction of alcohols with phosphorus triiodide; these may then be used in nucleophilic substitution reactions, or for preparing Grignard reagents. The C–I bond is the weakest of all the carbon–halogen bonds due to the minuscule difference in electronegativity between carbon (2.55) and iodine (2.66). As such, iodide is the best leaving group among the halogens, to such an extent that many organoiodine compounds turn yellow when stored over time due to decomposition into elemental iodine; as such, they are commonly used in organic synthesis, because of the easy formation and cleavage of the C–I bond. They are also significantly denser than the other organohalogen compounds thanks to the high atomic weight of iodine. A few organic oxidising agents like the iodanes contain iodine in a higher oxidation state than −1, such as 2-iodoxybenzoic acid, a common reagent for the oxidation of alcohols to aldehydes, and iodobenzene dichloride (PhICl2), used for the selective chlorination of alkenes and alkynes. 
One of the more well-known uses of organoiodine compounds is the so-called iodoform test, where iodoform (CHI3) is produced by the exhaustive iodination of a methyl ketone (or another compound capable of being oxidised to a methyl ketone), as follows: Some drawbacks of using organoiodine compounds as compared to organochlorine or organobromine compounds is the greater expense and toxicity of the iodine derivatives, since iodine is expensive and organoiodine compounds are stronger alkylating agents. For example, iodoacetamide and iodoacetic acid denature proteins by irreversibly alkylating cysteine residues and preventing the reformation of disulfide linkages. Halogen exchange to produce iodoalkanes by the Finkelstein reaction is slightly complicated by the fact that iodide is a better leaving group than chloride or bromide. The difference is nevertheless small enough that the reaction can be driven to completion by exploiting the differential solubility of halide salts, or by using a large excess of the halide salt. In the classic Finkelstein reaction, an alkyl chloride or an alkyl bromide is converted to an alkyl iodide by treatment with a solution of sodium iodide in acetone. Sodium iodide is soluble in acetone and sodium chloride and sodium bromide are not. The reaction is driven toward products by mass action due to the precipitation of the insoluble salt. Occurrence and production Iodine is the least abundant of the stable halogens, comprising only 0.46 parts per million of Earth's crustal rocks (compare: fluorine: 544 ppm, chlorine: 126 ppm, bromine: 2.5 ppm) making it the 60th most abundant element. Iodide minerals are rare, and most deposits that are concentrated enough for economical extraction are iodate minerals instead. Examples include lautarite, Ca(IO3)2, and dietzeite, 7Ca(IO3)2·8CaCrO4. These are the minerals that occur as trace impurities in the caliche, found in Chile, whose main product is sodium nitrate. In total, they can contain at least 0.02% and at most 1% iodine by mass. Sodium iodate is extracted from the caliche and reduced to iodide by sodium bisulfite. This solution is then reacted with freshly extracted iodate, resulting in comproportionation to iodine, which may be filtered off. The caliche was the main source of iodine in the 19th century and continues to be important today, replacing kelp (which is no longer an economically viable source), but in the late 20th century brines emerged as a comparable source. The Japanese Minami Kantō gas field east of Tokyo and the American Anadarko Basin gas field in northwest Oklahoma are the two largest such sources. The brine is hotter than 60 °C from the depth of the source. The brine is first purified and acidified using sulfuric acid, then the iodide present is oxidised to iodine with chlorine. An iodine solution is produced, but is dilute and must be concentrated. Air is blown into the solution to evaporate the iodine, which is passed into an absorbing tower, where sulfur dioxide reduces the iodine. The hydrogen iodide (HI) is reacted with chlorine to precipitate the iodine. After filtering and purification the iodine is packed. These sources ensure that Chile and Japan are the largest producers of iodine today. Alternatively, the brine may be treated with silver nitrate to precipitate out iodine as silver iodide, which is then decomposed by reaction with iron to form metallic silver and a solution of iron(II) iodide. The iodine is then liberated by displacement with chlorine. 
Applications About half of all produced iodine goes into various organoiodine compounds, another 15% remains as the pure element, another 15% is used to form potassium iodide, and another 15% for other inorganic iodine compounds. Among the major uses of iodine compounds are catalysts, animal feed supplements, stabilisers, dyes, colourants and pigments, pharmaceuticals, sanitation (from tincture of iodine), and photography; minor uses include smog inhibition, cloud seeding, and various uses in analytical chemistry. X-ray imaging As an element with high electron density and atomic number, iodine efficiently absorbs X-rays. X-ray radiocontrast agents are the top application for iodine. In this application, organoiodine compounds are injected intravenously, often in conjunction with advanced X-ray techniques such as angiography and CT scanning. At present, all water-soluble radiocontrast agents rely on iodine-containing compounds. Iodine absorbs X-rays with energies less than 33.3 keV due to the photoelectric effect of the innermost electrons. Biocide Use of iodine as a biocide represents a major application of the element, ranked second by weight. Elemental iodine (I2) is used as an antiseptic in medicine. A number of water-soluble compounds, from triiodide (I3−, generated in situ by adding iodide to poorly water-soluble elemental iodine) to various iodophors, slowly decompose to release I2 when applied. Optical polarising films Thin-film-transistor liquid crystal displays rely on polarisation. The liquid crystal transistor is sandwiched between two polarising films and illuminated from behind. The two films prevent light transmission unless the transistor in the middle of the sandwich rotates the light. Iodine-impregnated polymer films are used in polarising optical components with the highest transmission and degree of polarisation. Co-catalyst Another significant use of iodine is as a cocatalyst for the production of acetic acid by the Monsanto and Cativa processes. In these technologies, hydroiodic acid converts the methanol feedstock into methyl iodide, which undergoes carbonylation. Hydrolysis of the resulting acetyl iodide regenerates hydroiodic acid and gives acetic acid. The majority of acetic acid is produced by these approaches. Nutrition Salts of iodide and iodate are used extensively in human and animal nutrition. This application reflects the status of iodide as an essential element, being required for the two thyroid hormones. The production of ethylenediamine dihydroiodide, provided as a nutritional supplement for livestock, consumes a large portion of available iodine. Iodine is a component of iodised salt. A saturated solution of potassium iodide is used to treat acute thyrotoxicosis. It is also used to block uptake of iodine-131 in the thyroid gland (see isotopes section above), when this isotope is used as part of radiopharmaceuticals (such as iobenguane) that are not targeted to the thyroid or thyroid-type tissues. Others Inorganic iodides find specialised uses. Titanium, zirconium, hafnium, and thorium are purified by the Van Arkel–de Boer process, which involves the reversible formation of the tetraiodides of these elements. Silver iodide is a major ingredient of traditional photographic film. Thousands of kilograms of silver iodide are used annually for cloud seeding to induce rain. The organoiodine compound erythrosine is an important food colouring agent. Perfluoroalkyl iodides are precursors to important surfactants, such as perfluorooctanesulfonic acid.
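For orientation, the 33.3 keV absorption edge quoted above corresponds to a photon wavelength of a few tens of picometres; the conversion below just applies λ = hc/E with the usual value of hc.

# Photon wavelength at iodine's 33.3 keV K-shell absorption edge (lambda = hc/E).
HC_EV_NM = 1239.84          # Planck constant times speed of light, in eV*nm
edge_ev = 33.3e3

wavelength_nm = HC_EV_NM / edge_ev
print(f"~{wavelength_nm * 1000:.1f} pm ({wavelength_nm:.4f} nm)")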
Radioactive iodine is used as the radiolabel in investigating which ligands go to which plant pattern recognition receptors (PRRs). An iodine-based thermochemical cycle has been evaluated for hydrogen production using energy from nuclear power. The cycle has three steps. In the first step, iodine reacts with sulfur dioxide and water to give hydrogen iodide and sulfuric acid: I2 + SO2 + 2 H2O → 2 HI + H2SO4. After a separation stage, the sulfuric acid is decomposed at higher temperature into sulfur dioxide, water, and oxygen: 2 H2SO4 → 2 SO2 + 2 H2O + O2. Hydrogen iodide is then decomposed into hydrogen and the initial element, iodine: 2 HI → I2 + H2. The yield of the cycle (the ratio between the lower heating value of the hydrogen produced and the energy consumed to produce it) is approximately 38%. At present, the cycle is not a competitive means of producing hydrogen. Spectroscopy The spectrum of the iodine molecule, I2, consists (though not exclusively) of tens of thousands of sharp spectral lines in the wavelength range 500–700 nm. It is therefore a commonly used wavelength reference (secondary standard). By measuring with a spectroscopic Doppler-free technique while focusing on one of these lines, the hyperfine structure of the iodine molecule reveals itself. A line is now resolved such that either 15 components (from even rotational quantum numbers, Jeven) or 21 components (from odd rotational quantum numbers, Jodd) are measurable. Caesium iodide and thallium-doped sodium iodide are used in crystal scintillators for the detection of gamma rays. The efficiency is high and energy dispersive spectroscopy is possible, but the resolution is rather poor. Chemical analysis The iodide and iodate anions can be used for quantitative volumetric analysis, for example in iodometry. Iodine and starch form a blue complex, and this reaction is often used to test for either starch or iodine and as an indicator in iodometry. The iodine test for starch is still used to detect counterfeit banknotes printed on starch-containing paper. The iodine value is the mass of iodine in grams that is consumed by 100 grams of a chemical substance, typically fats or oils. Iodine numbers are often used to determine the amount of unsaturation in fatty acids. This unsaturation is in the form of double bonds, which react with iodine compounds. Potassium tetraiodomercurate(II), K2HgI4, is also known as Nessler's reagent. It is often used as a sensitive spot test for ammonia. Similarly, Mayer's reagent (potassium tetraiodomercurate(II) solution) is used as a precipitating reagent to test for alkaloids. Aqueous alkaline iodine solution is used in the iodoform test for methyl ketones. Biological role Iodine is an essential element for life and, at atomic number Z = 53, is the heaviest element commonly needed by living organisms. (Lanthanum and the other lanthanides, as well as tungsten with Z = 74 and uranium with Z = 92, are used by a few microorganisms.) It is required for the synthesis of the growth-regulating thyroid hormones tetraiodothyronine and triiodothyronine (T4 and T3 respectively, named after their number of iodine atoms). A deficiency of iodine leads to decreased production of T3 and T4 and a concomitant enlargement of the thyroid tissue in an attempt to obtain more iodine, causing the disease goitre. The major form of thyroid hormone in the blood is tetraiodothyronine (T4), which has a longer half-life than triiodothyronine (T3). In humans, the ratio of T4 to T3 released into the blood is between 14:1 and 20:1.
T4 is converted to the active T3 (three to four times more potent than T4) within cells by deiodinases (5'-iodinase). These are further processed by decarboxylation and deiodination to produce iodothyronamine (T1a) and thyronamine (T0a'). All three isoforms of the deiodinases are selenium-containing enzymes; thus dietary selenium is needed for triiodothyronine and tetraiodothyronine production. Iodine accounts for 65% of the molecular weight of T4 and 59% of T3. Fifteen to 20 mg of iodine is concentrated in thyroid tissue and hormones, but 70% of all iodine in the body is found in other tissues, including mammary glands, eyes, gastric mucosa, thymus, cerebrospinal fluid, choroid plexus, arteries, the cervix, and salivary glands. During pregnancy, the placenta is able to store and accumulate iodine. In the cells of those tissues, iodine enters directly via the sodium-iodide symporter (NIS). The action of iodine in mammary tissue is related to fetal and neonatal development, but its role in the other tissues is only partially understood. Dietary recommendations and intake The daily levels of intake recommended by the United States National Academy of Medicine are between 110 and 130 μg for infants up to 12 months, 90 μg for children up to eight years, 130 μg for children up to 13 years, 150 μg for adults, 220 μg for pregnant women and 290 μg for lactating women. The Tolerable Upper Intake Level (TUIL) for adults is 1,100 μg/day. This upper limit was assessed by analysing the effect of supplementation on thyroid-stimulating hormone. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR; AI and UL are defined the same as in the United States. For women and men ages 18 and older, the PRI for iodine is set at 150 μg/day; the PRI during pregnancy and lactation is 200 μg/day. For children aged 1–17 years, the PRI increases with age from 90 to 130 μg/day. These PRIs are comparable to the U.S. RDAs with the exception of that for lactation. The thyroid gland needs 70 μg/day of iodine to synthesise the requisite daily amounts of T4 and T3. The higher recommended daily allowance levels of iodine seem necessary for optimal function of a number of body systems, including mammary glands, gastric mucosa, salivary glands, brain cells, choroid plexus, thymus, and arteries. Natural food sources of iodine include seafood, such as fish, shellfish, and seaweeds (such as kelp), as well as dairy products, eggs, and meat, so long as the animals received enough iodine, and vegetables grown on iodine-rich soil. Iodised salt is fortified with potassium iodate, a salt of iodine, potassium, and oxygen. As of 2000, the median intake of iodine from food in the United States was 240 to 300 μg/day for men and 190 to 210 μg/day for women. The general US population has adequate iodine nutrition, with lactating women and pregnant women having a mild risk of deficiency. In Japan, consumption was considered much higher, ranging between 5,280 μg/day and 13,800 μg/day, from the seaweeds wakame and kombu that are eaten both directly and in the form of umami extracts for soup stock and potato chips. However, new studies suggest that Japan's consumption is closer to 1,000–3,000 μg/day. The adult UL in Japan was last revised to 3,000 μg/day in 2015.
After iodine fortification programs such as salt iodisation have been introduced, some cases of iodine-induced hyperthyroidism have been observed (the so-called Jod-Basedow phenomenon). The condition occurs mainly in people over 40 years of age, and the risk is higher when the underlying iodine deficiency is severe and the initial rise in iodine intake is large. Deficiency In areas where there is little iodine in the diet, typically remote inland and mountainous areas where no iodine-rich foods are eaten, iodine deficiency gives rise to hypothyroidism, symptoms of which are extreme fatigue, goitre, mental slowing, depression, low weight gain, and low basal body temperatures. Iodine deficiency is the leading cause of preventable intellectual disability, a result that occurs primarily when babies or small children are rendered hypothyroidic by a lack of iodine. The addition of iodine to salt has largely eliminated this problem in wealthier areas, but iodine deficiency remains a serious public health problem in poorer areas today. Iodine deficiency is also a problem in certain areas of every continent. Information processing, fine motor skills, and visual problem solving are normalised by iodine repletion in iodine-deficient people. Precautions Toxicity Elemental iodine (I2) is toxic if taken orally undiluted. The lethal dose for an adult human is 30 mg/kg, which is about 2.1–2.4 grams for a human weighing 70 to 80 kg (although experiments on rats have shown that these animals can survive doses as high as 14,000 mg/kg). Excess iodine is more cytotoxic in the presence of selenium deficiency. Iodine supplementation in selenium-deficient populations is problematic for this reason. The toxicity derives from its oxidising properties, through which it denatures proteins (including enzymes). Elemental iodine is also a skin irritant. Solutions with high elemental iodine concentration, such as tincture of iodine and Lugol's solution, are capable of causing tissue damage if used in prolonged cleaning or antisepsis; similarly, liquid povidone-iodine (Betadine) trapped against the skin has resulted in chemical burns in some reported cases. Occupational exposure The U.S. Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for iodine exposure in the workplace at 0.1 ppm (1 mg/m3) during an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.1 ppm (1 mg/m3) during an 8-hour workday. At levels of 2 ppm, iodine is immediately dangerous to life and health. Allergic reactions Some people develop a hypersensitivity to products and foods containing iodine. Applications of tincture of iodine or Betadine can cause rashes, sometimes severe. Parenteral use of iodine-based contrast agents (see above) can cause reactions ranging from a mild rash to fatal anaphylaxis. Such reactions have led to the misconception (widely held, even among physicians) that some people are allergic to iodine itself; even allergies to iodine-rich foods have been so construed. In fact, there has never been a confirmed report of a true iodine allergy, as an allergy to iodine or iodine salts is biologically impossible. Hypersensitivity reactions to products and foods containing iodine are apparently related to their other molecular components; thus, a person who has demonstrated an allergy to one food or product containing iodine may not have an allergic reaction to another.
Patients with various food allergies (such as fish, shellfish, eggs, milk, or seaweed) do not have an increased risk of contrast medium hypersensitivity. Nevertheless, the patient's general allergy history is relevant. US DEA List I status Phosphorus reduces iodine to hydroiodic acid, an effective reagent for reducing ephedrine and pseudoephedrine to methamphetamine. For this reason, iodine was designated by the United States Drug Enforcement Administration as a List I precursor chemical under 21 CFR 1310.02.
Physical sciences
Chemical elements_2
null
14752
https://en.wikipedia.org/wiki/Iridium
Iridium
Iridium is a chemical element; it has symbol Ir and atomic number 77. A very hard, brittle, silvery-white transition metal of the platinum group, it is considered the second-densest naturally occurring metal (after osmium) with a density of as defined by experimental X-ray crystallography. 191Ir and 193Ir are the only two naturally occurring isotopes of iridium, as well as the only stable isotopes; the latter is the more abundant. It is one of the most corrosion-resistant metals, even at temperatures as high as . Iridium was discovered in 1803 in the acid-insoluble residues of platinum ores by the English chemist Smithson Tennant. The name iridium, derived from the Greek word iris (rainbow), refers to the various colors of its compounds. Iridium is one of the rarest elements in Earth's crust, with an estimated annual production of only in 2023. The dominant uses of iridium are the metal itself and its alloys, as in high-performance spark plugs, crucibles for recrystallization of semiconductors at high temperatures, and electrodes for the production of chlorine in the chloralkali process. Important compounds of iridium are chlorides and iodides in industrial catalysis. Iridium is a component of some OLEDs. Iridium is found in meteorites in much higher abundance than in the Earth's crust. For this reason, the unusually high abundance of iridium in the clay layer at the Cretaceous–Paleogene boundary gave rise to the Alvarez hypothesis that the impact of a massive extraterrestrial object caused the extinction of non-avian dinosaurs and many other species 66 million years ago, now known to be produced by the impact that formed the Chicxulub crater. Similarly, an iridium anomaly in core samples from the Pacific Ocean suggested the Eltanin impact of about 2.5 million years ago. Characteristics Physical properties A member of the platinum group metals, iridium is white, resembling platinum, but with a slight yellowish cast. Because of its hardness, brittleness, and very high melting point, solid iridium is difficult to machine, form, or work; thus powder metallurgy is commonly employed instead. It is the only metal to maintain good mechanical properties in air at temperatures above . It has the 10th highest boiling point among all elements and becomes a superconductor at temperatures below . Iridium's modulus of elasticity is the second-highest among the metals, being surpassed only by osmium. This, together with a high shear modulus and a very low figure for Poisson's ratio (the relationship of longitudinal to lateral strain), indicate the high degree of stiffness and resistance to deformation that have rendered its fabrication into useful components a matter of great difficulty. Despite these limitations and iridium's high cost, a number of applications have developed where mechanical strength is an essential factor in some of the extremely severe conditions encountered in modern technology. The measured density of iridium is only slightly lower (by about 0.12%) than that of osmium, the densest metal known. Some ambiguity occurred regarding which of the two elements was denser, due to the small size of the difference in density and difficulties in measuring it accurately, but, with increased accuracy in factors used for calculating density, X-ray crystallographic data yielded densities of for iridium and for osmium. 
Iridium is extremely brittle, to the point of being hard to weld because the heat-affected zone cracks, but it can be made more ductile by addition of small quantities of titanium and zirconium (0.2% of each apparently works well). The Vickers hardness of pure platinum is 56 HV, whereas platinum with 50% of iridium can reach over 500 HV. Chemical properties Iridium is the most corrosion-resistant metal known. It is not attacked by acids, including aqua regia, but it can be dissolved in concentrated hydrochloric acid in the presence of sodium perchlorate. In the presence of oxygen, it reacts with cyanide salts. Traditional oxidants also react, including the halogens and oxygen at higher temperatures. Iridium also reacts directly with sulfur at atmospheric pressure to yield iridium disulfide. Isotopes Iridium has two naturally occurring stable isotopes, 191Ir and 193Ir, with natural abundances of 37.3% and 62.7%, respectively. At least 37 radioisotopes have also been synthesized, ranging in mass number from 164 to 202. 192Ir, which falls between the two stable isotopes, is the most stable radioisotope, with a half-life of 73.827 days, and finds application in brachytherapy and in industrial radiography, particularly for nondestructive testing of welds in steel in the oil and gas industries; iridium-192 sources have been involved in a number of radiological accidents. Three other isotopes have half-lives of at least a day—188Ir, 189Ir, and 190Ir. Isotopes with masses below 191 decay by some combination of β+ decay, α decay, and (rare) proton emission, with the exception of 189Ir, which decays by electron capture. Synthetic isotopes heavier than 191 decay by β− decay, although 192Ir also has a minor electron capture decay path. All known isotopes of iridium were discovered between 1934 and 2008, with the most recent discoveries being 200–202Ir. At least 32 metastable isomers have been characterized, ranging in mass number from 164 to 197. The most stable of these is 192m2Ir, which decays by isomeric transition with a half-life of 241 years, making it more stable than any of iridium's synthetic isotopes in their ground states. The least stable isomer is 190m3Ir with a half-life of only 2 μs. The isotope 191Ir was the first one of any element to be shown to present a Mössbauer effect. This renders it useful for Mössbauer spectroscopy for research in physics, chemistry, biochemistry, metallurgy, and mineralogy. Chemistry Oxidation states Iridium forms compounds in oxidation states between −3 and +9, but the most common oxidation states are +1, +2, +3, and +4. Well-characterized compounds containing iridium in the +6 oxidation state include and the oxides and . iridium(VIII) oxide () was generated under matrix isolation conditions at 6 K in argon. The highest oxidation state (+9), which is also the highest recorded for any element, is found in gaseous . Binary compounds Iridium does not form binary hydrides. Only one binary oxide is well-characterized: iridium dioxide, . It is a blue black solid that adopts the fluorite structure. A sesquioxide, , has been described as a blue-black powder, which is oxidized to by . The corresponding disulfides, diselenides, sesquisulfides, and sesquiselenides are known, as well as . Binary trihalides, , are known for all of the halogens. For oxidation states +4 and above, only the tetrafluoride, pentafluoride and hexafluoride are known. Iridium hexafluoride, , is a volatile yellow solid, composed of octahedral molecules. It decomposes in water and is reduced to . 
Iridium pentafluoride is also a strong oxidant, but it is a tetramer, , formed by four corner-sharing octahedra. Complexes Iridium has extensive coordination chemistry. Iridium in its complexes is always low-spin. Ir(III) and Ir(IV) generally form octahedral complexes. Polyhydride complexes are known for the +5 and +3 oxidation states. One example is (iPr = isopropyl). The ternary hydride is believed to contain both the and the 18-electron anion. Iridium also forms oxyanions with oxidation states +4 and +5. and can be prepared from the reaction of potassium oxide or potassium superoxide with iridium at high temperatures. Such solids are not soluble in conventional solvents. Just like many elements, iridium forms important chloride complexes. Hexachloroiridic (IV) acid, , and its ammonium salt are common iridium compounds from both industrial and preparative perspectives. They are intermediates in the purification of iridium and used as precursors for most other iridium compounds, as well as in the preparation of anode coatings. The ion has an intense dark brown color, and can be readily reduced to the lighter-colored and vice versa. Iridium trichloride, , which can be obtained in anhydrous form from direct oxidation of iridium powder by chlorine at 650 °C, or in hydrated form by dissolving in hydrochloric acid, is often used as a starting material for the synthesis of other Ir(III) compounds. Another compound used as a starting material is potassium hexachloroiridate(III), . Organoiridium chemistry Organoiridium compounds contain iridium–carbon bonds. Early studies identified the very stable tetrairidium dodecacarbonyl, . In this compound, each of the iridium atoms is bonded to the other three, forming a tetrahedral cluster. The discovery of Vaska's complex () opened the door for oxidative addition reactions, a process fundamental to useful reactions. For example, Crabtree's catalyst, a homogeneous catalyst for hydrogenation reactions. Iridium complexes played a pivotal role in the development of Carbon–hydrogen bond activation (C–H activation), which promises to allow functionalization of hydrocarbons, which are traditionally regarded as unreactive. History Platinum group The discovery of iridium is intertwined with that of platinum and the other metals of the platinum group. The first European reference to platinum appears in 1557 in the writings of the Italian humanist Julius Caesar Scaliger as a description of an unknown noble metal found between Darién and Mexico, "which no fire nor any Spanish artifice has yet been able to liquefy". From their first encounters with platinum, the Spanish generally saw the metal as a kind of impurity in gold, and it was treated as such. It was often simply thrown away, and there was an official decree forbidding the adulteration of gold with platinum impurities. In 1735, Antonio de Ulloa and Jorge Juan y Santacilia saw Native Americans mining platinum while the Spaniards were travelling through Colombia and Peru for eight years. Ulloa and Juan found mines with the whitish metal nuggets and took them home to Spain. Ulloa returned to Spain and established the first mineralogy lab in Spain and was the first to systematically study platinum, which was in 1748. His historical account of the expedition included a description of platinum as being neither separable nor calcinable. Ulloa also anticipated the discovery of platinum mines. After publishing the report in 1748, Ulloa did not continue to investigate the new metal. 
In 1758, he was sent to superintend mercury mining operations in Huancavelica. In 1741, Charles Wood, a British metallurgist, found various samples of Colombian platinum in Jamaica, which he sent to William Brownrigg for further investigation. In 1750, after studying the platinum sent to him by Wood, Brownrigg presented a detailed account of the metal to the Royal Society, stating that he had seen no mention of it in any previous accounts of known minerals. Brownrigg also made note of platinum's extremely high melting point and refractory metal-like behaviour toward borax. Other chemists across Europe soon began studying platinum, including Andreas Sigismund Marggraf, Torbern Bergman, Jöns Jakob Berzelius, William Lewis, and Pierre Macquer. In 1752, Henrik Scheffer published a detailed scientific description of the metal, which he referred to as "white gold", including an account of how he succeeded in fusing platinum ore with the aid of arsenic. Scheffer described platinum as being less pliable than gold, but with similar resistance to corrosion. Discovery Chemists who studied platinum dissolved it in aqua regia (a mixture of hydrochloric and nitric acids) to create soluble salts. They always observed a small amount of a dark, insoluble residue. Joseph Louis Proust thought that the residue was graphite. The French chemists Victor Collet-Descotils, Antoine François, comte de Fourcroy, and Louis Nicolas Vauquelin also observed the black residue in 1803, but did not obtain enough for further experiments. In 1803 British scientist Smithson Tennant (1761–1815) analyzed the insoluble residue and concluded that it must contain a new metal. Vauquelin treated the powder alternately with alkali and acids and obtained a volatile new oxide, which he believed to be of this new metal—which he named ptene, from the Greek word ptēnós, "winged". Tennant, who had the advantage of a much greater amount of residue, continued his research and identified the two previously undiscovered elements in the black residue, iridium and osmium. He obtained dark red crystals (probably of ]·n) by a sequence of reactions with sodium hydroxide and hydrochloric acid. He named iridium after Iris (), the Greek winged goddess of the rainbow and the messenger of the Olympian gods, because many of the salts he obtained were strongly colored. Discovery of the new elements was documented in a letter to the Royal Society on June 21, 1804. Metalworking and applications British scientist John George Children was the first to melt a sample of iridium in 1813 with the aid of "the greatest galvanic battery that has ever been constructed" (at that time). The first to obtain high-purity iridium was Robert Hare in 1842. He found it had a density of around and noted the metal is nearly immalleable and very hard. The first melting in appreciable quantity was done by Henri Sainte-Claire Deville and Jules Henri Debray in 1860. They required burning more than of pure and gas for each of iridium. These extreme difficulties in melting the metal limited the possibilities for handling iridium. John Isaac Hawkins was looking to obtain a fine and hard point for fountain pen nibs, and in 1834 managed to create an iridium-pointed gold pen. In 1880, John Holland and William Lofland Dudley were able to melt iridium by adding phosphorus and patented the process in the United States; British company Johnson Matthey later stated they had been using a similar process since 1837 and had already presented fused iridium at a number of World Fairs. 
The first use of an alloy of iridium with ruthenium in thermocouples was made by Otto Feussner in 1933. These allowed for the measurement of high temperatures in air up to . In Munich, Germany in 1957 Rudolf Mössbauer, in what has been called one of the "landmark experiments in twentieth-century physics", discovered the resonant and recoil-free emission and absorption of gamma rays by atoms in a solid metal sample containing only 191Ir. This phenomenon, known as the Mössbauer effect resulted in the awarding of the Nobel Prize in Physics in 1961, at the age 32, just three years after he published his discovery. Occurrence Along with many elements having atomic weights higher than that of iron, iridium is only naturally formed by the r-process (rapid neutron capture) in neutron star mergers and possibly rare types of supernovae. Iridium is one of the nine least abundant stable elements in Earth's crust, having an average mass fraction of 0.001 ppm in crustal rock; gold is 4 times more abundant, platinum is 10 times more abundant, silver and mercury are 80 times more abundant. Osmium, tellurium, ruthenium, rhodium and rhenium are about as abundant as iridium. In contrast to its low abundance in crustal rock, iridium is relatively common in meteorites, with concentrations of 0.5 ppm or more. The overall concentration of iridium on Earth is thought to be much higher than what is observed in crustal rocks, but because of the density and siderophilic ("iron-loving") character of iridium, it descended below the crust and into Earth's core when the planet was still molten. Iridium is found in nature as an uncombined element or in natural alloys, especially the iridium–osmium alloys osmiridium (osmium-rich) and iridosmium (iridium-rich). In nickel and copper deposits, the platinum group metals occur as sulfides, tellurides, antimonides, and arsenides. In all of these compounds, platinum can be exchanged with a small amount of iridium or osmium. As with all of the platinum group metals, iridium can be found naturally in alloys with raw nickel or raw copper. A number of iridium-dominant minerals, with iridium as the species-forming element, are known. They are exceedingly rare and often represent the iridium analogues of the above-given ones. The examples are irarsite and cuproiridsite, to mention some. Within Earth's crust, iridium is found at highest concentrations in three types of geologic structure: igneous deposits (crustal intrusions from below), impact craters, and deposits reworked from one of the former structures. The largest known primary reserves are in the Bushveld igneous complex in South Africa, (near the largest known impact structure, the Vredefort impact structure) though the large copper–nickel deposits near Norilsk in Russia, and the Sudbury Basin (also an impact crater) in Canada are also significant sources of iridium. Smaller reserves are found in the United States. Iridium is also found in secondary deposits, combined with platinum and other platinum group metals in alluvial deposits. The alluvial deposits used by pre-Columbian people in the Chocó Department of Colombia are still a source for platinum-group metals. As of 2003, world reserves have not been estimated. Marine oceanography Iridium is found within marine organisms, sediments, and the water column. The abundance of iridium in seawater and organisms is relatively low, as it does not readily form chloride complexes. 
The abundance in organisms is about 20 parts per trillion, or about five orders of magnitude less than in sedimentary rocks at the Cretaceous–Paleogene (K–T) boundary. The concentration of iridium in seawater and marine sediment is sensitive to marine oxygenation, seawater temperature, and various geological and biological processes. Iridium in sediments can come from cosmic dust, volcanoes, precipitation from seawater, microbial processes, or hydrothermal vents, and its abundance can be strongly indicative of the source. It tends to associate with other ferrous metals in manganese nodules. Iridium is one of the characteristic elements of extraterrestrial rocks, and, along with osmium, can be used as a tracer element for meteoritic material in sediment. For example, core samples from the Pacific Ocean with elevated iridium levels suggested the Eltanin impact of about 2.5 million years ago. Some of the mass extinctions, such as the Cretaceous extinction, can be identified by anomalously high concentrations of iridium in sediment, and these can be linked to major asteroid impacts. Cretaceous–Paleogene boundary presence The Cretaceous–Paleogene boundary of 66 million years ago, marking the temporal border between the Cretaceous and Paleogene periods of geological time, was identified by a thin stratum of iridium-rich clay. A team led by Luis Alvarez proposed in 1980 an extraterrestrial origin for this iridium, attributing it to an asteroid or comet impact. Their theory, known as the Alvarez hypothesis, is now widely accepted to explain the extinction of the non-avian dinosaurs. A large buried impact crater structure with an estimated age of about 66 million years was later identified under what is now the Yucatán Peninsula (the Chicxulub crater). Dewey M. McLean and others argue that the iridium may have been of volcanic origin instead, because Earth's core is rich in iridium, and active volcanoes such as Piton de la Fournaise, in the island of Réunion, are still releasing iridium. Production Worldwide production of iridium was about in 2018. The price is high and varying (see table). Illustrative factors that affect the price include oversupply of Ir crucibles and changes in LED technology. Platinum metals occur together as dilute ores. Iridium is one of the rarer platinum metals: for every 190 tonnes of platinum obtained from ores, only 7.5 tonnes of iridium is isolated. To separate the metals, they must first be brought into solution. Two methods for rendering Ir-containing ores soluble are (i) fusion of the solid with sodium peroxide followed by extraction of the resulting glass in aqua regia and (ii) extraction of the solid with a mixture of chlorine with hydrochloric acid. From soluble extracts, iridium is separated by precipitating solid ammonium hexachloroiridate () or by extracting with organic amines. The first method is similar to the procedure Tennant and Wollaston used for their original separation. The second method can be planned as continuous liquid–liquid extraction and is therefore more suitable for industrial scale production. In either case, the product, an iridium chloride salt, is reduced with hydrogen, yielding the metal as a powder or sponge, which is amenable to powder metallurgy techniques. Iridium is also obtained commercially as a by-product from nickel and copper mining and processing. 
During electrorefining of copper and nickel, noble metals such as silver, gold, and the platinum group metals, as well as selenium and tellurium, settle to the bottom of the cell as anode mud, which forms the starting point for their extraction. Applications Due to iridium's resistance to corrosion, it has a number of industrial applications. The main areas of use are electrodes for producing chlorine and other corrosive products, OLEDs, crucibles, catalysts (e.g. for acetic acid), and ignition tips for spark plugs. Metal and alloys Resistance to heat and corrosion are the bases for several uses of iridium and its alloys. Owing to its high melting point, hardness, and corrosion resistance, iridium is used to make crucibles. Such crucibles are used in the Czochralski process to produce oxide single-crystals (such as sapphires) for use in computer memory devices and in solid state lasers. The crystals, such as gadolinium gallium garnet and yttrium gallium garnet, are grown by melting pre-sintered charges of mixed oxides under oxidizing conditions at temperatures up to . Certain long-life aircraft engine parts are made of an iridium alloy, and an iridium–titanium alloy is used for deep-water pipes because of its corrosion resistance. Iridium is used for multi-pored spinnerets, through which a plastic polymer melt is extruded to form fibers, such as rayon. Osmium–iridium is used for compass bearings and for balances. Because of their resistance to arc erosion, iridium alloys are used by some manufacturers for the centre electrodes of spark plugs, and iridium-based spark plugs are particularly used in aviation. Catalysis Iridium compounds are used as catalysts in the Cativa process for carbonylation of methanol to produce acetic acid. Iridium complexes are often active catalysts for asymmetric hydrogenation, both by traditional hydrogenation and by transfer hydrogenation. This property is the basis of the industrial route to the chiral herbicide (S)-metolachlor. As practiced by Syngenta on the scale of 10,000 tons/year, the process uses the complex [Ir(COD)Cl]2 in the presence of Josiphos ligands. Medical imaging The radioisotope iridium-192 is one of the two most important sources of energy for use in industrial γ-radiography for non-destructive testing of metals. Additionally, it is used as a source of gamma radiation for the treatment of cancer using brachytherapy, a form of radiotherapy where a sealed radioactive source is placed inside or next to the area requiring treatment. Specific treatments include high-dose-rate prostate brachytherapy, biliary duct brachytherapy, and intracavitary cervix brachytherapy. Iridium-192 is normally produced by neutron activation of the isotope iridium-191 in natural-abundance iridium metal. Photocatalysis and OLEDs Iridium complexes are key components of white OLEDs. Similar complexes are used in photocatalysis. Scientific An alloy of 90% platinum and 10% iridium was used in 1889 to construct the International Prototype Meter and kilogram mass, kept by the International Bureau of Weights and Measures near Paris. The meter bar was replaced as the definition of the fundamental unit of length in 1960 by a line in the atomic spectrum of krypton, but the kilogram prototype remained the international standard of mass until 20 May 2019, when the kilogram was redefined in terms of the Planck constant. Historical Iridium–osmium alloys were used in fountain pen nib tips. The first major use of iridium was in 1834 in nibs mounted on gold.
Starting in 1944, the Parker 51 fountain pen was fitted with a nib tipped by a ruthenium and iridium alloy (with 3.8% iridium). The tip material in modern fountain pens is still conventionally called "iridium", although there is seldom any iridium in it; other metals such as ruthenium, osmium, and tungsten have taken its place. An iridium–platinum alloy was used for the touch holes or vent pieces of cannon. According to a report of the Paris Exhibition of 1867, one of the pieces being exhibited by Johnson and Matthey "has been used in a Whitworth gun for more than 3000 rounds, and scarcely shows signs of wear yet. Those who know the constant trouble and expense which are occasioned by the wearing of the vent-pieces of cannon when in active service, will appreciate this important adaptation". The pigment iridium black, which consists of very finely divided iridium, is used for painting porcelain an intense black; it was said that "all other porcelain black colors appear grey by the side of it". Precautions and hazards Iridium in bulk metallic form is not biologically important or hazardous to health due to its lack of reactivity with tissues; there are only about 20 parts per trillion of iridium in human tissue. Like most metals, finely divided iridium powder can be hazardous to handle, as it is an irritant and may ignite in air. Iridium is relatively unhazardous otherwise, with the only effect of Iridium ingestion being irritation of the digestive tract. However, soluble salts, such as the iridium halides, could be hazardous due to elements other than iridium or due to iridium itself. At the same time, most iridium compounds are insoluble, which makes absorption into the body difficult. A radioisotope of iridium, , is dangerous, like other radioactive isotopes. The only reported injuries related to iridium concern accidental exposure to radiation from used in brachytherapy. High-energy gamma radiation from can increase the risk of cancer. External exposure can cause burns, radiation poisoning, and death. Ingestion of 192Ir can burn the linings of the stomach and the intestines. 192Ir, 192mIr, and 194mIr tend to deposit in the liver, and can pose health hazards from both gamma and beta radiation.
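Because iridium-192 decays with the 73.827-day half-life noted earlier, the activity of a sealed radiography or brachytherapy source falls appreciably over a period of months. A minimal sketch of the decay arithmetic, with illustrative time points only:

def fraction_remaining(days, half_life_days=73.827):
    """Fraction of an Ir-192 sample (and hence of its activity) left after a given number of days."""
    return 2.0 ** (-days / half_life_days)

for days in (30, 73.827, 180, 365):
    print(f"after {days} days: {fraction_remaining(days):.1%} remaining")
# roughly 75% after a month, 50% after one half-life, 18% after six months, and 3% after a year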
Physical sciences
Chemical elements_2
null
14773
https://en.wikipedia.org/wiki/Information%20theory
Information theory
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security. Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence. The theory has also found applications in other areas, including statistical inference, cryptography, neurobiology, perception, signal processing, linguistics, the evolution and function of molecular codes (bioinformatics), thermal physics, molecular dynamics, black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection, pattern recognition, anomaly detection, the analysis of music, art creation, imaging system design, study of outer space, the dimensionality of space, and epistemology. Overview Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent. Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms (both codes and ciphers). 
Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis, such as the unit ban. Historical background The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor. He came to be known as the "father of information theory". Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush. Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as , where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which since has sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German second world war Enigma ciphers. Much of the mathematics behind information theory with events of different probabilities were developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory. In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion: "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." With it came the ideas of: the information entropy and redundancy of a source, and its relevance through the source coding theorem; the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as the bit—a new way of seeing the most fundamental unit of information. Quantities of information Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits. 
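For reference, the early relations of Nyquist and Hartley described in the historical background above can be written compactly (a sketch using the symbols defined there; logarithm bases follow the conventions of the respective papers):

W = K \log m        (Nyquist, 1924)
H = \log S^n = n \log S        (Hartley, 1928)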
Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of measure of information in a single random variable. Another useful concept is mutual information defined on two random variables, which describes the measure of information in common between those variables, which can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm. In what follows, an expression of the form is considered by convention to be equal to zero whenever . This is justified because for any logarithmic base. Entropy of an information source Based on the probability mass function of each source symbol to be communicated, the Shannon entropy , in units of bits (per symbol), is given by where is the probability of occurrence of the -th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base , where is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol. Intuitively, the entropy of a discrete random variable is a measure of the amount of uncertainty associated with the value of when only its distribution is known. The entropy of a source that emits a sequence of symbols that are independent and identically distributed (iid) is bits (per message of symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length will be less than . If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If is the set of all messages that could be, and is the probability of some , then the entropy, , of is defined: (Here, is the self-information, which is the entropy contribution of an individual message, and is the expected value.) 
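The entropy just described, H = −Σ p_i log2 p_i in bits per symbol, is straightforward to evaluate numerically. The short sketch below, with purely illustrative distributions, reproduces the fair-coin and fair-die comparison made earlier and shows the limiting case of a perfectly predictable source:

import math

def shannon_entropy(probabilities, base=2.0):
    """Shannon entropy of a discrete distribution, in bits by default.

    Terms with p == 0 contribute nothing, matching the convention 0 log 0 = 0.
    """
    return -sum(p * math.log(p, base) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))        # fair coin: 1.0 bit
print(shannon_entropy([1/6] * 6))         # fair six-sided die: ~2.585 bits
print(shannon_entropy([0.9, 0.1]))        # biased coin: ~0.469 bits
print(shannon_entropy([1.0, 0.0]))        # outcome known in advance: 0 bits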
A property of entropy is that it is maximized when all the messages in the message space are equiprobable ; i.e., most unpredictable, in which case . The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit: Joint entropy The of two discrete random variables and is merely the entropy of their pairing: . This implies that if and are independent, then their joint entropy is the sum of their individual entropies. For example, if represents the position of a chess piece— the row and the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece. Despite similar notation, joint entropy should not be confused with . Conditional entropy (equivocation) The or conditional uncertainty of given random variable (also called the equivocation of about ) is the average conditional entropy over : Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that: Mutual information (transinformation) Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of relative to is given by: where (Specific mutual Information) is the pointwise mutual information. A basic property of the mutual information is that That is, knowing Y, we can save an average of bits in encoding X compared to not knowing Y. Mutual information is symmetric: Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X: In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution: Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution. Kullback–Leibler divergence (information gain) The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution , and an arbitrary probability distribution . If we compress data in a manner that assumes is the distribution underlying some data, when, in reality, is the correct distribution, the Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression. It is thus defined Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric). 
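For reference, the quantities discussed in this section have the following standard definitions for discrete random variables X and Y with joint distribution p(x, y) (a compact sketch in the usual notation):

H(X, Y) = -\sum_{x, y} p(x, y) \log p(x, y)
H(X \mid Y) = -\sum_{x, y} p(x, y) \log p(x \mid y) = H(X, Y) - H(Y)
I(X; Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)} = H(X) - H(X \mid Y)
D_{KL}(p \| q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}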
Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution . If Alice knows the true distribution , while Bob believes (has a prior) that the distribution is , then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him. Directed Information Directed information, , is an information theory measure that quantifies the information flow from the random process to the random process . The term directed information was coined by James Massey and is defined as , where is the conditional mutual information . In contrast to mutual information, directed information is not symmetric. The measures the information bits that are transmitted causally from to . The Directed information has many applications in problems where causality plays an important role such as capacity of channel with feedback, capacity of discrete memoryless networks with feedback, gambling with causal side information, compression with causal side information, real-time control communication settings, and in statistical physics. Other quantities Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision. Coding theory Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. Data compression (source coding): There are two formulations for the compression problem: lossless data compression: the data must be reconstructed exactly; lossy data compression: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory. Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. 
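Returning to the Kullback–Leibler divergence discussed above: because it is not symmetric, swapping the "true" and assumed distributions generally gives a different value. The sketch below, using two arbitrary illustrative distributions, shows both the asymmetry and the "extra bits per datum" reading:

import math

def kl_divergence(p, q, base=2.0):
    """D(p || q): expected extra bits per symbol when data from p are encoded with a code optimised for q.

    Assumes q[i] > 0 wherever p[i] > 0; otherwise the divergence is infinite.
    """
    return sum(pi * math.log(pi / qi, base) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]      # "true" source distribution (illustrative)
q = [1/3, 1/3, 1/3]      # assumed (uniform) model

print(kl_divergence(p, q))   # ~0.428 bits per symbol wasted by coding for q instead of p
print(kl_divergence(q, p))   # ~0.468 bits: a different number, so the divergence is not symmetric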
Source theory Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory. Rate Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is r = lim_{n→∞} H(X_n | X_{n−1}, X_{n−2}, ..., X_1); that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is r = lim_{n→∞} (1/n) H(X_1, X_2, ..., X_n); that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result, and either can be taken as the definition of the information rate. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding. Channel capacity Communication over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. Consider the communications process over a discrete channel. A simple model of the process is as follows: X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by C = max_f I(X; Y), where the maximum is taken over all possible input distributions f. This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error. Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity. Capacity of particular channel models A continuous-time analog communications channel subject to Gaussian noise—see Shannon–Hartley theorem. A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − Hb(p) bits per channel use, where Hb is the binary entropy function to the base-2 logarithm, Hb(p) = −p log2 p − (1 − p) log2(1 − p). A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel.
A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is 1 − p bits per channel use. Channels with memory and directed information In practice many channels have memory. Namely, at time i the channel is given by the conditional probability P(y_i | x_i, x_{i−1}, ..., x_1, y_{i−1}, ..., y_1). It is often more convenient to use the notation x^i = (x_i, x_{i−1}, ..., x_1), so that the channel becomes P(y_i | x^i, y^{i−1}). In such a case the capacity is given by the mutual information rate when there is no feedback available, and by the directed information rate whether or not feedback is available (if there is no feedback, the directed information rate equals the mutual information rate). Fungible information Fungible information is the information for which the means of encoding is not important. Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information. Applications to other fields Intelligence uses and secrecy applications Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability. Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time. Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material. Pseudorandom number generation Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. 
Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor, and thus not necessarily suitable for cryptographic uses. Seismic exploration One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods. Semiotics Semioticians Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing." Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones. Integrated process organization of neural information Quantitative information theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience. In this context, either an information-theoretical measure is defined on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations (as in Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH), or Tononi's integrated information theory (IIT) of consciousness), or the minimization of free energy is measured on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis). Miscellaneous applications Information theory also has applications in the search for extraterrestrial intelligence, black holes, bioinformatics, and gambling.
Mathematics
Other
null
14775
https://en.wikipedia.org/wiki/Inch
Inch
The inch (symbol: in or ″) is a unit of length in the British Imperial and the United States customary systems of measurement. It is equal to 1/36 yard or 1/12 of a foot. Derived from the Roman uncia ("twelfth"), the word inch is also sometimes used to translate similar units in other measurement systems, usually understood as deriving from the width of the human thumb. Standards for the exact length of an inch have varied in the past, but since the adoption of the international yard during the 1950s and 1960s the inch has been based on the metric system and defined as exactly 25.4 mm. Name The English word "inch" was an early borrowing from Latin uncia ("one-twelfth; Roman inch; Roman ounce"). The vowel change from Latin u to Old English y (which became Modern English i) is known as umlaut. The consonant change from the Latin k sound (spelled c) to the English ch sound is palatalisation. Both were features of Old English phonology. "Inch" is cognate with "ounce", whose separate pronunciation and spelling reflect its reborrowing in Middle English from Anglo-Norman unce and ounce. In many other European languages, the word for "inch" is the same as or derived from the word for "thumb", as a man's thumb is about an inch wide (and this was even sometimes used to define the inch). In the Dutch language a term for inch is engelse duim ("English thumb"). Other examples include Dutch duim ("thumb") and Swedish tum ("inch") alongside tumme ("thumb"). Usage Imperial or hybrid countries The inch is a commonly used customary unit of length in the United States, Canada, and the United Kingdom. For the United Kingdom, guidance on public sector use states that, since 1 October 1995, without time limit, the inch (along with the foot) is to be used as a primary unit for road signs and related measurements of distance (with the possible exception of clearance heights and widths) and may continue to be used as a secondary or supplementary indication following a metric measurement for other purposes. Worldwide Inches are used for display screens (e.g. televisions and computer monitors) worldwide. The inch is the official Japanese standard for electronic parts, especially display screens, and is the industry standard throughout continental Europe for display screens (Germany being one of few countries to supplement it with centimetres in most stores). Inches are commonly used to specify the diameter of vehicle wheel rims, and the corresponding inner diameter of tyres in tyre codes. SI countries Both inch-based and millimeter-based hex keys are widely available for sale in Europe. Technical details The international standard symbol for inch is in (see ISO 31-1, Annex A) but traditionally the inch is denoted by a double prime, which is often approximated by a double quote symbol, and the foot by a prime, which is often approximated by an apostrophe. For example, three feet two inches can be written as 3′ 2″. (This is akin to how the first and second "cuts" of the hour are likewise indicated by prime and double prime symbols, and also the first and second cuts of the degree.) Subdivisions of an inch are typically written using dyadic fractions with odd number numerators; for example, two and three-eighths of an inch would be written as 2 3/8″ and not as 2.375″ nor as 2 6/16″. However, for engineering purposes fractions are commonly given to three or four places of decimals and have been for many years. 
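The dyadic-fraction convention and the exact metric definition can be checked with a few lines of Python; the measurement 2 3/8 inches used here is simply an illustrative value.

```python
from fractions import Fraction

MM_PER_INCH = Fraction(254, 10)   # the inch is defined as exactly 25.4 mm

# An illustrative dyadic subdivision: two and three-eighths of an inch.
length_in = 2 + Fraction(3, 8)

print(f"2 3/8 in as a decimal:   {float(length_in):.3f} in")                 # 2.375 in
print(f"2 3/8 in in millimetres: {float(length_in * MM_PER_INCH):.3f} mm")   # 60.325 mm
```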
Equivalents One international inch is equal to: 2.54 centimetres exactly; 25.4 millimetres exactly; 1/12 foot; 1/36 yard; 1,000 thou or mil; 72 PostScript points; 6 computer picas; 3 barleycorns; about 0.999998 US Survey inches; 1/3 palm; and 1/4 hand. History The earliest known reference to the inch in England is from the Laws of Æthelberht dating to the early 7th century, surviving in a single manuscript, the Textus Roffensis from 1120. Paragraph LXVII sets out the fine for wounds of various depths: one inch, one shilling; two inches, two shillings, etc. An Anglo-Saxon unit of length was the barleycorn. After 1066, 1 inch was equal to 3 barleycorns, which continued to be its legal definition for several centuries, with the barleycorn being the base unit. One of the earliest such definitions is that of 1324, where the legal definition of the inch was set out in a statute of Edward II of England, defining it as "three grains of barley, dry and round, placed end to end, lengthwise". Similar definitions are recorded in both English and Welsh medieval law tracts. One, dating from the first half of the 10th century, is contained in the Laws of Hywel Dda which superseded those of Dyfnwal, an even earlier definition of the inch in Wales. Both definitions, as recorded in Ancient Laws and Institutes of Wales (vol i., pp. 184, 187, 189), are that "three lengths of a barleycorn is the inch". King David I of Scotland in his Assize of Weights and Measures (c. 1150) is said to have defined the Scottish inch as the width of an average man's thumb at the base of the nail, even including the requirement to calculate the average of a small, a medium, and a large man's measures. However, the oldest surviving manuscripts date from the early 14th century and appear to have been altered with the inclusion of newer material. In 1814, Charles Butler, a mathematics teacher at Cheam School, recorded the old legal definition of the inch to be "three grains of sound ripe barley being taken out the middle of the ear, well dried, and laid end to end in a row", and placed the barleycorn, not the inch, as the base unit of the English Long Measure system, from which all other units were derived. John Bouvier similarly recorded in his 1843 law dictionary that the barleycorn was the fundamental measure. Butler observed, however, that "[a]s the length of the barley-corn cannot be fixed, so the inch according to this method will be uncertain", noting that a standard inch measure was now [i.e. by 1843] kept in the Exchequer chamber, Guildhall, and that was the legal definition of the inch. This was a point also made by George Long in his 1842 Penny Cyclopædia, observing that standard measures had since surpassed the barleycorn definition of the inch, and that to recover the inch measure from its original definition, in case the standard measure were destroyed, would involve the measurement of large numbers of barleycorns and taking their average lengths. He noted that this process would not perfectly recover the standard, since it might introduce errors of anywhere between one hundredth and one tenth of an inch in the definition of a yard. Before the adoption of the international yard and pound, various definitions were in use. In the United Kingdom and most countries of the British Commonwealth, the inch was defined in terms of the Imperial Standard Yard. The United States adopted the conversion factor 1 metre = 39.37 inches by an act in 1866. 
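A quick calculation shows what the 1866 conversion factor implies for the length of the US inch, anticipating the figures quoted below; the numbers are derived directly from 1 metre = 39.37 inches and the later 25.4 mm definition.

```python
# US definition after the 1866 act: 1 metre = 39.37 inches exactly.
us_inch_mm = 1000 / 39.37          # millimetres per US inch
international_inch_mm = 25.4       # the later international definition

print(f"US inch: {us_inch_mm:.7f} mm")                                   # ~25.4000508 mm
print(f"Difference from 25.4 mm: {us_inch_mm - international_inch_mm:.7f} mm")
print(f"Relative difference: {(us_inch_mm / international_inch_mm - 1) * 1e6:.1f} ppm")  # ~2 ppm
```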
In 1893, Mendenhall ordered the physical realization of the inch to be based on the international prototype metres numbers 21 and 27, which had been received from the CGPM, together with the previously adopted conversion factor. As a result of the definitions above, the U.S. inch was effectively defined as 25.4000508 mm (with a reference temperature of 68 degrees Fahrenheit) and the UK inch at 25.399977 mm (with a reference temperature of 62 degrees Fahrenheit). When Carl Edvard Johansson started manufacturing gauge blocks in inch sizes in 1912, Johansson's compromise was to manufacture gauge blocks with a nominal size of 25.4 mm, with a reference temperature of 20 degrees Celsius, accurate to within a few parts per million of both official definitions. Because Johansson's blocks were so popular, his blocks became the de facto standard for manufacturers internationally, with other manufacturers of gauge blocks following Johansson's definition by producing blocks designed to be equivalent to his. In 1930, the British Standards Institution adopted an inch of exactly 25.4 mm. The American Standards Association followed suit in 1933. By 1935, industry in 16 countries had adopted the "industrial inch" as it came to be known, effectively endorsing Johansson's pragmatic choice of conversion ratio. In 1946, the Commonwealth Science Congress recommended a yard of exactly 0.9144 metres for adoption throughout the British Commonwealth. This was adopted by Canada in 1951; the United States on 1 July 1959; Australia in 1961, effective 1 January 1964; and the United Kingdom in 1963, effective on 1 January 1964. The new standards gave an inch of exactly 25.4 mm, 1.7 millionths of an inch longer than the old imperial inch and 2 millionths of an inch shorter than the old US inch. Related units US survey inches The United States retained the 1/39.37-metre definition (1 metre = 39.37 inches) for surveying, producing a 2 millionth part difference between standard and US survey inches. This is approximately 1/8 inch per mile; 12.7 kilometres is exactly 500,000 standard inches and exactly 499,999 survey inches. This difference is substantial when doing calculations in State Plane Coordinate Systems with coordinate values in the hundreds of thousands or millions of feet. In 2020, the National Institute of Standards and Technology announced that the U.S. survey foot would "be phased out" on 1 January 2023 and be superseded by the international foot, equal to 0.3048 metres exactly, for all further applications. This implies that the survey inch was replaced by the international inch. Continental inches Before the adoption of the metric system, several European countries had customary units whose name translates into "inch". The French pouce measured roughly 27.0 mm, at least when applied to describe the calibre of artillery pieces. The Amsterdam foot (voet) consisted of 11 Amsterdam inches (duim). The Amsterdam foot is about 8% shorter than an English foot. Scottish inch The now obsolete Scottish inch, 1/12 of a Scottish foot, was about 1.0016 imperial inches (about 25.44 mm).
Physical sciences
Length and distance
null
14783
https://en.wikipedia.org/wiki/Erectile%20dysfunction
Erectile dysfunction
Erectile dysfunction (ED), also referred to as impotence, is a form of sexual dysfunction in males characterized by the persistent or recurring inability to achieve or maintain a penile erection with sufficient rigidity and duration for satisfactory sexual activity. It is the most common sexual problem in males and can cause psychological distress due to its impact on self-image and sexual relationships. The majority of ED cases are attributed to physical risk factors and predictive factors. These factors can be categorized as vascular, neurological, local penile, hormonal, and drug-induced. Notable predictors of ED include aging, cardiovascular disease, diabetes mellitus, high blood pressure, obesity, abnormal lipid levels in the blood, hypogonadism, smoking, depression, and medication use. Approximately 10% of cases are linked to psychosocial factors, encompassing conditions like depression, stress, and problems within relationships. The term erectile dysfunction does not encompass other erection-related disorders, such as priapism. Treatment of ED encompasses addressing the underlying causes, lifestyle modification, and addressing psychosocial issues. In many instances, medication-based therapies are used, specifically PDE5 inhibitors like sildenafil. These drugs function by dilating blood vessels, facilitating increased blood flow into the spongy tissue of the penis, analogous to opening a valve wider to enhance water flow in a fire hose. Less frequently employed treatments encompass prostaglandin pellets inserted into the urethra, the injection of smooth-muscle relaxants and vasodilators directly into the penis, penile implants, the use of penis pumps, and vascular surgery. ED is reported in 18% of males aged 50 to 59 years, and 37% in males aged 70 to 75. Signs and symptoms ED is characterized by the persistent or recurring inability to achieve or maintain an erection of the penis with sufficient rigidity and duration for satisfactory sexual activity. It is defined as the "persistent or recurrent inability to achieve and maintain a penile erection of sufficient rigidity to permit satisfactory sexual activity for at least 3 months." Psychological impact ED often has an impact on the emotional well-being of both males and their partners. Many males do not seek treatment due to feelings of embarrassment. About 75% of diagnosed cases of ED go untreated. Causes Causes of or contributors to ED include the following: Diets high in saturated fat are linked to heart diseases, and males with heart diseases are more likely to experience ED. By contrast, plant-based diets show a lower risk for ED. 
Prescription drugs (e.g., SSRIs, beta blockers, antihistamines, alpha-2 adrenergic receptor agonists, thiazides, hormone modulators, and 5α-reductase inhibitors) Neurogenic disorders (e.g., diabetic neuropathy, temporal lobe epilepsy, multiple sclerosis, Parkinson's disease, multiple system atrophy) Cavernosal disorders (e.g., Peyronie's disease) Hyperprolactinemia (e.g., due to a prolactinoma) Psychological causes: performance anxiety, stress, and mental disorders Surgery (e.g., radical prostatectomy) Ageing: after age 40 years, ageing itself is a risk factor for ED, although numerous other pathologies that may occur with ageing, such as testosterone deficiency, cardiovascular diseases, or diabetes, among others, appear to have interacting effects Kidney disease: ED and chronic kidney disease have pathological mechanisms in common, including vascular and hormonal dysfunction, and may share other comorbidities, such as hypertension and diabetes mellitus that can contribute to ED Lifestyle habits, particularly smoking, which is a key risk factor for ED as it promotes arterial narrowing. Due to its propensity for causing detumescence and erectile dysfunction, some studies have described tobacco as an anaphrodisiacal substance. COVID-19: preliminary research indicates that COVID-19 viral infection may affect sexual and reproductive health. Surgical intervention for a number of conditions may remove anatomical structures necessary to erection, damage nerves, or impair blood supply. ED is a common complication of treatments for prostate cancer, including prostatectomy and destruction of the prostate by external beam radiation, although the prostate gland itself is not necessary to achieve an erection. As far as inguinal hernia surgery is concerned, in most cases, and in the absence of postoperative complications, the operative repair can lead to a recovery of the sexual life of people with preoperative sexual dysfunction, while, in most cases, it does not affect people with a preoperative normal sexual life. ED can also be associated with bicycling due to both neurological and vascular problems due to compression. The increased risk appears to be about 1.7-fold. Concerns that use of pornography can cause ED have little support in epidemiological studies, according to a 2015 literature review. According to Gunter de Win, a Belgian professor and sex researcher, "Put simply, respondents who watch 60 minutes a week and think they're addicted were more likely to report sexual dysfunction than those who watch a care-free 160 minutes weekly." In seemingly rare cases, medications such as SSRIs, isotretinoin (Accutane) and finasteride (Propecia) are reported to induce long-lasting iatrogenic disorders characterized by sexual dysfunction symptoms, including erectile dysfunction in males; these disorders are known as post-SSRI sexual dysfunction (PSSD), post-retinoid sexual dysfunction/post-Accutane syndrome (PRSD/PAS), and post-finasteride syndrome (PFS). These conditions remain poorly understood and lack effective treatments, although they have been suggested to share a common etiology. Rarely impotence can be caused by aromatase being active. See Androgen replacement therapy. Pathophysiology Penile erection is managed by two mechanisms: the reflex erection, which is achieved by directly touching the penile shaft, and the psychogenic erection, which is achieved by erotic or emotional stimuli. 
The former involves the peripheral nerves and the lower parts of the spinal cord, whereas the latter involves the limbic system of the brain. In both cases, an intact neural system is required for a successful and complete erection. Stimulation of the penile shaft by the nervous system leads to the secretion of nitric oxide (NO), which causes the relaxation of the smooth muscles of the corpora cavernosa (the main erectile tissue of the penis), and subsequently penile erection. Additionally, adequate levels of testosterone (produced by the testes) and an intact pituitary gland are required for the development of a healthy erectile system. As can be understood from the mechanisms of a normal erection, impotence may develop due to hormonal deficiency, disorders of the neural system, lack of adequate penile blood supply or psychological problems. Spinal cord injury causes sexual dysfunction, including ED. Restriction of blood flow can arise from impaired endothelial function due to the usual causes associated with coronary artery disease, but can also be caused by prolonged exposure to bright light. Diagnosis In many cases, the diagnosis can be made based on the person's history of symptoms. In other cases, a physical examination and laboratory investigations are done to rule out more serious causes such as hypogonadism or prolactinoma. One of the first steps is to distinguish between physiological and psychological ED. Determining whether involuntary erections are present is important in eliminating the possibility of psychogenic causes for ED. Obtaining full erections occasionally, such as nocturnal penile tumescence when asleep (that is, when the mind and psychological issues, if any, are less present), tends to suggest that the physical structures are functionally working. Similarly, performance with manual stimulation, as well as any performance anxiety or acute situational ED, may indicate a psychogenic component to ED. Another factor leading to ED is diabetes mellitus, a well known cause of neuropathy. ED is also related to generally poor physical health, poor dietary habits, obesity, and most specifically cardiovascular disease, such as coronary artery disease and peripheral vascular disease. Screening for cardiovascular risk factors, such as smoking, dyslipidemia, hypertension, and alcoholism, is helpful. In some cases, the simple search for a previously undetected groin hernia can prove useful since it can affect sexual functions in males and is relatively easily curable. The current diagnostic and statistical manual of mental diseases (DSM-IV) lists ED. Ultrasonography Penile ultrasonography with doppler can be used to examine the erect penis. Most cases of ED of organic causes are related to changes in blood flow in the corpora cavernosa, represented by occlusive artery disease (in which less blood is allowed to enter the penis), most often of atherosclerotic origin, or due to failure of the veno-occlusive mechanism (in which too much blood circulates back out of the penis). Before the Doppler sonogram, the penis should be examined in B mode, in order to identify possible tumors, fibrotic plaques, calcifications, or hematomas, and to evaluate the appearance of the cavernous arteries, which can be tortuous or atheromatous. Erection can be induced by injecting 10–20 μg of prostaglandin E1, with evaluations of the arterial flow every five minutes for 25–30 min (see image). 
The use of prostaglandin E1 is contraindicated in patients with predisposition to priapism (e.g., those with sickle cell anemia), anatomical deformity of the penis, or penile implants. Phentolamine (2 mg) is often added. Visual and tactile stimulation produces better results. Some authors recommend the use of sildenafil by mouth to replace the injectable drugs in cases of contraindications, although the efficacy of such medication is controversial. Before the injection of the chosen drug, the flow pattern is monophasic, with low systolic velocities and an absence of diastolic flow. After injection, systolic and diastolic peak velocities should increase, decreasing progressively with vein occlusion and becoming negative when the penis becomes rigid. The reference values vary across studies, ranging from > 25 cm/s to > 35 cm/s. Values above 35 cm/s indicate the absence of arterial disease, values below 25 cm/s indicate arterial insufficiency, and values of 25–35 cm/s are indeterminate because they are less specific. The data obtained should be correlated with the degree of erection observed. If the peak systolic velocities are normal, the final diastolic velocities should be evaluated, those above 5 cm/s being associated with venogenic ED. Other workup methods Penile nerves function: Tests such as the bulbocavernosus reflex test are used to ascertain whether there is enough nerve sensation in the penis. The physician squeezes the glans (head) of the penis, which immediately causes the anus to contract if nerve function is normal. A physician measures the latency between squeeze and contraction by observing the anal sphincter or by feeling it with a gloved finger in the anus. Nocturnal penile tumescence (NPT): It is normal for a man to have five to six erections during sleep, especially during rapid eye movement (REM). Their absence may indicate a problem with nerve function or blood supply in the penis. There are two methods for measuring changes in penile rigidity and circumference during nocturnal erection: snap gauge and strain gauge. A significant proportion of males who have no sexual dysfunction nonetheless do not have regular nocturnal erections. Penile biothesiometry: This test uses electromagnetic vibration to evaluate sensitivity and nerve function in the glans and shaft of the penis. Dynamic infusion cavernosometry (DICC): A technique in which fluid is pumped into the penis at a known rate and pressure. It gives a measurement of the vascular pressure in the corpus cavernosum during an erection. Corpus cavernosometry: Cavernosography measurement of the vascular pressure in the corpus cavernosum. Saline is infused under pressure into the corpus cavernosum with a butterfly needle, and the flow rate needed to maintain an erection indicates the degree of venous leakage. The leaking veins responsible may be visualized by infusing a mixture of saline and x-ray contrast medium and performing a cavernosogram. In Digital Subtraction Angiography (DSA), the images are acquired digitally. Magnetic resonance angiography (MRA): This is similar to magnetic resonance imaging. Magnetic resonance angiography uses magnetic fields and radio waves to provide detailed images of the blood vessels. The doctor may inject into the patient's bloodstream a contrast agent, which causes vascular tissues to stand out against other tissues, so that information about blood supply and vascular anomalies is easier to gather. 
Erection Hardness Score Treatment Treatment depends on the underlying cause. In general, exercise, particularly of the aerobic type, is effective for preventing ED during midlife. Counseling can be used if the underlying cause is psychological, including how to lower stress or anxiety related to sex. Medications by mouth and vacuum erection devices are first-line treatments, followed by injections of drugs into the penis, as well as penile implants. Vascular reconstructive surgeries are beneficial in certain groups. Treatments, other than surgery, do not fix the underlying physiological problem, but are used as needed before sex. Medications The PDE5 inhibitors sildenafil (Viagra), vardenafil (Levitra) and tadalafil (Cialis) are prescription drugs which are taken by mouth. As of 2018, sildenafil is available in the UK without a prescription. Additionally, a cream combining alprostadil with the permeation enhancer DDAIP has been approved in Canada as a first line treatment for ED. Penile injections, on the other hand, can involve one of the following medications: papaverine, phentolamine, and prostaglandin E1, also known as alprostadil. In addition to injections, there is an alprostadil suppository that can be inserted into the urethra. Once inserted, an erection can begin within 10 minutes and last up to an hour. Medications to treat ED may cause a side effect called priapism. Prevalence of medical diagnosis In a study published in 2016, based on US health insurance claims data, out of 19,833,939 US males aged ≥18 years, only 1,108,842 (5.6%), were medically diagnosed with erectile dysfunction or on a PDE5I prescription (μ age 55.2 years, σ 11.2 years). Prevalence of diagnosis or prescription was the highest for age group 60–69 at 11.5%, lowest for age group 18–29 at 0.4%, and 2.1% for 30–39, 5.7% for 40–49, 10% for 50–59, 11% for 70–79, 4.6% for 80–89, 0.9% for ≥90, respectively. Focused shockwave therapy Focused shockwave therapy involves passing short, high frequency acoustic pulses through the skin and into the penis. These waves break down any plaques within the blood vessels, encourage the formation of new vessels, and stimulate repair and tissue regeneration. Focused shockwave therapy appears to work best for males with vasculogenic ED, which is a blood vessel disorder that affects blood flow to tissue in the penis. The treatment is painless and has no known side effects. Treatment with shockwave therapy can lead to a significant improvement of the IIEF (International Index of Erectile Function). Testosterone Men with low levels of testosterone can experience ED. Taking testosterone may help maintain an erection. Males with type 2 diabetes are twice as likely to have lower levels of testosterone, and are three times more likely to experience ED than non-diabetic men. Pumps A vacuum erection device helps draw blood into the penis by applying negative pressure. This type of device is sometimes referred to as penis pump and may be used just prior to sexual intercourse. Several types of FDA approved vacuum therapy devices are available under prescription. When pharmacological methods fail, a purpose-designed external vacuum pump can be used to attain erection, with a separate compression ring fitted to the base of the penis to maintain it. 
These pumps should be distinguished from other penis pumps (supplied without compression rings) which, rather than being used for temporary treatment of impotence, are claimed to increase penis length if used frequently, or vibrate as an aid to masturbation. More drastically, inflatable or rigid penile implants may be fitted surgically. Vibrators The vibrator was invented in the late 19th century as a medical instrument for pain relief and the treatment of various ailments. Sometimes described as a massager, the vibrator is used on the body to produce sexual stimulation. Several clinical studies have found vibrators to be an effective solution for Erectile Dysfunction. Examples of FDA registered vibrators for erectile dysfunction include MysteryVibe's Tenuto and Reflexonic's Viberect. Surgery Often, as a last resort, if other treatments have failed, the most common procedure is prosthetic implants which involves the insertion of artificial rods into the penis. Some sources show that vascular reconstructive surgeries are viable options for some people. Alternative medicine The Food and Drug Administration (FDA) does not recommend alternative therapies to treat sexual dysfunction. Many products are advertised as "herbal viagra" or "natural" sexual enhancement products, but no clinical trials or scientific studies support the effectiveness of these products for the treatment of ED, and synthetic chemical compounds similar to sildenafil have been found as adulterants in many of these products. The FDA has warned consumers that any sexual enhancement product that claims to work as well as prescription products is likely to contain such a contaminant. A 2021 review indicated that ginseng had "only trivial effects on erectile function or satisfaction with intercourse compared to placebo". History Attempts to treat the symptoms described by ED date back well over 1,000 years. In the 8th century, males of Ancient Rome and Greece wore talismans of rooster and goat genitalia, believing these talismans would serve as an aphrodisiac and promote sexual function. In the 13th century, Albertus Magnus recommended ingesting roasted wolf penis as a remedy for impotence. During the late 16th and 17th centuries in France, male impotence was considered a crime, as well as legal grounds for a divorce. The practice, which involved inspection of the complainants by court experts, was declared obscene in 1677. The first major publication describing a broad medicalization of sexual disorders was the first edition of the Diagnostic and Statistical Manual of Mental Disorders in 1952. In the early 20th century, medical folklore held that 90-95% of cases of ED were psychological in origin, but around the 1980s research took the opposite direction of searching for physical causes of sexual dysfunction, which also happened in the 1920s and 30s. Physical causes as explanations continue to dominate literature when compared with psychological explanations . Treatments in the 80s for ED included penile implants and intracavernosal injections. The first successful vacuum erection device, or penis pump, was developed by Vincent Marie Mondat in the early 1800s. A more advanced device based on a bicycle pump was developed by Geddings Osbon, a Pentecostal preacher, in the 1970s. In 1982, he received FDA approval to market the product. John R. Brinkley initiated a boom in male impotence treatments in the U.S. 
in the 1920s and 1930s, with radio programs that recommended expensive goat gland implants and "mercurochrome" injections as the path to restored male virility, including operations by surgeon Serge Voronoff. Modern drug therapy for ED made a significant advance in 1983, when British physiologist Giles Brindley dropped his trousers and demonstrated to a shocked Urodynamics Society audience showing his papaverine-induced erection. The current most common treatment for ED, the oral PDE5 inhibitor known as sildenafil (Viagra) was approved for use for Pfizer by the FDA in 1998, which at the time of release was the fastest selling drug in history. Sildenafil largely replaced SSRI treatments for ED at the time and proliferated new types of specialised pharmaceutical marketing which emphasised social connotations of ED and Viagra rather than its physical effects. Anthropology Anthropological research presents ED not as a disorder but, as a normal, and sometimes even welcome sign of healthy aging. Wentzell's study of 250 Mexican males in their 50s and 60s found that "most simply did not see decreasing erectile function as a biological pathology". The males interviewed described the decrease in erectile function "as an aid for aging in socially appropriate ways". A common theme amongst the interviewees showed that respectable older males shifted their focus toward the domestic sphere into a "second stage of life". The Mexican males of this generation often pursued sex outside of marriage; decreasing erectile function acted as an aid to overcoming infidelity thus helping to attain the ideal "second stage" of life. A 56-year-old about to retire from the public health service said he would now "dedicate myself to my wife, the house, gardening, caring for the grandchildren—the Mexican classic". Wentzell found that treating ED as a pathology was antithetical to the social view these males held of themselves, and their purpose at this stage of their lives. In the 20th and 21st centuries, anthropologists investigated how common treatments for ED are built upon assumptions of institutionalized social norms. In offering a range of clinical treatments to 'correct' a person's ability to produce an erection, biomedical institutions encourage the public to strive for prolonged sexual function. Anthropologists argue that a biomedical focus places emphasis on the biological processes of fixing the body thereby disregarding holistic ideals of health and aging. By relying on a wholly medical approach, Western biomedicine can become blindsided by bodily dysfunctions which can be understood as appropriate functions of age, and not as a medical problem. Anthropologists understand that a biosocial approach to ED considers a person's decision to undergo clinical treatment more likely a result of "society, political economy, history, and culture" than a matter of personal choice. In rejecting biomedical treatment for ED, males can challenge common forms of medicalized social control by deviating from what is considered the normal approach to dysfunction. Lexicology The Latin term impotentia coeundi describes simple inability to insert the penis into the vagina; it is now mostly replaced by more precise terms, such as erectile dysfunction (ED). The study of ED within medicine is covered by andrology, a sub-field within urology. Research indicates that ED is common, and it is suggested that approximately 40% of males experience symptoms compatible with ED, at least occasionally. 
The condition is also on occasion called phallic impotence. Its antonym, or opposite condition, is priapism.
Biology and health sciences
Specific diseases
Health
14810
https://en.wikipedia.org/wiki/Islamic%20calendar
Islamic calendar
The Hijri calendar, or Arabic calendar, also known in English as the Muslim calendar and Islamic calendar, is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to determine the proper days of Islamic holidays and rituals, such as the annual fasting and the annual season for the great pilgrimage. In almost all countries where the predominant religion is Islam, the civil calendar is the Gregorian calendar, with Syriac month-names used in the Levant and Mesopotamia (Iraq, Syria, Jordan, Lebanon and Palestine), but the religious calendar is the Hijri one. This calendar enumerates the Hijri era, whose epoch was established as the Islamic New Year in 622 CE. During that year, Muhammad and his followers migrated from Mecca to Medina and established the first Muslim community (ummah), an event commemorated as the Hijrah. In the West, dates in this era are usually denoted AH. In Muslim countries, it is also sometimes denoted as H from its Arabic form. In English, years prior to the Hijra are denoted as BH ("Before the Hijra"). Since 7 July 2024 CE, the current Islamic year has been 1446 AH. In the Gregorian calendar reckoning, 1446 AH runs from 7 July 2024 to approximately 26 June 2025. History Pre-Islamic calendar For central Arabia, especially Mecca, there is a lack of epigraphical evidence but details are found in the writings of Muslim authors of the Abbasid era. Inscriptions of the ancient South Arabian calendars reveal the use of a number of local calendars. At least some of these South Arabian calendars followed the lunisolar system. Both al-Biruni and al-Mas'udi suggest that the ancient Arabs used the same month names as the Muslims, though they also record other month names used by the pre-Islamic Arabs. The Islamic tradition is unanimous in stating that Arabs of Tihamah, Hejaz, and Najd distinguished between two types of months, permitted (ḥalāl) and forbidden (ḥarām) months. The forbidden months were four months during which fighting is forbidden, listed as Rajab and the three months around the pilgrimage season, Dhu al-Qa‘dah, Dhu al-Hijjah, and Muharram. A similar if not identical concept to the forbidden months is also attested by Procopius, where he describes an armistice that the Eastern Arabs of the Lakhmid al-Mundhir respected for two months in the summer solstice of 541 CE. However, Muslim historians do not link these months to a particular season. The Qur'an links the four forbidden months with Nasī, a word that literally means "postponement". According to Muslim tradition, the decision of postponement was administered by the tribe of Kinanah, by a man known as the al-Qalammas of Kinanah and his descendants (pl. qalāmisa). Different interpretations of the concept of Nasī have been proposed. Some scholars, both Muslim and Western, maintain that the pre-Islamic calendar used in central Arabia was a purely lunar calendar similar to the modern Islamic calendar. According to this view, Nasī is related to the pre-Islamic practices of the Meccan Arabs, where they would alter the distribution of the forbidden months within a given year without implying a calendar manipulation. This interpretation is supported by Arab historians and lexicographers, like Ibn Hisham, Ibn Manzur, and the corpus of Qur'anic exegesis. This is corroborated by an early Sabaic inscription, where a religious ritual was "postponed" (ns'w) due to war. 
According to the context of this inscription, the verb ns'’ has nothing to do with intercalation, but only with moving religious events within the calendar itself. The similarity between the religious concept of this ancient inscription and the Qur'an suggests that non-calendaring postponement is also the Qur'anic meaning of Nasī. The Encyclopaedia of Islam concludes "The Arabic system of [Nasī'] can only have been intended to move the Hajj and the fairs associated with it in the vicinity of Mecca to a suitable season of the year. It was not intended to establish a fixed calendar to be generally observed." The term "fixed calendar" is generally understood to refer to the non-intercalated calendar. Others concur that it was originally a lunar calendar, but suggest that about 200 years before the Hijra it was transformed into a lunisolar calendar containing an intercalary month added from time to time to keep the pilgrimage within the season of the year when merchandise was most abundant. This interpretation was first proposed by the medieval Muslim astrologer and astronomer Abu Ma'shar al-Balkhi, and later by al-Biruni, al-Mas'udi, and some western scholars. This interpretation considers Nasī to be a synonym to the Arabic word for "intercalation" (kabīsa). The Arabs, according to one explanation mentioned by Abu Ma'shar, learned of this type of intercalation from the Jews. The Jewish Nasi was the official who decided when to intercalate the Jewish calendar. Some sources say that the Arabs followed the Jewish practice and intercalated seven months over nineteen years, or else that they intercalated nine months over 24 years; there is, however, no consensus among scholars on this issue. Prohibiting Nasī' Nasi' is interpreted to signify either the postponement of the pre-Islamic month of Hajj, or the (also pre-Islamic) practice of intercalation periodic insertion of an additional month to reset the calendar into accordance with the seasons. In the tenth year of the Hijra, as documented in the Qur'an (Surah At-Tawbah (9):36–37), Muslims believe God revealed the "prohibition of the Nasī. The prohibition of Nasī' would presumably have been announced when the intercalated month had returned to its position just before the month of Nasi' began. If Nasī' meant intercalation, then the number and the position of the intercalary months between AH 1 and AH 10 are uncertain; western calendar dates commonly cited for key events in early Islam such as the Hijra, the Battle of Badr, the Battle of Uhud and the Battle of the Trench should be viewed with caution as they might be in error by one, two, three or even four lunar months. This prohibition was mentioned by Muhammad during the farewell sermon which was delivered on 9 Dhu al-Hijjah AH 10 (Julian date Friday 6 March 632 CE) on Mount Arafat during the farewell pilgrimage to Mecca. The three successive sacred (forbidden) months mentioned by Muhammad (months in which battles are forbidden) are Dhu al-Qa'dah, Dhu al-Hijjah, and Muharram, months 11, 12, and 1 respectively. The single forbidden month is Rajab, month 7. These months were considered forbidden both within the new Islamic calendar and within the old pagan Meccan calendar. Days of the week Traditionally, the Islamic day begins at sunset and ends at the next sunset. Each Islamic day thus begins at nightfall and ends at the end of daylight. The days in the seven-day week are, with the exception of the last two days, named after their ordinal place in the week. 
On the sixth day of the week, the "gathering day", Muslims assemble for the Friday prayer at a local mosque at noon. The "gathering day" is often regarded as the weekly day off. This is frequently made official, with many Muslim countries adopting Friday and Saturday (e.g., Egypt, Saudi Arabia) or Thursday and Friday as official weekends, during which offices are closed; other countries (e.g., Iran) choose to make Friday alone a day of rest. A few others (e.g., Turkey, Pakistan, Morocco, Nigeria, Malaysia) have adopted the Saturday-Sunday weekend while making Friday a working day with a long midday break to allow time off for worship. Months Each month of the Islamic calendar commences on the birth of the new lunar cycle. Traditionally, this is based on actual observation of the moon's crescent (hilal) marking the end of the previous lunar cycle and hence the previous month, thereby beginning the new month. Consequently, each month can have 29 or 30 days depending on the visibility of the Moon, astronomical positioning of the Earth and weather conditions. Four of the twelve Hijri months are considered sacred: Rajab (7), and the three consecutive months of Dhu al-Qa'dah (11), Dhu al-Hijjah (12) and Muharram (1), in which battles are forbidden. Alternative names Afghan lunar calendar The "Afghan lunar calendar" refers to two distinct naming systems for the months of the Hijri calendar, one of which was used by the Pashtuns and the other by the Hazaras. They were in use until the time of Amanullah Khan's reign, when the usage of the Solar Hijri Calendar was formalized across Afghanistan. Turki lunar calendar In Xinjiang, the Uyghur Muslims traditionally had different names for the months of the Hijri calendar, which were in use until the adoption of the Gregorian calendar in the 20th century. These names were collectively referred to as the "Turki lunar year" or "Turki lunar calendar". Alternative order Twelver Shia Muslims believe the Islamic new year and first month of the Hijri calendar is Rabi' al-Awwal rather than Muharram, due to it being the month in which the Hijrah took place. This has led to differences regarding the description of the years in which some events took place, such as the battle of Karbala, which occurred in Muharram and which Shias say took place in 60 AH, while Sunnis say it took place in 61 AH. Length of year The mean duration of a tropical year is 365.24219 days, while the long-term average duration of a synodic month is 29.530587981 days. Thus the average lunar year (twelve new moons) is 10.87513 days shorter than the average solar year (365.24219 − (12 × 29.530587981)), causing months of the Hijri calendar to advance about eleven days earlier each year, relative to the equinoxes. "As a result," says the Astronomical Almanac, "the cycle of twelve lunar months regresses through the seasons over a period of about 33 [solar] years". Year numbering In pre-Islamic Arabia, it was customary to identify a year after a major event which took place in it. Thus, according to Islamic tradition, Abraha, governor of Yemen, then a province of the Christian Kingdom of Aksum of Northeast Africa and South Arabia, attempted to destroy the Kaaba with an army which included several elephants. The raid was unsuccessful, but that year became known as the Year of the Elephant, during which Muhammad was born (surah al-Fil). Most equate this to the year 570 CE, but a minority use 571 CE. The first ten years of the Hijra were not numbered, but were named after events in the life of Muhammad according to al-Biruni: The year of permission. 
The year of the order of fighting. The year of the trial. The year of congratulation on marriage. The year of the earthquake. The year of enquiring. The year of gaining victory. The year of equality. The year of exemption. The year of farewell. In (17 AH), Abu Musa al-Ash'ari, one of the officials of the Rashid Caliph Umar () in Basra, complained about the absence of any years on the correspondence he received from Umar, making it difficult for him to determine which instructions were most recent. This report convinced Umar of the need to introduce an era for Muslims. After debating the issue with his counsellors, he decided that the first year should be the year of Muhammad's arrival at Medina (known as Yathrib, before Muhammad's arrival). Uthman then suggested that the months begin with Muharram, in line with the established custom of the Arabs at that time. The years of the Islamic calendar thus began with the month of Muharram in the year of Muhammad's arrival at the city of Medina, even though the actual emigration took place in Safar and Rabi' I of the intercalated calendar, two months before the commencement of Muharram in the new fixed calendar. Because of the Hijra, the calendar was named the Hijri calendar. F A Shamsi (1984) postulated that the Arabic calendar was never intercalated. According to him, the first day of the first month of the new fixed Islamic calendar (1 Muharram AH 1) was no different from what was observed at the time. The day the Prophet moved from Quba' to Medina was originally 26 Rabi' I on the pre-Islamic calendar. 1 Muharram of the new fixed calendar corresponded to Friday, 16 July 622 CE, the equivalent civil tabular date (same daylight period) in the Julian calendar. The Islamic day began at the preceding sunset on the evening of 15 July. This Julian date (16 July) was determined by medieval Muslim astronomers by projecting back in time their own tabular Islamic calendar, which had alternating 30- and 29-day months in each lunar year plus eleven leap days every 30 years. For example, al-Biruni mentioned this Julian date in the year 1000 CE. Although not used by either medieval Muslim astronomers or modern scholars to determine the Islamic epoch, the thin crescent moon would have also first become visible (assuming clouds did not obscure it) shortly after the preceding sunset on the evening of 15 July, 1.5 days after the associated dark moon (astronomical new moon) on the morning of 14 July. Though Michael Cook and Patricia Crone in their book Hagarism cite a coin from AH 17, the first surviving attested use of a Hijri calendar date alongside a date in another calendar (Coptic) is on a papyrus from Egypt in AH 22, PERF 558. Astronomical considerations Due to the Islamic calendar's reliance on certain variable methods of observation to determine its month-start-dates, these dates sometimes vary slightly from the month-start-dates of the astronomical lunar calendar, which are based directly on astronomical calculations. Still, the Islamic calendar roughly approximates the astronomical-lunar-calendar system, seldom varying by more than three days from it. Both the Islamic calendar and the astronomical-lunar-calendar take no account of the solar year in their calculations, and thus both of these strictly lunar based calendar systems have no ability to reckon the timing of the four seasons of the year. In the astronomical-lunar-calendar system, a year of 12 lunar months is 354.37 days long. 
In this calendar system, lunar months begin precisely at the time of the monthly "conjunction", when the Moon is located most directly between the Earth and the Sun. The month is defined as the average duration of a revolution of the Moon around the Earth (29.53 days). By convention, months of 30 days and 29 days succeed each other, adding up over two successive months to 59 full days. This leaves only a small monthly variation of 44 minutes to account for, which adds up to a total of 24 hours (i.e., the equivalent of one full day) in 2.73 years. To settle accounts, it is sufficient to add one day every three years to the lunar calendar, in the same way that one adds one day to the Gregorian calendar every four years. The technical details of the adjustment are described in Tabular Islamic calendar. The Islamic calendar, however, is based on a different set of conventions being used for the determination of the month-start-dates. Each month still has either 29 or 30 days, but due to the variable method of observations employed, there is usually no discernible order in the sequencing of either 29 or 30-day month lengths. Traditionally, the first day of each month is the day (beginning at sunset) of the first sighting of the hilal (crescent moon) shortly after sunset. If the hilal is not observed immediately after the 29th day of a month (either because clouds block its view or because the western sky is still too bright when the moon sets), then the day that begins at that sunset is the 30th. Such a sighting has to be made by one or more trustworthy men testifying before a committee of Muslim leaders. Determining the most likely day that the hilal could be observed was a motivation for Muslim interest in astronomy, which put Islam in the forefront of that science for many centuries. Still, due to the fact that both lunar reckoning systems are ultimately based on the lunar cycle itself, both systems still do roughly correspond to one another, never being more than three days out of synchronisation with one another. This traditional practice for the determination of the start-date of the month is still followed in the overwhelming majority of Muslim countries. For instance, Saudi Arabia uses the sighting method to determine the beginning of each month of the Hijri calendar. Since AH 1419 (1998/99), several official hilal sighting committees have been set up by the government to determine the first visual sighting of the lunar crescent at the beginning of each lunar month. Nevertheless, the religious authorities also allow the testimony of less experienced observers and thus often announce the sighting of the lunar crescent on a date when none of the official committees could see it. Each Islamic state proceeds with its own monthly observation of the new moon (or, failing that, awaits the completion of 30 days) before declaring the beginning of a new month on its territory. However, the lunar crescent becomes visible only some 17 hours after the conjunction, and only subject to the existence of a number of favourable conditions relative to weather, time, geographic location, as well as various astronomical parameters. Given the fact that the moon sets progressively later than the sun as one goes west, with a corresponding increase in its "age" since conjunction, Western Muslim countries may, under favorable conditions, observe the new moon one day earlier than eastern Muslim countries. 
Due to the interplay of all these factors, the beginning of each month differs from one Muslim country to another, during the 48-hour period following the conjunction. The information provided by the calendar in any country does not extend beyond the current month. A number of Muslim countries try to overcome some of these difficulties by applying different astronomy-related rules to determine the beginning of months. Thus, Malaysia, Indonesia, and a few others begin each month at sunset on the first day that the moon sets after the sun (moonset after sunset). In Egypt, the month begins at sunset on the first day that the moon sets at least five minutes after the sun. A detailed analysis of the available data shows, however, that there are major discrepancies between what countries say they do on this subject, and what they actually do. In some instances, what a country says it does is impossible. Due to the somewhat variable nature of the Islamic calendar, in most Muslim countries, the Islamic calendar is used primarily for religious purposes, while the Solar-based Gregorian calendar is still used primarily for matters of commerce and agriculture. Theological considerations If the Islamic calendar were prepared using astronomical calculations, Muslims throughout the Muslim world could use it to meet all their needs, the way they use the Gregorian calendar today. But, there are divergent views on whether it is licit to do so. A majority of theologians oppose the use of calculations (beyond the constraint that each month must be not less than 29 nor more than 30 days) on the grounds that the latter would not conform with Muhammad's recommendation to observe the new moon of Ramadan and Shawal in order to determine the beginning of these months. However, some Islamic jurists see no contradiction between Muhammad's teachings and the use of calculations to determine the beginnings of lunar months. They consider that Muhammad's recommendation was adapted to the culture of the times, and should not be confused with the acts of worship. Thus the jurists Ahmad Muhammad Shakir and Yusuf al-Qaradawi both endorsed the use of calculations to determine the beginning of all months of the Islamic calendar, in 1939 and 2004 respectively. So did the Fiqh Council of North America (FCNA) in 2006 and the European Council for Fatwa and Research (ECFR) in 2007. The major Muslim associations of France also announced in 2012 that they would henceforth use a calendar based on astronomical calculations, taking into account the criteria of the possibility of crescent sighting in any place on Earth. But, shortly after the official adoption of this rule by the French Council of the Muslim Faith (CFCM) in 2013, the new leadership of the association decided, on the eve of Ramadan 2013, to follow the Saudi announcement rather than to apply the rule just adopted. This resulted in a division of the Muslim community of France, with some members following the new rule, and others following the Saudi announcement. Isma'ili-Taiyebi Bohras having the institution of da'i al-mutlaq follow the tabular Islamic calendar (see section below) prepared on the basis of astronomical calculations from the days of Fatimid imams. Calculated Islamic calendars Islamic calendar of Turkey Turkish Muslims use an Islamic calendar which is calculated several years in advance by the Turkish Presidency of Religious Affairs (Diyanet İşleri Başkanlığı). 
From 1 Muharrem 1400 AH (21 November 1979) until 29 Zilhicce 1435 (24 October 2014) the computed Turkish lunar calendar was based on the following rule: "The lunar month is assumed to begin on the evening when, within some region of the terrestrial globe, the computed centre of the lunar crescent at local sunset is more than 5° above the local horizon and (geocentrically) more than 8° from the Sun." In the current rule the (computed) lunar crescent has to be above the local horizon of Ankara at sunset. Saudi Arabia's Umm al-Qura calendar Saudi Arabia has traditionally used the Umm al-Qura calendar, which is based on astronomical calculations, for administrative purposes. The parameters used in the establishment of this calendar underwent significant changes during the decade to AH 1423. Before AH 1420 (before 18 April 1999), if the moon's age at sunset in Riyadh was at least 12 hours, then the day ending at that sunset was the first day of the month. This often caused the Saudis to celebrate holy days one or even two days before other predominantly Muslim countries, including the dates for the Hajj, which can only be dated using Saudi dates because it is performed in Mecca. From AH 1420–22, if moonset occurred after sunset at Mecca, then the day beginning at that sunset was the first day of a Saudi month, essentially the same rule used by Malaysia, Indonesia, and others (except for the location from which the hilal was observed). Since the beginning of AH 1423 (16 March 2002), the rule has been clarified a little by requiring the geocentric conjunction of the sun and moon to occur before sunset, in addition to requiring moonset to occur after sunset at Mecca. This ensures that the moon has moved past the sun by sunset, even though the sky may still be too bright immediately before moonset to actually see the crescent. In 2007, the Islamic Society of North America, the Fiqh Council of North America and the European Council for Fatwa and Research announced that they would henceforth use a calendar based on calculations using the same parameters as the Umm al-Qura calendar to determine (well in advance) the beginning of all lunar months (and therefore the days associated with all religious observances). This was intended as a first step on the way to unify, at some future time, Muslims' calendars throughout the world. On 14 February 2016, Saudi Arabia adopted the Gregorian calendar for payment of the monthly salaries of government employees (as a cost cutting measure), while retaining the Islamic calendar for religious purposes. Other calendars using the Islamic era The Solar Hijri calendar is a solar calendar used in Iran which counts its years from the Hijra or migration of Muhammad from Mecca to Medina in 622 CE. Tabular Islamic calendar The Tabular Islamic calendar is a rule-based variation of the Islamic calendar, in which months are worked out by arithmetic rules rather than by observation or astronomical calculation. It has a 30-year cycle with 11 leap years of 355 days and 19 years of 354 days. In the long term, it is accurate to one day in about 2,500 solar years or 2,570 lunar years. It also deviates up to about one or two days in the short term. Kuwaiti algorithm Microsoft uses the "Kuwaiti algorithm", a variant of the tabular Islamic calendar, to convert Gregorian dates to the Islamic ones. Microsoft claimed that the variant is based on a statistical analysis of historical data from Kuwait, however it matches a known tabular calendar. 
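The tabular (arithmetical) calendar described above, including the variant that Microsoft's "Kuwaiti algorithm" matches, can be sketched in a few lines of Python. This is a minimal illustration rather than any vendor's implementation: it assumes the most widely used intercalation pattern, with leap years in positions 2, 5, 7, 10, 13, 16, 18, 21, 24, 26 and 29 of the 30-year cycle, and published variants differ slightly in that set.

```python
# Minimal sketch of the tabular (arithmetical) Islamic calendar.
# Assumption: leap years fall in positions 2, 5, 7, 10, 13, 16, 18, 21, 24, 26, 29
# of the 30-year cycle; published variants differ slightly in this set.
LEAP_POSITIONS = {2, 5, 7, 10, 13, 16, 18, 21, 24, 26, 29}

def is_leap(year_ah: int) -> bool:
    """True if the given Hijri year is a 355-day leap year in this variant."""
    return ((year_ah - 1) % 30) + 1 in LEAP_POSITIONS

def month_length(year_ah: int, month: int) -> int:
    """Months alternate 30 and 29 days; month 12 gains a day in leap years."""
    if month == 12 and is_leap(year_ah):
        return 30
    return 30 if month % 2 == 1 else 29

def year_length(year_ah: int) -> int:
    return sum(month_length(year_ah, m) for m in range(1, 13))

if __name__ == "__main__":
    cycle_days = sum(year_length(y) for y in range(1, 31))
    print(cycle_days)  # 10631 days per 30-year cycle
```

The 30-year cycle thus contains 19 × 354 + 11 × 355 = 10,631 days, within a fraction of a day of 360 mean synodic months, which is the source of the long-term accuracy (about one day in roughly 2,500 solar years) quoted above.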
Notable dates Important dates in the Islamic (Hijri) year are: 1 Muharram: the Islamic New Year. 10 Muharram: Day of Ashura. For both Shias and Sunnis, the martyrdom of Husayn ibn Ali, the grandson of Muhammad, and his followers. For Sunnis, the crossing of the Red Sea by Moses occurred on this day, along with many other significant events in the lives of prophets and that have to do with Creation. 12 Rabi al-Awwal: Mawlid or Birth of the Prophet for Sunnis. 17 Rabi al-Awwal: Mawlid for Shias. 27 Rajab: Isra and Mi'raj for the majority of Muslims. 15 Sha'ban: Mid-Sha'ban, or Night of Forgiveness. For Shiites, also the birthday of Muhammad al-Mahdi, the Twelfth Imam. 1 Ramadan: The first day of fasting in Islam 27 Ramadan: Start of the Revelation of the Qur’an. The most probable day Muhammad received the first verses of the Quran (17 Ramadan in Indonesia and Malaysia). Last third of Ramadan which includes Laylat al-Qadr. Last Friday of Ramadan: Jumu'atul-Wida 1 Shawwal: Eid ul-Fitr. 8–13 Dhu al-Hijjah: The Hajj pilgrimage to Mecca. 9 Dhu al-Hijjah: Day of Arafa. 10 Dhu al-Hijjah: Eid al-Adha. Days considered important predominantly for Shia Muslims: 9 Rabi' al-Awwal: Omar Koshan (Mukhtar al-Thaqafi avenges the events of Ashura). 13 Rajab: Birthday of Ali ibn Abi Talib 3 Sha'ban: Birthday of Husayn ibn Ali. 21 Ramadan: Martyrdom of Ali ibn Abi Talib. 18 Dhu al-Hijjah: the Eid al-Ghadir Uses The Islamic calendar is now used primarily for religious purposes, and for official dating of public events and documents in Muslim countries. Because of its nature as a purely lunar calendar, it cannot be used for agricultural purposes and historically Islamic communities have used other calendars for this purpose: the Egyptian calendar was formerly widespread in Islamic countries, and the Iranian calendar, the Akbar's calendar (from where the Bengali calendar originated), the 1789 Ottoman calendar (a modified Julian calendar) were also used for agriculture in their countries. In the Levant and Iraq the Aramaic names of the Babylonian calendar are still used for all secular matters. In the Maghreb, Berber farmers in the countryside still use the Julian calendar for agrarian purposes. These local solar calendars have receded in importance with the near-universal adoption of the Gregorian calendar for civil purposes. Saudi Arabia uses the lunar Islamic calendar. In Indonesia, the Javanese calendar combines elements of the Islamic and pre-Islamic Saka calendars. British author Nicholas Hagger writes that after seizing control of Libya, Muammar Gaddafi "declared" on 1 December 1978 "that the Muslim calendar should start with the death of the prophet Mohammed in 632 rather than the hijra (Mohammed's 'emigration' from Mecca to Medina) in 622". This put the country ten solar years behind the standard Muslim calendar. However, according to the 2006 Encyclopedia of the Developing World, "More confusing still is Qaddafi's unique Libyan calendar, which counts the years from the Prophet's birth, or sometimes from his death. The months July and August, named after Julius and Augustus Caesar, are now Nasser and Hannibal respectively." Reflecting on a 2001 visit to the country, American reporter Neil MacFarquhar observed, "Life in Libya was so unpredictable that people weren't even sure what year it was. The year of my visit was officially 1369. But just two years earlier Libyans had been living through 1429. No one could quite name for me the day the count changed, especially since both remained in play. ... 
Event organizers threw up their hands and put the Western year in parentheses somewhere in their announcements." Computer support Hijri support was available in later versions of traditional Visual Basic, and is also available in the .NET Framework. Since the release of Java 8, the Islamic calendar is supported in the new Date and Time API.
Isomorphism
In mathematics, an isomorphism is a structure-preserving mapping (a morphism) between two structures of the same type that can be reversed by an inverse mapping. Two mathematical structures are isomorphic if an isomorphism exists between them. The word is derived . The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may be identified. In mathematical jargon, one says that two objects are . An automorphism is an isomorphism from a structure to itself. An isomorphism between two structures is a canonical isomorphism (a canonical map that is an isomorphism) if there is only one isomorphism between the two structures (as is the case for solutions of a universal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for every prime number , all fields with elements are canonically isomorphic, with a unique isomorphism. The isomorphism theorems provide canonical isomorphisms that are not unique. The term is mainly used for algebraic structures. In this case, mappings are called homomorphisms, and a homomorphism is an isomorphism if and only if it is bijective. In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example: An isometry is an isomorphism of metric spaces. A homeomorphism is an isomorphism of topological spaces. A diffeomorphism is an isomorphism of spaces equipped with a differential structure, typically differentiable manifolds. A symplectomorphism is an isomorphism of symplectic manifolds. A permutation is an automorphism of a set. In geometry, isomorphisms and automorphisms are often called transformations, for example rigid transformations, affine transformations, projective transformations. Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea. Examples Logarithm and exponential Let be the multiplicative group of positive real numbers, and let be the additive group of real numbers. The logarithm function satisfies for all so it is a group homomorphism. The exponential function satisfies for all so it too is a homomorphism. The identities and show that and are inverses of each other. Since is a homomorphism that has an inverse that is also a homomorphism, is an isomorphism of groups, i.e., via the isomorphism . The function is an isomorphism which translates multiplication of positive real numbers into addition of real numbers. This facility makes it possible to multiply real numbers using a ruler and a table of logarithms, or using a slide rule with a logarithmic scale. Integers modulo 6 Consider the group the integers from 0 to 5 with addition modulo 6. Also consider the group the ordered pairs where the x coordinates can be 0 or 1, and the y coordinates can be 0, 1, or 2, where addition in the x-coordinate is modulo 2 and addition in the y-coordinate is modulo 3. These structures are isomorphic under addition, under the following scheme: or in general For example, which translates in the other system as Even though these two groups "look" different in that the sets contain different elements, they are indeed isomorphic: their structures are exactly the same. 
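The modulo-6 example can be verified mechanically. The short Python sketch below (the names are illustrative, not taken from the article) builds the map k ↦ (k mod 2, k mod 3) and checks that it is a bijection that respects addition, i.e. a group isomorphism:

```python
from itertools import product

Z6 = range(6)
Z2xZ3 = list(product(range(2), range(3)))

def f(k):
    # Candidate isomorphism from (Z6, + mod 6) to (Z2 x Z3, componentwise addition)
    return (k % 2, k % 3)

def add_pairs(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 3)

# Bijective: the six images are pairwise distinct and fill Z2 x Z3
assert sorted(f(k) for k in Z6) == sorted(Z2xZ3)

# Homomorphism: f(a + b mod 6) equals f(a) added to f(b) componentwise
assert all(f((a + b) % 6) == add_pairs(f(a), f(b)) for a in Z6 for b in Z6)

print("k -> (k mod 2, k mod 3) is a group isomorphism between Z6 and Z2 x Z3")
```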
More generally, the direct product of two cyclic groups and is isomorphic to if and only if m and n are coprime, per the Chinese remainder theorem. Relation-preserving isomorphism If one object consists of a set X with a binary relation R and the other object consists of a set Y with a binary relation S then an isomorphism from X to Y is a bijective function such that: S is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, well-order, strict weak order, total preorder (weak order), an equivalence relation, or a relation with any other special properties, if and only if R is. For example, R is an ordering ≤ and S an ordering then an isomorphism from X to Y is a bijective function such that Such an isomorphism is called an or (less commonly) an . If then this is a relation-preserving automorphism. Applications In algebra, isomorphisms are defined for all algebraic structures. Some are more specifically studied; for example: Linear isomorphisms between vector spaces; they are specified by invertible matrices. Group isomorphisms between groups; the classification of isomorphism classes of finite groups is an open problem. Ring isomorphism between rings. Field isomorphisms are the same as ring isomorphism between fields; their study, and more specifically the study of field automorphisms is an important part of Galois theory. Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebras sharing a common structure form a heap. Letting a particular isomorphism identify the two structures turns this heap into a group. In mathematical analysis, the Laplace transform is an isomorphism mapping hard differential equations into easier algebraic equations. In graph theory, an isomorphism between two graphs G and H is a bijective map f from the vertices of G to the vertices of H that preserves the "edge structure" in the sense that there is an edge from vertex u to vertex v in G if and only if there is an edge from to in H. See graph isomorphism. In order theory, an isomorphism between two partially ordered sets P and Q is a bijective map from P to Q that preserves the order structure in the sense that for any elements and of P we have less than in P if and only if is less than in Q. As an example, the set {1,2,3,6} of whole numbers ordered by the is-a-factor-of relation is isomorphic to the set {O, A, B, AB} of blood types ordered by the can-donate-to relation. See order isomorphism. In mathematical analysis, an isomorphism between two Hilbert spaces is a bijection preserving addition, scalar multiplication, and inner product. In early theories of logical atomism, the formal relationship between facts and true propositions was theorized by Bertrand Russell and Ludwig Wittgenstein to be isomorphic. An example of this line of thinking can be found in Russell's Introduction to Mathematical Philosophy. In cybernetics, the good regulator or Conant–Ashby theorem is stated "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system. Category theoretic view In category theory, given a category C, an isomorphism is a morphism that has an inverse morphism that is, and Two categories and are isomorphic if there exist functors and which are mutually inverse to each other, that is, (the identity functor on ) and (the identity functor on ). Isomorphism vs. 
bijective morphism In a concrete category (roughly, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as the category of topological spaces or categories of algebraic objects (like the category of groups, the category of rings, and the category of modules), an isomorphism must be bijective on the underlying sets. In algebraic categories (specifically, categories of varieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces). Isomorphism class Since a composition of isomorphisms is an isomorphism, since the identity is an isomorphism and since the inverse of an isomorphism is an isomorphism, the relation that two mathematical objects are isomorphic is an equivalence relation. An equivalence class given by isomorphisms is commonly called an isomorphism class. Examples Examples of isomorphism classes are plentiful in mathematics. Two sets are isomorphic if there is a bijection between them. The isomorphism class of a finite set can be identified with the non-negative integer representing the number of elements it contains. The isomorphism class of a finite-dimensional vector space can be identified with the non-negative integer representing its dimension. The classification of finite simple groups enumerates the isomorphism classes of all finite simple groups. The classification of closed surfaces enumerates the isomorphism classes of all connected closed surfaces. Ordinals are essentially defined as isomorphism classes of well-ordered sets (though there are technical issues involved). However, there are circumstances in which the isomorphism class of an object conceals vital information about it. Given a mathematical structure, it is common that two substructures belong to the same isomorphism class. However, the way they are included in the whole structure can not be studied if they are identified. For example, in a finite-dimensional vector space, all subspaces of the same dimension are isomorphic, but must be distinguished to consider their intersection, sum, etc. The associative algebras consisting of coquaternions and 2 × 2 real matrices are isomorphic as rings. Yet they appear in different contexts for application (plane mapping and kinematics) so the isomorphism is insufficient to merge the concepts. In homotopy theory, the fundamental group of a space at a point , though technically denoted to emphasize the dependence on the base point, is often written lazily as simply if is path connected. The reason for this is that the existence of a path between two points allows one to identify loops at one with loops at the other; however, unless is abelian this isomorphism is non-unique. Furthermore, the classification of covering spaces makes strict reference to particular subgroups of , specifically distinguishing between isomorphic but conjugate subgroups, and therefore amalgamating the elements of an isomorphism class into a single featureless object seriously decreases the level of detail provided by the theory. Relation to equality Although there are cases where isomorphic objects can be considered equal, one must distinguish and . Equality is when two objects are the same, and therefore everything that is true about one object is true about the other. 
On the other hand, isomorphisms are related to some structure, and two isomorphic objects share only the properties that are related to this structure. For example, the sets are ; they are merely different representations—the first an intensional one (in set builder notation), and the second extensional (by explicit enumeration)—of the same subset of the integers. By contrast, the sets and are not since they do not have the same elements. They are isomorphic as sets, but there are many choices (in fact 6) of an isomorphism between them: one isomorphism is while another is and no one isomorphism is intrinsically better than any other. On this view and in this sense, these two sets are not equal because one cannot consider them : one can choose an isomorphism between them, but that is a weaker claim than identity—and valid only in the context of the chosen isomorphism. Also, integers and even numbers are isomorphic as ordered sets and abelian groups (for addition), but cannot be considered equal sets, since one is a proper subset of the other. On the other hand, when sets (or other mathematical objects) are defined only by their properties, without considering the nature of their elements, one often considers them to be equal. This is generally the case with solutions of universal properties. For example, the rational numbers are usually defined as equivalence classes of pairs of integers, although nobody thinks of a rational number as a set (equivalence class). The universal property of the rational numbers is essentially that they form a field that contains the integers and does not contain any proper subfield. It results that given two fields with these properties, there is a unique field isomorphism between them. This allows identifying these two fields, since every property of one of them can be transferred to the other through the isomorphism. For example, the real numbers that are obtained by dividing two integers (inside the real numbers) form the smallest subfield of the real numbers. There is thus a unique isomorphism from the rational numbers (defined as equivalence classes of pairs) to the quotients of two real numbers that are integers. This allows identifying these two sorts of rational numbers.
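Returning to the order-theoretic example given under Applications, the divisors of 6 under divisibility and the blood types {O, A, B, AB} under the can-donate-to relation can be checked to be isomorphic posets. A small sketch, assuming the usual ABO compatibility table (Rh factor ignored) and the pairing 1 ↦ O, 2 ↦ A, 3 ↦ B, 6 ↦ AB:

```python
divisors = [1, 2, 3, 6]
divides = lambda a, b: b % a == 0           # partial order on {1, 2, 3, 6}

# ABO compatibility: donor -> set of recipients it can donate to (Rh factor ignored)
can_donate_to = {
    "O":  {"O", "A", "B", "AB"},
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},
}

iso = {1: "O", 2: "A", 3: "B", 6: "AB"}     # candidate order isomorphism

# a divides b  if and only if  iso[a] can donate to iso[b]
assert all(divides(a, b) == (iso[b] in can_donate_to[iso[a]])
           for a in divisors for b in divisors)
print("divisibility on {1, 2, 3, 6} and the ABO donation order are isomorphic posets")
```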
Internet Message Access Protocol
In computing, the Internet Message Access Protocol (IMAP) is an Internet standard protocol used by email clients to retrieve email messages from a mail server over a TCP/IP connection. IMAP is defined by . IMAP was designed with the goal of permitting complete management of an email box by multiple email clients, therefore clients generally leave messages on the server until the user explicitly deletes them. An IMAP server typically listens on port number 143. IMAP over SSL/TLS (IMAPS) is assigned the port number 993. Virtually all modern e-mail clients and servers support IMAP, which along with the earlier POP3 (Post Office Protocol) are the two most prevalent standard protocols for email retrieval. Many webmail service providers such as Gmail and Outlook.com also provide support for both IMAP and POP3. Email protocols The Internet Message Access Protocol is an application layer Internet protocol that allows an e-mail client to access email on a remote mail server. The current version is defined by . An IMAP server typically listens on well-known port 143, while IMAP over SSL/TLS (IMAPS) uses 993. Incoming email messages are sent to an email server that stores messages in the recipient's email box. The user retrieves the messages with an email client that uses one of a number of email retrieval protocols. While some clients and servers preferentially use vendor-specific, proprietary protocols, almost all support POP and IMAP for retrieving email – allowing free choice between many e-mail clients such as Pegasus Mail or Mozilla Thunderbird to access these servers, and allows the clients to be used with other servers. Email clients using IMAP generally leave messages on the server until the user explicitly deletes them. This and other characteristics of IMAP operation allow multiple clients to manage the same mailbox. Most email clients support IMAP in addition to Post Office Protocol (POP) to retrieve messages. IMAP offers access to the mail storage. Clients may store local copies of the messages, but these are considered to be a temporary cache. History IMAP was designed by Mark Crispin in 1986 as a remote access mailbox protocol, in contrast to the widely used POP, a protocol for simply retrieving the contents of a mailbox. It went through a number of iterations before the current VERSION 4rev2 (IMAP4), as detailed below: Original IMAP The original Interim Mail Access Protocol was implemented as a Xerox Lisp Machine client and a TOPS-20 server. No copies of the original interim protocol specification or its software exist. Although some of its commands and responses were similar to IMAP2, the interim protocol lacked command/response tagging and thus its syntax was incompatible with all other versions of IMAP. IMAP2 The interim protocol was quickly replaced by the Interactive Mail Access Protocol (IMAP2), defined in (in 1988) and later updated by (in 1990). IMAP2 introduced the command/response tagging and was the first publicly distributed version. IMAP3 IMAP3 is an extremely rare variant of IMAP. It was published as in 1991. It was written specifically as a counter proposal to , which itself proposed modifications to IMAP2. IMAP3 was never accepted by the marketplace. The IESG reclassified RFC1203 "Interactive Mail Access Protocol – Version 3" as a Historic protocol in 1993. The IMAP Working Group used RFC 1176 (IMAP2) rather than RFC 1203 (IMAP3) as its starting point. 
IMAP2bis With the advent of MIME, IMAP2 was extended to support MIME body structures and add mailbox management functionality (create, delete, rename, message upload) that was absent from IMAP2. This experimental revision was called IMAP2bis; its specification was never published in non-draft form. An internet draft of IMAP2bis was published by the IETF IMAP Working Group in October 1993. This draft was based upon the following earlier specifications: unpublished IMAP2bis.TXT document, , and (IMAP2). The IMAP2bis.TXT draft documented the state of extensions to IMAP2 as of December 1992. Early versions of Pine were widely distributed with IMAP2bis support (Pine 4.00 and later supports IMAP4rev1). IMAP4 An IMAP Working Group formed in the IETF in the early 1990s took over responsibility for the IMAP2bis design. The IMAP WG decided to rename IMAP2bis to IMAP4 to avoid confusion. Advantages over POP Connected and disconnected modes When using POP, clients typically connect to the e-mail server briefly, only as long as it takes to download new messages. When using IMAP4, clients often stay connected as long as the user interface is active and download message content on demand. For users with many or large messages, this IMAP4 usage pattern can result in faster response times. Reporting of external changes After successful authentication, the POP protocol provides a completely static view of the current state of the mailbox, and does not provide a mechanism to show any external changes in state during the session (the POP client must reconnect and re-authenticate to get an updated view). In contrast, the IMAP protocol provides a dynamic view, and requires that external changes in state, including newly arrived messages, as well as changes made to the mailbox by other concurrently connected clients, are detected and appropriate responses are sent between commands as well as during an IDLE command, as described in .
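As a minimal illustration of the retrieval model described above (the client connects over TLS on port 993, leaves mail on the server, and fetches content on demand), here is a sketch using Python's standard-library imaplib. The host name and credentials are placeholders, not values from the text:

```python
import imaplib

HOST = "imap.example.com"      # placeholder server
USER = "user@example.com"      # placeholder credentials
PASSWORD = "app-password"

# IMAP over SSL/TLS uses port 993 (plain IMAP would use 143).
with imaplib.IMAP4_SSL(HOST, 993) as conn:
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)      # open the mailbox without changing it

    # Ask the server which messages are unseen; the mail itself stays on the server.
    status, data = conn.search(None, "UNSEEN")
    for num in data[0].split():
        # BODY.PEEK fetches headers on demand without setting the \Seen flag.
        status, msg_data = conn.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)])")
        print(msg_data[0][1].decode(errors="replace"))
```

Because the mailbox is opened read-only and headers are fetched with BODY.PEEK, nothing is deleted or marked seen, reflecting IMAP's model of leaving messages on the server for other concurrently connected clients.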
Inertial frame of reference
In classical physics and special relativity, an inertial frame of reference (also called an inertial space or a Galilean reference frame) is a frame of reference in which objects exhibit inertia: they remain at rest or in uniform motion relative to the frame until acted upon by external forces. In such a frame, the laws of nature can be observed without the need to correct for acceleration. All frames of reference with zero acceleration are in a state of constant rectilinear motion (straight-line motion) with respect to one another. In such a frame, an object with zero net force acting on it, is perceived to move with a constant velocity, or, equivalently, Newton's first law of motion holds. Such frames are known as inertial. Some physicists, like Isaac Newton, originally thought that one of these frames was absolute — the one approximated by the fixed stars. However, this is not required for the definition, and it is now known that those stars are in fact moving. According to the principle of special relativity, all physical laws look the same in all inertial reference frames, and no inertial frame is privileged over another. Measurements of objects in one inertial frame can be converted to measurements in another by a simple transformation — the Galilean transformation in Newtonian physics or the Lorentz transformation (combined with a translation) in special relativity; these approximately match when the relative speed of the frames is low, but differ as it approaches the speed of light. By contrast, a non-inertial reference frame has non-zero acceleration. In such a frame, the interactions between physical objects vary depending on the acceleration of that frame with respect to an inertial frame. Viewed from the perspective of classical mechanics and special relativity, the usual physical forces caused by the interaction of objects have to be supplemented by fictitious forces caused by inertia. Viewed from the perspective of general relativity theory, the fictitious (i.e. inertial) forces are attributed to geodesic motion in spacetime. Due to Earth's rotation, its surface is not an inertial frame of reference. The Coriolis effect can deflect certain forms of motion as seen from Earth, and the centrifugal force will reduce the effective gravity at the equator. Nevertheless, for many applications the Earth is an adequate approximation of an inertial reference frame. Introduction The motion of a body can only be described relative to something else—other bodies, observers, or a set of spacetime coordinates. These are called frames of reference. According to the first postulate of special relativity, all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation: This simplicity manifests itself in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames has external causes. The principle of simplicity can be used within Newtonian physics as well as in special relativity: However, this definition of inertial frames is understood to apply in the Newtonian realm and ignores relativistic effects. In practical terms, the equivalence of inertial reference frames means that scientists within a box moving with a constant absolute velocity cannot determine this velocity by any experiment. Otherwise, the differences would set up an absolute standard reference frame. 
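For reference, the two transformations mentioned in this introduction can be written out for a boost with relative speed v along the x-axis; these are the standard textbook forms rather than expressions reproduced from this article:

```latex
\text{Galilean:}\quad x' = x - vt,\qquad t' = t
\\[4pt]
\text{Lorentz:}\quad x' = \gamma\,(x - vt),\qquad
t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right),\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

For v much smaller than c, the factor γ approaches 1 and the vx/c² term becomes negligible, which is the sense in which the two transformations approximately match at low relative speeds.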
According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. In Newtonian mechanics, inertial frames of reference are related by the Galilean group of symmetries. Newton's inertial frame of reference Absolute space Newton posited an absolute space considered well-approximated by a frame of reference stationary relative to the fixed stars. An inertial frame was then one in uniform translation relative to absolute space. However, some "relativists", even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced. The expression inertial frame of reference () was coined by Ludwig Lange in 1885, to replace Newton's definitions of "absolute space and time" with a more operational definition: The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojevich: The utility of operational definitions was carried much further in the special theory of relativity. Some historical background including Lange's definition is provided by DiSalle, who says in summary: Newtonian mechanics Classical theories that use the Galilean transformation postulate the equivalence of all inertial reference frames. The Galilean transformation transforms coordinates from one inertial reference frame, , to another, , by simple addition or subtraction of coordinates: where r0 and t0 represent shifts in the origin of space and time, and v is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time t2 − t1 between two events is the same for all reference frames and the distance between two simultaneous events (or, equivalently, the length of any object, |r2 − r1|) is also the same. Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton's first law of motion is valid. However, the principle of special relativity generalizes the notion of an inertial frame to include all physical laws, not simply Newton's first law. Newton viewed the first law as valid in any reference frame that is in uniform motion (neither rotating nor accelerating) relative to absolute space; as a practical matter, "absolute space" was considered to be the fixed stars In the theory of relativity the notion of absolute space or a privileged frame is abandoned, and an inertial frame in the field of classical mechanics is defined as: Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton's first law of motion), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed. Newtonian inertial frames transform among each other according to the Galilean group of symmetries. If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then being able to determine when zero net force is applied is crucial. The problem was summarized by Einstein: There are several approaches to this issue. 
One approach is to argue that all real forces drop off with distance from their sources in a known manner, so it is only needed that a body is far enough away from all sources to ensure that no force is present. A possible issue with this approach is the historically long-lived view that the distant universe might affect matters (Mach's principle). Another approach is to identify all real sources for real forces and account for them. A possible issue with this approach is the possibility of missing something, or accounting inappropriately for their influence, perhaps, again, due to Mach's principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when shifting reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames and have complicated rules of transformation in general cases. Based on the universality of physical law and the request for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces. Newton enunciated a principle of relativity himself in one of his corollaries to the laws of motion: This principle differs from the special principle in two ways: first, it is restricted to mechanics, and second, it makes no mention of simplicity. It shares the special principle of the invariance of the form of the description among mutually translating reference frames. The role of fictitious forces in classifying reference frames is pursued further below. Special relativity Einstein's theory of special relativity, like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant, the transformation between inertial frames is the Lorentz transformation, not the Galilean transformation which is used in Newtonian mechanics. The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation, length contraction, and the relativity of simultaneity. The predictions of special relativity have been extensively verified experimentally. The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero. Examples Simple example Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 meters. The car in front is traveling at 22 meters per second and the car behind is traveling at 30 meters per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose. First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where is the position in meters of car one after time t in seconds and is the position of car two after time t. Notice that these formulas predict at t = 0 s the first car is 200m down the road and the second car is right beside us, as expected. We want to find the time at which . 
Therefore, we set and solve for , that is: Alternatively, we could choose a frame of reference S′ situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of . To catch up to the first car, it will take a time of , that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at . It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one can convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you can deduct five minutes from the time displayed on your watch to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three). Additional example For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving to the right. However, for the person facing west, the car was moving to the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system. For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the -axis, and the direction in front of him as the positive -axis. To him, the car moves along the axis with some velocity in the positive -direction. Alfred's frame of reference is considered an inertial frame because he is not accelerating, ignoring effects such as Earth's rotation and gravity. Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive -axis, and the direction in front of her as the positive -axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving – for instance, as she drives past Alfred, she observes him moving with velocity in the negative -direction. If she is driving north, then north is the positive -direction; if she turns east, east becomes the positive -direction. Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be in the negative -direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, in the negative -direction. 
However, if she is accelerating at rate in the negative -direction (in other words, slowing down), she will find Candace's acceleration to be in the negative -direction—a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive -direction (speeding up), she will observe Candace's acceleration as in the negative -direction—a larger value than Alfred's measurement. Non-inertial frames Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below. General relativity General relativity is based upon the principle of equivalence: This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911. Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 1011. For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin. Einstein's general theory modifies the distinction between nominally "inertial" and "non-inertial" effects by replacing special relativity's "flat" Minkowski Space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity. However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a "local theory". "Local" can encompass, for example, the entire Milky Way galaxy: The astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the Solar System. Schwarzschild pointed out that that was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian. Inertial frames and rotation In an inertial frame, Newton's first law, the law of inertia, is satisfied: Any free motion has a constant magnitude and direction. Newton's second law for a particle takes the form: with F the net force (a vector), m the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame. 
The force F is the vector sum of all "real" forces on the particle, such as contact forces, electromagnetic, gravitational, and nuclear forces. In contrast, Newton's second law in a rotating frame of reference (a non-inertial frame of reference), rotating at angular rate Ω about an axis, takes the form: which looks the same as in an inertial frame, but now the force F′ is the resultant of not only F, but also additional terms (the paragraph following this equation presents the main points without detailed mathematics): where the angular rotation of the frame is expressed by the vector Ω pointing in the direction of the axis of rotation, and with magnitude equal to the angular rate of rotation Ω, symbol × denotes the vector cross product, vector xB locates the body and vector vB is the velocity of the body according to a rotating observer (different from the velocity seen by the inertial observer). The extra terms in the force F′ are the "fictitious" forces for this frame, whose causes are external to the system in the frame. The first extra term is the Coriolis force, the second the centrifugal force, and the third the Euler force. These terms all have these properties: they vanish when Ω = 0; that is, they are zero for an inertial frame (which, of course, does not rotate); they take on a different magnitude and direction in every rotating frame, depending upon its particular value of Ω; they are ubiquitous in the rotating frame (affect every particle, regardless of circumstance); and they have no apparent source in identifiable physical sources, in particular, matter. Also, fictitious forces do not drop off with distance (unlike, for example, nuclear forces or electrical forces). For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis. All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present. In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket. In both cases, application of Newton's second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket). As now known, the fixed stars are not fixed. Those that reside in the Milky Way turn with the galaxy, exhibiting proper motions. 
Those that are outside our galaxy (such as nebulae once mistaken to be stars) participate in their own motion as well, partly due to expansion of the universe, and partly due to peculiar velocities. For instance, the Andromeda Galaxy is on collision course with the Milky Way at a speed of 117 km/s. The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based on the simplicity of the laws of physics in the frame. The laws of nature take a simpler form in inertial frames of reference because in these frames one did not have to introduce inertial forces when writing down Newton's law of motion. In practice, using a frame of reference based upon the fixed stars as though it were an inertial frame of reference introduces little discrepancy. For example, the centrifugal acceleration of the Earth because of its rotation about the Sun is about thirty million times greater than that of the Sun about the galactic center. To illustrate further, consider the question: "Does the Universe rotate?" An answer might explain the shape of the Milky Way galaxy using the laws of physics, although other observations might be more definitive; that is, provide larger discrepancies or less measurement uncertainty, like the anisotropy of the microwave background radiation or Big Bang nucleosynthesis. The flatness of the Milky Way depends on its rate of rotation in an inertial frame of reference. If its apparent rate of rotation is attributed entirely to rotation in an inertial frame, a different "flatness" is predicted than if it is supposed that part of this rotation is actually due to rotation of the universe and should not be included in the rotation of the galaxy itself. Based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the Universe. If the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. If no value of the rotation parameter is successful and theory is not within observational error, a modification of physical law is considered, for example, dark matter is invoked to explain the galactic rotation curve. So far, observations show any rotation of the universe is very slow, no faster than once every years (10−13 rad/yr), and debate persists over whether there is any rotation. However, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity. When quantum effects are important, there are additional conceptual complications that arise in quantum reference frames. Primed frames An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x′, y′, a′. The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r′. 
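The relations that the following paragraph differentiates were evidently displayed equations in the original. A hedged reconstruction in the notation just introduced (R, r, r′; here ω is an added symbol for the angular velocity of the accelerated frame) is:

```latex
\mathbf{r} = \mathbf{R} + \mathbf{r}', \qquad
\mathbf{v} = \mathbf{V} + \mathbf{v}', \qquad
\mathbf{a} = \mathbf{A} + \mathbf{a}'
\\[4pt]
m\,\mathbf{a}' = \mathbf{F} - m\mathbf{A}
 \;-\; m\,\dot{\boldsymbol{\omega}}\times\mathbf{r}'
 \;-\; 2m\,\boldsymbol{\omega}\times\mathbf{v}'
 \;-\; m\,\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}')
```

The first line applies to a purely translating primed frame; the second line applies when the frame also rotates, and the three subtracted terms are, in order, the Euler, Coriolis and centrifugal forces named below.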
From the geometry of the situation Taking the first and second derivatives of this with respect to time where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame. These equations allow transformations between the two coordinate systems; for example, Newton's second law can be written as When there is accelerated motion due to a force being exerted there is manifestation of inertia. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for manifestation of inertia occurs in response to change in velocity due to a force. Seen from the perspective of a rotating frame of reference the manifestation of inertia appears to exert a force (either in centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect). A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried). This arrangement leads to the equation (see Fictitious force for a derivation): or, to solve for the acceleration in the accelerated frame, Multiplying through by the mass m gives where (Euler force), (Coriolis force), (centrifugal force). Separating non-inertial from inertial reference frames Theory Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces. The presence of fictitious forces indicates the physical laws are not the simplest laws available, in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame: Bodies in non-inertial reference frames are subject to so-called fictitious forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames. To apply the Newtonian definition of an inertial frame, the understanding of separation between "fictitious" forces and "real" forces must be made clear. For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle, and is subject to centripetal force. How can it be decided that the rotating frame is a non-inertial frame? There are two approaches to this resolution: one approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force). It will be found there are no sources for these forces, no associated force carriers, no originating bodies. A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame. Newton examined this problem himself using rotating spheres, as shown in Figure 2 and Figure 3. 
He pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed. If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish. For linear acceleration, Newton expressed the idea that straight-line accelerations shared in common by all bodies are undetectable. This principle generalizes the notion of an inertial frame. For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, inertial frame is a relative concept. With this in mind, inertial frames can collectively be defined as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set. For these ideas to apply, everything observed in the frame has to be subject to a base-line, common acceleration shared by the frame itself. That situation would apply, for example, to the free-falling lift described above, where all objects are subject to the same gravitational acceleration, and the lift itself accelerates at the same rate. Applications Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external or internal signal source. A gyrocompass, employed for navigation of seagoing vessels, finds the geometric north. It does so, not by sensing the Earth's magnetic field, but by using inertial space as its reference. The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line. When the gyroscope wheel inside the gyrocompass device is spun up, the way the gyroscope wheel is suspended causes the gyroscope wheel to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can be stationary with respect to the Earth and not be required to change direction with respect to inertial space. After being spun up, a gyrocompass can reach the direction of alignment with the Earth's axis in as little as a quarter of an hour.
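To make the dead-reckoning idea mentioned above concrete, here is a minimal sketch (a simplified, one-dimensional illustration with invented sample readings and time step, not a description of any real navigation system) that integrates accelerometer output twice to track velocity and position:

```python
import numpy as np

# Simplified 1-D dead reckoning: integrate acceleration twice over time.
# Real inertial navigation systems work in 3-D, correct for gravity and
# Earth rotation, and fuse gyroscope data; this sketch omits all of that.
dt = 0.1                                   # sampling interval in seconds (assumed)
accel = np.array([0.0, 0.5, 0.5, 0.5, 0.0, 0.0, -0.5, -0.5, -0.5, 0.0])  # m/s^2

velocity = np.cumsum(accel) * dt           # first integration: velocity
position = np.cumsum(velocity) * dt        # second integration: position

print(f"final velocity: {velocity[-1]:.2f} m/s")
print(f"final position: {position[-1]:.2f} m")
```

In practice the errors of such open-loop integration grow with time, which is why real systems periodically correct the estimate against external references when they are available.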
Physical sciences
Classical mechanics
Physics
14843
https://en.wikipedia.org/wiki/Interstellar%20travel
Interstellar travel
Interstellar travel is the hypothetical travel of spacecraft between star systems. Due to the vast distances between the Solar System and nearby stars, interstellar travel is not practicable with current propulsion technologies. To travel between stars within a reasonable amount of time (decades or centuries), an interstellar spacecraft must reach a significant fraction of the speed of light, requiring enormous energy. Communication with such interstellar craft will experience years of delay due to the speed of light. Collisions with cosmic dust and gas at such speeds can be catastrophic for such spacecraft. Crewed interstellar travel could possibly be conducted more slowly (far beyond the scale of a human lifetime) by building a generation ship. Hypothetical interstellar propulsion systems include nuclear pulse propulsion, the fission-fragment rocket, the fusion rocket, the beamed solar sail, and the antimatter rocket. The benefits of interstellar travel include detailed surveys of habitable exoplanets and distant stars, a comprehensive search for extraterrestrial intelligence, and space colonization. Even though five uncrewed spacecraft have left our Solar System, they are not "interstellar craft" because they are not purposefully designed to explore other star systems. Thus, as of the 2020s, interstellar spaceflight remains a popular trope in speculative future studies and science fiction. A civilization that has mastered interstellar travel is called an interstellar species. Challenges Interstellar distances Distances between the planets in the Solar System are often measured in astronomical units (AU), defined as the average distance between the Sun and Earth, some 150 million kilometres (93 million miles). Venus, the closest planet to Earth, is (at closest approach) 0.28 AU away. Neptune, the farthest planet from the Sun, is 29.8 AU away. As of January 20, 2023, Voyager 1, the farthest human-made object from Earth, is 163 AU away, exiting the Solar System at a speed of 17 km/s (0.006% of the speed of light). The closest known star, Proxima Centauri, is approximately 268,000 AU away, or over 9,000 times farther away than Neptune. Because of this, distances between stars are usually expressed in light-years (defined as the distance that light travels in vacuum in one Julian year) or in parsecs (one parsec is 3.26 ly, the distance at which stellar parallax is exactly one arcsecond, hence the name). Light in a vacuum travels around 300,000 kilometres per second, so 1 light-year is about 9.46 trillion kilometres (5.88 trillion miles) or 63,241 AU. Hence, Proxima Centauri is approximately 4.243 light-years from Earth. Another way of understanding the vastness of interstellar distances is by scaling: One of the closest stars to the Sun, Alpha Centauri A (a Sun-like star that is one of two companions of Proxima Centauri), can be pictured by scaling down the Earth–Sun distance to one metre. On this scale, the distance to Alpha Centauri A would be about 276 kilometres. The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/390 of a light-year in 46 years and is currently moving at 1/17,600 the speed of light. At this rate, a journey to Proxima Centauri would take 75,000 years. Required energy A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy K = ½mv², where m is the final mass and v the cruise velocity. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the lower bound for the required energy is doubled, to mv². 
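To make this lower bound concrete, the following sketch (illustrative only; the one-tonne final mass and 10% of light speed are example values, not a mission design) evaluates both the classical bound ½mv² and the exact relativistic kinetic energy (γ − 1)mc²:

```python
import math

C = 299_792_458.0          # speed of light, m/s
TWH = 3.6e15               # joules per terawatt-hour

m = 1000.0                 # final (payload) mass in kg -- example value
v = 0.1 * C                # cruise speed: 10% of the speed of light

ke_classical = 0.5 * m * v**2                       # 1/2 m v^2 lower bound
gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
ke_relativistic = (gamma - 1.0) * m * C**2          # exact kinetic energy

print(f"classical bound:    {ke_classical:.2e} J  (~{ke_classical / TWH:.0f} TWh)")
print(f"relativistic value: {ke_relativistic:.2e} J (~{ke_relativistic / TWH:.0f} TWh)")
# Roughly 4.5e17 J, on the order of 125 TWh, before any propulsion losses;
# decelerating at the target with onboard engines doubles the requirement.
```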
The velocity for a crewed round trip of a few decades to even the nearest star is several thousand times greater than those of present space vehicles. This means that, due to the v² term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 4.5×10¹⁷ joules, or about 125 terawatt-hours (world energy consumption in 2008 was 143,851 terawatt-hours), without factoring in the efficiency of the propulsion mechanism. This energy has to be generated onboard from stored fuel, harvested from the interstellar medium, or projected over immense distances. Interstellar medium A knowledge of the properties of the interstellar gas and dust through which the vehicle must pass is essential for the design of any interstellar space mission. A major issue with traveling at extremely high speeds is that, due to the requisite high relative speeds and large kinetic energies, collisions with interstellar dust could cause considerable damage to the craft. Various shielding methods to mitigate this problem have been proposed. Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects and mitigation methods have been discussed in the literature, but many unknowns remain. An additional consideration is that, due to the non-homogeneous distribution of interstellar matter around the Sun, these risks would vary between different trajectories. Although a high-density interstellar medium may cause difficulties for many interstellar travel concepts, interstellar ramjets, and some proposed concepts for decelerating interstellar spacecraft, would actually benefit from a denser interstellar medium. Hazards The crew of an interstellar ship would face several significant hazards, including the psychological effects of long-term isolation, the physiological effects of extreme acceleration, the effects of exposure to ionising radiation, and the physiological effects of weightlessness on the muscles, joints, bones, immune system, and eyes. There also exists the risk of impact by micrometeoroids and other space debris. These risks represent challenges that have yet to be overcome. Wait calculation The speculative fiction writer and physicist Robert L. Forward has argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and has not yet reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion (the incessant obsolescence postulate). In 2006, Andrew Kennedy calculated ideal departure dates for a trip to Barnard's Star using a more precise concept of the wait calculation, in which, for a given destination and growth rate in propulsion capacity, there is a departure point that overtakes earlier launches and will not be overtaken by later ones. He concluded that "an interstellar journey of 6 light years can best be made in about 635 years from now if growth continues at about 1.4% per annum", or approximately 2641 AD. It may be the most significant calculation for cultures competing to occupy the galaxy. Prime targets for interstellar travel There are 59 known stellar systems within 40 light years of the Sun, containing 81 visible stars. 
The following could be considered prime targets for interstellar missions: Existing astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration. Proposed methods Slow, uncrewed probes "Slow" interstellar missions (still fast by other standards) based on current and near-future propulsion technologies are associated with trip times starting from about several decades to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes like those used in the Voyager program. By taking along no crew, the cost and complexity of the mission is significantly reduced, as is the mass that needs to be accelerated, although technology lifetime is still a significant issue next to obtaining a reasonable speed of travel. Proposed concepts include Project Daedalus, Project Icarus, Project Dragonfly, Project Longshot, and more recently Breakthrough Starshot. Fast, uncrewed probes Nanoprobes Near-lightspeed nano spacecraft might be possible within the near future built on existing microchip technology with a newly developed nanoscale thruster. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators shooting conductive nanoparticles out into space. Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that a large number of nanoprobes would need to be sent due to the vulnerability of very small probes to be easily deflected by magnetic fields, micrometeorites and other dangers to ensure the chances that at least one nanoprobe will survive the journey and reach the destination. As a near-term solution, small, laser-propelled interstellar probes, based on current CubeSat technology were proposed in the context of Project Dragonfly. Slow, crewed missions In crewed missions, the duration of a slow interstellar journey presents a major obstacle and existing concepts deal with this problem in different ways. They can be distinguished by the "state" in which humans are transported on-board of the spacecraft. Generation ships A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises. Suspended animation Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships in which the passengers lie inert for the long duration of the voyage. Frozen embryos A robotic interstellar mission carrying some number of frozen early stage human embryos is another theoretical possibility. This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents. 
Island hopping through interstellar space Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way. Fast, crewed missions If a spaceship could average 10 percent of light speed (and decelerate at the destination, for human crewed missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts have been proposed that might be eventually developed to accomplish this (see § Propulsion below), but none of them are ready for near-term (few decades) developments at acceptable cost. Time dilation Physicists generally believe faster-than-light travel is impossible. Relativistic time dilation allows a traveler to experience time more slowly, the closer their speed is to the speed of light. This apparent slowing becomes noticeable when velocities above 80% of the speed of light are attained. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time (see diagram). Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth. For example, a spaceship could travel to a star 32 light-years away, initially accelerating at a constant 1.03g (i.e. 10.1 m/s2) for 1.32 years (ship time), then stopping its engines and coasting for the next 17.3 years (ship time) at a constant speed, then decelerating again for 1.32 ship-years, and coming to a stop at the destination. After a short visit, the astronaut could return to Earth the same way. After the full round-trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch. From the viewpoint of the astronaut, onboard clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 light years per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light years as measured by the astronaut. At higher speeds, the time on board will run even slower, so the astronaut could travel to the center of the Milky Way (30,000 light years from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 light year per Earth year, so, when back home, the astronaut will find that more than 60 thousand years will have passed on Earth. Constant acceleration Regardless of how it is achieved, a propulsion system that could produce acceleration continuously from departure to arrival would be the fastest method of travel. A constant acceleration journey is one where the propulsion system accelerates the ship at a constant rate for the first half of the journey, and then decelerates for the second half, so that it arrives at the destination stationary relative to where it began. If this were performed with an acceleration similar to that experienced at the Earth's surface, it would have the added advantage of producing artificial "gravity" for the crew. 
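The relativistic relations for constant proper acceleration are compact enough to evaluate directly. The sketch below (an illustrative calculation only; it simply re-evaluates the 1.03 g, 1.32 ship-year burn quoted in the time-dilation example above, using the standard hyperbolic-motion formulas) computes the speed, Lorentz factor, Earth-frame time and distance reached at the end of the acceleration phase:

```python
import math

C = 299_792_458.0                 # speed of light, m/s
YEAR = 365.25 * 24 * 3600         # Julian year, s
LY = C * YEAR                     # light-year, m

def constant_acceleration(a, tau):
    """Constant proper acceleration a (m/s^2) sustained for ship time tau (s).
    Returns (speed, Lorentz factor, Earth-frame elapsed time, distance covered)."""
    phi = a * tau / C                       # rapidity reached
    v = C * math.tanh(phi)                  # speed in the Earth frame
    gamma = math.cosh(phi)                  # time-dilation factor
    t = (C / a) * math.sinh(phi)            # elapsed Earth-frame time
    x = (C**2 / a) * (math.cosh(phi) - 1)   # distance covered in Earth frame
    return v, gamma, t, x

# Figures from the example above: 1.03 g for 1.32 years of ship time.
v, gamma, t, x = constant_acceleration(1.03 * 9.81, 1.32 * YEAR)
print(f"speed reached:  {v / C:.2f} c")           # about 0.89 c
print(f"Lorentz factor: {gamma:.2f}")             # about 2.2
print(f"Earth time:     {t / YEAR:.2f} years")    # about 1.8 years
print(f"distance:       {x / LY:.2f} light-years")  # about 1.1 light-years
# At the quoted cruise speed of 0.87 c (gamma of roughly 2), length contraction
# is what makes the 32 ly distance appear to the crew as about 16 ly.
```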
Supplying the energy required, however, would be prohibitively expensive with current technology. From the perspective of a planetary observer, the ship will appear to accelerate steadily at first, but then more gradually as it approaches the speed of light (which it cannot exceed). It will undergo hyperbolic motion. The ship will be close to the speed of light after about a year of accelerating and remain at that speed until it brakes for the end of the journey. From the perspective of an onboard observer, the crew will feel a gravitational field opposite the engine's acceleration, and the universe ahead will appear to fall in that field, undergoing hyperbolic motion. As part of this, distances between objects in the direction of the ship's motion will gradually contract until the ship begins to decelerate, at which time an onboard observer's experience of the gravitational field will be reversed. When the ship reaches its destination, if it were to exchange a message with its origin planet, it would find that less time had elapsed on board than had elapsed for the planetary observer, due to time dilation and length contraction. The result is an impressively fast journey for the crew. Propulsion Rocket concepts All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, the ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass. Very high specific power, the ratio of thrust to total vehicle mass, is required to reach interstellar targets within sub-century time-frames. Some heat transfer is inevitable, resulting in an extreme thermal load. Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle. Ion engine A type of electric propulsion, spacecraft such as Dawn use an ion engine. In an ion engine, electric power is used to create charged particles of the propellant, usually the gas xenon, and accelerate them to extremely high velocities. The exhaust velocity of conventional rockets is limited to about 5 km/s by the chemical energy stored in the fuel's molecular bonds. They produce a high thrust (about 106 N), but they have a low specific impulse, and that limits their top speed. By contrast, ion engines have low force, but the top speed in principle is limited only by the electrical power available on the spacecraft and on the gas ions being accelerated. The exhaust speed of the charged particles range from 15 km/s to 35 km/s. Nuclear fission powered Fission-electric Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, have the potential to reach speeds much greater than chemically powered vehicles or nuclear-thermal rockets. Such vehicles probably have the potential to power solar system exploration with reasonable trip times within the current century. Because of their low-thrust propulsion, they would be limited to off-planet, deep-space operation. Electrically powered spacecraft propulsion powered by a portable power-source, say a nuclear reactor, producing only small accelerations, would take centuries to reach for example 15% of the velocity of light, thus unsuitable for interstellar flight during a single human lifetime. Fission-fragment Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, which are ejected at speeds of up to . 
With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel and limits the effective exhaust velocity to about 5% of the velocity of light. For maximum velocity, the reaction mass should optimally consist of fission products, the "ash" of the primary energy source, so no extra reaction mass need be bookkept in the mass ratio. Nuclear pulse Based on work in the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. driven by a series of nuclear explosions. This propulsion system contains the prospect of very high specific impulse and high specific power. Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion that used pure deuterium fusion detonations with a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to achieve a 20,000 km/s delta-v allowing a flight-time to Alpha Centauri of 130 years. Later studies indicate that the top cruise velocity that can theoretically be achieved by a Teller-Ulam thermonuclear unit powered Orion starship, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08-0.1c). An atomic (fission) Orion can achieve perhaps 3%-5% of the speed of light. A nuclear pulse drive starship powered by fusion-antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range and pure matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% and 80% of the speed of light. In each case saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant, this would allow the ship to travel near the maximum theoretical velocity. Alternative designs utilizing similar principles include Project Longshot, Project Daedalus, and Mini-Mag Orion. The principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight. In the 1970s the Nuclear Pulse Propulsion concept further was refined by Project Daedalus by use of externally triggered inertial confinement fusion, in this case producing fusion explosions via compressing fusion fuel pellets with high-powered electron beams. Since then, lasers, ion beams, neutral particle beams and hyper-kinetic projectiles have been suggested to produce nuclear pulses for propulsion purposes. A current impediment to the development of any nuclear-explosion-powered spacecraft is the 1963 Partial Test Ban Treaty, which includes a prohibition on the detonation of any nuclear devices (even non-weapon based) in outer space. This treaty would, therefore, need to be renegotiated, although a project on the scale of an interstellar mission using currently foreseeable technology would probably require international cooperation on at least the scale of the International Space Station. Another issue to be considered, would be the g-forces imparted to a rapidly accelerated spacecraft, cargo, and passengers inside (see Inertia negation). 
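The rocket-equation arithmetic behind figures like Dyson's can be sketched in a few lines. The snippet below (a back-of-the-envelope check, not a reconstruction of the original study) asks what propellant mass ratio the Tsiolkovsky rocket equation demands for the quoted 20,000 km/s delta-v at a 15,000 km/s exhaust velocity:

```python
import math

def mass_ratio(delta_v_km_s, exhaust_km_s):
    """Tsiolkovsky rocket equation solved for the initial/final mass ratio."""
    return math.exp(delta_v_km_s / exhaust_km_s)

# Figures quoted above for Dyson's 1968 pulse-propulsion study.
r = mass_ratio(delta_v_km_s=20_000, exhaust_km_s=15_000)
print(f"required mass ratio M0/M1: {r:.1f}")                    # about 3.8
print(f"propellant fraction of departure mass: {1 - 1/r:.0%}")  # about 74%
# Relativistic corrections are still small at a few percent of the
# speed of light, so the classical rocket equation is adequate here.
```

Whatever the absolute masses involved, a delta-v of 4/3 of the exhaust velocity means roughly three quarters of the departure mass must be propellant, which is why saving fuel for deceleration halves the achievable cruise speed.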
Nuclear fusion rockets Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of that of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light. These would "burn" such light element fuels as deuterium, tritium, 3He, 11B, and 7Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases <0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of the speed of light. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries. Early studies include Project Daedalus, performed by the British Interplanetary Society in 1973–1978, and Project Longshot, a student project sponsored by NASA and the US Naval Academy, completed in 1988. Another fairly detailed vehicle system, "Discovery II", designed and optimized for crewed Solar System exploration, based on the D3He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7•10−3 g, with a ship initial mass of ~1700 metric tons, and payload fraction above 10%. Although these are still far short of the requirements for interstellar travel on human timescales, the study seems to represent a reasonable benchmark towards what may be approachable within several decades, which is not impossibly beyond the current state-of-the-art. Based on the concept's 2.2% burnup fraction it could achieve a pure fusion product exhaust velocity of ~3,000 km/s. Antimatter rockets An antimatter rocket would have a far higher energy density and specific impulse than any other proposed class of rocket. If energy resources and efficient production methods are found to make antimatter in the quantities required and store it safely, it would be theoretically possible to reach speeds of several tens of percent that of light. Whether antimatter propulsion could lead to the higher speeds (>90% that of light) at which relativistic time dilation would become more noticeable, thus making time pass at a slower rate for the travelers as perceived by an outside observer, is doubtful owing to the large quantity of antimatter that would be required. Speculating that production and storage of antimatter should become feasible, two further issues need to be considered. First, in the annihilation of antimatter, much of the energy is lost as high-energy gamma radiation, and especially also as neutrinos, so that only about 40% of mc2 would actually be available if the antimatter were simply allowed to annihilate into radiations thermally. Even so, the energy available for propulsion would be substantially higher than the ~1% of mc2 yield of nuclear fusion, the next-best rival candidate. Second, heat transfer from the exhaust to the vehicle seems likely to transfer enormous wasted energy into the ship (e.g. 
for 0.1g ship acceleration, approaching 0.3 trillion watts per ton of ship mass), considering the large fraction of the energy that goes into penetrating gamma rays. Even assuming shielding was provided to protect the payload (and passengers on a crewed vehicle), some of the energy would inevitably heat the vehicle, and may thereby prove a limiting factor if useful accelerations are to be achieved. More recently, Friedwardt Winterberg proposed that a matter-antimatter GeV gamma ray laser photon rocket is possible by a relativistic proton-antiproton pinch discharge, where the recoil from the laser beam is transmitted by the Mössbauer effect to the spacecraft. Rockets with an external energy source Rockets deriving their power from external sources, such as a laser, could replace their internal energy source with an energy collector, potentially reducing the mass of the ship greatly and allowing much higher travel speeds. Geoffrey A. Landis proposed an interstellar probe propelled by an ion thruster powered by the energy beamed to it from a base station laser. Lenard and Andrews proposed using a base station laser to accelerate nuclear fuel pellets towards a Mini-Mag Orion spacecraft that ignites them for propulsion. Non-rocket concepts A problem with all traditional rocket propulsion methods is that the spacecraft would need to carry its fuel with it, thus making it very massive, in accordance with the rocket equation. Several concepts attempt to escape from this problem: RF resonant cavity thruster A radio frequency (RF) resonant cavity thruster is a device that is claimed to be a spacecraft thruster. In 2016, the Advanced Propulsion Physics Laboratory at NASA reported observing a small apparent thrust from one such test, a result not since replicated. One of the designs is called EMDrive. In December 2002, Satellite Propulsion Research Ltd described a working prototype with an alleged total thrust of about 0.02 newtons powered by an 850 W cavity magnetron. The device could operate for only a few dozen seconds before the magnetron failed, due to overheating. The latest test on the EMDrive concluded that it does not work. Helical engine Proposed in 2019 by NASA scientist Dr. David Burns, the helical engine concept would use a particle accelerator to accelerate particles to near the speed of light. Since particles traveling at such speeds acquire more mass, it is believed that this mass change could create acceleration. According to Burns, the spacecraft could theoretically reach 99% the speed of light. Interstellar ramjets In 1960, Robert W. Bussard proposed the Bussard ramjet, a fusion rocket in which a huge scoop would collect the diffuse hydrogen in interstellar space, "burn" it on the fly using a proton–proton chain reaction, and expel it out of the back. Later calculations with more accurate estimates suggest that the thrust generated would be less than the drag caused by any conceivable scoop design. Yet the idea is attractive because the fuel would be collected en route (commensurate with the concept of energy harvesting), so the craft could theoretically accelerate to near the speed of light. The limitation is due to the fact that the reaction can only accelerate the propellant to 0.12c. Thus the drag of catching interstellar dust and the thrust of accelerating that same dust to 0.12c would be the same when the speed is 0.12c, preventing further acceleration. 
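The heat-load figure quoted above for antimatter rockets can be read as simple mechanics: if the exhaust leaves at essentially the speed of light, the jet power needed to produce a given thrust is roughly the thrust times c. The snippet below (a rough order-of-magnitude check under that assumption, not a vehicle design) evaluates this for 0.1 g acceleration of one tonne of ship mass:

```python
C = 299_792_458.0      # speed of light, m/s
G = 9.81               # standard gravity, m/s^2

mass = 1000.0          # one (metric) ton of ship mass, kg -- example value
accel = 0.1 * G        # 0.1 g acceleration

thrust = mass * accel            # force needed, N
jet_power = thrust * C           # power of a light-speed (photon-like) exhaust, W

print(f"thrust per ton:    {thrust:.0f} N")
print(f"jet power per ton: {jet_power:.2e} W")   # roughly 3e11 W, i.e. ~0.3 TW
# A sizeable fraction of this, carried by penetrating gamma rays, would end up
# heating the vehicle, which is the limiting factor discussed above.
```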
Beamed propulsion A light sail or magnetic sail powered by a massive laser or particle accelerator in the home star system could potentially reach even greater speeds than rocket- or pulse-propulsion methods, because it would not need to carry its own reaction mass and therefore would only need to accelerate the craft's payload. Robert L. Forward proposed a means for decelerating an interstellar craft with a 100-kilometer light sail in the destination star system without requiring a laser array to be present in that system. In this scheme, a 30-kilometer secondary sail is deployed to the rear of the spacecraft, while the large primary sail is detached from the craft to keep moving forward on its own. Light is reflected from the large primary sail to the secondary sail, which is used to decelerate the secondary sail and the spacecraft payload. In 2002, Geoffrey A. Landis of NASA's Glenn Research Center also proposed a laser-powered propulsion sail ship that would host a diamond sail (a few nanometers thick) powered with the use of solar energy. With this proposal, this interstellar ship would, theoretically, be able to reach 10 percent of the speed of light. It has also been proposed to use beam-powered propulsion to accelerate a spacecraft and electromagnetic propulsion to decelerate it, thus eliminating the problem that the Bussard ramjet has with the drag produced during acceleration. A magnetic sail could also decelerate at its destination without depending on carried fuel or a driving beam in the destination system, by interacting with the plasma found in the solar wind of the destination star and the interstellar medium. The physicist Robert L. Forward has proposed a number of example concepts using beamed laser propulsion. Interstellar travel catalog to use photogravitational assists for a full stop A catalog of targets for photogravitational-assist full-stop missions has been compiled, based on work by Heller, Hippke and Kervella. Successive assists at α Cen A and B could allow travel times of 75 yr to both stars. The lightsail considered has a nominal mass-to-surface ratio (σnom) of 8.6×10⁻⁴ gram m⁻² for a nominal graphene-class sail, an area of about 10⁵ m² = (316 m)², and a velocity of up to 37,300 km s⁻¹ (12.5% c). Pre-accelerated fuel Achieving start-stop interstellar trip times of less than a human lifetime requires mass ratios of between 1,000 and 1,000,000, even for the nearer stars. This could be achieved by multi-staged vehicles on a vast scale. Alternatively, large linear accelerators could propel fuel to fission-propelled space vehicles, avoiding the limitations of the rocket equation. Dynamic soaring Dynamic soaring as a way to travel across interstellar space has been proposed. Theoretical concepts Transmission of minds with light Uploaded human minds or AI could be transmitted with laser or radio signals at the speed of light. This requires a receiver at the destination, which would first have to be set up, e.g. by humans, probes, self-replicating machines (potentially along with AI or uploaded humans), or an alien civilization (which might also be in a different galaxy, perhaps a Kardashev type III civilization). Artificial black hole A theoretical idea for enabling interstellar travel is to propel a starship by creating an artificial black hole and using a parabolic reflector to reflect its Hawking radiation. Although beyond current technological capabilities, a black hole starship offers some advantages compared to other possible methods. 
Getting the black hole to act as a power source and engine also requires a way to convert the Hawking radiation into energy and thrust. One potential method involves placing the hole at the focal point of a parabolic reflector attached to the ship, creating forward thrust. A slightly easier, but less efficient method would involve simply absorbing all the gamma radiation heading towards the fore of the ship to push it onwards, and let the rest shoot out the back. Faster-than-light travel Scientists and authors have postulated a number of ways by which it might be possible to surpass the speed of light, but even the most serious-minded of these are highly speculative. It is also debatable whether faster-than-light travel is physically possible, in part because of causality concerns: travel faster than light may, under certain conditions, permit travel backwards in time within the context of special relativity. Proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter and, it is not known if it could be produced in sufficient quantities, if at all. Alcubierre drive In physics, the Alcubierre drive is based on an argument, within the framework of general relativity and without the introduction of wormholes, that it is possible to modify spacetime in a way that allows a spaceship to travel with an arbitrarily large speed by a local expansion of spacetime behind the spaceship and an opposite contraction in front of it. Nevertheless, this concept would require the spaceship to incorporate a region of exotic matter, or the hypothetical concept of negative mass. Wormholes Wormholes are conjectural distortions in spacetime that theorists postulate could connect two arbitrary points in the universe, across an Einstein–Rosen Bridge. It is not known whether wormholes are possible in practice. Although there are solutions to the Einstein equation of general relativity that allow for wormholes, all of the currently known solutions involve some assumption, for example the existence of negative mass, which may be unphysical. However, Cramer et al. argue that such wormholes might have been created in the early universe, stabilized by cosmic strings. The general theory of wormholes is discussed by Visser in the book Lorentzian Wormholes. Designs and studies Project Hyperion Project Hyperion has looked into various feasibility issues of crewed interstellar travel. Notable results of the project include an assessment of world ship system architectures and adequate population size. Its members continue to publish on crewed interstellar travel in collaboration with the Initiative for Interstellar Studies. Enzmann starship The Enzmann starship, as detailed by G. Harry Stine in the October 1973 issue of Analog, was a design for a future starship, based on the ideas of Robert Duncan-Enzmann. The spacecraft itself as proposed used a 12,000,000 ton ball of frozen deuterium to power 12–24 thermonuclear pulse propulsion units. Twice as long as the Empire State Building is tall and assembled in-orbit, the spacecraft was part of a larger project preceded by interstellar probes and telescopic observation of target star systems. NASA research NASA has been researching interstellar travel since its formation, translating important foreign language papers and conducting early studies on applying fusion propulsion, in the 1960s, and laser propulsion, in the 1970s, to interstellar travel. 
In 1994, NASA and JPL cosponsored a "Workshop on Advanced Quantum/Relativity Theory Propulsion" to "establish and use new frames of reference for thinking about the faster-than-light (FTL) question". The NASA Breakthrough Propulsion Physics Program (terminated in FY 2003 after a 6-year, $1.2-million study, because "No breakthroughs appear imminent.") identified some breakthroughs that are needed for interstellar travel to be possible. Geoffrey A. Landis of NASA's Glenn Research Center states that a laser-powered interstellar sail ship could possibly be launched within 50 years, using new methods of space travel. "I think that ultimately we're going to do it, it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with extensive sails, propelled by laser light to about one-tenth the speed of light. It would take such a ship about 43 years to reach Alpha Centauri if it passed through the system without stopping. Slowing down to stop at Alpha Centauri could increase the trip to 100 years, whereas a journey without slowing down raises the issue of making sufficiently accurate and useful observations and measurements during a fly-by. 100 Year Starship study The 100 Year Starship (100YSS) study was the name of a one-year project to assess the attributes of and lay the groundwork for an organization that can carry forward the 100 Year Starship vision. 100YSS-related symposia were organized between 2011 and 2015. Harold ("Sonny") White from NASA's Johnson Space Center is a member of Icarus Interstellar, the nonprofit foundation whose mission is to realize interstellar flight before the year 2100. At the 2012 meeting of 100YSS, he reported using a laser to try to warp spacetime by 1 part in 10 million with the aim of helping to make interstellar travel possible. Other designs Project Orion, human crewed interstellar ship (1958–1968). Project Daedalus, uncrewed interstellar probe (1973–1978). Starwisp, uncrewed interstellar probe (1985). Project Longshot, uncrewed interstellar probe (1987–1988). Starseed/launcher, fleet of uncrewed interstellar probes (1996). Project Valkyrie, human crewed interstellar ship (2009). Project Icarus, uncrewed interstellar probe (2009–2014). Sun-diver, uncrewed interstellar probe. Project Dragonfly, small laser-propelled interstellar probe (2013–2015). Breakthrough Starshot, fleet of uncrewed interstellar probes, announced on 12 April 2016. Solar One, crewed spacecraft that would combine beamed-powered propulsion, electromagnetic propulsion, and nuclear propulsion (2020). Non-profit organizations A few organisations dedicated to interstellar propulsion research and advocacy for the case exist worldwide. These are still in their infancy, but are already backed up by a membership of a wide variety of scientists, students and professionals. Initiative for Interstellar Studies (UK) Tau Zero Foundation (USA) Limitless Space Institute (USA) Tennessee Valley Interstellar Workshop (TVIW), business name Interstellar Research Group (IRG) (USA) Feasibility The energy requirements make interstellar travel very difficult. It has been reported that at the 2008 Joint Propulsion Conference, multiple experts opined that it was improbable that humans would ever explore beyond the Solar System. Brice N. 
Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated that at least 100 times the total energy output of the entire world [in a given year] would be required to send a probe to the nearest star. Astrophysicist Sten Odenwald stated that the basic problem is that, through intensive studies of thousands of detected exoplanets, most of the closest destinations within 50 light years do not yield Earth-like planets in the star's habitable zones. Given the multitrillion-dollar expense of some of the proposed technologies, travelers will have to spend up to 200 years traveling at 20% of the speed of light to reach the best-known destinations. Moreover, once the travelers arrive at their destination (by any means), they will not be able to travel down to the surface of the target world and set up a colony unless the atmosphere is non-lethal. The prospect of making such a journey, only to spend the rest of the colony's life inside a sealed habitat and venturing outside only in a spacesuit, may eliminate many prospective targets from the list. Moving at a speed close to the speed of light and encountering even a tiny stationary object like a grain of sand will have fatal consequences. For example, a gram of matter moving at 90% of the speed of light contains a kinetic energy corresponding to a small nuclear bomb (around 30 kt of TNT). One of the major stumbling blocks, assuming all other considerations are solved, is carrying enough onboard spares and repair facilities for such a lengthy journey without access to the resources available on Earth. Interstellar missions not for human benefit Explorative high-speed missions to Alpha Centauri, as planned for by the Breakthrough Starshot initiative, are projected to be realizable within the 21st century. It is alternatively possible to plan for uncrewed slow-cruising missions taking millennia to arrive. These probes would not be for human benefit in the sense that one cannot foresee whether anybody on Earth would still be interested in the science data transmitted back. An example would be the Genesis mission, which aims to bring unicellular life, in the spirit of directed panspermia, to habitable but otherwise barren planets. Comparatively slow-cruising Genesis probes, with typical speeds of a small fraction of the speed of light, can be decelerated using a magnetic sail. Uncrewed missions not for human benefit would hence be feasible. Discovery of Earth-like planets On August 24, 2016, the discovery of the Earth-size exoplanet Proxima Centauri b, orbiting in the habitable zone of Proxima Centauri 4.2 light-years away, was announced. This is the nearest known potentially habitable exoplanet outside our Solar System. In February 2017, NASA announced that its Spitzer Space Telescope had revealed seven Earth-size planets in the TRAPPIST-1 system, orbiting an ultra-cool dwarf star 40 light-years away from the Solar System. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water. The discovery sets a new record for the greatest number of habitable-zone planets found around a single star outside the Solar System. All of these seven planets could have liquid water – the key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.
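The hazard figure quoted in the feasibility discussion above is easy to verify. The sketch below (a simple check using the standard relativistic kinetic-energy formula) computes the energy carried by one gram of matter at 90% of the speed of light and compares it with the energy of a kiloton of TNT:

```python
import math

C = 299_792_458.0        # speed of light, m/s
KT_TNT = 4.184e12        # energy of one kiloton of TNT, J

m = 0.001                # one gram, in kg
v = 0.9 * C

gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
kinetic_energy = (gamma - 1.0) * m * C**2     # relativistic kinetic energy

print(f"kinetic energy: {kinetic_energy:.2e} J")
print(f"TNT equivalent: {kinetic_energy / KT_TNT:.0f} kt")
# About 28 kt, i.e. of the order of the ~30 kt figure quoted above.
```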
Technology
Basics_6
null
14856
https://en.wikipedia.org/wiki/Inner%20product%20space
Inner product space
In mathematics, an inner product space (or, rarely, a Hausdorff pre-Hilbert space) is a real vector space or a complex vector space with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets such as in . Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality (zero inner product) of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or scalar product of Cartesian coordinates. Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898. An inner product naturally induces an associated norm, (denoted and in the picture); so, every inner product space is a normed vector space. If this normed space is also complete (that is, a Banach space) then the inner product space is a Hilbert space. If an inner product space is not a Hilbert space, it can be extended by completion to a Hilbert space This means that is a linear subspace of the inner product of is the restriction of that of and is dense in for the topology defined by the norm. Definition In this article, denotes a field that is either the real numbers or the complex numbers A scalar is thus an element of . A bar over an expression representing a scalar denotes the complex conjugate of this scalar. A zero vector is denoted for distinguishing it from the scalar . An inner product space is a vector space over the field together with an inner product, that is, a map that satisfies the following three properties for all vectors and all scalars Conjugate symmetry: As if and only if is real, conjugate symmetry implies that is always a real number. If is , conjugate symmetry is just symmetry. Linearity in the first argument: Positive-definiteness: if is not zero, then (conjugate symmetry implies that is real). If the positive-definiteness condition is replaced by merely requiring that for all , then one obtains the definition of positive semi-definite Hermitian form. A positive semi-definite Hermitian form is an inner product if and only if for all , if then . Basic properties In the following properties, which result almost immediately from the definition of an inner product, and are arbitrary vectors, and and are arbitrary scalars. is real and nonnegative. if and only if This implies that an inner product is a sesquilinear form. where denotes the real part of its argument. Over , conjugate-symmetry reduces to symmetry, and sesquilinearity reduces to bilinearity. Hence an inner product on a real vector space is a positive-definite symmetric bilinear form. The binomial expansion of a square becomes Notation Several notations are used for inner products, including , , and , as well as the usual dot product. Convention variant Some authors, especially in physics and matrix algebra, prefer to define inner products and sesquilinear forms with linearity in the second argument rather than the first. Then the first argument becomes conjugate linear, rather than the second. Bra-ket notation in quantum mechanics also uses slightly different notation, i.e. , where . 
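The defining properties can be checked numerically for the standard inner product on ℂⁿ, ⟨x, y⟩ = Σ xᵢ conj(yᵢ), which is linear in the first argument, matching the convention used in this article. The sketch below (illustrative only; the test vectors and scalars are arbitrary) verifies conjugate symmetry, linearity in the first argument, and positive-definiteness:

```python
import numpy as np

def inner(x, y):
    """Standard inner product on C^n, linear in the first argument."""
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)   # arbitrary test vectors
y = rng.normal(size=3) + 1j * rng.normal(size=3)
z = rng.normal(size=3) + 1j * rng.normal(size=3)
a, b = 2.0 - 1.0j, 0.5 + 3.0j                      # arbitrary scalars

# Conjugate symmetry: <x, y> = conj(<y, x>)
assert np.isclose(inner(x, y), np.conj(inner(y, x)))

# Linearity in the first argument: <a x + b z, y> = a <x, y> + b <z, y>
assert np.isclose(inner(a * x + b * z, y), a * inner(x, y) + b * inner(z, y))

# Positive-definiteness: <x, x> is real and positive for nonzero x
assert abs(inner(x, x).imag) < 1e-12 and inner(x, x).real > 0

print("inner product axioms verified for the sample vectors")
```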
Examples Real and complex numbers Among the simplest examples of inner product spaces are and The real numbers are a vector space over that becomes an inner product space with arithmetic multiplication as its inner product: The complex numbers are a vector space over that becomes an inner product space with the inner product Unlike with the real numbers, the assignment does define a complex inner product on Euclidean vector space More generally, the real -space with the dot product is an inner product space, an example of a Euclidean vector space. where is the transpose of A function is an inner product on if and only if there exists a symmetric positive-definite matrix such that for all If is the identity matrix then is the dot product. For another example, if and is positive-definite (which happens if and only if and one/both diagonal elements are positive) then for any As mentioned earlier, every inner product on is of this form (where and satisfy ). Complex coordinate space The general form of an inner product on is known as the Hermitian form and is given by where is any Hermitian positive-definite matrix and is the conjugate transpose of For the real case, this corresponds to the dot product of the results of directionally-different scaling of the two vectors, with positive scale factors and orthogonal directions of scaling. It is a weighted-sum version of the dot product with positive weights—up to an orthogonal transformation. Hilbert space The article on Hilbert spaces has several examples of inner product spaces, wherein the metric induced by the inner product yields a complete metric space. An example of an inner product space which induces an incomplete metric is the space of continuous complex valued functions and on the interval The inner product is This space is not complete; consider for example, for the interval the sequence of continuous "step" functions, defined by: This sequence is a Cauchy sequence for the norm induced by the preceding inner product, which does not converge to a function. Random variables For real random variables and the expected value of their product is an inner product. In this case, if and only if (that is, almost surely), where denotes the probability of the event. This definition of expectation as inner product can be extended to random vectors as well. Complex matrices The inner product for complex square matrices of the same size is the Frobenius inner product . Since trace and transposition are linear and the conjugation is on the second matrix, it is a sesquilinear operator. We further get Hermitian symmetry by, Finally, since for nonzero, , we get that the Frobenius inner product is positive definite too, and so is an inner product. Vector spaces with forms On an inner product space, or more generally a vector space with a nondegenerate form (hence an isomorphism ), vectors can be sent to covectors (in coordinates, via transpose), so that one can take the inner product and outer product of two vectors—not simply of a vector and a covector. Basic results, terminology, and definitions Norm properties Every inner product space induces a norm, called its , that is defined by With this norm, every inner product space becomes a normed vector space. So, every general property of normed vector spaces applies to inner product spaces. In particular, one has the following properties: Orthogonality Real and complex parts of inner products Suppose that is an inner product on (so it is antilinear in its second argument). 
The polarization identity shows that the real part of the inner product is If is a real vector space then and the imaginary part (also called the ) of is always Assume for the rest of this section that is a complex vector space. The polarization identity for complex vector spaces shows that The map defined by for all satisfies the axioms of the inner product except that it is antilinear in its , rather than its second, argument. The real part of both and are equal to but the inner products differ in their complex part: The last equality is similar to the formula expressing a linear functional in terms of its real part. These formulas show that every complex inner product is completely determined by its real part. Moreover, this real part defines an inner product on considered as a real vector space. There is thus a one-to-one correspondence between complex inner products on a complex vector space and real inner products on For example, suppose that for some integer When is considered as a real vector space in the usual way (meaning that it is identified with the dimensional real vector space with each identified with ), then the dot product defines a real inner product on this space. The unique complex inner product on induced by the dot product is the map that sends to (because the real part of this map is equal to the dot product). Real vs. complex inner products Let denote considered as a vector space over the real numbers rather than complex numbers. The real part of the complex inner product is the map which necessarily forms a real inner product on the real vector space Every inner product on a real vector space is a bilinear and symmetric map. For example, if with inner product where is a vector space over the field then is a vector space over and is the dot product where is identified with the point (and similarly for ); thus the standard inner product on is an "extension" the dot product . Also, had been instead defined to be the (rather than the usual ) then its real part would be the dot product; furthermore, without the complex conjugate, if but then so the assignment would not define a norm. The next examples show that although real and complex inner products have many properties and results in common, they are not entirely interchangeable. For instance, if then but the next example shows that the converse is in general true. Given any the vector (which is the vector rotated by 90°) belongs to and so also belongs to (although scalar multiplication of by is not defined in the vector in denoted by is nevertheless still also an element of ). For the complex inner product, whereas for the real inner product the value is always If is a complex inner product and is a continuous linear operator that satisfies for all then This statement is no longer true if is instead a real inner product, as this next example shows. Suppose that has the inner product mentioned above. Then the map defined by is a linear map (linear for both and ) that denotes rotation by in the plane. Because and are perpendicular vectors and is just the dot product, for all vectors nevertheless, this rotation map is certainly not identically In contrast, using the complex inner product gives which (as expected) is not identically zero. Orthonormal sequences Let be a finite dimensional inner product space of dimension Recall that every basis of consists of exactly linearly independent vectors. Using the Gram–Schmidt process we may start with an arbitrary basis and transform it into an orthonormal basis. 
That is, into a basis in which all the elements are orthogonal and have unit norm. In symbols, a basis is orthonormal if for every and for each index This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let be any inner product space. Then a collection is a for if the subspace of generated by finite linear combinations of elements of is dense in (in the norm induced by the inner product). Say that is an for if it is a basis and if and for all Using an infinite-dimensional analog of the Gram-Schmidt process one may show: Theorem. Any separable inner product space has an orthonormal basis. Using the Hausdorff maximal principle and the fact that in a complete inner product space orthogonal projection onto linear subspaces is well-defined, one may also show that Theorem. Any complete inner product space has an orthonormal basis. The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out is negative. This is a non-trivial result, and is proved below. The following proof is taken from Halmos's A Hilbert Space Problem Book (see the references). {| class="toccolours collapsible collapsed" width="90%" style="text-align:left" !Proof |- | Recall that the dimension of an inner product space is the cardinality of a maximal orthonormal system that it contains (by Zorn's lemma it contains at least one, and any two have the same cardinality). An orthonormal basis is certainly a maximal orthonormal system but the converse need not hold in general. If is a dense subspace of an inner product space then any orthonormal basis for is automatically an orthonormal basis for Thus, it suffices to construct an inner product space with a dense subspace whose dimension is strictly smaller than that of Let be a Hilbert space of dimension (for instance, ). Let be an orthonormal basis of so Extend to a Hamel basis for where Since it is known that the Hamel dimension of is the cardinality of the continuum, it must be that Let be a Hilbert space of dimension (for instance, ). Let be an orthonormal basis for and let be a bijection. Then there is a linear transformation such that for and for Let and let be the graph of Let be the closure of in ; we will show Since for any we have it follows that Next, if then for some so ; since as well, we also have It follows that so and is dense in Finally, is a maximal orthonormal set in ; if for all then so is the zero vector in Hence the dimension of is whereas it is clear that the dimension of is This completes the proof. |} Parseval's identity leads immediately to the following theorem: Theorem. Let be a separable inner product space and an orthonormal basis of Then the map is an isometric linear map with a dense image. This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided is defined appropriately, as is explained in the article Hilbert space). In particular, we obtain the following result in the theory of Fourier series: Theorem. Let be the inner product space Then the sequence (indexed on set of all integers) of continuous functions is an orthonormal basis of the space with the inner product. The mapping is an isometric linear map with dense image. 
Orthogonality of the sequence follows immediately from the fact that if then Normality of the sequence is by design, that is, the coefficients are so chosen so that the norm comes out to 1. Finally the fact that the sequence has a dense algebraic span, in the , follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on with the uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials. Operators on inner product spaces Several types of linear maps between inner product spaces and are of relevance: : is linear and continuous with respect to the metric defined above, or equivalently, is linear and the set of non-negative reals where ranges over the closed unit ball of is bounded. : is linear and for all : satisfies for all A (resp. an ) is an isometry that is also a linear map (resp. an antilinear map). For inner product spaces, the polarization identity can be used to show that is an isometry if and only if for all All isometries are injective. The Mazur–Ulam theorem establishes that every surjective isometry between two normed spaces is an affine transformation. Consequently, an isometry between real inner product spaces is a linear map if and only if Isometries are morphisms between inner product spaces, and morphisms of real inner product spaces are orthogonal transformations (compare with orthogonal matrix). : is an isometry which is surjective (and hence bijective). Isometrical isomorphisms are also known as unitary operators (compare with unitary matrix). From the point of view of inner product space theory, there is no need to distinguish between two spaces which are isometrically isomorphic. The spectral theorem provides a canonical form for symmetric, unitary and more generally normal operators on finite dimensional inner product spaces. A generalization of the spectral theorem holds for continuous normal operators in Hilbert spaces. Generalizations Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are closest to inner products occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is weakened. Degenerate inner products If is a vector space and a semi-definite sesquilinear form, then the function: makes sense and satisfies all the properties of norm except that does not imply (such a functional is then called a semi-norm). We can produce an inner product space by considering the quotient The sesquilinear form factors through This construction is used in numerous contexts. The Gelfand–Naimark–Segal construction is a particularly important example of the use of this technique. Another example is the representation of semi-definite kernels on arbitrary sets. Nondegenerate conjugate symmetric forms Alternatively, one may require that the pairing be a nondegenerate form, meaning that for all non-zero there exists some such that though need not equal ; in other words, the induced map to the dual space is injective. This generalization is important in differential geometry: a manifold whose tangent spaces have an inner product is a Riemannian manifold, while if this is related to nondegenerate conjugate symmetric form the manifold is a pseudo-Riemannian manifold. 
By Sylvester's law of inertia, just as every inner product is similar to the dot product with positive weights on a set of vectors, every nondegenerate conjugate symmetric form is similar to the dot product with nonzero weights on a set of vectors, and the numbers of positive and negative weights are called respectively the positive index and negative index. The product of vectors in Minkowski space is an example of an indefinite inner product, although, technically speaking, it is not an inner product according to the standard definition above. Minkowski space has four dimensions and indices 3 and 1 (assignment of "+" and "−" to them differs depending on conventions). Purely algebraic statements (ones that do not use positivity) usually only rely on the nondegeneracy (the injectivity of the induced map into the dual space) and thus hold more generally. Related products The term "inner product" is opposed to outer product (tensor product), which is a slightly more general opposite. Simply, in coordinates, the inner product is the product of a 1 × n covector with an n × 1 vector, yielding a 1 × 1 matrix (a scalar), while the outer product is the product of an m × 1 vector with a 1 × n covector, yielding an m × n matrix. The outer product is defined for different dimensions, while the inner product requires the same dimension. If the dimensions are the same, then the inner product is the trace of the outer product (trace only being properly defined for square matrices). In an informal summary: "inner is horizontal times vertical and shrinks down, outer is vertical times horizontal and expands out". More abstractly, the outer product is the bilinear map sending a vector and a covector to a rank 1 linear transformation (a simple tensor of type (1, 1)), while the inner product is the bilinear evaluation map given by evaluating a covector on a vector; the order of the domain vector spaces here reflects the covector/vector distinction. The inner product and outer product should not be confused with the interior product and exterior product, which are instead operations on vector fields and differential forms, or more generally on the exterior algebra. As a further complication, in geometric algebra the inner product and the exterior (Grassmann) product are combined in the geometric product (the Clifford product in a Clifford algebra) – the inner product sends two vectors (1-vectors) to a scalar (a 0-vector), while the exterior product sends two vectors to a bivector (2-vector) – and in this context the exterior product is usually called the outer product (alternatively, the wedge product). The inner product is more correctly called a scalar product in this context, as the nondegenerate quadratic form in question need not be positive definite (need not be an inner product).
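To make the coordinate description of inner versus outer products concrete, the following is a minimal numerical sketch in Python with NumPy; the vectors, their dimensions, and the variable names are illustrative assumptions, not taken from the text above.

```python
import numpy as np

# A covector (row) times a vector (column) "shrinks down" to a 1x1 matrix, i.e. a scalar;
# a vector (column) times a covector (row) "expands out" to a full matrix.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = y.reshape(1, 3) @ x.reshape(3, 1)   # 1x3 times 3x1 -> 1x1 (a scalar)
outer = x.reshape(3, 1) @ y.reshape(1, 3)   # 3x1 times 1x3 -> 3x3 matrix

# When the dimensions agree, the inner product equals the trace of the outer product.
assert np.isclose(inner.item(), np.trace(outer))

# For complex vectors an inner product conjugates one argument so that <u, u> is real
# and positive; np.vdot conjugates its first argument.
u = np.array([1 + 1j, 2 - 1j])
print(np.vdot(u, u))        # (7+0j): real and positive
print(np.vdot(u, 1j * u))   # purely imaginary, so its real part (the real inner product) is 0
```

The trace identity holds because tr(x yᵀ) = yᵀx, and the last line echoes the earlier observation that, with respect to the real inner product, a vector is always orthogonal to i times itself.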
Mathematics
Linear algebra
null
14863
https://en.wikipedia.org/wiki/Incunable
Incunable
An incunable or incunabulum (plural: incunables or incunabula, respectively) is a book, pamphlet, or broadside that was printed in the earliest stages of printing in Europe, up to the year 1500. The specific date is essentially arbitrary, but the number of printed book editions exploded in the following century, so that all incunabula, produced before the printing press became widespread in Europe, are rare, whereas even some early 16th-century books are relatively common. They are distinct from manuscripts, which are documents written by hand. Some authorities on the history of printing include block books from the same time period as incunabula, whereas others limit the term to works printed using movable type. There are about 30,000 distinct incunable editions known. The probable number of surviving individual copies is much higher, estimated at 125,000 in Germany alone. Through statistical analysis, it is estimated that the number of lost editions is at least 20,000. Around 550,000 copies of around 27,500 different works have been preserved worldwide. Terminology Incunable is the anglicised form of incunabulum, reconstructed singular of Latin incunabula, which meant "swaddling clothes" or "cradle", and which could metaphorically refer to "the earliest stages or first traces in the development". A former term for incunable is fifteener, meaning "fifteenth-century edition". The term incunabula was first used in the context of printing by the Dutch physician and humanist Hadrianus Junius (Adriaen de Jonghe, 1511–1575), in a passage in his work Batavia (written in 1569; published posthumously in 1588). He referred to a period that he described, in Latin, as "the first infancy of the typographic art". The term has sometimes been incorrectly attributed to Bernhard von Mallinckrodt (1591–1664), in a Latin pamphlet of 1640 ("On the rise and progress of the typographic art"), but he was quoting Junius. The term incunabula came to denote printed books themselves in the late 17th century. It is not found in English before the mid-19th century. Junius set an end-date of 1500 to his era of incunabula, which remains the convention in modern bibliographical scholarship. This convenient but arbitrary end-date for identifying a printed book as an incunable does not reflect changes in the printing process, and many books printed for some years after 1500 are visually indistinguishable from incunables. The term "post-incunable" is now used to refer to books printed after 1500 up to 1520 or 1540, though there is no general agreement on the cut-off date. From around this period the dating of any edition becomes easier, as the practice of printing the place and year of publication using a colophon or on the title page became more widespread. Types There are two types of printed incunabula: the block book, printed from a single carved or sculpted wooden block for each page (the same process as the woodcut in art, called xylographic); and the typographic book, made by individual cast-metal movable type pieces on a printing press. Many authors reserve the term "incunabula" for the latter. The spread of printing to cities both in the North and in Italy ensured that there was great variety in the texts and the styles which appeared. Many early typefaces were modelled on local writing or derived from various European Gothic scripts, but there were also some derived from documentary scripts like Caxton's, and, particularly in Italy, types modelled on handwritten scripts and calligraphy used by humanists.
Printers congregated in urban centres where there were scholars, ecclesiastics, lawyers, and nobles and professionals who formed their major customer base. Standard works in Latin inherited from the medieval tradition formed the bulk of the earliest printed works, but as books became cheaper, vernacular works (or translations into vernaculars of standard works) began to appear. Famous examples Famous incunabula include two from Mainz, the Gutenberg Bible of 1455 and the Peregrinatio in terram sanctam of 1486, printed and illustrated by Erhard Reuwich; the Nuremberg Chronicle written by Hartmann Schedel and printed by Anton Koberger in 1493; and the Hypnerotomachia Poliphili printed by Aldus Manutius with important illustrations by an unknown artist. Other printers of incunabula were Günther Zainer of Augsburg, Johannes Mentelin and Heinrich Eggestein of Strasbourg, Heinrich Gran of Haguenau, Johann Amerbach of Basel, William Caxton of Bruges and London, and Nicolas Jenson of Venice. The first incunable to have woodcut illustrations was Ulrich Boner's Der Edelstein, printed by Albrecht Pfister in Bamberg in 1461. A finding in 2015 brought evidence of quires, as claimed by research, possibly printed in 1444–1446 and possibly assigned to Procopius Waldvogel of Avignon, France. Post-incunable Many incunabula are undated, needing complex bibliographical analysis to place them correctly. The post-incunabula period marks a time of development during which the printed book evolved fully as a mature artefact with a standard format. After about 1540 books tended to conform to a pattern that included the author, title-page, date, seller, and place of printing. This makes it much easier to identify any particular edition. As noted above, the end date for identifying a printed book as an incunable is convenient but was chosen arbitrarily; it does not reflect any notable developments in the printing process around the year 1500. Books printed for a number of years after 1500 continued to look much like incunables, with the notable exception of the small format books printed in italic type introduced by Aldus Manutius in 1501. The term post-incunable is sometimes used to refer to books printed "after 1500—how long after, the experts have not yet agreed." For books printed in England, the term generally covers 1501–1520, and for books printed in mainland Europe, 1501–1540. Statistical data The data in this section were derived from the Incunabula Short-Title Catalogue (ISTC). The number of printing towns and cities stands at 282. These are situated in some 18 countries in terms of present-day boundaries. In descending order of the number of editions printed in each, these are: Italy, Germany, France, Netherlands, Switzerland, Spain, Belgium, England, Austria, the Czech Republic, Portugal, Poland, Sweden, Denmark, Turkey, Croatia, Serbia, Montenegro, and Hungary (see diagram). The following table shows the 20 main 15th-century printing locations; as with all data in this section, exact figures are given, but should be treated as close estimates (the total editions recorded in ISTC at August 2016 is 30,518): The 18 languages that incunabula are printed in, in descending order, are: Latin, German, Italian, French, Dutch, Spanish, English, Hebrew, Catalan, Czech, Greek, Church Slavonic, Portuguese, Swedish, Breton, Danish, Frisian and Sardinian (see diagram). Only about one edition in ten (i.e. just over 3,000) has any illustrations, woodcuts or metalcuts. 
The "commonest" incunable is Schedel's Nuremberg Chronicle ("Liber Chronicarum") of 1493, with about 1,250 surviving copies (which is also the most heavily illustrated). Many incunabula are unique, but on average about 18 copies survive of each. This makes the Gutenberg Bible, at 48 or 49 known copies, a relatively common (though extremely valuable) edition. Counting extant incunabula is complicated by the fact that most libraries consider a single volume of a multi-volume work as a separate item, as well as fragments or copies lacking more than half the total leaves. A complete incunable may consist of a slip, or up to ten volumes. In terms of format, the 30,000-odd editions comprise: 2,000 broadsides, 9,000 folios, 15,000 quartos, 3,000 octavos, 18 12mos, 230 16mos, 20 32mos, and 3 64mos. ISTC at present cites 528 extant copies of books printed by Caxton, which together with 128 fragments makes 656 in total, though many are broadsides or very imperfect (incomplete). Apart from migration to mainly North American and Japanese universities, there has been little movement of incunabula in the last five centuries. None were printed in the Southern Hemisphere, and the latter appears to possess less than 2,000 copies, about 97.75% remain north of the equator. However, many incunabula are sold at auction or through the rare book trade every year. Major collections The British Library's Incunabula Short Title Catalogue now records over 29,000 titles, of which around 27,400 are incunabula editions (not all unique works). Studies of incunabula began in the 17th century. Michel Maittaire (1667–1747) and Georg Wolfgang Panzer (1729–1805) arranged printed material chronologically in annals format, and in the first half of the 19th century, Ludwig Hain published the Repertorium bibliographicum—a checklist of incunabula arranged alphabetically by author: "Hain numbers" are still a reference point. Hain was expanded in subsequent editions, by Walter A. Copinger and Dietrich Reichling, but it is being superseded by the authoritative modern listing, a German catalogue, the Gesamtkatalog der Wiegendrucke, which has been under way since 1925 and is still being compiled at the Staatsbibliothek zu Berlin. North American holdings were listed by Frederick R. Goff and a worldwide union catalogue is provided by the Incunabula Short Title Catalogue. Notable collections with more than 1,000 incunabula include:
Technology
Printing
null
14884
https://en.wikipedia.org/wiki/Intermediate%20value%20theorem
Intermediate value theorem
In mathematical analysis, the intermediate value theorem states that if is a continuous function whose domain contains the interval , then it takes on any given value between and at some point within the interval. This has two important corollaries: If a continuous function has values of opposite sign inside an interval, then it has a root in that interval (Bolzano's theorem). The image of a continuous function over an interval is itself an interval. Motivation This captures an intuitive property of continuous functions over the real numbers: given continuous on with the known values and , then the graph of must pass through the horizontal line while moves from to . It represents the idea that the graph of a continuous function on a closed interval can be drawn without lifting a pencil from the paper. Theorem The intermediate value theorem states the following: Consider an interval of real numbers and a continuous function . Then Version I. if is a number between and , that is, then there is a such that . Version II. the image set is also a closed interval, and it contains . Remark: Version II states that the set of function values has no gap. For any two function values with all points in the interval are also function values, A subset of the real numbers with no internal gap is an interval. Version I is naturally contained in Version II. Relation to completeness The theorem depends on, and is equivalent to, the completeness of the real numbers. The intermediate value theorem does not apply to the rational numbers Q because gaps exist between rational numbers; irrational numbers fill those gaps. For example, the function for satisfies and . However, there is no rational number such that , because is an irrational number. Despite the above, there is a version of the intermediate value theorem for polynomials over a real closed field; see the Weierstrass Nullstellensatz. Proof Proof version A The theorem may be proven as a consequence of the completeness property of the real numbers as follows: We shall prove the first case, . The second case is similar. Let be the set of all such that . Then is non-empty since is an element of . Since is non-empty and bounded above by , by completeness, the supremum exists. That is, is the smallest number that is greater than or equal to every member of . Note that, due to the continuity of at , we can keep within any of by keeping sufficiently close to . Since is a strict inequality, consider the implication when is the distance between and . No sufficiently close to can then make greater than or equal to , which means there are values greater than in . A more detailed proof goes like this: Choose . Then such that , Consider the interval . Notice that and every satisfies the condition . Therefore for every we have . Hence cannot be . Likewise, due to the continuity of at , we can keep within any of by keeping sufficiently close to . Since is a strict inequality, consider the similar implication when is the distance between and . Every sufficiently close to must then make greater than , which means there are values smaller than that are upper bounds of . A more detailed proof goes like this: Choose . Then such that , Consider the interval . Notice that and every satisfies the condition . Therefore for every we have . Hence cannot be . With and , it must be the case . Now we claim that . Fix some . Since is continuous at , such that , . Since and is open, such that . Set . Then we have for all . 
By the properties of the supremum, there exists some that is contained in , and so Picking , we know that because is the supremum of . This means that Both inequalities are valid for all , from which we deduce as the only possible value, as stated. Proof version B We will only prove the case of , as the case is similar. Define which is equivalent to and lets us rewrite as , and we have to prove, that for some , which is more intuitive. We further define the set . Because we know, that so, that is not empty. Moreover, as , we know that is bounded and non-empty, so by Completeness, the supremum exists. There are 3 cases for the value of , those being and . For contradiction, let us assume, that . Then, by the definition of continuity, for , there exists a such that implies, that , which is equivalent to . If we just chose , where , then as , , from which we get and , so . It follows that is an upper bound for . However, , contradicting the upper bound property of the least upper bound , so . Assume then, that . We similarly chose and know, that there exists a such that implies . We can rewrite this as which implies, that . If we now chose , then and . It follows that is an upper bound for . However, , which contradict the least property of the least upper bound , which means, that is impossible. If we combine both results, we get that or is the only remaining possibility. Remark: The intermediate value theorem can also be proved using the methods of non-standard analysis, which places "intuitive" arguments involving infinitesimals on a rigorous footing. History A form of the theorem was postulated as early as the 5th century BCE, in the work of Bryson of Heraclea on squaring the circle. Bryson argued that, as circles larger than and smaller than a given square both exist, there must exist a circle of equal area. The theorem was first proved by Bernard Bolzano in 1817. Bolzano used the following formulation of the theorem: Let be continuous functions on the interval between and such that and . Then there is an between and such that . The equivalence between this formulation and the modern one can be shown by setting to the appropriate constant function. Augustin-Louis Cauchy provided the modern formulation and a proof in 1821. Both were inspired by the goal of formalizing the analysis of functions and the work of Joseph-Louis Lagrange. The idea that continuous functions possess the intermediate value property has an earlier origin. Simon Stevin proved the intermediate value theorem for polynomials (using a cubic as an example) by providing an algorithm for constructing the decimal expansion of the solution. The algorithm iteratively subdivides the interval into 10 parts, producing an additional decimal digit at each step of the iteration. Before the formal definition of continuity was given, the intermediate value property was given as part of the definition of a continuous function. Proponents include Louis Arbogast, who assumed the functions to have no jumps, satisfy the intermediate value property and have increments whose sizes corresponded to the sizes of the increments of the variable. Earlier authors held the result to be intuitively obvious and requiring no proof. The insight of Bolzano and Cauchy was to define a general notion of continuity (in terms of infinitesimals in Cauchy's case and using real inequalities in Bolzano's case), and to provide a proof based on such definitions. 
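The root-finding corollary of the theorem (Bolzano's theorem) also underlies a simple numerical method: repeatedly halving an interval at whose endpoints a continuous function takes opposite signs. The following is a minimal Python sketch, not taken from any source cited here; the function name and tolerance are illustrative assumptions.

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Approximate a root of a continuous f on [a, b], given f(a) and f(b) of opposite sign.

    Bolzano's theorem guarantees a root exists in [a, b]; halving the interval
    while keeping the sign change traps the root in ever smaller intervals.
    """
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:      # the sign change, and hence a root, lies in [a, m]
            b = m
        else:                # otherwise it lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# Example: the square root of 2 as the root of x**2 - 2 on [1, 2].
print(bisect(lambda x: x * x - 2, 1.0, 2.0))   # approximately 1.41421356...
```

Stevin's procedure described above works in the same spirit, except that each step splits the interval into ten equal parts and therefore produces one additional decimal digit of the root per iteration.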
Converse is false A Darboux function is a real-valued function f that has the "intermediate value property," i.e., that satisfies the conclusion of the intermediate value theorem: for any two values a and b in the domain of f, and any y between f(a) and f(b), there is some c between a and b with f(c) = y. The intermediate value theorem says that every continuous function is a Darboux function. However, not every Darboux function is continuous; i.e., the converse of the intermediate value theorem is false. As an example, take the function f defined by f(x) = sin(1/x) for x > 0 and f(0) = 0. This function is not continuous at x = 0 because the limit of f(x) as x tends to 0 does not exist; yet the function has the intermediate value property. Another, more complicated example is given by the Conway base 13 function. In fact, Darboux's theorem states that all functions that result from the differentiation of some other function on some interval have the intermediate value property (even though they need not be continuous). Historically, this intermediate value property has been suggested as a definition for continuity of real-valued functions; this definition was not adopted. Generalizations Multi-dimensional spaces The Poincaré–Miranda theorem is a generalization of the intermediate value theorem from a (one-dimensional) interval to a (two-dimensional) rectangle, or more generally, to an n-dimensional cube. Vrahatis presents a similar generalization to triangles, or more generally, n-dimensional simplices. Let Dn be an n-dimensional simplex with n+1 vertices denoted by v0,...,vn. Let F=(f1,...,fn) be a continuous function from Dn to Rn that never equals 0 on the boundary of Dn. Suppose F satisfies the following conditions: For all i in 1,...,n, the sign of fi(vi) is opposite to the sign of fi(x) for all points x on the face opposite to vi; The sign-vector of f1,...,fn on v0 is not equal to the sign-vector of f1,...,fn on all points on the face opposite to v0. Then there is a point z in the interior of Dn on which F(z)=(0,...,0). It is possible to normalize the fi such that fi(vi)>0 for all i; then the conditions become simpler: For all i in 1,...,n, fi(vi)>0, and fi(x)<0 for all points x on the face opposite to vi. In particular, fi(v0)<0. For all points x on the face opposite to v0, fi(x)>0 for at least one i in 1,...,n. The theorem can be proved based on the Knaster–Kuratowski–Mazurkiewicz lemma. It can be used for approximations of fixed points and zeros. General metric and topological spaces The intermediate value theorem is closely linked to the topological notion of connectedness and follows from the basic properties of connected sets in metric spaces and connected subsets of R in particular: If X and Y are metric spaces, f is a continuous map from X to Y, and E is a connected subset of X, then the image f(E) is connected. A subset E of R is connected if and only if it satisfies the following property: whenever x and y belong to E and x < z < y, then z also belongs to E. In fact, connectedness is a topological property and generalizes to topological spaces: If X and Y are topological spaces, f is a continuous map from X to Y, and X is a connected space, then the image f(X) is connected. The preservation of connectedness under continuous maps can be thought of as a generalization of the intermediate value theorem, a property of continuous, real-valued functions of a real variable, to continuous functions in general spaces.
Recall the first version of the intermediate value theorem, stated previously: The intermediate value theorem is an immediate consequence of these two properties of connectedness: The intermediate value theorem generalizes in a natural way: Suppose that is a connected topological space and is a totally ordered set equipped with the order topology, and let be a continuous map. If and are two points in and is a point in lying between and with respect to , then there exists in such that . The original theorem is recovered by noting that is connected and that its natural topology is the order topology. The Brouwer fixed-point theorem is a related theorem that, in one dimension, gives a special case of the intermediate value theorem. In constructive mathematics In constructive mathematics, the intermediate value theorem is not true. Instead, one has to weaken the conclusion: Let and be real numbers and be a pointwise continuous function from the closed interval to the real line, and suppose that and . Then for every positive number there exists a point in the unit interval such that . Practical applications A similar result is the Borsuk–Ulam theorem, which says that a continuous map from the -sphere to Euclidean -space will always map some pair of antipodal points to the same place. In general, for any continuous function whose domain is some closed convex shape and any point inside the shape (not necessarily its center), there exist two antipodal points with respect to the given point whose functional value is the same. The theorem also underpins the explanation of why rotating a wobbly table will bring it to stability (subject to certain easily met constraints).
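As a one-dimensional illustration of the antipodal-point statement in the preceding paragraph, the sketch below applies the intermediate value theorem to the difference h(t) = g(t) − g(t + π) for a continuous 2π-periodic function g on the circle; since h(0) = −h(π), the theorem yields an angle at which the two antipodal values agree. This is a minimal Python sketch, and the sample function g is a hypothetical choice for illustration only.

```python
import math

def antipodal_pair(g, tol=1e-10):
    """Find an angle t with g(t) equal to g(t + pi), for continuous 2*pi-periodic g.

    h(t) = g(t) - g(t + pi) satisfies h(0) = -h(pi), so the intermediate value
    theorem gives a zero of h in [0, pi]; bisection locates it.
    """
    h = lambda t: g(t) - g(t + math.pi)
    a, b = 0.0, math.pi
    ha = h(a)
    if ha == 0:
        return a
    while b - a > tol:
        m = (a + b) / 2
        if ha * h(m) <= 0:   # the zero of h lies in [a, m]
            b = m
        else:                # the zero of h lies in [m, b]
            a, ha = m, h(m)
    return (a + b) / 2

# Hypothetical "temperature around a great circle" profile (illustrative only).
g = lambda t: 1.0 + math.sin(t) + 0.3 * math.cos(t)
t = antipodal_pair(g)
print(g(t), g(t + math.pi))   # the two antipodal values agree to within tolerance
```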
Mathematics
Real analysis
null
14895
https://en.wikipedia.org/wiki/Insulin
Insulin
Insulin (, from Latin insula, 'island') is a peptide hormone produced by beta cells of the pancreatic islets encoded in humans by the insulin (INS) gene. It is the main anabolic hormone of the body. It regulates the metabolism of carbohydrates, fats, and protein by promoting the absorption of glucose from the blood into cells of the liver, fat, and skeletal muscles. In these tissues the absorbed glucose is converted into either glycogen, via glycogenesis, or fats (triglycerides), via lipogenesis; in the liver, glucose is converted into both. Glucose production and secretion by the liver are strongly inhibited by high concentrations of insulin in the blood. Circulating insulin also affects the synthesis of proteins in a wide variety of tissues. It is thus an anabolic hormone, promoting the conversion of small molecules in the blood into large molecules in the cells. Low insulin in the blood has the opposite effect, promoting widespread catabolism, especially of reserve body fat. Beta cells are sensitive to blood sugar levels so that they secrete insulin into the blood in response to high level of glucose, and inhibit secretion of insulin when glucose levels are low. Insulin production is also regulated by glucose: high glucose promotes insulin production while low glucose levels lead to lower production. Insulin enhances glucose uptake and metabolism in the cells, thereby reducing blood sugar. Their neighboring alpha cells, by taking their cues from the beta cells, secrete glucagon into the blood in the opposite manner: increased secretion when blood glucose is low, and decreased secretion when glucose concentrations are high. Glucagon increases blood glucose by stimulating glycogenolysis and gluconeogenesis in the liver. The secretion of insulin and glucagon into the blood in response to the blood glucose concentration is the primary mechanism of glucose homeostasis. Decreased or absent insulin activity results in diabetes, a condition of high blood sugar level (hyperglycaemia). There are two types of the disease. In type 1 diabetes, the beta cells are destroyed by an autoimmune reaction so that insulin can no longer be synthesized or be secreted into the blood. In type 2 diabetes, the destruction of beta cells is less pronounced than in type 1, and is not due to an autoimmune process. Instead, there is an accumulation of amyloid in the pancreatic islets, which likely disrupts their anatomy and physiology. The pathogenesis of type 2 diabetes is not well understood but reduced population of islet beta-cells, reduced secretory function of islet beta-cells that survive, and peripheral tissue insulin resistance are known to be involved. Type 2 diabetes is characterized by increased glucagon secretion which is unaffected by, and unresponsive to the concentration of blood glucose. But insulin is still secreted into the blood in response to the blood glucose. As a result, glucose accumulates in the blood. The human insulin protein is composed of 51 amino acids, and has a molecular mass of 5808 Da. It is a heterodimer of an A-chain and a B-chain, which are linked together by disulfide bonds. Insulin's structure varies slightly between species of animals. Insulin from non-human animal sources differs somewhat in effectiveness (in carbohydrate metabolism effects) from human insulin because of these variations. 
Porcine insulin is especially close to the human version, and was widely used to treat type 1 diabetics before human insulin could be produced in large quantities by recombinant DNA technologies. Insulin was the first peptide hormone discovered. Frederick Banting and Charles Best, working in the laboratory of John Macleod at the University of Toronto, were the first to isolate insulin from dog pancreas in 1921. Frederick Sanger sequenced the amino acid structure in 1951, which made insulin the first protein to be fully sequenced. The crystal structure of insulin in the solid state was determined by Dorothy Hodgkin in 1969. Insulin is also the first protein to be chemically synthesised and produced by DNA recombinant technology. It is on the WHO Model List of Essential Medicines, the most important medications needed in a basic health system. Evolution and species distribution Insulin may have originated more than a billion years ago. The molecular origins of insulin go at least as far back as the simplest unicellular eukaryotes. Apart from animals, insulin-like proteins are also known to exist in fungi and protists. Insulin is produced by beta cells of the pancreatic islets in most vertebrates and by the Brockmann body in some teleost fish. Cone snails: Conus geographus and Conus tulipa, venomous sea snails that hunt small fish, use modified forms of insulin in their venom cocktails. The insulin toxin, closer in structure to fishes' than to snails' native insulin, slows down the prey fishes by lowering their blood glucose levels. Production Insulin is produced exclusively in the beta cells of the pancreatic islets in mammals, and the Brockmann body in some fish. Human insulin is produced from the INS gene, located on chromosome 11. Rodents have two functional insulin genes; one is the homolog of most mammalian genes (Ins2), and the other is a retroposed copy that includes promoter sequence but that is missing an intron (Ins1). Transcription of the insulin gene increases in response to elevated blood glucose. This is primarily controlled by transcription factors that bind enhancer sequences in the ~400 base pairs before the gene's transcription start site. The major transcription factors influencing insulin secretion are PDX1, NeuroD1, and MafA. During a low-glucose state, PDX1 (pancreatic and duodenal homeobox protein 1) is located in the nuclear periphery as a result of interaction with HDAC1 and 2, which results in downregulation of insulin secretion. An increase in blood glucose levels causes phosphorylation of PDX1, which leads it to undergo nuclear translocation and bind the A3 element within the insulin promoter. Upon translocation it interacts with coactivators HAT p300 and SETD7. PDX1 affects the histone modifications through acetylation and deacetylation as well as methylation. It is also said to suppress glucagon. NeuroD1, also known as β2, regulates insulin exocytosis in pancreatic β cells by directly inducing the expression of genes involved in exocytosis. It is localized in the cytosol, but in response to high glucose it becomes glycosylated by OGT and/or phosphorylated by ERK, which causes translocation to the nucleus. In the nucleus β2 heterodimerizes with E47, binds to the E1 element of the insulin promoter and recruits co-activator p300 which acetylates β2. It is able to interact with other transcription factors as well in activation of the insulin gene. MafA is degraded by proteasomes upon low blood glucose levels. 
Increased levels of glucose make an unknown protein glycosylated. This protein works as a transcription factor for MafA in an unknown manner and MafA is transported out of the cell. MafA is then translocated back into the nucleus where it binds the C1 element of the insulin promoter. These transcription factors work synergistically and in a complex arrangement. Increased blood glucose can after a while destroy the binding capacities of these proteins, and therefore reduce the amount of insulin secreted, causing diabetes. The decreased binding activities can be mediated by glucose induced oxidative stress and antioxidants are said to prevent the decreased insulin secretion in glucotoxic pancreatic β cells. Stress signalling molecules and reactive oxygen species inhibits the insulin gene by interfering with the cofactors binding the transcription factors and the transcription factors itself. Several regulatory sequences in the promoter region of the human insulin gene bind to transcription factors. In general, the A-boxes bind to Pdx1 factors, E-boxes bind to NeuroD, C-boxes bind to MafA, and cAMP response elements to CREB. There are also silencers that inhibit transcription. Synthesis Insulin is synthesized as an inactive precursor molecule, a 110 amino acid-long protein called "preproinsulin". Preproinsulin is translated directly into the rough endoplasmic reticulum (RER), where its signal peptide is removed by signal peptidase to form "proinsulin". As the proinsulin folds, opposite ends of the protein, called the "A-chain" and the "B-chain", are fused together with three disulfide bonds. Folded proinsulin then transits through the Golgi apparatus and is packaged into specialized secretory vesicles. In the granule, proinsulin is cleaved by proprotein convertase 1/3 and proprotein convertase 2, removing the middle part of the protein, called the "C-peptide". Finally, carboxypeptidase E removes two pairs of amino acids from the protein's ends, resulting in active insulin – the insulin A- and B- chains, now connected with two disulfide bonds. The resulting mature insulin is packaged inside mature granules waiting for metabolic signals (such as leucine, arginine, glucose and mannose) and vagal nerve stimulation to be exocytosed from the cell into the circulation. Insulin and its related proteins have been shown to be produced inside the brain, and reduced levels of these proteins are linked to Alzheimer's disease. Insulin release is stimulated also by beta-2 receptor stimulation and inhibited by alpha-1 receptor stimulation. In addition, cortisol, glucagon and growth hormone antagonize the actions of insulin during times of stress. Insulin also inhibits fatty acid release by hormone-sensitive lipase in adipose tissue. Structure Contrary to an initial belief that hormones would be generally small chemical molecules, as the first peptide hormone known of its structure, insulin was found to be quite large. A single protein (monomer) of human insulin is composed of 51 amino acids, and has a molecular mass of 5808 Da. The molecular formula of human insulin is C257H383N65O77S6. It is a combination of two peptide chains (dimer) named an A-chain and a B-chain, which are linked together by two disulfide bonds. The A-chain is composed of 21 amino acids, while the B-chain consists of 30 residues. The linking (interchain) disulfide bonds are formed at cysteine residues between the positions A7-B7 and A20-B19. 
There is an additional (intrachain) disulfide bond within the A-chain between cysteine residues at positions A6 and A11. The A-chain exhibits two α-helical regions at A1-A8 and A12-A19 which are antiparallel; while the B chain has a central α -helix (covering residues B9-B19) flanked by the disulfide bond on either sides and two β-sheets (covering B7-B10 and B20-B23). The amino acid sequence of insulin is strongly conserved and varies only slightly between species. Bovine insulin differs from human in only three amino acid residues, and porcine insulin in one. Even insulin from some species of fish is similar enough to human to be clinically effective in humans. Insulin in some invertebrates is quite similar in sequence to human insulin, and has similar physiological effects. The strong homology seen in the insulin sequence of diverse species suggests that it has been conserved across much of animal evolutionary history. The C-peptide of proinsulin, however, differs much more among species; it is also a hormone, but a secondary one. Insulin is produced and stored in the body as a hexamer (a unit of six insulin molecules), while the active form is the monomer. The hexamer is about 36000 Da in size. The six molecules are linked together as three dimeric units to form symmetrical molecule. An important feature is the presence of zinc atoms (Zn2+) on the axis of symmetry, which are surrounded by three water molecules and three histidine residues at position B10. The hexamer is an inactive form with long-term stability, which serves as a way to keep the highly reactive insulin protected, yet readily available. The hexamer-monomer conversion is one of the central aspects of insulin formulations for injection. The hexamer is far more stable than the monomer, which is desirable for practical reasons; however, the monomer is a much faster-reacting drug because diffusion rate is inversely related to particle size. A fast-reacting drug means insulin injections do not have to precede mealtimes by hours, which in turn gives people with diabetes more flexibility in their daily schedules. Insulin can aggregate and form fibrillar interdigitated beta-sheets. This can cause injection amyloidosis, and prevents the storage of insulin for long periods. Function Secretion Beta cells in the islets of Langerhans release insulin in two phases. The first-phase release is rapidly triggered in response to increased blood glucose levels, and lasts about 10 minutes. The second phase is a sustained, slow release of newly formed vesicles triggered independently of sugar, peaking in 2 to 3 hours. The two phases of the insulin release suggest that insulin granules are present in diverse stated populations or "pools". During the first phase of insulin exocytosis, most of the granules predispose for exocytosis are released after the calcium internalization. This pool is known as Readily Releasable Pool (RRP). The RRP granules represent 0.3-0.7% of the total insulin-containing granule population, and they are found immediately adjacent to the plasma membrane. During the second phase of exocytosis, insulin granules require mobilization of granules to the plasma membrane and a previous preparation to undergo their release. Thus, the second phase of insulin release is governed by the rate at which granules get ready for release. This pool is known as a Reserve Pool (RP). The RP is released slower than the RRP (RRP: 18 granules/min; RP: 6 granules/min). 
Reduced first-phase insulin release may be the earliest detectable beta cell defect predicting onset of type 2 diabetes. First-phase release and insulin sensitivity are independent predictors of diabetes. The description of first phase release is as follows: Glucose enters the β-cells through the glucose transporters, GLUT 2. At low blood sugar levels little glucose enters the β-cells; at high blood glucose concentrations large quantities of glucose enter these cells. The glucose that enters the β-cell is phosphorylated to glucose-6-phosphate (G-6-P) by glucokinase (hexokinase IV) which is not inhibited by G-6-P in the way that the hexokinases in other tissues (hexokinase I – III) are affected by this product. This means that the intracellular G-6-P concentration remains proportional to the blood sugar concentration. Glucose-6-phosphate enters glycolytic pathway and then, via the pyruvate dehydrogenase reaction, into the Krebs cycle, where multiple, high-energy ATP molecules are produced by the oxidation of acetyl CoA (the Krebs cycle substrate), leading to a rise in the ATP:ADP ratio within the cell. An increased intracellular ATP:ADP ratio closes the ATP-sensitive SUR1/Kir6.2 potassium channel (see sulfonylurea receptor). This prevents potassium ions (K+) from leaving the cell by facilitated diffusion, leading to a buildup of intracellular potassium ions. As a result, the inside of the cell becomes less negative with respect to the outside, leading to the depolarization of the cell surface membrane. Upon depolarization, voltage-gated calcium ion (Ca2+) channels open, allowing calcium ions to move into the cell by facilitated diffusion. The cytosolic calcium ion concentration can also be increased by calcium release from intracellular stores via activation of ryanodine receptors. The calcium ion concentration in the cytosol of the beta cells can also, or additionally, be increased through the activation of phospholipase C resulting from the binding of an extracellular ligand (hormone or neurotransmitter) to a G protein-coupled membrane receptor. Phospholipase C cleaves the membrane phospholipid, phosphatidyl inositol 4,5-bisphosphate, into inositol 1,4,5-trisphosphate and diacylglycerol. Inositol 1,4,5-trisphosphate (IP3) then binds to receptor proteins in the plasma membrane of the endoplasmic reticulum (ER). This allows the release of Ca2+ ions from the ER via IP3-gated channels, which raises the cytosolic concentration of calcium ions independently of the effects of a high blood glucose concentration. Parasympathetic stimulation of the pancreatic islets operates via this pathway to increase insulin secretion into the blood. The significantly increased amount of calcium ions in the cells' cytoplasm causes the release into the blood of previously synthesized insulin, which has been stored in intracellular secretory vesicles. This is the primary mechanism for release of insulin. Other substances known to stimulate insulin release include the amino acids arginine and leucine, parasympathetic release of acetylcholine (acting via the phospholipase C pathway), sulfonylurea, cholecystokinin (CCK, also via phospholipase C), and the gastrointestinally derived incretins, such as glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP). Release of insulin is strongly inhibited by norepinephrine (noradrenaline), which leads to increased blood glucose levels during stress. 
It appears that release of catecholamines by the sympathetic nervous system has conflicting influences on insulin release by beta cells, because insulin release is inhibited by α2-adrenergic receptors and stimulated by β2-adrenergic receptors. The net effect of norepinephrine from sympathetic nerves and epinephrine from adrenal glands on insulin release is inhibition due to dominance of the α-adrenergic receptors. When the glucose level comes down to the usual physiologic value, insulin release from the β-cells slows or stops. If the blood glucose level drops lower than this, especially to dangerously low levels, release of hyperglycemic hormones (most prominently glucagon from islet of Langerhans alpha cells) forces release of glucose into the blood from the liver glycogen stores, supplemented by gluconeogenesis if the glycogen stores become depleted. By increasing blood glucose, the hyperglycemic hormones prevent or correct life-threatening hypoglycemia. Evidence of impaired first-phase insulin release can be seen in the glucose tolerance test, demonstrated by a substantially elevated blood glucose level at 30 minutes after the ingestion of a glucose load (75 or 100 g of glucose), followed by a slow drop over the next 100 minutes, to remain above 120 mg/100 mL after two hours after the start of the test. In a normal person the blood glucose level is corrected (and may even be slightly over-corrected) by the end of the test. An insulin spike is a 'first response' to blood glucose increase, this response is individual and dose specific although it was always previously assumed to be food type specific only. Oscillations Even during digestion, in general, one or two hours following a meal, insulin release from the pancreas is not continuous, but oscillates with a period of 3–6 minutes, changing from generating a blood insulin concentration more than about 800 p mol/l to less than 100 pmol/L (in rats). This is thought to avoid downregulation of insulin receptors in target cells, and to assist the liver in extracting insulin from the blood. This oscillation is important to consider when administering insulin-stimulating medication, since it is the oscillating blood concentration of insulin release, which should, ideally, be achieved, not a constant high concentration. This may be achieved by delivering insulin rhythmically to the portal vein, by light activated delivery, or by islet cell transplantation to the liver. Blood insulin level The blood insulin level can be measured in international units, such as μIU/mL or in molar concentration, such as pmol/L, where 1 μIU/mL equals 6.945 pmol/L. A typical blood level between meals is 8–11 μIU/mL (57–79 pmol/L). Signal transduction The effects of insulin are initiated by its binding to a receptor, the insulin receptor (IR), present in the cell membrane. The receptor molecule contains an α- and β subunits. Two molecules are joined to form what is known as a homodimer. Insulin binds to the α-subunits of the homodimer, which faces the extracellular side of the cells. The β subunits have tyrosine kinase enzyme activity which is triggered by the insulin binding. This activity provokes the autophosphorylation of the β subunits and subsequently the phosphorylation of proteins inside the cell known as insulin receptor substrates (IRS). The phosphorylation of the IRS activates a signal transduction cascade that leads to the activation of other kinases as well as transcription factors that mediate the intracellular effects of insulin. 
The cascade that leads to the insertion of GLUT4 glucose transporters into the cell membranes of muscle and fat cells, and to the synthesis of glycogen in liver and muscle tissue, as well as the conversion of glucose into triglycerides in liver, adipose, and lactating mammary gland tissue, operates via the activation, by IRS-1, of phosphoinositol 3 kinase (PI3K). This enzyme converts a phospholipid in the cell membrane by the name of phosphatidylinositol 4,5-bisphosphate (PIP2), into phosphatidylinositol 3,4,5-triphosphate (PIP3), which, in turn, activates protein kinase B (PKB). Activated PKB facilitates the fusion of GLUT4 containing endosomes with the cell membrane, resulting in an increase in GLUT4 transporters in the plasma membrane. PKB also phosphorylates glycogen synthase kinase (GSK), thereby inactivating this enzyme. This means that its substrate, glycogen synthase (GS), cannot be phosphorylated, and remains dephosphorylated, and therefore active. The active enzyme, glycogen synthase (GS), catalyzes the rate limiting step in the synthesis of glycogen from glucose. Similar dephosphorylations affect the enzymes controlling the rate of glycolysis leading to the synthesis of fats via malonyl-CoA in the tissues that can generate triglycerides, and also the enzymes that control the rate of gluconeogenesis in the liver. The overall effect of these final enzyme dephosphorylations is that, in the tissues that can carry out these reactions, glycogen and fat synthesis from glucose are stimulated, and glucose production by the liver through glycogenolysis and gluconeogenesis are inhibited. The breakdown of triglycerides by adipose tissue into free fatty acids and glycerol is also inhibited. After the intracellular signal that resulted from the binding of insulin to its receptor has been produced, termination of signaling is then needed. As mentioned below in the section on degradation, endocytosis and degradation of the receptor bound to insulin is a main mechanism to end signaling. In addition, the signaling pathway is also terminated by dephosphorylation of the tyrosine residues in the various signaling pathways by tyrosine phosphatases. Serine/Threonine kinases are also known to reduce the activity of insulin. The structure of the insulin–insulin receptor complex has been determined using the techniques of X-ray crystallography. Physiological effects The actions of insulin on the global human metabolism level include: Increase of cellular intake of certain substances, most prominently glucose in muscle and adipose tissue (about two-thirds of body cells) Increase of DNA replication and protein synthesis via control of amino acid uptake Modification of the activity of numerous enzymes. The actions of insulin (indirect and direct) on cells include: Stimulates the uptake of glucose – Insulin decreases blood glucose concentration by inducing intake of glucose by the cells. This is possible because Insulin causes the insertion of the GLUT4 transporter in the cell membranes of muscle and fat tissues which allows glucose to enter the cell. Increased fat synthesis – insulin forces fat cells to take in blood glucose, which is converted into triglycerides; decrease of insulin causes the reverse. Increased esterification of fatty acids – forces adipose tissue to make neutral fats (i.e., triglycerides) from fatty acids; decrease of insulin causes the reverse. 
Decreased lipolysis in – forces reduction in conversion of fat cell lipid stores into blood fatty acids and glycerol; decrease of insulin causes the reverse. Induced glycogen synthesis – When glucose levels are high, insulin induces the formation of glycogen by the activation of the hexokinase enzyme, which adds a phosphate group in glucose, thus resulting in a molecule that cannot exit the cell. At the same time, insulin inhibits the enzyme glucose-6-phosphatase, which removes the phosphate group. These two enzymes are key for the formation of glycogen. Also, insulin activates the enzymes phosphofructokinase and glycogen synthase which are responsible for glycogen synthesis. Decreased gluconeogenesis and glycogenolysis – decreases production of glucose from noncarbohydrate substrates, primarily in the liver (the vast majority of endogenous insulin arriving at the liver never leaves the liver); decrease of insulin causes glucose production by the liver from assorted substrates. Decreased proteolysis – decreasing the breakdown of protein Decreased autophagy – decreased level of degradation of damaged organelles. Postprandial levels inhibit autophagy completely. Increased amino acid uptake – forces cells to absorb circulating amino acids; decrease of insulin inhibits absorption. Arterial muscle tone – forces arterial wall muscle to relax, increasing blood flow, especially in microarteries; decrease of insulin reduces flow by allowing these muscles to contract. Increase in the secretion of hydrochloric acid by parietal cells in the stomach. Increased potassium uptake – forces cells synthesizing glycogen (a very spongy, "wet" substance, that increases the content of intracellular water, and its accompanying K+ ions) to absorb potassium from the extracellular fluids; lack of insulin inhibits absorption. Insulin's increase in cellular potassium uptake lowers potassium levels in blood plasma. This possibly occurs via insulin-induced translocation of the Na+/K+-ATPase to the surface of skeletal muscle cells. Decreased renal sodium excretion. In hepatocytes, insulin binding acutely leads to activation of protein phosphatase 2A (PP2A), which dephosphorylates the bifunctional enzyme fructose bisphosphatase-2 (PFKB1), activating the phosphofructokinase-2 (PFK-2) active site. PFK-2 increases production of fructose 2,6-bisphosphate. Fructose 2,6-bisphosphate allosterically activates PFK-1, which favors glycolysis over gluconeogenesis. Increased glycolysis increases the formation of malonyl-CoA, a molecule that can be shunted into lipogenesis and that allosterically inhibits of carnitine palmitoyltransferase I (CPT1), a mitochondrial enzyme necessary for the translocation of fatty acids into the intermembrane space of the mitochondria for fatty acid metabolism. Insulin also influences other body functions, such as vascular compliance and cognition. Once insulin enters the human brain, it enhances learning and memory and benefits verbal memory in particular. Enhancing brain insulin signaling by means of intranasal insulin administration also enhances the acute thermoregulatory and glucoregulatory response to food intake, suggesting that central nervous insulin contributes to the co-ordination of a wide variety of homeostatic or regulatory processes in the human body. Insulin also has stimulatory effects on gonadotropin-releasing hormone from the hypothalamus, thus favoring fertility. 
Degradation Once an insulin molecule has docked onto the receptor and effected its action, it may be released back into the extracellular environment, or it may be degraded by the cell. The two primary sites for insulin clearance are the liver and the kidney. It is broken down by the enzyme, protein-disulfide reductase (glutathione), which breaks the disulphide bonds between the A and B chains. The liver clears most insulin during first-pass transit, whereas the kidney clears most of the insulin in systemic circulation. Degradation normally involves endocytosis of the insulin-receptor complex, followed by the action of insulin-degrading enzyme. An insulin molecule produced endogenously by the beta cells is estimated to be degraded within about one hour after its initial release into circulation (insulin half-life ~ 4–6 minutes). Regulator of endocannabinoid metabolism Insulin is a major regulator of endocannabinoid (EC) metabolism and insulin treatment has been shown to reduce intracellular ECs, the 2-arachidonoylglycerol (2-AG) and anandamide (AEA), which correspond with insulin-sensitive expression changes in enzymes of EC metabolism. In insulin-resistant adipocytes, patterns of insulin-induced enzyme expression is disturbed in a manner consistent with elevated EC synthesis and reduced EC degradation. Findings suggest that insulin-resistant adipocytes fail to regulate EC metabolism and decrease intracellular EC levels in response to insulin stimulation, whereby obese insulin-resistant individuals exhibit increased concentrations of ECs. This dysregulation contributes to excessive visceral fat accumulation and reduced adiponectin release from abdominal adipose tissue, and further to the onset of several cardiometabolic risk factors that are associated with obesity and type 2 diabetes. Hypoglycemia Hypoglycemia, also known as "low blood sugar", is when blood sugar decreases to below normal levels. This may result in a variety of symptoms including clumsiness, trouble talking, confusion, loss of consciousness, seizures or death. A feeling of hunger, sweating, shakiness and weakness may also be present. Symptoms typically come on quickly. The most common cause of hypoglycemia is medications used to treat diabetes such as insulin and sulfonylureas. Risk is greater in diabetics who have eaten less than usual, exercised more than usual or have consumed alcohol. Other causes of hypoglycemia include kidney failure, certain tumors, such as insulinoma, liver disease, hypothyroidism, starvation, inborn error of metabolism, severe infections, reactive hypoglycemia and a number of drugs including alcohol. Low blood sugar may occur in otherwise healthy babies who have not eaten for a few hours. Diseases and syndromes There are several conditions in which insulin disturbance is pathologic: Diabetes – general term referring to all states characterized by hyperglycemia. It can be of the following types: Type 1 diabetes – autoimmune-mediated destruction of insulin-producing β-cells in the pancreas, resulting in absolute insulin deficiency Type 2 diabetes – either inadequate insulin production by the β-cells or insulin resistance or both because of reasons not completely understood. there is correlation with diet, with sedentary lifestyle, with obesity, with age and with metabolic syndrome. 
Causality has been demonstrated in multiple model organisms including mice and monkeys; importantly, non-obese people do get Type 2 diabetes due to diet, sedentary lifestyle and unknown risk factors, though this may not be a causal relationship. It is likely that there is genetic susceptibility to develop Type 2 diabetes under certain environmental conditions. Other types of impaired glucose tolerance (see Diabetes) Insulinoma – a tumor of beta cells producing excess insulin or reactive hypoglycemia. Metabolic syndrome – a poorly understood condition first called syndrome X by Gerald Reaven. It is not clear whether the syndrome has a single, treatable cause, or is the result of body changes leading to type 2 diabetes. It is characterized by elevated blood pressure, dyslipidemia (disturbances in blood cholesterol forms and other blood lipids), and increased waist circumference (at least in populations in much of the developed world). The basic underlying cause may be the insulin resistance that precedes type 2 diabetes, which is a diminished capacity for insulin response in some tissues (e.g., muscle, fat). It is common for morbidities such as essential hypertension, obesity, type 2 diabetes, and cardiovascular disease (CVD) to develop. Polycystic ovary syndrome – a complex syndrome in women in the reproductive years where anovulation and androgen excess are commonly displayed as hirsutism. In many cases of PCOS, insulin resistance is present. Medical uses Biosynthetic human insulin (insulin human rDNA, INN) for clinical use is manufactured by recombinant DNA technology. Biosynthetic human insulin has increased purity compared with extractive animal insulin; this enhanced purity reduces antibody formation. Researchers have succeeded in introducing the gene for human insulin into plants as another method of producing insulin ("biopharming") in safflower. This technique is anticipated to reduce production costs. Several analogs of human insulin are available. These insulin analogs are closely related to the human insulin structure, and were developed for specific aspects of glycemic control in terms of fast action (prandial insulins) and long action (basal insulins). The first biosynthetic insulin analog developed for clinical use at mealtime (prandial insulin) was Humalog (insulin lispro); it is more rapidly absorbed after subcutaneous injection than regular insulin, with an effect 15 minutes after injection. Other rapid-acting analogues are NovoRapid and Apidra, with similar profiles. All are rapidly absorbed due to amino acid sequences that reduce formation of dimers and hexamers (monomeric insulins are more rapidly absorbed). Fast-acting insulins do not require the injection-to-meal interval previously recommended for human insulin and animal insulins. The other type is long-acting insulin; the first of these was Lantus (insulin glargine). These have a steady effect for an extended period from 18 to 24 hours. Likewise, another protracted insulin analogue (Levemir) is based on a fatty acid acylation approach. A myristic acid molecule is attached to this analogue, which associates the insulin molecule to the abundant serum albumin, which in turn extends the effect and reduces the risk of hypoglycemia. Both protracted analogues need to be taken only once daily, and are used for type 1 diabetics as the basal insulin. 
A combination of a rapid acting and a protracted insulin is also available, making it more likely for patients to achieve an insulin profile that mimics that of the body's own insulin release. Insulin is also used in many cell lines, such as CHO-s, HEK 293 or Sf9, for the manufacturing of monoclonal antibodies, virus vaccines, and gene therapy products. Insulin is usually taken as subcutaneous injections by single-use syringes with needles, via an insulin pump, or by repeated-use insulin pens with disposable needles. Inhaled insulin is also available in the U.S. market. Single-use pen needles, such as the Dispovan pen needle marketed by HMD in India, use thin-walled, tapered needle tips intended to reduce injection pain, are designed to be compatible with standard insulin pens, and are positioned as a lower-cost option for self-administration. Unlike many medicines, insulin cannot be taken by mouth because, like nearly all other proteins introduced into the gastrointestinal tract, it is reduced to fragments, whereupon all activity is lost. There has been some research into ways to protect insulin from the digestive tract, so that it can be administered orally or sublingually. In 2021, the World Health Organization added insulin to its model list of essential medicines. Insulin and all other medications are supplied free of charge to people with diabetes by the National Health Service in the countries of the United Kingdom. History of study Discovery In 1869, while studying the structure of the pancreas under a microscope, Paul Langerhans, a medical student in Berlin, identified some previously unnoticed tissue clumps scattered throughout the bulk of the pancreas. The function of the "little heaps of cells", later known as the islets of Langerhans, initially remained unknown, but Édouard Laguesse later suggested they might produce secretions that play a regulatory role in digestion. Paul Langerhans' son, Archibald, also helped to understand this regulatory role. In 1889, the physician Oskar Minkowski, in collaboration with Joseph von Mering, removed the pancreas from a healthy dog to test its assumed role in digestion. On testing the urine, they found sugar, establishing for the first time a relationship between the pancreas and diabetes. In 1901, another major step was taken by the American physician and scientist Eugene Lindsay Opie, when he isolated the role of the pancreas to the islets of Langerhans: "Diabetes mellitus when the result of a lesion of the pancreas is caused by destruction of the islets of Langerhans and occurs only when these bodies are in part or wholly destroyed". Over the next two decades researchers made several attempts to isolate the islets' secretions. In 1906 George Ludwig Zuelzer achieved partial success in treating dogs with pancreatic extract, but he was unable to continue his work. Between 1911 and 1912, E.L. Scott at the University of Chicago tried aqueous pancreatic extracts and noted "a slight diminution of glycosuria", but was unable to convince his director of his work's value; it was shut down. Israel Kleiner demonstrated similar effects at Rockefeller University in 1915, but World War I interrupted his work and he did not return to it. 
In 1916, Nicolae Paulescu developed an aqueous pancreatic extract which, when injected into a diabetic dog, had a normalizing effect on blood sugar levels. He had to interrupt his experiments because of World War I, and in 1921 he wrote four papers about his work carried out in Bucharest and his tests on a diabetic dog. Later that year, he published "Research on the Role of the Pancreas in Food Assimilation". The name "insulin" was coined by Edward Albert Sharpey-Schafer in 1916 for a hypothetical molecule produced by pancreatic islets of Langerhans (Latin insula for islet or island) that controls glucose metabolism. Unbeknown to Sharpey-Schafer, Jean de Meyer had introduced the very similar word "insuline" in 1909 for the same molecule. Extraction and purification In October 1920, Canadian Frederick Banting concluded that the digestive secretions that Minkowski had originally studied were breaking down the islet secretion, thereby making it impossible to extract successfully. A surgeon by training, Banting knew that blockages of the pancreatic duct would lead most of the pancreas to atrophy, while leaving the islets of Langerhans intact. He reasoned that a relatively pure extract could be made from the islets once most of the rest of the pancreas was gone. He jotted a note to himself: "Ligate pancreatic ducts of dog. Keep dogs alive till acini degenerate leaving Islets. Try to isolate the internal secretion of these + relieve glycosurea[sic]." In the spring of 1921, Banting traveled to Toronto to explain his idea to John Macleod, Professor of Physiology at the University of Toronto. Macleod was initially skeptical, since Banting had no background in research and was not familiar with the latest literature, but he agreed to provide lab space for Banting to test out his ideas. Macleod also arranged for two undergraduates to be Banting's lab assistants that summer, but Banting required only one lab assistant. Charles Best and Clark Noble flipped a coin; Best won the coin toss and took the first shift. This proved unfortunate for Noble, as Banting kept Best for the entire summer and eventually shared half his Nobel Prize money and credit for the discovery with Best. On 30 July 1921, Banting and Best successfully isolated an extract ("isletin") from the islets of a duct-tied dog and injected it into a diabetic dog, finding that the extract reduced its blood sugar by 40% in 1 hour. Banting and Best presented their results to Macleod on his return to Toronto in the fall of 1921, but Macleod pointed out flaws with the experimental design, and suggested the experiments be repeated with more dogs and better equipment. He moved Banting and Best into a better laboratory and began paying Banting a salary from his research grants. Several weeks later, the second round of experiments was also a success, and Macleod helped publish their results privately in Toronto that November. Bottlenecked by the time-consuming task of duct-tying dogs and waiting several weeks to extract insulin, Banting hit upon the idea of extracting insulin from the fetal calf pancreas, which had not yet developed digestive glands. By December, they had also succeeded in extracting insulin from the adult cow pancreas. Macleod discontinued all other research in his laboratory to concentrate on the purification of insulin. He invited biochemist James Collip to help with this task, and the team felt ready for a clinical test within a month. 
On 11 January 1922, Leonard Thompson, a 14-year-old diabetic who lay dying at the Toronto General Hospital, was given the first injection of insulin. However, the extract was so impure that Thompson had a severe allergic reaction, and further injections were cancelled. Over the next 12 days, Collip worked day and night to improve the ox-pancreas extract. A second dose was injected on 23 January, eliminating the glycosuria that was typical of diabetes without causing any obvious side-effects. The first American patient was Elizabeth Hughes, the daughter of U.S. Secretary of State Charles Evans Hughes. The first patient treated in the U.S. was future woodcut artist James D. Havens; John Ralston Williams imported insulin from Toronto to Rochester, New York, to treat Havens. Banting and Best never worked well with Collip, regarding him as something of an interloper, and Collip left the project soon after. Over the spring of 1922, Best managed to improve his techniques to the point where large quantities of insulin could be extracted on demand, but the preparation remained impure. The drug firm Eli Lilly and Company had offered assistance not long after the first publications in 1921, and they took Lilly up on the offer in April. In November, Lilly's head chemist, George B. Walden, discovered isoelectric precipitation and was able to produce large quantities of highly refined insulin. Shortly thereafter, insulin was offered for sale to the general public. Patent Toward the end of January 1922, tensions mounted between the four "co-discoverers" of insulin, and Collip briefly threatened to separately patent his purification process. John G. FitzGerald, director of the non-commercial public health institution Connaught Laboratories, therefore stepped in as peacemaker. The resulting agreement of 25 January 1922 established two key conditions: 1) that the collaborators would sign a contract agreeing not to take out a patent with a commercial pharmaceutical firm during an initial working period with Connaught; and 2) that no changes in research policy would be allowed unless first discussed among FitzGerald and the four collaborators. It helped contain disagreement and tied the research to Connaught's public mandate. Initially, Macleod and Banting were particularly reluctant to patent their process for insulin on grounds of medical ethics. However, concerns remained that a private third party would hijack and monopolize the research (as Eli Lilly and Company had hinted), and that safe distribution would be difficult to guarantee without capacity for quality control. To this end, Edward Calvin Kendall gave valuable advice. He had isolated thyroxin at the Mayo Clinic in 1914 and patented the process through an arrangement between himself, the brothers Mayo, and the University of Minnesota, transferring the patent to the public university. On 12 April, Banting, Best, Collip, Macleod, and FitzGerald wrote jointly to the president of the University of Toronto to propose a similar arrangement, with the aim of assigning a patent to the Board of Governors of the university. The assignment to the University of Toronto Board of Governors was completed on 15 January 1923, for the token payment of $1.00. The arrangement was congratulated in The World's Work in 1923 as "a step forward in medical ethics". It has also received much media attention in the 2010s regarding the issue of healthcare and drug affordability. 
Following further concern regarding Eli Lilly's attempts to separately patent parts of the manufacturing process, Connaught's Assistant Director and Head of the Insulin Division Robert Defries established a patent pooling policy which would require producers to freely share any improvements to the manufacturing process without compromising affordability. Structural analysis and synthesis Purified animal-sourced insulin was initially the only type of insulin available for experiments and diabetics. John Jacob Abel was the first to produce the crystallised form in 1926. Evidence of the protein nature was first given by Michael Somogyi, Edward A. Doisy, and Philip A. Shaffer in 1924. It was fully proven when Hans Jensen and Earl A. Evans Jr. isolated the amino acids phenylalanine and proline in 1935. The amino acid structure of insulin was first characterized in 1951 by Frederick Sanger, and the first synthetic insulin was produced simultaneously in the labs of Panayotis Katsoyannis at the University of Pittsburgh and Helmut Zahn at RWTH Aachen University in the mid-1960s. Synthetic crystalline bovine insulin was achieved by Chinese researchers in 1965. The complete 3-dimensional structure of insulin was determined by X-ray crystallography in Dorothy Hodgkin's laboratory in 1969. Hans E. Weber discovered preproinsulin while working as a research fellow at the University of California Los Angeles in 1974. In 1973–1974, Weber learned the techniques of how to isolate, purify, and translate messenger RNA. To further investigate insulin, he obtained pancreatic tissues from a slaughterhouse in Los Angeles and then later from animal stock at UCLA. He isolated and purified total messenger RNA from pancreatic islet cells, which was then translated in oocytes from Xenopus laevis and precipitated using anti-insulin antibodies. When the total translated protein was run on SDS-polyacrylamide gel electrophoresis and a sucrose gradient, peaks corresponding to insulin and proinsulin were isolated. However, to the surprise of Weber, a third peak was isolated corresponding to a molecule larger than proinsulin. After reproducing the experiment several times, he consistently noted this large peak prior to proinsulin, which he determined must be a larger precursor molecule upstream of proinsulin. In May 1975, at the American Diabetes Association meeting in New York, Weber gave an oral presentation of his work where he was the first to name this precursor molecule "preproinsulin". Following this oral presentation, Weber was invited to dinner to discuss his paper and findings by Donald Steiner, a researcher who contributed to the characterization of proinsulin. A year later in April 1976, this molecule was further characterized and sequenced by Steiner, referencing the work and discovery of Hans Weber. Preproinsulin became an important molecule to study the process of transcription and translation. The first genetically engineered (recombinant) synthetic human insulin was produced using E. coli in 1978 by Arthur Riggs and Keiichi Itakura at the Beckman Research Institute of the City of Hope in collaboration with Herbert Boyer at Genentech. Genentech, founded by Swanson and Boyer, and Eli Lilly and Company went on in 1982 to sell the first commercially available biosynthetic human insulin under the brand name Humulin. The vast majority of insulin used worldwide is biosynthetic recombinant human insulin or its analogues. 
Recently, another recombinant approach has been used by a pioneering group of Canadian researchers, using an easily grown safflower plant, for the production of much cheaper insulin. Recombinant insulin is produced either in yeast (usually Saccharomyces cerevisiae) or E. coli. In yeast, insulin may be engineered as a single-chain protein with a KexII endoprotease (a yeast homolog of PCI/PCII) site that separates the insulin A chain from a C-terminally truncated insulin B chain. A chemically synthesized C-terminal tail containing the missing threonine is then grafted onto insulin by reverse proteolysis using the inexpensive protease trypsin; typically the lysine on the C-terminal tail is protected with a chemical protecting group to prevent proteolysis. The ease of modular synthesis and the relative safety of modifications in that region account for common insulin analogs with C-terminal modifications (e.g. lispro, aspart, glulisine). The Genentech synthesis and completely chemical syntheses such as that by Bruce Merrifield are not preferred because the efficiency of recombining the two insulin chains is low, primarily due to competition with the precipitation of the insulin B chain. Nobel Prizes The Nobel Prize committee in 1923 credited the practical extraction of insulin to a team at the University of Toronto and awarded the Nobel Prize to two men: Frederick Banting and John Macleod. They were awarded the Nobel Prize in Physiology or Medicine in 1923 for the discovery of insulin. Banting, incensed that Best was not mentioned, shared his prize with him, and Macleod immediately shared his with James Collip. The patent for insulin was sold to the University of Toronto for one dollar. Two other Nobel Prizes have been awarded for work on insulin. British molecular biologist Frederick Sanger, who determined the primary structure of insulin in 1955, was awarded the 1958 Nobel Prize in Chemistry. Rosalyn Sussman Yalow received the 1977 Nobel Prize in Physiology or Medicine for the development of the radioimmunoassay for insulin. Several Nobel Prizes also have an indirect connection with insulin. George Minot, co-recipient of the 1934 Nobel Prize for the development of the first effective treatment for pernicious anemia, had diabetes. William Castle observed that the 1921 discovery of insulin, arriving in time to keep Minot alive, was therefore also responsible for the discovery of a cure for pernicious anemia. Dorothy Hodgkin was awarded the 1964 Nobel Prize in Chemistry for her X-ray crystallographic determinations of the structures of important biochemical substances, the technique she later used to decipher the complete molecular structure of insulin in 1969. Controversy The work published by Banting, Best, Collip and Macleod represented the preparation of purified insulin extract suitable for use on human patients. Although Paulescu discovered the principles of the treatment, his saline extract could not be used on humans; he was not mentioned in the 1923 Nobel Prize. Ian Murray was particularly active in working to correct "the historical wrong" against Nicolae Paulescu. Murray was a professor of physiology at the Anderson College of Medicine in Glasgow, Scotland, the head of the department of Metabolic Diseases at a leading Glasgow hospital, vice-president of the British Association of Diabetes, and a founding member of the International Diabetes Federation. Murray argued that Paulescu deserved recognition for the discovery of insulin. In a private communication, Arne Tiselius, former head of the Nobel Institute, expressed his personal opinion that Paulescu was equally worthy of the award in 1923.
Biology and health sciences
Biochemistry and molecular biology
null
14896
https://en.wikipedia.org/wiki/Inductor
Inductor
An inductor, also called a coil, choke, or reactor, is a passive two-terminal electrical component that stores energy in a magnetic field when an electric current flows through it. An inductor typically consists of an insulated wire wound into a coil. When the current flowing through the coil changes, the time-varying magnetic field induces an electromotive force (emf) (voltage) in the conductor, described by Faraday's law of induction. According to Lenz's law, the induced voltage has a polarity (direction) which opposes the change in current that created it. As a result, inductors oppose any changes in current through them. An inductor is characterized by its inductance, which is the ratio of the voltage to the rate of change of current. In the International System of Units (SI), the unit of inductance is the henry (H), named for the 19th-century American scientist Joseph Henry. In the measurement of magnetic circuits, it is equivalent to weber/ampere (Wb/A). Inductors have values that typically range from 1 μH (10⁻⁶ H) to 20 H. Many inductors have a magnetic core made of iron or ferrite inside the coil, which serves to increase the magnetic field and thus the inductance. Along with capacitors and resistors, inductors are one of the three passive linear circuit elements that make up electronic circuits. Inductors are widely used in alternating current (AC) electronic equipment, particularly in radio equipment. They are used to block AC while allowing DC to pass; inductors designed for this purpose are called chokes. They are also used in electronic filters to separate signals of different frequencies, and in combination with capacitors to make tuned circuits, used to tune radio and TV receivers. The term inductor seems to come from Heinrich Daniel Ruhmkorff, who called the induction coil he invented in 1851 an inductorium. Description An electric current flowing through a conductor generates a magnetic field surrounding it. The magnetic flux linkage Φ generated by a given current i depends on the geometric shape of the circuit. Their ratio defines the inductance L; thus L = Φ/i, or equivalently Φ = Li. The inductance of a circuit depends on the geometry of the current path as well as the magnetic permeability of nearby materials. An inductor is a component consisting of a wire or other conductor shaped to increase the magnetic flux through the circuit, usually in the shape of a coil or helix, with two terminals. Winding the wire into a coil increases the number of times the magnetic flux lines link the circuit, increasing the field and thus the inductance. The more turns, the higher the inductance. The inductance also depends on the shape of the coil, separation of the turns, and many other factors. By adding a "magnetic core" made of a ferromagnetic material like iron inside the coil, the magnetizing field from the coil will induce magnetization in the material, increasing the magnetic flux. The high permeability of a ferromagnetic core can increase the inductance of a coil by a factor of several thousand over what it would be without it. Constitutive equation Any change in the current through an inductor creates a changing flux, inducing a voltage across the inductor. By Faraday's law of induction, the voltage induced by any change in magnetic flux through the circuit is given by v(t) = dΦ(t)/dt. Reformulating the definition of L above, we obtain Φ(t) = L i(t). It follows that v(t) = L di/dt if L is independent of time, current and magnetic flux linkage. Thus, inductance is also a measure of the amount of electromotive force (voltage) generated for a given rate of change of current. 
This is usually taken to be the constitutive relation (defining equation) of the inductor. Lenz's law The polarity (direction) of the induced voltage is given by Lenz's law, which states that the induced voltage will be such as to oppose the change in current. For example, if the current through an inductor is increasing, the induced potential difference will be positive at the current's entrance point and negative at the exit point, tending to oppose the additional current. The energy from the external circuit necessary to overcome this potential "hill" is being stored in the magnetic field of the inductor. If the current is decreasing, the induced voltage will be negative at the current's entrance point and positive at the exit point, tending to maintain the current. In this case energy from the magnetic field is being returned to the circuit. Positive form of current–voltage relationship Because the induced voltage is positive at the current's entrance terminal, the inductor's current–voltage relationship is often expressed without a negative sign by using the current's exit terminal as the reference point for the voltage at the current's entrance terminal. The derivative form of this current–voltage relationship is then v(t) = L di(t)/dt. The integral form of this current–voltage relationship, starting at time t₀ with some initial current i(t₀), is then i(t) = i(t₀) + (1/L) ∫ v(τ) dτ, integrated from t₀ to t. The dual of the inductor is the capacitor, which stores energy in an electric field rather than a magnetic field. Its current–voltage relation replaces the inductance L with the capacitance C and has current and voltage swapped from these equations. Energy stored in an inductor One intuitive explanation as to why a potential difference is induced on a change of current in an inductor goes as follows: When there is a change in current through an inductor there is a change in the strength of the magnetic field. For example, if the current is increased, the magnetic field increases. This, however, does not come without a price. The magnetic field contains potential energy, and increasing the field strength requires more energy to be stored in the field. This energy comes from the electric current through the inductor. The increase in the magnetic potential energy of the field is provided by a corresponding drop in the electric potential energy of the charges flowing through the windings. This appears as a voltage drop across the windings as long as the current increases. Once the current is no longer increased and is held constant, the energy in the magnetic field is constant and no additional energy must be supplied, so the voltage drop across the windings disappears. Similarly, if the current through the inductor decreases, the magnetic field strength decreases, and the energy in the magnetic field decreases. This energy is returned to the circuit in the form of an increase in the electrical potential energy of the moving charges, causing a voltage rise across the windings. Derivation The work done per unit charge on the charges passing through the inductor is −ε, where ε denotes the induced emf. The negative sign indicates that the work is done against the emf, and is not done by the emf. The current i is the charge per unit time passing through the inductor. 
Therefore, the rate of work done by the charges against the emf, that is the rate of change of energy of the current, is given by dW/dt = −ε i. From the constitutive equation for the inductor, ε = −L di/dt, so dW/dt = L i (di/dt). In a ferromagnetic core inductor, when the magnetic field approaches the level at which the core saturates, the inductance will begin to change; it becomes a function of the current, L(i). Neglecting losses, the energy stored by an inductor with a current passing through it is equal to the amount of work required to establish the current through the inductor. This is given by W = ∫₀ᴵ Ld(i)·i di, where Ld(i) is the so-called "differential inductance", defined as Ld(i) = dΦ/di. In an air core inductor or a ferromagnetic core inductor below saturation, the inductance is constant (and equal to the differential inductance), so the stored energy is W = ½ L I². For inductors with magnetic cores, the above equation is only valid for linear regions of the magnetic flux, at currents below the saturation level of the inductor, where the inductance is approximately constant. Where this is not the case, the integral form must be used with Ld(i) variable. Voltage step response When a voltage step is applied to an inductor: In the short-time limit, since the current cannot change instantaneously, the initial current is zero. The equivalent circuit of an inductor immediately after the step is applied is an open circuit. As time passes, the current increases at a constant rate with time until the inductor starts to saturate. In the long-time limit, the transient response of the inductor will die out, the magnetic flux through the inductor will become constant, so no voltage would be induced between the terminals of the inductor. Therefore, assuming the resistance of the windings is negligible, the equivalent circuit of an inductor a long time after the step is applied is a short circuit. Ideal and real inductors The constitutive equation describes the behavior of an ideal inductor with inductance L, and without resistance, capacitance, or energy dissipation. In practice, inductors do not follow this theoretical model; real inductors have a measurable resistance due to the resistance of the wire and energy losses in the core, and parasitic capacitance between turns of the wire. A real inductor's parasitic capacitance has a reactance that falls as frequency rises, and at a certain frequency the inductor will behave as a resonant circuit. Above this self-resonant frequency, the capacitive reactance is the dominant part of the inductor's impedance. At higher frequencies, resistive losses in the windings increase due to the skin effect and proximity effect. Inductors with ferromagnetic cores experience additional energy losses due to hysteresis and eddy currents in the core, which increase with frequency. At high currents, magnetic core inductors also show sudden departure from ideal behavior due to nonlinearity caused by magnetic saturation of the core. Inductors radiate electromagnetic energy into surrounding space and may absorb electromagnetic emissions from other circuits, resulting in potential electromagnetic interference. An early solid-state electrical switching and amplifying device called a saturable reactor exploits saturation of the core as a means of stopping the inductive transfer of current via the core. Q factor The winding resistance appears as a resistance in series with the inductor; it is referred to as DCR (DC resistance). This resistance dissipates some of the reactive energy. 
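Before turning to the quality factor, a numerical sketch of the energy-storage and step-response relations above may help. The component values in the following Python snippet (a 10 mH ideal inductor, 2 A of steady current, a 5 V step) are assumptions chosen only for illustration and do not come from the text:

import numpy as np

# Assumed example values (illustrative only)
L = 10e-3    # inductance in henries (10 mH)
I = 2.0      # steady-state current in amperes

# Energy stored in an ideal (linear) inductor: W = 1/2 * L * I^2
energy = 0.5 * L * I**2
print(f"Stored energy at {I} A: {energy:.3f} J")      # 0.020 J

# Step response of an ideal inductor (negligible winding resistance):
# v = L di/dt, so a constant applied voltage makes the current ramp linearly.
V_step = 5.0                      # applied step voltage in volts
t = np.linspace(0.0, 1e-3, 5)     # the first millisecond
i_t = (V_step / L) * t            # i(t) = (V/L) * t, starting from zero current
for time, current in zip(t, i_t):
    print(f"t = {time*1e3:.2f} ms   i = {current:.3f} A")

In a real part the winding resistance eventually limits the ramp, and the current settles toward V/R rather than rising indefinitely.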
The quality factor (or Q) of an inductor is the ratio of its inductive reactance to its resistance at a given frequency, and is a measure of its efficiency. The higher the Q factor of the inductor, the closer it approaches the behavior of an ideal inductor. High Q inductors are used with capacitors to make resonant circuits in radio transmitters and receivers. The higher the Q is, the narrower the bandwidth of the resonant circuit. The Q factor of an inductor is defined as Q = ωL/R, where L is the inductance, R is the DC resistance, and the product ωL is the inductive reactance. Q increases linearly with frequency if L and R are constant. Although they are constant at low frequencies, the parameters vary with frequency. For example, skin effect, proximity effect, and core losses increase R with frequency; winding capacitance and variations in permeability with frequency affect L. At low frequencies and within limits, increasing the number of turns N improves Q because L varies as N² while R varies linearly with N. Similarly, increasing the radius r of an inductor improves (or increases) Q because L varies with r² while R varies linearly with r. So high Q air core inductors often have large diameters and many turns. Both of those examples assume the diameter of the wire stays the same, so both examples use proportionally more wire. If the total mass of wire is held constant, then there would be no advantage to increasing the number of turns or the radius of the turns because the wire would have to be proportionally thinner. Using a high permeability ferromagnetic core can greatly increase the inductance for the same amount of copper, so the core can also increase the Q. Cores however also introduce losses that increase with frequency. The core material is chosen for best results for the frequency band. High Q inductors must avoid saturation; one way is by using a (physically larger) air core inductor. At VHF or higher frequencies an air core is likely to be used. A well designed air core inductor may have a Q of several hundred. Applications Inductors are used extensively in analog circuits and signal processing. Applications range from the use of large inductors in power supplies, which in conjunction with filter capacitors remove ripple which is a multiple of the mains frequency (or the switching frequency for switched-mode power supplies) from the direct current output, to the small inductance of the ferrite bead or torus installed around a cable to prevent radio frequency interference from being transmitted down the wire. Inductors are used as the energy storage device in many switched-mode power supplies to produce DC current. The inductor supplies energy to the circuit to keep current flowing during the "off" switching periods and enables topologies where the output voltage is higher than the input voltage. A tuned circuit, consisting of an inductor connected to a capacitor, acts as a resonator for oscillating current. Tuned circuits are widely used in radio frequency equipment such as radio transmitters and receivers, as narrow bandpass filters to select a single frequency from a composite signal, and in electronic oscillators to generate sinusoidal signals. Two (or more) inductors in proximity that have coupled magnetic flux (mutual inductance) form a transformer, which is a fundamental component of every electric utility power grid. The efficiency of a transformer may decrease as the frequency increases due to eddy currents in the core material and skin effect on the windings. 
The size of the core can be decreased at higher frequencies. For this reason, aircraft use 400 hertz alternating current rather than the usual 50 or 60 hertz, allowing a great saving in weight from the use of smaller transformers. Transformers enable switched-mode power supplies that galvanically isolate the output from the input. Inductors are also employed in electrical transmission systems, where they are used to limit switching currents and fault currents. In this field, they are more commonly referred to as reactors. Inductors have parasitic effects which cause them to depart from ideal behavior. They create and suffer from electromagnetic interference (EMI). Their physical size prevents them from being integrated on semiconductor chips. So the use of inductors is declining in modern electronic devices, particularly compact portable devices. Real inductors are increasingly being replaced by active circuits such as the gyrator which can synthesize inductance using capacitors. Inductor construction An inductor usually consists of a coil of conducting material, typically insulated copper wire, wrapped around a core either of plastic (to create an air-core inductor) or of a ferromagnetic (or ferrimagnetic) material; the latter is called an "iron core" inductor. The high permeability of the ferromagnetic core increases the magnetic field and confines it closely to the inductor, thereby increasing the inductance. Low frequency inductors are constructed like transformers, with cores of electrical steel laminated to prevent eddy currents. 'Soft' ferrites are widely used for cores above audio frequencies, since they do not cause the large energy losses at high frequencies that ordinary iron alloys do. Inductors come in many shapes. Some inductors have an adjustable core, which enables changing of the inductance. Inductors used to block very high frequencies are sometimes made by stringing a ferrite bead on a wire. Small inductors can be etched directly onto a printed circuit board by laying out the trace in a spiral pattern. Some such planar inductors use a planar core. Small value inductors can also be built on integrated circuits using the same processes that are used to make interconnects. Aluminium interconnect is typically used, laid out in a spiral coil pattern. However, the small dimensions limit the inductance, and it is far more common to use a circuit called a gyrator that uses a capacitor and active components to behave similarly to an inductor. Regardless of the design, because of the low inductances and low power dissipation on-die inductors allow, they are currently only commercially used for high frequency RF circuits. Shielded inductors Inductors used in power regulation systems, lighting, and other systems that require low-noise operating conditions, are often partially or fully shielded. In telecommunication circuits employing induction coils and repeating transformers shielding of inductors in close proximity reduces circuit cross-talk. Types Air-core inductor The term air core coil describes an inductor that does not use a magnetic core made of a ferromagnetic material. The term refers to coils wound on plastic, ceramic, or other nonmagnetic forms, as well as those that have only air inside the windings. Air core coils have lower inductance than ferromagnetic core coils, but are often used at high frequencies because they are free from energy losses called core losses that occur in ferromagnetic cores, which increase with frequency. 
A side effect that can occur in air core coils in which the winding is not rigidly supported on a form is 'microphony': mechanical vibration of the windings can cause variations in the inductance. Radio-frequency inductor At high frequencies, particularly radio frequencies (RF), inductors have higher resistance and other losses. In addition to causing power loss, in resonant circuits this can reduce the Q factor of the circuit, broadening the bandwidth. In RF inductors specialized construction techniques are used to minimize these losses. The losses are due to these effects: Skin effect: The resistance of a wire to high frequency current is higher than its resistance to direct current because of skin effect. Due to induced eddy currents, radio frequency alternating current does not penetrate far into the body of a conductor but travels along its surface. For example, at 6 MHz the skin depth of copper wire is about 0.001 inches (25 μm); most of the current is within this depth of the surface. Therefore, in a solid wire, the interior portion of the wire may carry little current, effectively increasing its resistance. Proximity effect: Another similar effect that also increases the resistance of the wire at high frequencies is proximity effect, which occurs in parallel wires that lie close to each other. The individual magnetic field of adjacent turns induces eddy currents in the wire of the coil, which causes the current density in the conductor to be displaced away from the adjacent surfaces. Like skin effect, this reduces the effective cross-sectional area of the wire conducting current, increasing its resistance. Dielectric losses: The high frequency electric field near the conductors in a tank coil can cause the motion of polar molecules in nearby insulating materials, dissipating energy as heat. For this reason, coils used for tuned circuits may be suspended in air, supported by narrow plastic or ceramic strips rather than being wound on coil forms. Parasitic capacitance: The capacitance between individual wire turns of the coil, called parasitic capacitance, does not cause energy losses but can change the behavior of the coil. Each turn of the coil is at a slightly different potential, so the electric field between neighboring turns stores charge on the wire, so the coil acts as if it has a capacitor in parallel with it. At a high enough frequency this capacitance can resonate with the inductance of the coil forming a tuned circuit, causing the coil to become self-resonant. To reduce parasitic capacitance and proximity effect, high Q RF coils are constructed to avoid having many turns lying close together, parallel to one another. The windings of RF coils are often limited to a single layer, and the turns are spaced apart. To reduce resistance due to skin effect, in high-power inductors such as those used in transmitters the windings are sometimes made of a metal strip or tubing which has a larger surface area, and the surface is silver-plated. Basket-weave coils To reduce proximity effect and parasitic capacitance, multilayer RF coils are wound in patterns in which successive turns are not parallel but crisscrossed at an angle; these are often called honeycomb or basket-weave coils. These are occasionally wound on a vertical insulating supports with dowels or slots, with the wire weaving in and out through the slots. Spiderweb coils Another construction technique with similar advantages is flat spiral coils. 
These are often wound on a flat insulating support with radial spokes or slots, with the wire weaving in and out through the slots; these are called spiderweb coils. The form has an odd number of slots, so successive turns of the spiral lie on opposite sides of the form, increasing separation. Litz wire To reduce skin effect losses, some coils are wound with a special type of radio frequency wire called litz wire. Instead of a single solid conductor, litz wire consists of a number of smaller wire strands that carry the current. Unlike ordinary stranded wire, the strands are insulated from each other, to prevent skin effect from forcing the current to the surface, and are twisted or braided together. The twist pattern ensures that each wire strand spends the same amount of its length on the outside of the wire bundle, so skin effect distributes the current equally between the strands, resulting in a larger cross-sectional conduction area than an equivalent single wire. Axial Inductor Small inductors for low current and low power are made in molded cases resembling resistors. These may be either plain (phenolic) core or ferrite core. An ohmmeter readily distinguishes them from similar-sized resistors by showing the low resistance of the inductor. Ferromagnetic-core inductor Ferromagnetic-core or iron-core inductors use a magnetic core made of a ferromagnetic or ferrimagnetic material such as iron or ferrite to increase the inductance. A magnetic core can increase the inductance of a coil by a factor of several thousand, by increasing the magnetic field due to its higher magnetic permeability. However the magnetic properties of the core material cause several side effects which alter the behavior of the inductor and require special construction: Laminated-core inductor Low-frequency inductors are often made with laminated cores to prevent eddy currents, using construction similar to transformers. The core is made of stacks of thin steel sheets or laminations oriented parallel to the field, with an insulating coating on the surface. The insulation prevents eddy currents between the sheets, so any remaining currents must be within the cross sectional area of the individual laminations, reducing the area of the loop and thus reducing the energy losses greatly. The laminations are made of low-conductivity silicon steel to further reduce eddy current losses. Ferrite-core inductor For higher frequencies, inductors are made with cores of ferrite. Ferrite is a ceramic ferrimagnetic material that is nonconductive, so eddy currents cannot flow within it. The formulation of ferrite is xxFe2O4 where xx represents various metals. For inductor cores soft ferrites are used, which have low coercivity and thus low hysteresis losses. Powdered-iron-core inductor Another material is powdered iron cemented with a binder. Medium frequency equipment almost exclusively uses powdered iron cores, and inductors and transformers built for the lower shortwaves are made using either cemented powdered iron or ferrites. Toroidal-core inductor In an inductor wound on a straight rod-shaped core, the magnetic field lines emerging from one end of the core must pass through the air to re-enter the core at the other end. This reduces the field, because much of the magnetic field path is in air rather than the higher permeability core material and is a source of electromagnetic interference. A higher magnetic field and inductance can be achieved by forming the core in a closed magnetic circuit. 
The magnetic field lines form closed loops within the core without leaving the core material. The shape often used is a toroidal or doughnut-shaped ferrite core. Because of their symmetry, toroidal cores allow a minimum of the magnetic flux to escape outside the core (called leakage flux), so they radiate less electromagnetic interference than other shapes. Toroidal core coils are manufactured of various materials, primarily ferrite, powdered iron and laminated cores. Variable inductor Probably the most common type of variable inductor today is one with a moveable ferrite magnetic core, which can be slid or screwed in or out of the coil. Moving the core farther into the coil increases the permeability, increasing the magnetic field and the inductance. Many inductors used in radio applications (usually less than 100 MHz) use adjustable cores in order to tune such inductors to their desired value, since manufacturing processes have certain tolerances (inaccuracy). Sometimes such cores for frequencies above 100 MHz are made from highly conductive non-magnetic material such as aluminum. They decrease the inductance because the magnetic field must bypass them. Air core inductors can use sliding contacts or multiple taps to increase or decrease the number of turns included in the circuit, to change the inductance. A type much used in the past but mostly obsolete today has a spring contact that can slide along the bare surface of the windings. The disadvantage of this type is that the contact usually short-circuits one or more turns. These turns act like a single-turn short-circuited transformer secondary winding; the large currents induced in them cause power losses. A type of continuously variable air core inductor is the variometer. This consists of two coils with the same number of turns connected in series, one inside the other. The inner coil is mounted on a shaft so its axis can be turned with respect to the outer coil. When the two coils' axes are collinear, with the magnetic fields pointing in the same direction, the fields add and the inductance is maximum. When the inner coil is turned so its axis is at an angle with the outer, the mutual inductance between them is smaller so the total inductance is less. When the inner coil is turned 180° so the coils are collinear with their magnetic fields opposing, the two fields cancel each other and the inductance is very small. This type has the advantage that it is continuously variable over a wide range. It is used in antenna tuners and matching circuits to match low frequency transmitters to their antennas. Another method to control the inductance without any moving parts requires an additional DC current bias winding which controls the permeability of an easily saturable core material. See Magnetic amplifier. Choke A choke is an inductor designed specifically for blocking high-frequency alternating current (AC) in an electrical circuit, while allowing DC or low-frequency signals to pass. Because the inductor restricts or "chokes" the changes in current, this type of inductor is called a choke. It usually consists of a coil of insulated wire wound on a magnetic core, although some consist of a donut-shaped "bead" of ferrite material strung on a wire. Like other inductors, chokes resist changes in current passing through them increasingly with frequency. 
The difference between chokes and other inductors is that chokes do not require the high Q factor construction techniques that are used to reduce the resistance in inductors used in tuned circuits. Circuit analysis The effect of an inductor in a circuit is to oppose changes in current through it by developing a voltage across it proportional to the rate of change of the current. An ideal inductor would offer no resistance to a constant direct current; however, only superconducting inductors have truly zero electrical resistance. The relationship between the time-varying voltage v(t) across an inductor with inductance L and the time-varying current i(t) passing through it is described by the differential equation v(t) = L di(t)/dt. When there is a sinusoidal alternating current (AC) through an inductor, a sinusoidal voltage is induced. The amplitude of the voltage is proportional to the product of the amplitude (Ip) of the current and the angular frequency (ω) of the current. In this situation, the phase of the current lags that of the voltage by π/2 (90°). For sinusoids, as the voltage across the inductor goes to its maximum value, the current goes to zero, and as the voltage across the inductor goes to zero, the current through it goes to its maximum value. If an inductor is connected to a direct current source with value I via a resistance R (at least the DCR of the inductor), and then the current source is short-circuited, the differential relationship above shows that the current through the inductor will discharge with an exponential decay: i(t) = I e^(−Rt/L). Reactance The ratio of the peak voltage to the peak current in an inductor energised from an AC source is called the reactance and is denoted XL. Thus XL = ωL, where ω is the angular frequency. Reactance is measured in ohms but referred to as impedance rather than resistance; energy is stored in the magnetic field as current rises and discharged as current falls. Inductive reactance is proportional to frequency. At low frequency the reactance falls; at DC, the inductor behaves as a short circuit. As frequency increases the reactance increases and at a sufficiently high frequency the reactance approaches that of an open circuit. Corner frequency In filtering applications, with respect to a particular load impedance, an inductor has a corner frequency defined as f3dB = R/(2πL). Laplace circuit analysis (s-domain) When using the Laplace transform in circuit analysis, the impedance of an ideal inductor with no initial current is represented in the s domain by Z(s) = Ls, where L is the inductance and s is the complex frequency. If the inductor does have an initial current I₀, it can be represented by adding a voltage source of value L·I₀ in series with the inductor or, equivalently, a current source of value I₀/s in parallel with it. Inductor networks Inductors in a parallel configuration each have the same potential difference (voltage). To find their total equivalent inductance (Leq): 1/Leq = 1/L1 + 1/L2 + ... + 1/Ln. The current through inductors in series stays the same, but the voltage across each inductor can be different. The sum of the potential differences (voltage) is equal to the total voltage. To find their total inductance: Leq = L1 + L2 + ... + Ln. These simple relationships hold true only when there is no mutual coupling of magnetic fields between individual inductors. Mutual inductance Mutual inductance occurs when the magnetic field of an inductor induces a magnetic field in an adjacent inductor. Mutual induction is the basis of transformer construction. The maximum possible mutual inductance is M = √(L1·L2), where M is the maximum mutual inductance possible between two inductors and L1 and L2 are the two inductors. In general M is smaller than this, as only a fraction of the self flux is linked with the other. 
This fraction is called "Coefficient of flux linkage (K)" or "Coefficient of coupling". Inductance formulas The table below lists some common simplified formulas for calculating the approximate inductance of several inductor constructions.
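As an illustration of one standard entry in such tables, the classic long single-layer air-core solenoid approximation L ≈ μ₀μrN²A/l, the following Python sketch computes it together with the reactance, Q-factor and series/parallel relations discussed in the circuit-analysis section above. The coil dimensions and winding resistance are assumptions chosen only for the example, and the formula is accurate only when the coil is much longer than its diameter:

import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def solenoid_inductance(turns, area_m2, length_m, mu_r=1.0):
    """Long-solenoid approximation: L = mu_0 * mu_r * N^2 * A / l."""
    return MU_0 * mu_r * turns**2 * area_m2 / length_m

def reactance(L, freq_hz):
    """Inductive reactance X_L = 2*pi*f*L, in ohms."""
    return 2 * math.pi * freq_hz * L

def q_factor(L, freq_hz, r_ohms):
    """Quality factor Q = omega*L / R for a series winding resistance R."""
    return reactance(L, freq_hz) / r_ohms

def series(*inductances):
    """Uncoupled inductors in series simply add."""
    return sum(inductances)

def parallel(*inductances):
    """Uncoupled inductors in parallel: 1/Leq = sum(1/Li)."""
    return 1.0 / sum(1.0 / l for l in inductances)

# Assumed example: 100 turns, 1 cm^2 cross-section, 5 cm long, air core
L = solenoid_inductance(turns=100, area_m2=1e-4, length_m=0.05)
print(f"L ~= {L*1e6:.1f} uH")                                    # about 25 uH
print(f"X_L at 1 MHz: {reactance(L, 1e6):.1f} ohm")
print(f"Q at 1 MHz with a 2 ohm winding: {q_factor(L, 1e6, 2.0):.0f}")
print(f"Two 10 uH coils in series: {series(10e-6, 10e-6)*1e6:.0f} uH")
print(f"Two 10 uH coils in parallel: {parallel(10e-6, 10e-6)*1e6:.0f} uH")

The series and parallel helpers assume ideal, magnetically uncoupled inductors, matching the caveat stated in the circuit-analysis section.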
Technology
Components
null
14907
https://en.wikipedia.org/wiki/Inverse%20function
Inverse function
In mathematics, the inverse function of a function f (also called the inverse of f) is a function that undoes the operation of f. The inverse of f exists if and only if f is bijective, and if it exists, is denoted by f⁻¹. For a function f: X → Y, its inverse f⁻¹: Y → X admits an explicit description: it sends each element y of Y to the unique element x of X such that f(x) = y. As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. One can think of f as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of f is the function f⁻¹ defined by f⁻¹(y) = (y + 7)/5. Definitions Let f be a function whose domain is the set X, and whose codomain is the set Y. Then f is invertible if there exists a function g from Y to X such that g(f(x)) = x for all x in X and f(g(y)) = y for all y in Y. If f is invertible, then there is exactly one function g satisfying this property. The function g is called the inverse of f, and is usually denoted as f⁻¹, a notation introduced by John Frederick William Herschel in 1813. The function f is invertible if and only if it is bijective. This is because the condition g(f(x)) = x for all x implies that f is injective, and the condition f(g(y)) = y for all y implies that f is surjective. The inverse function to f can be explicitly described as the function that sends each y in Y to the unique x in X with f(x) = y. Inverses and composition Recall that if f is an invertible function with domain X and codomain Y, then f⁻¹(f(x)) = x for every x in X and f(f⁻¹(y)) = y for every y in Y. Using the composition of functions, this statement can be rewritten to the following equations between functions: f⁻¹ ∘ f = id_X and f ∘ f⁻¹ = id_Y, where id_X is the identity function on the set X; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation f⁻¹. Repeatedly composing a function with itself is called iteration. If f is applied n times, starting with the value x, then this is written as fⁿ(x); so f²(x) = f(f(x)), etc. Since f⁻¹(f(x)) = x, composing f⁻¹ and fⁿ yields fⁿ⁻¹, "undoing" the effect of one application of f. Notation While the notation f⁻¹(x) might be misunderstood, (f(x))⁻¹ certainly denotes the multiplicative inverse of f(x) and has nothing to do with the inverse function of f. A distinct notation (such as f⟨−1⟩) might be used for the inverse function to avoid ambiguity with the multiplicative inverse. In keeping with the general notation, some English authors use expressions like sin⁻¹(x) to denote the inverse of the sine function applied to x (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of sin(x), which can be denoted as (sin(x))⁻¹. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin arcus). For instance, the inverse of the sine function is typically called the arcsine function, written as arcsin(x). Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin area). For instance, the inverse of the hyperbolic sine function is typically written as arsinh(x). Expressions like sin⁻¹(x) can still be useful to distinguish the multivalued inverse from the partial inverse arcsin(x). Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the notation should be avoided. Examples Squaring and square root functions The function f: ℝ → ℝ given by f(x) = x² is not injective because (−x)² = x² for all x. Therefore, f is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function x ↦ x² on [0, ∞) with the same rule as before, then the function is bijective and so, invertible. 
The inverse function here is called the (positive) square root function and is denoted by y ↦ √y. Standard inverse functions The following table shows several standard functions and their inverses: Formula for the inverse Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse of an invertible function f has an explicit description as the function sending each y to the unique x with f(x) = y. This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if f is given by an algebraic formula, then to determine f⁻¹(y) for a real number y, one must find the unique real number x such that f(x) = y. Solving this equation for x in terms of y yields the formula for the inverse function. Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, a function f given by an elementary formula may be a bijection, and therefore possess an inverse function f⁻¹, whose formula can only be expressed as an infinite sum. Properties Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. Uniqueness If an inverse function exists for a given function f, then it is unique. This follows since the inverse function must be the converse relation, which is completely determined by f. Symmetry There is a symmetry between a function and its inverse. Specifically, if f is an invertible function with domain X and codomain Y, then its inverse f⁻¹ has domain Y and image X, and the inverse of f⁻¹ is the original function f. In symbols, for functions f: X → Y and f⁻¹: Y → X, f⁻¹ ∘ f = id_X and f ∘ f⁻¹ = id_Y. This statement is a consequence of the implication that for f to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by (f⁻¹)⁻¹ = f. The inverse of a composition of functions is given by (f ∘ g)⁻¹ = g⁻¹ ∘ f⁻¹. Notice that the order of f and g have been reversed; to undo g followed by f, we must first undo f, and then undo g. For example, let f(x) = x + 5 and let g(x) = 3x. Then the composition f ∘ g is the function that first multiplies by three and then adds five, (f ∘ g)(x) = 3x + 5. To reverse this process, we must first subtract five, and then divide by three, (f ∘ g)⁻¹(x) = (x − 5)/3. This is the composition g⁻¹ ∘ f⁻¹. Self-inverses If X is a set, then the identity function on X is its own inverse: id_X⁻¹ = id_X. More generally, a function f: X → X is equal to its own inverse, if and only if the composition f ∘ f is equal to id_X. Such a function is called an involution. Graph of the inverse If f is invertible, then the graph of the function y = f⁻¹(x) is the same as the graph of the equation x = f(y). This is identical to the equation y = f(x) that defines the graph of f, except that the roles of x and y have been reversed. Thus the graph of f⁻¹ can be obtained from the graph of f by switching the positions of the x and y axes. This is equivalent to reflecting the graph across the line y = x. Inverses and derivatives By the inverse function theorem, a continuous function of a single variable (defined on an interval) is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, a function whose derivative is always positive is strictly increasing and therefore invertible. If the function f is differentiable on an interval I and f′(x) ≠ 0 for each x in I, then the inverse f⁻¹ is differentiable on f(I). If y = f(x), the derivative of the inverse is given by the inverse function theorem, (f⁻¹)′(y) = 1 / f′(x). Using Leibniz's notation the formula above can be written as dx/dy = 1 / (dy/dx). This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. 
Specifically, a continuously differentiable multivariable function is invertible in a neighborhood of a point as long as the Jacobian matrix of at is invertible. In this case, the Jacobian of at is the matrix inverse of the Jacobian of at . Real-world examples Let be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit, then its inverse function converts degrees Fahrenheit to degrees Celsius, since Suppose assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if the family has children born in the same year (for instance, twins or triplets, etc.) then the output cannot be known when the input is the common birth year. As well, if a year is given in which no child was born then a child cannot be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. For example, Let be the function that leads to an percentage rise of some quantity, and be the function producing an percentage fall. Applied to $100 with = 10%, we find that applying the first function followed by the second does not restore the original value of $100, demonstrating the fact that, despite appearances, these two functions are not inverses of each other. The formula to calculate the pH of a solution is . In many cases we need to find the concentration of acid from a pH measurement. The inverse function is used. Generalizations Partial inverses Even if a function is not one-to-one, it may be possible to define a partial inverse of by restricting the domain. For example, the function is not one-to-one, since . However, the function becomes one-to-one if we restrict to the domain , in which case (If we instead restrict to the domain , then the inverse is the negative of the square root of .) Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: Sometimes, this multivalued inverse is called the full inverse of , and the portions (such as and −) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at is called the principal value of . For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches (see the adjacent picture). These considerations are particularly important for defining the inverses of trigonometric functions. For example, the sine function is not one-to-one, since for every real (and more generally for every integer ). However, the sine is one-to-one on the interval , and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between − and . The following table describes the principal branch of each inverse trigonometric function: Left and right inverses Function composition on the left and on the right need not coincide. In general, the conditions "There exists such that " and "There exists such that " imply different properties of . For example, let denote the squaring map, such that for all in , and let denote the square root map, such that for all . Then for all in ; that is, is a right inverse to . However, is not a left inverse to , since, e.g., . 
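The symbols in the squaring example above did not survive extraction; a reconstruction consistent with the surrounding text is:

s \colon \mathbb{R} \to [0, \infty), \quad s(x) = x^{2}, \qquad
r \colon [0, \infty) \to \mathbb{R}, \quad r(y) = \sqrt{y},

s(r(y)) = \left(\sqrt{y}\right)^{2} = y \ \text{ for all } y \ge 0
\quad (\text{so } r \text{ is a right inverse of } s), \qquad
r(s(-1)) = \sqrt{1} = 1 \neq -1
\quad (\text{so } r \text{ is not a left inverse of } s).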
Left inverses If , a left inverse for (or retraction of ) is a function such that composing with from the left gives the identity function That is, the function satisfies the rule If , then . The function must equal the inverse of on the image of , but may take any values for elements of not in the image. A function with nonempty domain is injective if and only if it has a left inverse. An elementary proof runs as follows: If is the left inverse of , and , then . If nonempty is injective, construct a left inverse as follows: for all , if is in the image of , then there exists such that . Let ; this definition is unique because is injective. Otherwise, let be an arbitrary element of .For all , is in the image of . By construction, , the condition for a left inverse. In classical mathematics, every injective function with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics. For instance, a left inverse of the inclusion of the two-element set in the reals violates indecomposability by giving a retraction of the real line to the set . Right inverses A right inverse for (or section of ) is a function such that That is, the function satisfies the rule If , then Thus, may be any of the elements of that map to under . A function has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice). If is the right inverse of , then is surjective. For all , there is such that . If is surjective, has a right inverse , which can be constructed as follows: for all , there is at least one such that (because is surjective), so we choose one to be the value of . Two-sided inverses An inverse that is both a left and right inverse (a two-sided inverse), if it exists, must be unique. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be called the inverse. If is a left inverse and a right inverse of , for all , . A function has a two-sided inverse if and only if it is bijective. A bijective function is injective, so it has a left inverse (if is the empty function, is its own left inverse). is surjective, so it has a right inverse. By the above, the left and right inverse are the same. If has a two-sided inverse , then is a left inverse and right inverse of , so is injective and surjective. Preimages If is any function (not necessarily invertible), the preimage (or inverse image) of an element is defined to be the set of all elements of that map to : The preimage of can be thought of as the image of under the (multivalued) full inverse of the function . The notion can be generalized to subsets of the range. Specifically, if is any subset of , the preimage of , denoted by , is the set of all elements of that map to : For example, take the function . This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g. . The original notion and its generalization are related by the identity The preimage of a single element – a singleton set – is sometimes called the fiber of . When is the set of real numbers, it is common to refer to as a level set.
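As a concrete illustration of the preimage notation above (the specific sets used in the original example were lost, so the values below are chosen only for illustration):

f \colon \mathbb{R} \to \mathbb{R}, \quad f(x) = x^{2}, \qquad
f^{-1}(\{4\}) = \{-2, 2\}, \qquad
f^{-1}([1, 4]) = [-2, -1] \cup [1, 2], \qquad
f^{-1}(\{-1\}) = \varnothing.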
Mathematics
Basics
null
14909
https://en.wikipedia.org/wiki/Inertia
Inertia
Inertia is the natural tendency of objects in motion to stay in motion and objects at rest to stay at rest, unless a force causes the velocity to change. It is one of the fundamental principles in classical physics, and described by Isaac Newton in his first law of motion (also known as The Principle of Inertia). It is one of the primary manifestations of mass, one of the core quantitative properties of physical systems. Newton writes: In his 1687 work Philosophiæ Naturalis Principia Mathematica, Newton defined inertia as a property: History and development Early understanding of inertial motion Professor John H. Lienhard points out the Mozi – based on a Chinese text from the Warring States period (475–221 BCE) – as having given the first description of inertia. Before the European Renaissance, the prevailing theory of motion in western philosophy was that of Aristotle (384–322 BCE). On the surface of the Earth, the inertia property of physical objects is often masked by gravity and the effects of friction and air resistance, both of which tend to decrease the speed of moving objects (commonly to the point of rest). This misled the philosopher Aristotle to believe that objects would move only as long as force was applied to them. Aristotle said that all moving objects (on Earth) eventually come to rest unless an external power (force) continued to move them. Aristotle explained the continued motion of projectiles, after being separated from their projector, as an (itself unexplained) action of the surrounding medium continuing to move the projectile. Despite its general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over nearly two millennia. For example, Lucretius (following, presumably, Epicurus) stated that the "default state" of the matter was motion, not stasis (stagnation). In the 6th century, John Philoponus criticized the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of a surrounding medium, but by some property imparted to the object when it was set in motion. Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. This view was strongly opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, this view did not go unchallenged in the Islamic world, where Philoponus had several supporters who further developed his ideas. In the 11th century, Persian polymath Ibn Sina (Avicenna) claimed that a projectile in a vacuum would not stop unless acted upon. Theory of impetus In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. 
Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also believed that impetus could be not only linear but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's theory was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators, who performed various experiments which further undermined the Aristotelian model. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of illustrating the laws of motion with graphs. Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone: Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion. Classical inertia According to science historian Charles Coulston Gillispie, inertia "entered science as a physical consequence of Descartes' geometrization of space-matter, combined with the immutability of God." The first physicist to completely break away from the Aristotelian model of motion was Isaac Beeckman in 1614. The term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1617 to 1621). However, the meaning of Kepler's term, which he derived from the Latin word for "idleness" or "laziness", was not quite the same as its modern interpretation. Kepler defined inertia only in terms of resistance to movement, once again based on the axiomatic assumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to those concepts as it is today. The principle of inertia, as formulated by Aristotle for "motions in a void", includes that a mundane object tends to resist a change in motion. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the Earth is never at rest, but is actually in constant motion around the Sun. Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially, as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle: A body moving on a level surface will continue in the same direction at a constant speed unless disturbed. Galileo writes that "all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in a movement towards the west (for example), it will maintain itself in that movement." This notion, which is termed "circular inertia" or "horizontal circular inertia" by historians of science, is a precursor to, but is distinct from, Newton's notion of rectilinear inertia. For Galileo, a motion is "horizontal" if it does not carry the moving body towards or away from the center of the Earth, and for him, "a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping." 
It is also worth noting that Galileo later (in 1632) concluded that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Albert Einstein to develop the theory of special relativity. Concepts of inertia in Galileo's writings would later come to be refined, modified, and codified by Isaac Newton as the first of his laws of motion (first published in Newton's work, Philosophiæ Naturalis Principia Mathematica, in 1687): Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. Despite having defined the concept in his laws of motion, Newton did not actually use the term "inertia.” In fact, he originally viewed the respective phenomena as being caused by "innate forces" inherent in matter which resist any acceleration. Given this perspective, and borrowing from Kepler, Newton conceived of "inertia" as "the innate force possessed by an object which resists changes in motion", thus defining "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one that we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon as described by Newton's first law of motion, and the two concepts are now considered to be equivalent. Relativity Albert Einstein's theory of special relativity, as proposed in his 1905 paper entitled "On the Electrodynamics of Moving Bodies", was built on the understanding of inertial reference frames developed by Galileo, Huygens and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained at first unchanged from Newton's original meaning. However, this resulted in a limitation inherent in special relativity: the principle of relativity could only apply to inertial reference frames. To address this limitation, Einstein developed his general theory of relativity ("The Foundation of the General Theory of Relativity", 1916), which provided a theory including noninertial (accelerated) reference frames. In general relativity, the concept of inertial motion got a broader meaning. Taking into account general relativity, inertial motion is any movement of a body that is not affected by forces of electrical, magnetic, or other origin, but that is only under the influence of gravitational masses. Physically speaking, this happens to be exactly what a properly functioning three-axis accelerometer is indicating when it does not detect any proper acceleration. Etymology The term inertia comes from the Latin word iners, meaning idle or sluggish. Rotational inertia A quantity related to inertia is rotational inertia (→ moment of inertia), the property that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum remains unchanged unless an external torque is applied; this is called conservation of angular momentum. 
Rotational inertia is often considered in relation to a rigid body. For example, a gyroscope exploits this property: it resists any change in its axis of rotation.
Physical sciences
Classical mechanics
null
14919
https://en.wikipedia.org/wiki/ISBN
ISBN
The International Standard Book Number (ISBN) is a numeric commercial book identifier that is intended to be unique. Publishers purchase or receive ISBNs from an affiliate of the International ISBN Agency. A different ISBN is assigned to each separate edition and variation of a publication, but not to a simple reprinting of an existing item. For example, an e-book, a paperback and a hardcover edition of the same book must each have a different ISBN, but an unchanged reprint of the hardcover edition keeps the same ISBN. The ISBN is ten digits long if assigned before 2007, and thirteen digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-specific and varies between countries, often depending on how large the publishing industry is within a country. The first version of the ISBN identification format was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization (ISO) and was published in 1970 as international standard ISO 2108 (any 9-digit SBN can be converted to a 10-digit ISBN by prefixing it with a zero). Privately published books sometimes appear without an ISBN. The International ISBN Agency sometimes assigns ISBNs to such books on its own initiative. A separate identifier code of a similar kind, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines and newspapers. The International Standard Music Number (ISMN) covers musical scores. History The Standard Book Number (SBN) is a commercial system using nine-digit code numbers to identify books. In 1965, British bookseller and stationers WHSmith announced plans to implement a standard numbering system for its books. They hired consultants to work on their behalf, and the system was devised by Gordon Foster, emeritus professor of statistics at Trinity College Dublin. The International Organization for Standardization (ISO) Technical Committee on Documentation sought to adapt the British SBN for international use. The ISBN identification format was conceived in 1967 in the United Kingdom by David Whitaker (regarded as the "Father of the ISBN") and in 1968 in the United States by Emery Koltay (who later became director of the U.S. ISBN agency R. R. Bowker). The 10-digit ISBN format was developed by the ISO and was published in 1970 as international standard ISO 2108. The United Kingdom continued to use the nine-digit SBN code until 1974. ISO has appointed the International ISBN Agency as the registration authority for ISBN worldwide and the ISBN Standard is developed under the control of ISO Technical Committee 46/Subcommittee 9 TC 46/SC 9. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit "0". For example, the second edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has , where "340" indicates the publisher, "01381" is the serial number assigned by the publisher, and "8" is the check digit. By prefixing a zero, this can be converted to ; the check digit does not need to be re-calculated. Some publishers, such as Ballantine Books, would sometimes use 12-digit SBNs where the last three digits indicated the price of the book; for example, Woodstock Handmade Houses had a 12-digit Standard Book Number of 345-24223-8-595 (valid SBN: 345-24223-8, : ), and it cost . 
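A short C sketch of the SBN-to-ISBN-10 conversion just described may be useful (the function name is ours; the sample SBN is the Hodder example quoted above, 340-01381-8):

#include <stdio.h>
#include <string.h>

/* Convert a 9-digit SBN (digits only, including its check digit) to a
 * 10-digit ISBN by prefixing '0'.  The check digit is unchanged, because
 * a leading zero contributes nothing to the ISBN-10 weighted sum. */
static void sbn_to_isbn10(const char sbn[10], char isbn[11]) {
    isbn[0] = '0';
    memcpy(isbn + 1, sbn, 9);
    isbn[10] = '\0';
}

int main(void) {
    char isbn[11];
    sbn_to_isbn10("340013818", isbn);  /* SBN 340-01381-8, hyphens removed */
    printf("%s\n", isbn);              /* prints 0340013818 */
    return 0;
}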
Since 1 January 2007, ISBNs have contained thirteen digits, a format that is compatible with "Bookland" European Article Numbers, which have 13 digits. Since 2016, ISBNs have also been used to identify mobile games by China's Administration of Press and Publication. The United States, with 3.9 million registered ISBNs in 2020, was by far the biggest user of the ISBN identifier in 2020, followed by the Republic of Korea (329,582), Germany (284,000), China (263,066), the UK (188,553) and Indonesia (144,793). Lifetime ISBNs registered in the United States are over 39 million as of 2020. Overview A separate ISBN is assigned to each edition and variation (except reprintings) of a publication. For example, an ebook, audiobook, paperback, and hardcover edition of the same book must each have a different ISBN assigned to it. The ISBN is thirteen digits long if assigned on or after 1 January 2007, and ten digits long if assigned before 2007. An International Standard Book Number consists of four parts (if it is a 10-digit ISBN) or five parts (for a 13-digit ISBN). Section 5 of the International ISBN Agency's official user manual describes the structure of the 13-digit ISBN, as follows: for a 13-digit ISBN, a prefix element – a GS1 prefix: so far 978 or 979 have been made available by GS1, the registration group element (language-sharing country group, individual country or territory), the registrant element, the publication element, and a checksum character or check digit. A 13-digit ISBN can be separated into its parts (prefix element, registration group, registrant, publication and check digit), and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts (registration group, registrant, publication and check digit) of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. Issuing process ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory regardless of the publication language. The ranges of ISBNs assigned to any particular country are based on the publishing profile of the country concerned, and so the ranges will vary depending on the number of books and the number, type, and size of publishers that are active. Some ISBN registration agencies are based in national libraries or within ministries of culture and thus may receive direct funding from the government to support their services. In other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. A full directory of ISBN agencies is available on the International ISBN Agency website. 
A list for a few countries is given below: Australia – Thorpe-Bowker Brazil – The National Library of Brazil; (Up to 28 February 2020) Brazil – Câmara Brasileira do Livro (From 1 March 2020) Canada – English Library and Archives Canada, a government agency; French ; Colombia – Cámara Colombiana del Libro, an NGO Hong Kong – Books Registration Office (BRO), under the Hong Kong Public Libraries Iceland – Landsbókasafn (National and University Library of Iceland) India – The Raja Rammohun Roy National Agency for ISBN (Book Promotion and Copyright Division), under Department of Higher Education, a constituent of the Ministry of Human Resource Development Israel – The Israel Center for Libraries Italy – EDISER srl, owned by Associazione Italiana Editori (Italian Publishers Association) Kenya – National Library of Kenya Latvia - Latvian ISBN Agency Lebanon – Lebanese ISBN Agency Maldives – The National Bureau of Classification (NBC) Malta – The National Book Council () Morocco – The National Library of Morocco New Zealand – The National Library of New Zealand Nigeria – National Library of Nigeria Pakistan – National Library of Pakistan Philippines – National Library of the Philippines South Africa – National Library of South Africa Spain – Spanish ISBN Agency – Agencia del ISBN Turkey – General Directorate of Libraries and Publications, a branch of the Ministry of Culture United Kingdom and Republic of Ireland – Nielsen Book Services Ltd, part of NIQ United States – R. R. Bowker Registration group element The ISBN registration group element is a 1-to-5-digit number that is valid within a single prefix element (i.e. one of 978 or 979), and can be separated between hyphens, such as . Registration groups have primarily been allocated within the 978 prefix element. The single-digit registration groups within the 978-prefix element are: 0 or 1 for English-speaking countries; 2 for French-speaking countries; 3 for German-speaking countries; 4 for Japan; 5 for Russian-speaking countries; and 7 for People's Republic of China. Example 5-digit registration groups are 99936 and 99980, for Bhutan. The allocated registration groups are: 0–5, 600–631, 65, 7, 80–94, 950–989, 9910–9989, and 99901–99993. Books published in rare languages typically have longer group elements. Within the 979 prefix element, the registration group 0 is reserved for compatibility with International Standard Music Numbers (ISMNs), but such material is not actually assigned an ISBN. The registration groups within prefix element 979 that have been assigned are 8 for the United States of America, 10 for France, 11 for the Republic of Korea, and 12 for Italy. The original 9-digit standard book number (SBN) had no registration group identifier, but prefixing a zero to a 9-digit SBN creates a valid 10-digit ISBN. Registrant element The national ISBN agency assigns the registrant element (cf. :Category:ISBN agencies) and an accompanying series of ISBNs within that registrant element to the publisher; the publisher then allocates one of the ISBNs to each of its books. In most countries, a book publisher is not legally required to assign an ISBN, although most large bookstores only handle publications that have ISBNs assigned to them. The International ISBN Agency maintains the details of over one million ISBN prefixes and publishers in the Global Register of Publishers. This database is freely searchable over the internet. 
Publishers receive blocks of ISBNs, with larger blocks allotted to publishers expecting to need them; a small publisher may receive ISBNs of one or more digits for the registration group identifier, several digits for the registrant, and a single digit for the publication element. Once that block of ISBNs is used, the publisher may receive another block of ISBNs, with a different registrant element. Consequently, a publisher may have different allotted registrant elements. There also may be more than one registration group identifier used in a country. This might occur once all the registrant elements from a particular registration group have been allocated to publishers. By using variable block lengths, registration agencies are able to customise the allocations of ISBNs that they make to publishers. For example, a large publisher may be given a block of ISBNs where fewer digits are allocated for the registrant element and many digits are allocated for the publication element; likewise, countries publishing many titles have few allocated digits for the registration group identifier and many for the registrant and publication elements. Here are some sample ISBN-10 codes, illustrating block length variations. English-language pattern English-language registration group elements are 0 and 1 (2 of more than 220 registration group elements). These two registration group elements are divided into registrant elements in a systematic pattern, which allows their length to be determined, as follows: Check digits A check digit is a form of redundancy check used for error detection, the decimal equivalent of a binary check bit. It consists of a single digit computed from the other digits in the number. The method for the 10-digit ISBN is an extension of that for SBNs, so the two systems are compatible; an SBN prefixed with a zero (the 10-digit ISBN) will give the same check digit as the SBN without the zero. The check digit is base eleven, and can be an integer between 0 and 9, or an 'X'. The system for 13-digit ISBNs is not compatible with SBNs and will, in general, give a different check digit from the corresponding 10-digit ISBN, so does not provide the same protection against transposition. This is because the 13-digit code was required to be compatible with the EAN format, and hence could not contain the letter 'X'. ISBN-10 check digits According to the 2001 edition of the International ISBN Agency's official user manual, the ISBN-10 check digit (which is the last digit of the 10-digit ISBN) must range from 0 to 10 (the symbol 'X' is used for 10), and must be such that the sum of the ten digits, each multiplied by its (integer) weight, descending from 10 to 1, is a multiple of 11. That is, if is the th digit, then must be chosen such that: For example, for an ISBN-10 of 0-306-40615-2: Formally, using modular arithmetic, this is rendered It is also true for ISBN 10s that the sum of all ten digits, each multiplied by its weight in ascending order from 1 to 10, is a multiple of 11. For this example: Formally, this is rendered The two most common errors in handling an ISBN (e.g. when typing it or writing it down) are a single altered digit or the transposition of adjacent digits. It can be proven mathematically that all pairs of valid ISBN 10s differ in at least two digits. It can also be proven that there are no pairs of valid ISBN 10s with eight identical digits and two transposed digits (these proofs are true because the ISBN is less than eleven digits long and because 11 is a prime number). 
The ISBN check digit method therefore ensures that it will always be possible to detect these two most common types of error, i.e., if either of these types of error has occurred, the result will never be a valid ISBN—the sum of the digits multiplied by their weights will never be a multiple of 11. However, if the error were to occur in the publishing house and remain undetected, the book would be issued with an invalid ISBN. In contrast, it is possible for other types of error, such as two altered non-transposed digits, or three altered digits, to result in a valid ISBN (although it is still unlikely). ISBN-10 check digit calculation Each of the first nine digits of the 10-digit ISBN—excluding the check digit itself—is multiplied by its (integer) weight, descending from 10 to 2, and the sum of these nine products found. The value of the check digit is simply the one number between 0 and 10 which, when added to this sum, means the total is a multiple of 11. For example, the check digit for an ISBN-10 of 0-306-40615-? is calculated as follows: Adding 2 to 130 gives a multiple of 11 (because 132 = 12×11)—this is the only number between 0 and 10 which does so. Therefore, the check digit has to be 2, and the complete sequence is ISBN 0-306-40615-2. If the value of required to satisfy this condition is 10, then an 'X' should be used. Alternatively, modular arithmetic is convenient for calculating the check digit using modulus 11. The remainder of this sum when it is divided by 11 (i.e. its value modulo 11), is computed. This remainder plus the check digit must equal either 0 or 11. Therefore, the check digit is (11 minus the remainder of the sum of the products modulo 11) modulo 11. Taking the remainder modulo 11 a second time accounts for the possibility that the first remainder is 0. Without the second modulo operation, the calculation could result in a check digit value of , which is invalid. (Strictly speaking, the first "modulo 11" is not needed, but it may be considered to simplify the calculation.) For example, the check digit for the ISBN of 0-306-40615-? is calculated as follows: Thus the check digit is 2. It is possible to avoid the multiplications in a software implementation by using two accumulators. Repeatedly adding t into s computes the necessary multiples: // Returns ISBN error syndrome, zero for a valid ISBN, non-zero for an invalid one. // digits[i] must be between 0 and 10. int CheckISBN(int const digits[10]) { int i, s = 0, t = 0; for (i = 0; i < 10; ++i) { t += digits[i]; s += t; } return s % 11; } The modular reduction can be done once at the end, as shown above (in which case s could hold a value as large as 496, for the invalid ISBN 99999-999-9-X), or s and t could be reduced by a conditional subtract after each addition. ISBN-13 check digit calculation Appendix 1 of the International ISBN Agency's official user manual describes how the 13-digit ISBN check digit is calculated. The ISBN-13 check digit, which is the last digit of the ISBN, must range from 0 to 9 and must be such that the sum of all the thirteen digits, each multiplied by its (integer) weight, alternating between 1 and 3, is a multiple of 10. As ISBN-13 is a subset of EAN-13, the algorithm for calculating the check digit is exactly the same for both. Formally, using modular arithmetic, this is rendered: The calculation of an ISBN-13 check digit begins with the first twelve digits of the 13-digit ISBN (thus excluding the check digit itself). 
Each digit, from left to right, is alternately multiplied by 1 or 3, then those products are summed modulo 10 to give a value ranging from 0 to 9. Subtracted from 10, that leaves a result from 1 to 10. A zero replaces a ten, so, in all cases, a single check digit results. For example, the ISBN-13 check digit of 978-0-306-40615-? is calculated as follows: s = 9×1 + 7×3 + 8×1 + 0×3 + 3×1 + 0×3 + 6×1 + 4×3 + 0×1 + 6×3 + 1×1 + 5×3 = 9 + 21 + 8 + 0 + 3 + 0 + 6 + 12 + 0 + 18 + 1 + 15 = 93 93 / 10 = 9 remainder 3 10 – 3 = 7 Thus, the check digit is 7, and the complete sequence is ISBN 978-0-306-40615-7. In general, the ISBN check digit is calculated as follows. Let Then This check system—similar to the UPC check digit formula—does not catch all errors of adjacent digit transposition. Specifically, if the difference between two adjacent digits is 5, the check digit will not catch their transposition. For instance, the above example allows this situation with the 6 followed by a 1. The correct order contributes to the sum; while, if the digits are transposed (1 followed by a 6), the contribution of those two digits will be . However, 19 and 9 are congruent modulo 10, and so produce the same, final result: both ISBNs will have a check digit of 7. The ISBN-10 formula uses the prime modulus 11 which avoids this blind spot, but requires more than the digits 0–9 to express the check digit. Additionally, if the sum of the 2nd, 4th, 6th, 8th, 10th, and 12th digits is tripled then added to the remaining digits (1st, 3rd, 5th, 7th, 9th, 11th, and 13th), the total will always be divisible by 10 (i.e., end in 0). ISBN-10 to ISBN-13 conversion A 10-digit ISBN is converted to a 13-digit ISBN by prepending "978" to the ISBN-10 and recalculating the final checksum digit using the ISBN-13 algorithm. The reverse process can also be performed, but not for numbers commencing with a prefix other than 978, which have no 10-digit equivalent. Errors in usage Publishers and libraries have varied policies about the use of the ISBN check digit. Publishers sometimes fail to check the correspondence of a book title and its ISBN before publishing it; that failure causes book identification problems for libraries, booksellers, and readers. For example, is shared by two books—Ninja gaiden: a novel based on the best-selling game by Tecmo (1990) and Wacky laws (1997), both published by Scholastic. Most libraries and booksellers display the book record for an invalid ISBN issued by the publisher. The Library of Congress catalogue contains books published with invalid ISBNs, which it usually tags with the phrase "Cancelled ISBN". The International Union Library Catalog (a.k.a., WorldCat OCLC—Online Computer Library Center system) often indexes by invalid ISBNs, if the book is indexed in that way by a member library. eISBN Only the term "ISBN" should be used; the terms "eISBN" and "e-ISBN" have historically been sources of confusion and should be avoided. If a book exists in one or more digital (e-book) formats, each of those formats must have its own ISBN. In other words, each of the three separate EPUB, Amazon Kindle, and PDF formats of a particular book will have its own specific ISBN. They should not share the ISBN of the paper version, and there is no generic "eISBN" which encompasses all the e-book formats for a title. 
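Complementing the ISBN-10 routine given earlier, a minimal C sketch of the ISBN-13 check digit rule described above (the function name is ours; the digits are those of the worked example 978-0-306-40615-?):

#include <stdio.h>

/* ISBN-13 check digit: weight the first twelve digits alternately 1,3,
 * sum the products, and take (10 - sum mod 10) mod 10, as described above. */
static int isbn13_check_digit(const int digits[12]) {
    int sum = 0;
    for (int i = 0; i < 12; ++i)
        sum += digits[i] * ((i % 2 == 0) ? 1 : 3);
    return (10 - sum % 10) % 10;
}

int main(void) {
    /* 978 prepended to the first nine digits of ISBN-10 0-306-40615-2 */
    int d[12] = {9, 7, 8, 0, 3, 0, 6, 4, 0, 6, 1, 5};
    printf("%d\n", isbn13_check_digit(d));  /* prints 7 */
    return 0;
}

The same routine performs the recalculation needed for the ISBN-10 to ISBN-13 conversion described above, since that conversion simply prepends 978 and recomputes the final digit.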
EAN format used in barcodes, and upgrading The barcodes on a book's back cover (or inside a mass-market paperback book's front cover) are EAN-13; they may have a separate barcode encoding five digits called an EAN-5 for the currency and the recommended retail price. For 10-digit ISBNs, the number "978", the Bookland "country code", is prefixed to the ISBN in the barcode data, and the check digit is recalculated according to the EAN-13 formula (modulo 10, 1× and 3× weighting on alternating digits). Partly because of an expected shortage in certain ISBN categories, the International Organization for Standardization (ISO) decided to migrate to a 13-digit ISBN (ISBN-13). The process began on 1 January 2005 and was planned to conclude on 1 January 2007. , all the 13-digit ISBNs began with 978. As the 978 ISBN supply is exhausted, the 979 prefix was introduced. Part of the 979 prefix is reserved for use with the Musicland code for musical scores with an ISMN. The 10-digit ISMN codes differed visually as they began with an "M" letter; the bar code represents the "M" as a zero, and for checksum purposes it counted as a 3. All ISMNs are now thirteen digits commencing ; to will be used by ISBN. Publisher identification code numbers are unlikely to be the same in the 978 and 979 ISBNs, likewise, there is no guarantee that language area code numbers will be the same. Moreover, the 10-digit ISBN check digit generally is not the same as the 13-digit ISBN check digit. Because the GTIN-13 is part of the Global Trade Item Number (GTIN) system (that includes the GTIN-14, the GTIN-12, and the GTIN-8), the 13-digit ISBN falls within the 14-digit data field range. Barcode format compatibility is maintained, because (aside from the group breaks) the ISBN-13 barcode format is identical to the EAN barcode format of existing 10-digit ISBNs. So, migration to an EAN-based system allows booksellers the use of a single numbering system for both books and non-book products that is compatible with existing ISBN based data, with only minimal changes to information technology systems. Hence, many booksellers (e.g., Barnes & Noble) migrated to EAN barcodes as early as March 2005. Although many American and Canadian booksellers were able to read EAN-13 barcodes before 2005, most general retailers could not read them. The upgrading of the UPC barcode system to full EAN-13, in 2005, eased migration to the ISBN in North America.
Technology
Printing
null
14921
https://en.wikipedia.org/wiki/IP%20address
IP address
An Internet Protocol address (IP address) is a numerical label such as that is assigned to a device connected to a computer network that uses the Internet Protocol for communication. IP addresses serve two main functions: network interface identification, and location addressing. Internet Protocol version 4 (IPv4) was the first standalone specification for the IP address, and has been in use since 1983. IPv4 addresses are defined as a 32-bit number, which became too small to provide enough addresses as the internet grew, leading to IPv4 address exhaustion over the 2010s. Its designated successor, IPv6, uses 128 bits for the IP address, giving it a larger address space. Although IPv6 deployment has been ongoing since the mid-2000s, both IPv4 and IPv6 are still used side-by-side as of 2024. IP addresses are usually displayed in a human-readable notation, but systems may use them in various different computer number formats. CIDR notation can also be used to designate how much of the address should be treated as a routing prefix. For example, indicates that 24 significant bits of the address are the prefix, with the remaining 8 bits used for host addressing. This is equivalent to the historically used subnet mask (in this case, ). The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA) and the five regional Internet registries (RIRs). IANA assigns blocks of IP addresses to the RIRs, which are responsible for distributing them to local Internet registries in their region such as internet service providers (ISPs) and large institutions. Some addresses are reserved for private networks and are not globally unique. Within a network, the network administrator assigns an IP address to each device. Such assignments may be on a static (fixed or permanent) or dynamic basis, depending on network practices and software features. Some jurisdictions consider IP addresses to be personal data. Function An IP address serves two principal functions: it identifies the host, or more specifically, its network interface, and it provides the location of the host in the network, and thus, the capability of establishing a path to that host. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there." The header of each IP packet contains the IP address of the sending host and that of the destination host. IP versions Two versions of the Internet Protocol are in common use on the Internet today. The original version of the Internet Protocol that was first deployed in 1983 in the ARPANET, the predecessor of the Internet, is Internet Protocol version 4 (IPv4). By the early 1990s, the rapid exhaustion of IPv4 address space available for assignment to Internet service providers and end-user organizations prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand addressing capability on the Internet. The result was a redesign of the Internet Protocol which became eventually known as Internet Protocol Version 6 (IPv6) in 1995. IPv6 technology was in various testing stages until the mid-2000s when commercial production deployment commenced. Today, these two versions of the Internet Protocol are in simultaneous use. Among other technical changes, each version defines the format of addresses differently. Because of the historical prevalence of IPv4, the generic term IP address typically still refers to the addresses defined by IPv4. 
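As a small illustration of the CIDR notation described above, the following C sketch derives the legacy subnet mask corresponding to a prefix length (the function name is ours):

#include <stdio.h>
#include <stdint.h>

/* A /n prefix corresponds to a mask whose highest n bits are set,
 * e.g. /24 -> 255.255.255.0, matching the equivalence noted above. */
static uint32_t prefix_to_mask(int prefix_len) {
    return prefix_len == 0 ? 0u : 0xFFFFFFFFu << (32 - prefix_len);
}

int main(void) {
    uint32_t m = prefix_to_mask(24);
    printf("%u.%u.%u.%u\n",
           (unsigned)((m >> 24) & 0xFF), (unsigned)((m >> 16) & 0xFF),
           (unsigned)((m >> 8) & 0xFF), (unsigned)(m & 0xFF));
    /* prints 255.255.255.0 */
    return 0;
}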
The gap in version sequence between IPv4 and IPv6 resulted from the assignment of version 5 to the experimental Internet Stream Protocol in 1979, which however was never referred to as IPv5. Other versions v1 to v9 were defined, but only v4 and v6 ever gained widespread use. v1 and v2 were names for TCP protocols in 1974 and 1977, as there was no separate IP specification at the time. v3 was defined in 1978, and v3.1 is the first version where TCP is separated from IP. v6 is a synthesis of several suggested versions, v6 Simple Internet Protocol, v7 TP/IX: The Next Internet, v8 PIP — The P Internet Protocol, and v9 TUBA — Tcp & Udp with Big Addresses. Subnetworks IP networks may be divided into subnetworks in both IPv4 and IPv6. For this purpose, an IP address is recognized as consisting of two parts: the network prefix in the high-order bits and the remaining bits called the rest field, host identifier, or interface identifier (IPv6), used for host numbering within a network. The subnet mask or CIDR notation determines how the IP address is divided into network and host parts. The term subnet mask is only used within IPv4. Both IP versions however use the CIDR concept and notation. In this, the IP address is followed by a slash and the number (in decimal) of bits used for the network part, also called the routing prefix. For example, an IPv4 address and its subnet mask may be and , respectively. The CIDR notation for the same IP address and subnet is , because the first 24 bits of the IP address indicate the network and subnet. IPv4 addresses An IPv4 address has a size of 32 bits, which limits the address space to (232) addresses. Of this number, some addresses are reserved for special purposes such as private networks (≈18 million addresses) and multicast addressing (≈270 million addresses). IPv4 addresses are usually represented in dot-decimal notation, consisting of four decimal numbers, each ranging from 0 to 255, separated by dots, e.g., . Each part represents a group of 8 bits (an octet) of the address. In some cases of technical writing, IPv4 addresses may be presented in various hexadecimal, octal, or binary representations. Subnetting history In the early stages of development of the Internet Protocol, the network number was always the highest order octet (most significant eight bits). Because this method allowed for only 256 networks, it soon proved inadequate as additional networks developed that were independent of the existing networks already designated by a network number. In 1981, the addressing specification was revised with the introduction of classful network architecture. Classful network design allowed for a larger number of individual network assignments and fine-grained subnetwork design. The first three bits of the most significant octet of an IP address were defined as the class of the address. Three classes (A, B, and C) were defined for universal unicast addressing. Depending on the class derived, the network identification was based on octet boundary segments of the entire address. Each class used successively additional octets in the network identifier, thus reducing the possible number of hosts in the higher order classes (B and C). The following table gives an overview of this now-obsolete system. Classful network design served its purpose in the startup stage of the Internet, but it lacked scalability in the face of the rapid expansion of networking in the 1990s. 
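A short C sketch of the now-obsolete classful rule just described, reading the class from the leading bits of the most significant octet (classes D and E are included for completeness; the function name is ours):

#include <stdio.h>
#include <stdint.h>

/* Historical classful addressing: the class was determined by the
 * leading bits of the first octet of the address. */
static char ipv4_class(uint8_t first_octet) {
    if ((first_octet & 0x80) == 0x00) return 'A';  /* 0xxxxxxx */
    if ((first_octet & 0xC0) == 0x80) return 'B';  /* 10xxxxxx */
    if ((first_octet & 0xE0) == 0xC0) return 'C';  /* 110xxxxx */
    if ((first_octet & 0xF0) == 0xE0) return 'D';  /* 1110xxxx, multicast */
    return 'E';                                    /* 1111xxxx, reserved  */
}

int main(void) {
    printf("%c %c %c\n", ipv4_class(10), ipv4_class(172), ipv4_class(192));
    /* prints: A B C */
    return 0;
}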
The class system of the address space was replaced with Classless Inter-Domain Routing (CIDR) in 1993. CIDR is based on variable-length subnet masking (VLSM) to allow allocation and routing based on arbitrary-length prefixes. Today, remnants of classful network concepts function only in a limited scope as the default configuration parameters of some network software and hardware components (e.g. netmask), and in the technical jargon used in network administrators' discussions. Private addresses Early network design, when global end-to-end connectivity was envisioned for communications with all Internet hosts, intended that IP addresses be globally unique. However, it was found that this was not always necessary as private networks developed and public address space needed to be conserved. Computers not connected to the Internet, such as factory machines that communicate only with each other via TCP/IP, need not have globally unique IP addresses. Today, such private networks are widely used and typically connect to the Internet with network address translation (NAT), when needed. Three non-overlapping ranges of IPv4 addresses for private networks are reserved. These addresses are not routed on the Internet and thus their use need not be coordinated with an IP address registry. Any user may use any of the reserved blocks. Typically, a network administrator will divide a block into subnets; for example, many home routers automatically use a default address range of through (). IPv6 addresses In IPv6, the address size was increased from 32 bits in IPv4 to 128 bits, thus providing up to 2128 (approximately ) addresses. This is deemed sufficient for the foreseeable future. The intent of the new design was not to provide just a sufficient quantity of addresses, but also redesign routing in the Internet by allowing more efficient aggregation of subnetwork routing prefixes. This resulted in slower growth of routing tables in routers. The smallest possible individual allocation is a subnet for 264 hosts, which is the square of the size of the entire IPv4 Internet. At these levels, actual address utilization ratios will be small on any IPv6 network segment. The new design also provides the opportunity to separate the addressing infrastructure of a network segment, i.e. the local administration of the segment's available space, from the addressing prefix used to route traffic to and from external networks. IPv6 has facilities that automatically change the routing prefix of entire networks, should the global connectivity or the routing policy change, without requiring internal redesign or manual renumbering. The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and, where appropriate, to be aggregated for efficient routing. With a large address space, there is no need to have complex address conservation methods as used in CIDR. All modern desktop and enterprise server operating systems include native support for IPv6, but it is not yet widely deployed in other devices, such as residential networking routers, voice over IP (VoIP) and multimedia equipment, and some networking hardware. Private addresses Just as IPv4 reserves addresses for private networks, blocks of addresses are set aside in IPv6. In IPv6, these are referred to as unique local addresses (ULAs). The routing prefix is reserved for this block, which is divided into two blocks with different implied policies. 
The addresses include a 40-bit pseudorandom number that minimizes the risk of address collisions if sites merge or packets are misrouted. Early practices used a different block for this purpose (), dubbed site-local addresses. However, the definition of what constituted a site remained unclear and the poorly defined addressing policy created ambiguities for routing. This address type was abandoned and must not be used in new systems. Addresses starting with , called link-local addresses, are assigned to interfaces for communication on the attached link. The addresses are automatically generated by the operating system for each network interface. This provides instant and automatic communication between all IPv6 hosts on a link. This feature is used in the lower layers of IPv6 network administration, such as for the Neighbor Discovery Protocol. Private and link-local address prefixes may not be routed on the public Internet. IP address assignment IP addresses are assigned to a host either dynamically as they join the network, or persistently by configuration of the host hardware or software. Persistent configuration is also known as using a static IP address. In contrast, when a computer's IP address is assigned each time it restarts, this is known as using a dynamic IP address. Dynamic IP addresses are assigned by network using Dynamic Host Configuration Protocol (DHCP). DHCP is the most frequently used technology for assigning addresses. It avoids the administrative burden of assigning specific static addresses to each device on a network. It also allows devices to share the limited address space on a network if only some of them are online at a particular time. Typically, dynamic IP configuration is enabled by default in modern desktop operating systems. The address assigned with DHCP is associated with a lease and usually has an expiration period. If the lease is not renewed by the host before expiry, the address may be assigned to another device. Some DHCP implementations attempt to reassign the same IP address to a host, based on its MAC address, each time it joins the network. A network administrator may configure DHCP by allocating specific IP addresses based on MAC address. DHCP is not the only technology used to assign IP addresses dynamically. Bootstrap Protocol is a similar protocol and predecessor to DHCP. Dialup and some broadband networks use dynamic address features of the Point-to-Point Protocol. Computers and equipment used for the network infrastructure, such as routers and mail servers, are typically configured with static addressing. In the absence or failure of static or dynamic address configurations, an operating system may assign a link-local address to a host using stateless address autoconfiguration. Sticky dynamic IP address Sticky is an informal term used to describe a dynamically assigned IP address that seldom changes. IPv4 addresses, for example, are usually assigned with DHCP, and a DHCP service can use rules that maximize the chance of assigning the same address each time a client asks for an assignment. In IPv6, a prefix delegation can be handled similarly, to make changes as rare as feasible. In a typical home or small-office setup, a single router is the only device visible to an Internet service provider (ISP), and the ISP may try to provide a configuration that is as stable as feasible, i.e. sticky. 
On the local network of the home or business, a local DHCP server may be designed to provide sticky IPv4 configurations, and the ISP may provide a sticky IPv6 prefix delegation, giving clients the option to use sticky IPv6 addresses. Sticky should not be confused with static; sticky configurations have no guarantee of stability, while static configurations are used indefinitely and only changed deliberately. Address autoconfiguration Address block is defined for the special use of link-local addressing for IPv4 networks. In IPv6, every interface, whether using static or dynamic addresses, also receives a link-local address automatically in the block . These addresses are only valid on the link, such as a local network segment or point-to-point connection, to which a host is connected. These addresses are not routable and, like private addresses, cannot be the source or destination of packets traversing the Internet. When the link-local IPv4 address block was reserved, no standards existed for mechanisms of address autoconfiguration. Filling the void, Microsoft developed a protocol called Automatic Private IP Addressing (APIPA), whose first public implementation appeared in Windows 98. APIPA has been deployed on millions of machines and became a de facto standard in the industry. In May 2005, the IETF defined a formal standard for it. Addressing conflicts An IP address conflict occurs when two devices on the same local physical or wireless network claim to have the same IP address. A second assignment of an address generally stops the IP functionality of one or both of the devices. Many modern operating systems notify the administrator of IP address conflicts. When IP addresses are assigned by multiple people and systems with differing methods, any of them may be at fault. If one of the devices involved in the conflict is the default gateway access beyond the LAN for all devices on the LAN, all devices may be impaired. Routing IP addresses are classified into several classes of operational characteristics: unicast, multicast, anycast and broadcast addressing. Unicast addressing The most common concept of an IP address is in unicast addressing, available in both IPv4 and IPv6. It normally refers to a single sender or a single receiver, and can be used for both sending and receiving. Usually, a unicast address is associated with a single device or host, but a device or host may have more than one unicast address. Sending the same data to multiple unicast addresses requires the sender to send all the data many times over, once for each recipient. Broadcast addressing Broadcasting is an addressing technique available in IPv4 to address data to all possible destinations on a network in one transmission operation as an all-hosts broadcast. All receivers capture the network packet. The address is used for network broadcast. In addition, a more limited directed broadcast uses the all-ones host address with the network prefix. For example, the destination address used for directed broadcast to devices on the network is . IPv6 does not implement broadcast addressing and replaces it with multicast to the specially defined all-nodes multicast address. Multicast addressing A multicast address is associated with a group of interested receivers. In IPv4, addresses through (the former Class D addresses) are designated as multicast addresses. IPv6 uses the address block with the prefix for multicast. 
In either case, the sender sends a single datagram from its unicast address to the multicast group address and the intermediary routers take care of making copies and sending them to all interested receivers (those that have joined the corresponding multicast group). Anycast addressing Like broadcast and multicast, anycast is a one-to-many routing topology. However, the data stream is not transmitted to all receivers, just the one which the router decides is closest in the network. Anycast addressing is a built-in feature of IPv6. In IPv4, anycast addressing is implemented with Border Gateway Protocol using the shortest-path metric to choose destinations. Anycast methods are useful for global load balancing and are commonly used in distributed DNS systems. Geolocation A host may use geolocation to deduce the geographic position of its communicating peer. This is typically done by retrieving geolocation info about the IP address of the other node from a database. Public address A public IP address is a globally routable unicast IP address, meaning that the address is not an address reserved for use in private networks, such as those reserved by , or the various IPv6 address formats of local scope or site-local scope, for example for link-local addressing. Public IP addresses may be used for communication between hosts on the global Internet. In a home situation, a public IP address is the IP address assigned to the home's network by the ISP. In this case, it is also locally visible by logging into the router configuration. Most public IP addresses change, and relatively often. Any type of IP address that changes is called a dynamic IP address. In home networks, the ISP usually assigns a dynamic IP. If an ISP gave a home network an unchanging address, it is more likely to be abused by customers who host websites from home, or by hackers who can try the same IP address over and over until they breach a network. Address translation Multiple client devices can appear to share an IP address, either because they are part of a shared web hosting service environment or because an IPv4 network address translator (NAT) or proxy server acts as an intermediary agent on behalf of the client, in which case the real originating IP address is masked from the server receiving a request. A common practice is to have a NAT mask many devices in a private network. Only the public interface(s) of the NAT needs to have an Internet-routable address. The NAT device maps different IP addresses on the private network to different TCP or UDP port numbers on the public network. In residential networks, NAT functions are usually implemented in a residential gateway. In this scenario, the computers connected to the router have private IP addresses and the router has a public address on its external interface to communicate on the Internet. The internal computers appear to share one public IP address. Law In March 2024, the Supreme Court of Canada decided that IP addresses were protected private information under the Canadian Charter of Rights and Freedoms, with police searches requiring a warrant in order to obtain them. IP addresses are considered personal data by the European Commission and are protected by the General Data Protection Regulation. Diagnostic tools Computer operating systems provide various diagnostic tools to examine network interfaces and address configuration. 
Microsoft Windows provides the command-line interface tools ipconfig and netsh, while users of Unix-like systems may use the ifconfig, netstat, route, lanstat, fstat, and iproute2 utilities to accomplish the task.
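Beyond the command-line tools above, configured addresses can also be inspected programmatically. A minimal Python sketch using the standard socket module is shown below; it simply asks the local resolver which addresses it associates with the machine's hostname, so the output (and whether the lookup succeeds at all) depends entirely on the host's own configuration.

```python
# Minimal sketch: list the addresses the local resolver associates with this
# host's name, as a rough programmatic counterpart to ipconfig/ifconfig.
import socket

hostname = socket.gethostname()
try:
    infos = socket.getaddrinfo(hostname, None)
except socket.gaierror:
    infos = []  # the hostname may not be resolvable on every machine
for addr in sorted({info[4][0] for info in infos}):
    print(addr)
```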
Technology
Internet
null
14922
https://en.wikipedia.org/wiki/If%20and%20only%20if
If and only if
In logic and related fields such as mathematics and philosophy, "if and only if" (often shortened as "iff") is paraphrased by the biconditional, a logical connective between statements. The biconditional is true in two cases, where either both statements are true or both are false. The connective is biconditional (a statement of material equivalence), and can be likened to the standard material conditional ("only if", equal to "if ... then") combined with its reverse ("if"); hence the name. The result is that the truth of either one of the connected statements requires the truth of the other (i.e. either both statements are true, or both are false), though it is controversial whether the connective thus defined is properly rendered by the English "if and only if"—with its pre-existing meaning. For example, P if and only if Q means that P is true whenever Q is true, and the only case in which P is true is if Q is also true, whereas in the case of P if Q, there could be other scenarios where P is true and Q is false. In writing, phrases commonly used as alternatives to P "if and only if" Q include: Q is necessary and sufficient for P, for P it is necessary and sufficient that Q, P is equivalent (or materially equivalent) to Q (compare with material implication), P precisely if Q, P precisely (or exactly) when Q, P exactly in case Q, and P just in case Q. Some authors regard "iff" as unsuitable in formal writing; others consider it a "borderline case" and tolerate its use. In logical formulae, logical symbols, such as ↔ and ⇔, are used instead of these phrases; see below. Definition The truth table of P ↔ Q is as follows: the biconditional is true when P and Q are both true or both false, and false when exactly one of them is true (true, true gives true; true, false gives false; false, true gives false; false, false gives true). It is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate. Usage Notation The corresponding logical symbols are "↔", "⇔", and "≡", and sometimes "iff". These are usually treated as equivalent. However, some texts of mathematical logic (particularly those on first-order logic, rather than propositional logic) make a distinction between these, in which the first, ↔, is used as a symbol in logic formulas, while ⇔ is used in reasoning about those logic formulas (e.g., in metalogic). In Łukasiewicz's Polish notation, it is the prefix symbol E. Another term for the logical connective, i.e., the symbol in logic formulas, is exclusive nor. In TeX, "if and only if" is shown as a long double arrow (⟺) via the commands \iff or \Longleftrightarrow. Proofs In most logical systems, one proves a statement of the form "P iff Q" by proving either "if P, then Q" and "if Q, then P", or "if P, then Q" and "if not-P, then not-Q". Proving these pairs of statements sometimes leads to a more natural proof, since there are not obvious conditions in which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts—that is, because "iff" is truth-functional, "P iff Q" follows if P and Q have been shown to be both true, or both false. Origin of iff and pronunciation Usage of the abbreviation "iff" first appeared in print in John L. Kelley's 1955 book General Topology. Its invention is often credited to Paul Halmos, who wrote "I invented 'iff,' for 'if and only if'—but I could never believe I was really its first inventor." It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost always read as the four words "if and only if".
However, in the preface of General Topology, Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands something less I use Halmos' 'iff. The authors of one discrete mathematics textbook suggest: "Should you need to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if, implying that "iff" could be pronounced as . Usage in definitions Conventionally, definitions are "if and only if" statements; some texts — such as Kelley's General Topology — follow this convention, and use "if and only if" or iff in definitions of new terms. However, this usage of "if and only if" is relatively uncommon and overlooks the linguistic fact that the "if" of a definition is interpreted as meaning "if and only if". The majority of textbooks, research papers and articles (including English Wikipedia articles) follow the linguistic convention of interpreting "if" as "if and only if" whenever a mathematical definition is involved (as in "a topological space is compact if every open cover has a finite subcover"). Moreover, in the case of a recursive definition, the only if half of the definition is interpreted as a sentence in the metalanguage stating that the sentences in the definition of a predicate are the only sentences determining the extension of the predicate. In terms of Euler diagrams Euler diagrams show logical relationships among events, properties, and so forth. "P only if Q", "if P then Q", and "P→Q" all mean that P is a subset, either proper or improper, of Q. "P if Q", "if Q then P", and Q→P all mean that Q is a proper or improper subset of P. "P if and only if Q" and "Q if and only if P" both mean that the sets P and Q are identical to each other. More general usage Iff is used outside the field of logic as well. Wherever logic is applied, especially in mathematical discussions, it has the same meaning as above: it is an abbreviation for if and only if, indicating that one statement is both necessary and sufficient for the other. This is an example of mathematical jargon (although, as noted above, if is more often used than iff in statements of definition). The elements of X are all and only the elements of Y means: "For any z in the domain of discourse, z is in X if and only if z is in Y." When "if" means "if and only if" In their Artificial Intelligence: A Modern Approach, Russell and Norvig note (page 282), in effect, that it is often more natural to express if and only if as if together with a "database (or logic programming) semantics". They give the example of the English sentence "Richard has two brothers, Geoffrey and John". In a database or logic program, this could be represented simply by two sentences: Brother(Richard, Geoffrey). Brother(Richard, John). The database semantics interprets the database (or program) as containing all and only the knowledge relevant for problem solving in a given domain. It interprets only if as expressing in the metalanguage that the sentences in the database represent the only knowledge that should be considered when drawing conclusions from the database. In first-order logic (FOL) with the standard semantics, the same English sentence would need to be represented, using if and only if, with only if interpreted in the object language, in some such form as: X(Brother(Richard, X) iff X = Geoffrey or X = John). Geoffrey ≠ John. Compared with the standard semantics for FOL, the database semantics has a more efficient implementation. 
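A minimal sketch of that database reading, reusing the Brother facts above: the stored sentences are taken to be the only relevant knowledge, so a negative answer follows simply from absence (a closed-world reading of the implicit only if). The Python representation is invented here for illustration and is not Russell and Norvig's notation.

```python
# Toy "database semantics": the facts listed are the only facts, so queries
# about anything not recorded come back false.
facts = {("Brother", "Richard", "Geoffrey"),
         ("Brother", "Richard", "John")}

def holds(predicate, *args):
    # Closed world: absence from the database is treated as falsehood.
    return (predicate, *args) in facts

print(holds("Brother", "Richard", "Geoffrey"))  # True, a stated fact
print(holds("Brother", "Richard", "Robert"))    # False, merely because it is absent
brothers = {x for (p, r, x) in facts if p == "Brother" and r == "Richard"}
print(brothers)  # {'Geoffrey', 'John'}: all and only Richard's recorded brothers
```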
Instead of reasoning with sentences of the form: conclusion iff conditions it uses sentences of the form: conclusion if conditions to reason forwards from conditions to conclusions or backwards from conclusions to conditions. The database semantics is analogous to the legal principle expressio unius est exclusio alterius (the express mention of one thing excludes all others). Moreover, it underpins the application of logic programming to the representation of legal texts and legal reasoning.
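The forwards reasoning mentioned above, from conditions to conclusions over rules of the form conclusion if conditions, can be sketched in a few lines; the rules here are invented purely for the example.

```python
# Minimal forward chaining over "conclusion if conditions" rules: a conclusion
# is added whenever all of its conditions are already known facts.
rules = [("wet_ground", {"rain"}),        # wet_ground if rain
         ("slippery", {"wet_ground"})]    # slippery if wet_ground
facts = {"rain"}

changed = True
while changed:
    changed = False
    for conclusion, conditions in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'rain', 'wet_ground', 'slippery'}
```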
Mathematics
Mathematical logic
null
14939
https://en.wikipedia.org/wiki/Intercontinental%20ballistic%20missile
Intercontinental ballistic missile
An intercontinental ballistic missile (ICBM) is a ballistic missile with a range greater than , primarily designed for nuclear weapons delivery (delivering one or more thermonuclear warheads). Conventional, chemical, and biological weapons can also be delivered with varying effectiveness, but have never been deployed on ICBMs. Most modern designs support multiple independently targetable reentry vehicle (MIRVs), allowing a single missile to carry several warheads, each of which can strike a different target. The United States, Russia, China, France, India, the United Kingdom, Israel, and North Korea are the only countries known to have operational ICBMs. Pakistan is the only nuclear-armed state that does not possess ICBMs. Early ICBMs had limited precision, which made them suitable for use only against the largest targets, such as cities. They were seen as a "safe" basing option, one that would keep the deterrent force close to home where it would be difficult to attack. Attacks against military targets (especially hardened ones) demanded the use of a more precise, crewed bomber. Second- and third-generation designs (such as the LGM-118 Peacekeeper) dramatically improved accuracy to the point where even the smallest point targets can be successfully attacked. ICBMs are differentiated by having greater range and speed than other ballistic missiles: intermediate-range ballistic missiles (IRBMs), medium-range ballistic missiles (MRBMs), short-range ballistic missiles (SRBMs) and tactical ballistic missiles. History World War II The first practical design for an ICBM grew out of Nazi Germany's V-2 rocket program. The liquid-fueled V-2, designed by Wernher von Braun and his team, was then widely used by Nazi Germany from mid-1944 until March 1945 to bomb British and Belgian cities, particularly Antwerp and London. Under Projekt Amerika, von Braun's team developed the A9/10 ICBM, intended for use in bombing New York and other American cities. Initially intended to be guided by radio, it was changed to be a piloted craft after the failure of Operation Elster. The second stage of the A9/A10 rocket was tested a few times in January and February 1945. After the war, the US executed Operation Paperclip, which took von Braun and hundreds of other leading Nazi scientists to the United States to develop IRBMs, ICBMs, and launchers for the US Army. This technology was predicted by US General of the Army Hap Arnold, who wrote in 1943: Cold War After World War II, the Americans and the Soviets started rocket research programs based on the V-2 and other German wartime designs. Each branch of the US military started its own programs, leading to considerable duplication of effort. In the Soviet Union, rocket research was centrally organized although several teams worked on different designs. The US initiated ICBM research in 1946 with the RTV-A-2 Hiroc project. This was a three-stage effort with the ICBM development not starting until the third stage. However, funding was cut in 1948 after only three partially successful launches of the second stage design, that was used to test variations of the V-2 design. With overwhelming air superiority and truly intercontinental bombers, the newly formed US Air Force did not take the problem of ICBM development seriously. Things changed in 1953 with the Soviet testing of their first thermonuclear weapon, but it was not until 1954 that the Atlas missile program was given the highest national priority. 
The Atlas A first flew on 11 June 1957; the flight lasted only about 24 seconds before the rocket exploded. The first successful flight of an Atlas missile to full range occurred 28 November 1958. The first armed version of the Atlas, the Atlas D, was declared operational in January 1959 at Vandenberg, although it had not yet flown. The first test flight was carried out on 9 July 1959, and the missile was accepted for service on 1 September. The Titan I was another US multistage ICBM, with a successful launch February 5, 1959, with Titan I A3. Unlike the Atlas, the Titan I was a two-stage missile, rather than three. The Titan was larger, yet lighter, than the Atlas. Due to the improvements in engine technology and guidance systems the Titan I overtook the Atlas. In the Soviet Union, early development was focused on missiles able to attack European targets. That changed in 1953, when Sergei Korolyov was directed to start development of a true ICBM able to deliver newly developed hydrogen bombs. Given steady funding throughout, the R-7 developed with some speed. The first launch took place on 15 May 1957 and led to an unintended crash from the site. The first successful test followed on 21 August 1957; the R-7 flew over and became the world's first ICBM. The first strategic-missile unit became operational on 9 February 1959 at Plesetsk in north-west Russia. It was the same R-7 launch vehicle that placed the first artificial satellite in space, Sputnik, on 4 October 1957. The first human spaceflight in history was accomplished on a derivative of R-7, Vostok, on 12 April 1961, by Soviet cosmonaut Yuri Gagarin. A heavily modernized version of the R-7 is still used as the launch vehicle for the Soviet/Russian Soyuz spacecraft, marking more than 60 years of operational history of Sergei Korolyov's original rocket design. The R-7 and Atlas each required a large launch facility, making them vulnerable to attack, and could not be kept in a ready state. Failure rates were very high throughout the early years of ICBM technology. Human spaceflight programs (Vostok, Mercury, Voskhod, Gemini, etc.) served as a highly visible means of demonstrating confidence in reliability, with successes translating directly to national defense implications. The US was well behind the Soviets in the Space Race and so US President John F. Kennedy increased the stakes with the Apollo program, which used Saturn rocket technology that had been funded by President Dwight D. Eisenhower. These early ICBMs also formed the basis of many space launch systems. Examples include R-7, Atlas, Redstone, Titan, and Proton, which was derived from the earlier ICBMs but never deployed as an ICBM. The Eisenhower administration supported the development of solid-fueled missiles such as the LGM-30 Minuteman, Polaris and Skybolt. Modern ICBMs tend to be smaller than their ancestors, due to increased accuracy and smaller and lighter warheads, and use solid fuels, making them less useful as orbital launch vehicles. The Western view of the deployment of these systems was governed by the strategic theory of mutual assured destruction. In the 1950s and 1960s, development began on anti-ballistic missile systems by both the Americans and Soviets. Such systems were restricted by the 1972 Anti-Ballistic Missile Treaty. The first successful ABM test was conducted by the Soviets in 1961, which later deployed a fully operational system defending Moscow in the 1970s (see Moscow ABM system). 
The 1972 SALT treaty froze the number of ICBM launchers of both the Americans and the Soviets at existing levels and allowed new submarine-based SLBM launchers only if an equal number of land-based ICBM launchers were dismantled. Subsequent talks, called SALT II, were held from 1972 to 1979 and actually reduced the number of nuclear warheads held by the US and Soviets. SALT II was never ratified by the US Senate, but its terms were honored by both sides until 1986, when the Reagan administration "withdrew" after it had accused the Soviets of violating the pact. In the 1980s, President Ronald Reagan launched the Strategic Defense Initiative as well as the MX and Midgetman ICBM programs. China developed a minimal independent nuclear deterrent entering its own cold war after an ideological split with the Soviet Union beginning in the early 1960s. After first testing a domestic built nuclear weapon in 1964, it went on to develop various warheads and missiles. Beginning in the early 1970s, the liquid fuelled DF-5 ICBM was developed and used as a satellite launch vehicle in 1975. The DF-5, with a range of —long enough to strike the Western United States and the Soviet Union—was silo deployed, with the first pair in service by 1981 and possibly twenty missiles in service by the late 1990s. China also deployed the JL-1 Medium-range ballistic missile with a reach of aboard the ultimately unsuccessful Type 092 submarine. Post–Cold War In 1991, the United States and the Soviet Union agreed in the START I treaty to reduce their deployed ICBMs and attributed warheads. , all five of the nations with permanent seats on the United Nations Security Council have fully operational long-range ballistic missile systems; Russia, the United States, and China also have land-based ICBMs (the US missiles are silo-based, while China and Russia have both silo and road-mobile (DF-31, RT-2PM2 Topol-M missiles). Israel is believed to have deployed a road mobile nuclear ICBM, the Jericho III, which entered service in 2008; an upgraded version is in development. India successfully test fired Agni V, with a strike range of more than on 19 April 2012, claiming entry into the ICBM club. The missile's actual range is speculated by foreign researchers to be up to with India having downplayed its capabilities to avoid causing concern to other countries. On 15 December 2022, first night trial of Agni-V was successfully carried out by SFC from Abdul Kalam Island, Odisha. The missile is now 20 percent lighter because the use of composite materials rather than steel material. The range has been increased to 7,000 km. By 2012 there was speculation by some intelligence agencies that North Korea is developing an ICBM. North Korea successfully put a satellite into space on 12 December 2012 using the Unha-3 rocket. The United States claimed that the launch was in fact a way to test an ICBM. (See Timeline of first orbital launches by country.) In early July 2017, North Korea claimed for the first time to have tested successfully an ICBM capable of carrying a large thermonuclear warhead. In July 2014, China announced the development of its newest generation of ICBM, the Dongfeng-41 (DF-41), which has a range of , capable of reaching the United States, and which analysts believe is capable of being outfitted with MIRV technology. 
Most countries in the early stages of developing ICBMs have used liquid propellants, with the known exceptions being the Indian Agni-V, the planned but cancelled South African RSA-4 ICBM, and the now in service Israeli Jericho III. The RS-28 Sarmat (Russian: РС-28 Сармат; NATO reporting name: SATAN 2), is a Russian liquid-fueled, MIRV-equipped, super-heavy thermonuclear armed intercontinental ballistic missile in development by the Makeyev Rocket Design Bureau from 2009, intended to replace the previous R-36 missile. Its large payload would allow for up to 10 heavy warheads or 15 lighter ones or up to 24 hypersonic glide vehicles Yu-74, or a combination of warheads and massive amounts of countermeasures designed to defeat anti-missile systems; it was announced by the Russian military as a response to the US Prompt Global Strike. In July 2023, North Korea fired a suspected intercontinental ballistic missile that landed short of Japanese waters. The launch follows North Korea's threat to retaliate against the US for alleged spy plane incursions. Flight phases The following flight phases can be distinguished: Boost phase, which can last from 3 to 5 minutes. It is shorter for a solid-fuel rocket than for a liquid-propellant rocket. Depending on the trajectory chosen, typical burnout speed is , up to . The altitude of the missile at the end of this phase is typically . Midcourse phase, which lasts approx. 25 minutes, is sub-orbital spaceflight with the flightpath being a part of an ellipse with a vertical major axis. The apogee (halfway through the midcourse phase) is at an altitude of approximately . The semi-major axis is between and the projection of the flightpath on the Earth's surface is close to a great circle, though slightly displaced due to earth rotation during the time of flight. In this phase, the missile may release several independent warheads and penetration aids, such as metallic-coated balloons, aluminum chaff, and full-scale warhead decoys. Reentry/Terminal phase, which lasts two minutes starting at an altitude of . At the end of this phase, the missile's payload will impact the target, with impact at a speed of up to (for early ICBMs less than ); see also maneuverable reentry vehicle. ICBMs usually use the trajectory which optimizes range for a given amount of payload (the minimum-energy trajectory); an alternative is a depressed trajectory, which allows less payload, shorter flight time, and has a much lower apogee. Modern ICBMs Modern ICBMs typically carry multiple independently targetable reentry vehicles (MIRVs), each of which carries a separate nuclear warhead, allowing a single missile to hit multiple targets. MIRV was an outgrowth of the rapidly shrinking size and weight of modern warheads and the Strategic Arms Limitation Treaties (SALT I and SALT II), which imposed limitations on the number of launch vehicles. It has also proved to be an "easy answer" to proposed deployments of anti-ballistic missile (ABM) systems: It is far less expensive to add more warheads to an existing missile system than to build an ABM system capable of shooting down the additional warheads; hence, most ABM system proposals have been judged to be impractical. The first operational ABM systems were deployed in the United States during the 1970s. The Safeguard ABM facility, located in North Dakota, was operational from 1975 to 1976. The Soviets deployed their ABM-1 Galosh system around Moscow in the 1970s, which remains in service. 
Israel deployed a national ABM system based on the Arrow missile in 1998, but it is mainly designed to intercept shorter-ranged theater ballistic missiles, not ICBMs. The Alaska-based United States national missile defense system attained initial operational capability in 2004. ICBMs can be deployed from multiple platforms: In missile silos, which offer some protection from military attack (including, the designers hope, some protection from a nuclear first strike) On submarines: submarine-launched ballistic missiles (SLBMs); most or all SLBMs have the long range of ICBMs (as opposed to IRBMs) On heavy trucks: this applies to one version of the Topol which may be deployed from a self-propelled mobile launcher, capable of moving through roadless terrain, and launching a missile from any point along its route Mobile launchers on rails; this applies, for example, to РТ-23УТТХ "Молодец" (RT-23UTTH "Molodets" – SS-24 "Scalpel") The last three kinds are mobile and therefore hard to detect prior to a missile launch. During storage, one of the most important features of the missile is its serviceability. One of the key features of the first computer-controlled ICBM, the Minuteman missile, was that it could quickly and easily use its computer to test itself. After launch, a booster pushes the missile and then falls away. Most modern boosters are Solid-propellant rocket motors, which can be stored easily for long periods of time. Early missiles used liquid-fueled rocket motors. Many liquid-fueled ICBMs could not be kept fueled at all times as the cryogenic fuel liquid oxygen boiled off and caused ice formation, and therefore fueling the rocket was necessary before launch. This procedure was a source of significant operational delay and might allow the missiles to be destroyed by enemy counterparts before they could be used. To resolve this problem Nazi Germany invented the missile silo that protected the missile from Strategic Bombing and also hid fueling operations underground. Although the USSR/Russia preferred ICBM designs that use hypergolic liquid fuels, which can be stored at room temperature for more than a few years. Once the booster falls away, the remaining "bus" releases several warheads, each of which continues on its own unpowered ballistic trajectory, much like an artillery shell or cannonball. The warhead is encased in a cone-shaped reentry vehicle and is difficult to detect in this phase of flight as there is no rocket exhaust or other emissions to mark its position to defenders. The high speeds of the warheads make them difficult to intercept and allow for little warning, striking targets many thousands of kilometers away from the launch site (and due to the possible locations of the submarines: anywhere in the world) within approximately 30 minutes. Many authorities say that missiles also release aluminized balloons, electronic noisemakers, and other decoys intended to confuse interception devices and radars. As the nuclear warhead reenters the Earth's atmosphere, its high speed causes compression of the air, leading to a dramatic rise in temperature which would destroy it, if it were not shielded in some way. In one design, warhead components are contained within an aluminium honeycomb substructure, sheathed in a pyrolytic carbon-epoxy synthetic resin composite material heat shield. 
Warheads are also often radiation-hardened (to protect against nuclear-armed ABMs or the nearby detonation of friendly warheads); one neutron-resistant material developed for this purpose in the UK is three-dimensional quartz phenolic. Circular error probable is crucial, because halving the circular error probable decreases the needed warhead energy by a factor of four. Accuracy is limited by the accuracy of the navigation system and the available geodetic information. Strategic missile systems are thought to use custom integrated circuits designed to calculate navigational differential equations at rates of thousands to millions of FLOPS in order to reduce navigational errors caused by calculation alone. These circuits are usually a network of binary addition circuits that continually recalculate the missile's position. The inputs to the navigation circuit are set by a general-purpose computer according to a navigational input schedule loaded into the missile before launch. One particular weapon developed by the Soviet Union, the Fractional Orbital Bombardment System, had a partial orbital trajectory, and unlike most ICBMs its target could not be deduced from its orbital flight path. It was decommissioned in compliance with arms control agreements, which address the maximum range of ICBMs and prohibit orbital or fractional-orbital weapons. However, according to reports, Russia is working on the new Sarmat ICBM, which leverages Fractional Orbital Bombardment concepts to use a southern polar approach instead of flying over the northern polar regions. That approach, it is theorized, avoids the American missile defense batteries in California and Alaska. A newer development in ICBM technology is the ability to carry hypersonic glide vehicles as a payload, as with the RS-28 Sarmat. On 12 March 2024, India announced that it had joined a very limited group of countries capable of firing multiple warheads on a single ICBM. The announcement came after successfully testing multiple independently targetable reentry vehicle (MIRV) technology.
The DF-41 deployed underground in Xinjiang, Qinghai, Gansu and Inner Mongolia. The mysterious underground subway ICBM carrier systems are called the "Underground Great Wall Project". Israel is believed to have deployed a road mobile nuclear ICBM, the Jericho III, which entered service in 2008. It is possible for the missile to be equipped with a single nuclear warhead or up to three MIRV warheads. It is believed to be based on the Shavit space launch vehicle and is estimated to have a range of . In November 2011 Israel tested an ICBM believed to be an upgraded version of the Jericho III. India has a series of ballistic missiles called Agni. On 19 April 2012, India successfully test fired its first Agni-V, a three-stage solid fueled missile, with a strike range of more than . Missile was test-fired for the second time on 15 September 2013. On 31 January 2015, India conducted a third successful test flight of the Agni-V from the Abdul Kalam Island facility. The test used a canisterised version of the missile, mounted over a Tata truck. On 15 December 2022, first night trial of Agni-V was successfully carried out by SFC from Abdul Kalam Island, Odisha. The missile is now 20 percent lighter because the use of composite materials rather than steel material. The range has been increased to 7,000 km. Submarine-launched ICBMs Missile defense An anti-ballistic missile is a missile which can be deployed to counter an incoming nuclear or non-nuclear ICBM. ICBMs can be intercepted in three regions of their trajectory: boost phase, mid-course phase or terminal phase. The United States, Russia, India, France, Israel, and China have now developed anti-ballistic missile systems, of which the Russian A-135 anti-ballistic missile system, the American Ground-Based Midcourse Defense, the Indian Prithvi Defence Vehicle Mark-II and the Israeli Arrow 3 are the only systems having the capability to intercept and shoot down ICBMs carrying nuclear, chemical, biological, or conventional warheads.
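The running-summation style of position keeping attributed to the guidance circuits in the accuracy discussion above can be sketched generically: sensed acceleration is repeatedly added into velocity, and velocity into position. The fragment below is a plain dead-reckoning illustration with an invented acceleration profile, not a description of any actual missile guidance computer.

```python
# Generic dead-reckoning sketch: acceleration is repeatedly summed into
# velocity, and velocity into position, the kind of running accumulation
# described for the navigation circuits above. The acceleration profile is
# invented purely for illustration.
dt = 0.01                    # integration step in seconds
velocity = [0.0, 0.0, 0.0]   # metres per second
position = [0.0, 0.0, 0.0]   # metres

def sensed_acceleration(t):
    # Placeholder accelerometer reading (m/s^2); a real system reads hardware.
    return [0.0, 0.0, 30.0] if t < 60.0 else [0.0, 0.0, 0.0]

t = 0.0
while t < 120.0:
    a = sensed_acceleration(t)
    for i in range(3):
        velocity[i] += a[i] * dt           # accumulate acceleration into velocity
        position[i] += velocity[i] * dt    # accumulate velocity into position
    t += dt

print("velocity:", velocity, "position:", position)
```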
Technology
Missiles
null
14946
https://en.wikipedia.org/wiki/Ice
Ice
Ice is water that is frozen into a solid state, typically forming at or below temperatures of 0 °C, 32 °F, or 273.15 K. It occurs naturally on Earth, on other planets, in Oort cloud objects, and as interstellar ice. As a naturally occurring crystalline inorganic solid with an ordered structure, ice is considered to be a mineral. Depending on the presence of impurities such as particles of soil or bubbles of air, it can appear transparent or a more or less opaque bluish-white color. Virtually all of the ice on Earth is of a hexagonal crystalline structure denoted as ice Ih (spoken as "ice one h"). Depending on temperature and pressure, at least nineteen phases (packing geometries) can exist. The most common phase transition to ice Ih occurs when liquid water is cooled below (, ) at standard atmospheric pressure. When water is cooled rapidly (quenching), up to three types of amorphous ice can form. Interstellar ice is overwhelmingly low-density amorphous ice (LDA), which likely makes LDA ice the most abundant type in the universe. When cooled slowly, correlated proton tunneling occurs below (, ) giving rise to macroscopic quantum phenomena. Ice is abundant on the Earth's surface, particularly in the polar regions and above the snow line, where it can aggregate from snow to form glaciers and ice sheets. As snowflakes and hail, ice is a common form of precipitation, and it may also be deposited directly by water vapor as frost. The transition from ice to water is melting and from ice directly to water vapor is sublimation. These processes plays a key role in Earth's water cycle and climate. In the recent decades, ice volume on Earth has been decreasing due to climate change. The largest declines have occurred in the Arctic and in the mountains located outside of the polar regions. The loss of grounded ice (as opposed to floating sea ice) is the primary contributor to sea level rise. Humans have been using ice for various purposes for thousands of years. Some historic structures designed to hold ice to provide cooling are over 2,000 years old. Before the invention of refrigeration technology, the only way to safely store food without modifying it through preservatives was to use ice. Sufficiently solid surface ice makes waterways accessible to land transport during winter, and dedicated ice roads may be maintained. Ice also plays a major role in winter sports. Physical properties Ice possesses a regular crystalline structure based on the molecule of water, which consists of a single oxygen atom covalently bonded to two hydrogen atoms, or H–O–H. However, many of the physical properties of water and ice are controlled by the formation of hydrogen bonds between adjacent oxygen and hydrogen atoms; while it is a weak bond, it is nonetheless critical in controlling the structure of both water and ice. An unusual property of water is that its solid form—ice frozen at atmospheric pressure—is approximately 8.3% less dense than its liquid form; this is equivalent to a volumetric expansion of 9%. The density of ice is 0.9167–0.9168 g/cm3 at 0 °C and standard atmospheric pressure (101,325 Pa), whereas water has a density of 0.9998–0.999863 g/cm3 at the same temperature and pressure. Liquid water is densest, essentially 1.00 g/cm3, at 4 °C and begins to lose its density as the water molecules begin to form the hexagonal crystals of ice as the freezing point is reached. This is due to hydrogen bonding dominating the intermolecular forces, which results in a packing of molecules less compact in the solid. 
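The density and expansion figures quoted above are mutually consistent, as a quick calculation with the stated densities shows:

```python
# Relating the density figures quoted above to the expansion on freezing.
rho_ice = 0.9167    # g/cm^3 at 0 degrees C
rho_water = 0.9998  # g/cm^3 at 0 degrees C

density_deficit = 1 - rho_ice / rho_water   # ice is about 8.3% less dense
volume_expansion = rho_water / rho_ice - 1  # water expands by about 9% on freezing
print(f"{density_deficit:.1%}, {volume_expansion:.1%}")  # roughly 8.3%, 9.1%
```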
The density of ice increases slightly with decreasing temperature and has a value of 0.9340 g/cm3 at −180 °C (93 K). When water freezes, it increases in volume (about 9% for fresh water). The effect of expansion during freezing can be dramatic, and ice expansion is a basic cause of freeze-thaw weathering of rock in nature and damage to building foundations and roadways from frost heaving. It is also a common cause of the flooding of houses when water pipes burst due to the pressure of expanding water when it freezes. Because ice is less dense than liquid water, it floats, and this prevents bottom-up freezing of the bodies of water. Instead, a sheltered environment for animal and plant life is formed beneath the floating ice, which protects the underside from short-term weather extremes such as wind chill. Sufficiently thin floating ice allows light to pass through, supporting the photosynthesis of bacterial and algal colonies. When sea water freezes, the ice is riddled with brine-filled channels which sustain sympagic organisms such as bacteria, algae, copepods and annelids. In turn, they provide food for animals such as krill and specialized fish like the bald notothen, fed upon in turn by larger animals such as emperor penguins and minke whales. When ice melts, it absorbs as much energy as it would take to heat an equivalent mass of water by . During the melting process, the temperature remains constant at . While melting, any energy added breaks the hydrogen bonds between ice (water) molecules. Energy becomes available to increase the thermal energy (temperature) only after enough hydrogen bonds are broken that the ice can be considered liquid water. The amount of energy consumed in breaking hydrogen bonds in the transition from ice to water is known as the heat of fusion. As with water, ice absorbs light at the red end of the spectrum preferentially as the result of an overtone of an oxygen–hydrogen (O–H) bond stretch. Compared with water, this absorption is shifted toward slightly lower energies. Thus, ice appears blue, with a slightly greener tint than liquid water. Since absorption is cumulative, the color effect intensifies with increasing thickness or if internal reflections cause the light to take a longer path through the ice. Other colors can appear in the presence of light absorbing impurities, where the impurity is dictating the color rather than the ice itself. For instance, icebergs containing impurities (e.g., sediments, algae, air bubbles) can appear brown, grey or green. Because ice in natural environments is usually close to its melting temperature, its hardness shows pronounced temperature variations. At its melting point, ice has a Mohs hardness of 2 or less, but the hardness increases to about 4 at a temperature of and to 6 at a temperature of , the vaporization point of solid carbon dioxide (dry ice). Phases Most liquids under increased pressure freeze at higher temperatures because the pressure helps to hold the molecules together. However, the strong hydrogen bonds in water make it different: for some pressures higher than , water freezes at a temperature below . Ice, water, and water vapour can coexist at the triple point, which is exactly at a pressure of 611.657 Pa. The kelvin was defined as of the difference between this triple point and absolute zero, though this definition changed in May 2019. Unlike most other solids, ice is difficult to superheat. In an experiment, ice at −3 °C was superheated to about 17 °C for about 250 picoseconds. 
Subjected to higher pressures and varying temperatures, ice can form in nineteen separate known crystalline phases at various densities, along with hypothetical proposed phases of ice that have not been observed. With care, at least fifteen of these phases (one of the known exceptions being ice X) can be recovered at ambient pressure and low temperature in metastable form. The types are differentiated by their crystalline structure, proton ordering, and density. There are also two metastable phases of ice under pressure, both fully hydrogen-disordered; these are Ice IV and Ice XII. Ice XII was discovered in 1996. In 2006, Ice XIII and Ice XIV were discovered. Ices XI, XIII, and XIV are hydrogen-ordered forms of ices I, V, and XII respectively. In 2009, ice XV was found at extremely high pressures and −143 °C. At even higher pressures, ice is predicted to become a metal; this has been variously estimated to occur at 1.55 TPa or 5.62 TPa. As well as crystalline forms, solid water can exist in amorphous states as amorphous solid water (ASW) of varying densities. In outer space, hexagonal crystalline ice is present in the ice volcanoes, but is extremely rare otherwise. Even icy moons like Ganymede are expected to mainly consist of other crystalline forms of ice. Water in the interstellar medium is dominated by amorphous ice, making it likely the most common form of water in the universe. Low-density ASW (LDA), also known as hyperquenched glassy water, may be responsible for noctilucent clouds on Earth and is usually formed by deposition of water vapor in cold or vacuum conditions. High-density ASW (HDA) is formed by compression of ordinary ice I or LDA at GPa pressures. Very-high-density ASW (VHDA) is HDA slightly warmed to 160 K under 1–2 GPa pressures. Ice from a theorized superionic water may possess two crystalline structures. At pressures in excess of such superionic ice would take on a body-centered cubic structure. However, at pressures in excess of the structure may shift to a more stable face-centered cubic lattice. It is speculated that superionic ice could compose the interior of ice giants such as Uranus and Neptune. Friction properties Ice is "slippery" because it has a low coefficient of friction. This subject was first scientifically investigated in the 19th century. The preferred explanation at the time was "pressure melting" -i.e. the blade of an ice skate, upon exerting pressure on the ice, would melt a thin layer, providing sufficient lubrication for the blade to glide across the ice. Yet, 1939 research by Frank P. Bowden and T. P. Hughes found that skaters would experience a lot more friction than they actually do if it were the only explanation. Further, the optimum temperature for figure skating is and for hockey; yet, according to pressure melting theory, skating below would be outright impossible. Instead, Bowden and Hughes argued that heating and melting of the ice layer is caused by friction. However, this theory does not sufficiently explain why ice is slippery when standing still even at below-zero temperatures. Subsequent research suggested that ice molecules at the interface cannot properly bond with the molecules of the mass of ice beneath (and thus are free to move like molecules of liquid water). These molecules remain in a semi-liquid state, providing lubrication regardless of pressure against the ice exerted by any object. 
However, the significance of this hypothesis is disputed by experiments showing a high coefficient of friction for ice using atomic force microscopy. Thus, the mechanism controlling the frictional properties of ice is still an active area of scientific study. A comprehensive theory of ice friction must take into account all of the aforementioned mechanisms to estimate friction coefficient of ice against various materials as a function of temperature and sliding speed. 2014 research suggests that frictional heating is the most important process under most typical conditions. Natural formation The term that collectively describes all of the parts of the Earth's surface where water is in frozen form is the cryosphere. Ice is an important component of the global climate, particularly in regard to the water cycle. Glaciers and snowpacks are an important storage mechanism for fresh water; over time, they may sublimate or melt. Snowmelt is an important source of seasonal fresh water. The World Meteorological Organization defines several kinds of ice depending on origin, size, shape, influence and so on. Clathrate hydrates are forms of ice that contain gas molecules trapped within its crystal lattice. In the oceans Ice that is found at sea may be in the form of drift ice floating in the water, fast ice fixed to a shoreline or anchor ice if attached to the seafloor. Ice which calves (breaks off) from an ice shelf or a coastal glacier may become an iceberg. The aftermath of calving events produces a loose mixture of snow and ice known as Ice mélange. Sea ice forms in several stages. At first, small, millimeter-scale crystals accumulate on the water surface in what is known as frazil ice. As they become somewhat larger and more consistent in shape and cover, the water surface begins to look "oily" from above, so this stage is called grease ice. Then, ice continues to clump together, and solidify into flat cohesive pieces known as ice floes. Ice floes are the basic building blocks of sea ice cover, and their horizontal size (defined as half of their diameter) varies dramatically, with the smallest measured in centimeters and the largest in hundreds of kilometers. An area which is over 70% ice on its surface is said to be covered by pack ice. Fully formed sea ice can be forced together by currents and winds to form pressure ridges up to tall. On the other hand, active wave activity can reduce sea ice to small, regularly shaped pieces, known as pancake ice. Sometimes, wind and wave activity "polishes" sea ice to perfectly spherical pieces known as ice eggs. On land The largest ice formations on Earth are the two ice sheets which almost completely cover the world's largest island, Greenland, and the continent of Antarctica. These ice sheets have an average thickness of over and have existed for millions of years. Other major ice formations on land include ice caps, ice fields, ice streams and glaciers. In particular, the Hindu Kush region is known as the Earth's "Third Pole" due to the large number of glaciers it contains. They cover an area of around , and have a combined volume of between 3,000-4,700 km3. These glaciers are nicknamed "Asian water towers", because their meltwater run-off feeds into rivers which provide water for an estimated two billion people. Permafrost refers to soil or underwater sediment which continuously remains below for two years or more. 
The ice within permafrost is divided into four categories: pore ice, vein ice (also known as ice wedges), buried surface ice and intrasedimental ice (from the freezing of underground waters). One example of ice formation in permafrost areas is aufeis - layered ice that forms in Arctic and subarctic stream valleys. Ice, frozen in the stream bed, blocks normal groundwater discharge, and causes the local water table to rise, resulting in water discharge on top of the frozen layer. This water then freezes, causing the water table to rise further and repeat the cycle. The result is a stratified ice deposit, often several meters thick. Snow line and snow fields are two related concepts, in that snow fields accumulate on top of and ablate away to the equilibrium point (the snow line) in an ice deposit. On rivers and streams Ice which forms on moving water tends to be less uniform and stable than ice which forms on calm water. Ice jams (sometimes called "ice dams"), when broken chunks of ice pile up, are the greatest ice hazard on rivers. Ice jams can cause flooding, damage structures in or near the river, and damage vessels on the river. Ice jams can cause some hydropower industrial facilities to completely shut down. An ice dam is a blockage from the movement of a glacier which may produce a proglacial lake. Heavy ice flows in rivers can also damage vessels and require the use of an icebreaker vessel to keep navigation possible. Ice discs are circular formations of ice floating on river water. They form within eddy currents, and their position results in asymmetric melting, which makes them continuously rotate at a low speed. On lakes Ice forms on calm water from the shores, a thin layer spreading across the surface, and then downward. Ice on lakes is generally four types: primary, secondary, superimposed and agglomerate. Primary ice forms first. Secondary ice forms below the primary ice in a direction parallel to the direction of the heat flow. Superimposed ice forms on top of the ice surface from rain or water which seeps up through cracks in the ice which often settles when loaded with snow. An ice shove occurs when ice movement, caused by ice expansion and/or wind action, occurs to the extent that ice pushes onto the shores of lakes, often displacing sediment that makes up the shoreline. Shelf ice is formed when floating pieces of ice are driven by the wind piling up on the windward shore. This kind of ice may contain large air pockets under a thin surface layer, which makes it particularly hazardous to walk across it. Another dangerous form of rotten ice to traverse on foot is candle ice, which develops in columns perpendicular to the surface of a lake. Because it lacks a firm horizontal structure, a person who has fallen through has nothing to hold onto to pull themselves out. As precipitation Snow and freezing rain Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. These droplets are able to remain liquid at temperatures lower than , because to freeze, a few molecules in the droplet need to get together by chance to form an arrangement similar to that in an ice lattice; then the droplet freezes around this "nucleus". Experiments show that this "homogeneous" nucleation of cloud droplets only occurs at temperatures lower than . In warmer clouds an aerosol particle or "ice nucleus" must be present in (or in contact with) the droplet to act as a nucleus. 
Our understanding of what particles make efficient ice nuclei is poor – what we do know is they are very rare compared to that cloud condensation nuclei on which liquid droplets form. Clays, desert dust and biological particles may be effective, although to what extent is unclear. Artificial nuclei are used in cloud seeding. The droplet then grows by condensation of water vapor onto the ice surfaces. Ice storm is a type of winter storm characterized by freezing rain, which produces a glaze of ice on surfaces, including roads and power lines. In the United States, a quarter of winter weather events produce glaze ice, and utilities need to be prepared to minimize damages. Hard forms Hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted up again. Hail has a diameter of or more. Within METAR code, GR is used to indicate larger hail, of a diameter of at least and GS for smaller. Stones of , and are the most frequently reported hail sizes in North America. Hailstones can grow to and weigh more than . In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud. Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing . Hail-producing clouds are often identifiable by their green coloration. The growth rate is maximized at about , and becomes vanishingly small much below as supercooled water droplets become rare. For this reason, hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of . Entrainment of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporative cooling which lowers the freezing level of thunderstorm clouds giving hail a larger volume to grow in. Accordingly, hail is actually less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes because the atmosphere over the tropics tends to be warmer over a much greater depth. Hail in the tropics occurs mainly at higher elevations. Ice pellets (METAR code PL) are a form of precipitation consisting of small, translucent balls of ice, which are usually smaller than hailstones. This form of precipitation is also referred to as "sleet" by the United States National Weather Service. (In British English "sleet" refers to a mixture of rain and snow.) Ice pellets typically form alongside freezing rain, when a wet warm front ends up between colder and drier atmospheric layers. There, raindrops would both freeze and shrink in size due to evaporative cooling. So-called snow pellets, or graupel, form when multiple water droplets freeze onto snowflakes until a soft ball-like shape is formed. 
So-called "diamond dust", (METAR code IC) also known as ice needles or ice crystals, forms at temperatures approaching due to air with slightly higher moisture from aloft mixing with colder, surface-based air. On surfaces As water drips and re-freezes, it can form hanging icicles, or stalagmite-like structures on the ground. On sloped roofs, buildup of ice can produce an ice dam, which stops melt water from draining properly and potentially leads to damaging leaks. More generally, water vapor depositing onto surfaces due to high relative humidity and then freezing results in various forms of atmospheric icing, or frost. Inside buildings, this can be seen as ice on the surface of un-insulated windows. Hoar frost is common in the environment, particularly in the low-lying areas such as valleys. In Antarctica, the temperatures can be so low that electrostatic attraction is increased to the point hoarfrost on snow sticks together when blown by wind into tumbleweed-like balls known as yukimarimo. Sometimes, drops of water crystallize on cold objects as rime instead of glaze. Soft rime has a density between a quarter and two thirds that of pure ice, due to a high proportion of trapped air, which also makes soft rime appear white. Hard rime is denser, more transparent, and more likely to appear on ships and aircraft. Cold wind specifically causes what is known as advection frost when it collides with objects. When it occurs on plants, it often causes damage to them. Various methods exist to protect agricultural crops from frost - from simply covering them to using wind machines. In recent decades, irrigation sprinklers have been calibrated to spray just enough water to preemptively create a layer of ice that would form slowly and so avoid a sudden temperature shock to the plant, and not be so thick as to cause damage with its weight. Ablation Ablation of ice refers to both its melting and its dissolution. The melting of ice entails the breaking of hydrogen bonds between the water molecules. The ordering of the molecules in the solid breaks down to a less ordered state and the solid melts to become a liquid. This is achieved by increasing the internal energy of the ice beyond the melting point. When ice melts it absorbs as much energy as would be required to heat an equivalent amount of water by 80 °C. While melting, the temperature of the ice surface remains constant at 0 °C. The rate of the melting process depends on the efficiency of the energy exchange process. An ice surface in fresh water melts solely by free convection with a rate that depends linearly on the water temperature, T∞, when T∞ is less than 3.98 °C, and superlinearly when T∞ is equal to or greater than 3.98 °C, with the rate being proportional to (T∞ − 3.98 °C)α, with α =  for T∞ much greater than 8 °C, and α =  for in between temperatures T∞. In salty ambient conditions, dissolution rather than melting often causes the ablation of ice. For example, the temperature of the Arctic Ocean is generally below the melting point of ablating sea ice. The phase transition from solid to liquid is achieved by mixing salt and water molecules, similar to the dissolution of sugar in water, even though the water temperature is far below the melting point of the sugar. However, the dissolution rate is limited by salt concentration and is therefore slower than melting. Role in human activities Cooling Ice has long been valued as a means of cooling. 
In 400 BC, Persian engineers in Iran had already developed techniques for ice storage in the desert through the summer months. During the winter, ice was transported from harvesting pools and nearby mountains in large quantities to be stored in specially designed, naturally cooled refrigerators, called yakhchal (meaning ice storage). Yakhchals were large underground spaces (up to 5000 m³) that had thick walls (at least two meters at the base) made of a specific type of mortar called sarooj, composed of sand, clay, egg whites, lime, goat hair, and ash. The mortar was resistant to heat transfer, helping to keep the ice cool enough not to melt; it was also impenetrable by water. Yakhchals often included a qanat and a system of windcatchers that could lower internal temperatures to frigid levels, even during the heat of the summer. One use for the ice was to create chilled treats for royalty. Harvesting There were thriving industries in 16th–17th century England whereby low-lying areas along the Thames Estuary were flooded during the winter; the ice was harvested by cart and stored between seasons in insulated wooden buildings, often as provision for an icehouse located at a large country house, and was widely used to keep fish fresh when caught in distant waters. This was allegedly copied by an Englishman who had seen the same activity in China. Ice was imported into England from Norway on a considerable scale as early as 1823. In the United States, the first cargo of ice was sent from New York City to Charleston, South Carolina, in 1799, and by the first half of the 19th century, ice harvesting had become a big business. Frederic Tudor, who became known as the "Ice King", worked on developing better insulation products for long distance shipments of ice, especially to the tropics; this became known as the ice trade. Between 1812 and 1822, under Lloyd Hesketh Bamford Hesketh's instruction, Gwrych Castle was built with 18 large towers, one of which is called the 'Ice Tower'. Its sole purpose was to store ice. Trieste sent ice to Egypt, Corfu, and Zante; Switzerland, to France; and Germany sometimes was supplied from Bavarian lakes. From the 1930s until 1994, the Hungarian Parliament building used ice harvested in the winter from Lake Balaton for air conditioning. Ice houses were used to store ice formed in the winter, to make ice available all year long, and an early type of refrigerator known as an icebox was cooled using a block of ice placed inside it. Many cities had a regular ice delivery service during the summer. The advent of artificial refrigeration technology made the delivery of ice obsolete. Ice is still harvested for ice and snow sculpture events. For example, a swing saw is used to get ice for the Harbin International Ice and Snow Sculpture Festival each year from the frozen surface of the Songhua River. Artificial production The earliest known written account of a process for artificially making ice appears in the 13th-century writings of the Arab historian Ibn Abu Usaybia, whose book on medicine, Kitab Uyun al-anba fi tabaqat-al-atibba, attributes the process to an even older author, Ibn Bakhtawayhi, of whom nothing is known. Ice is now produced on an industrial scale, for uses including food storage and processing, chemical manufacturing, concrete mixing and curing, and consumer or packaged ice. Most commercial icemakers produce three basic types of fragmentary ice: flake, tubular and plate, using a variety of techniques. 
Large batch ice makers can produce up to 75 tons of ice per day. In 2002, there were 426 commercial ice-making companies in the United States, with a combined value of shipments of $595,487,000. Home refrigerators can also make ice with a built-in icemaker, which will typically make ice cubes or crushed ice. The first such device was presented in 1965 by Frigidaire. Land travel Ice forming on roads is a common winter hazard, and black ice is particularly dangerous because it is very difficult to see. It is very transparent, and it often forms specifically in shaded (and therefore cooler and darker) areas, such as beneath overpasses. Whenever there is freezing rain or snow which occurs at a temperature near the melting point, it is common for ice to build up on the windows of vehicles. Often, snow melts, re-freezes, and forms a fragmented layer of ice which effectively "glues" snow to the window. In this case, the frozen mass is commonly removed with ice scrapers. A thin layer of ice crystals can also form on the inside surface of car windows during sufficiently cold weather. In the 1970s and 1980s, some vehicles, such as the Ford Thunderbird, could be upgraded with heated windshields as a result. This technology fell out of style as it was too expensive and prone to damage, but rear-window defrosters are cheaper to maintain and so are more widespread. In sufficiently cold places, the layers of ice on water surfaces can get thick enough for ice roads to be built. Some regulations specify that the minimum safe thickness is for a person, for a snowmobile and for an automobile lighter than 5 tonnes. For trucks, the required thickness varies with load; for example, a vehicle with a 9-ton total weight requires a thickness of . Notably, the speed limit for a vehicle moving on a road that just meets its minimum safe thickness is 25 km/h (15 mph), rising to 35 km/h (25 mph) if the ice is at least twice as thick as the minimum safe value. There is a known instance where a railroad has been built on ice. The most famous ice road was the Road of Life across Lake Ladoga. It operated in the winters of 1941–1942 and 1942–1943, when it was the only land route available to the Soviet Union to relieve the Siege of Leningrad by the German Army Group North. The trucks moved hundreds of thousands of tonnes of supplies into the city, and hundreds of thousands of civilians were evacuated. It is now a World Heritage Site. Water-borne travel For ships, ice presents two distinct hazards. Firstly, spray and freezing rain can produce an ice build-up on the superstructure of a vessel sufficient to make it unstable, potentially to the point of capsizing. In earlier times, crew members were regularly forced to hack off the ice build-up manually. After the 1980s, spraying de-icing chemicals or melting the ice with hot-water or steam hoses became more common. Secondly, icebergs – large masses of ice floating in water (typically created when glaciers reach the sea) – can be dangerous if struck by a ship underway. Icebergs have been responsible for the sinking of many ships, the most famous being the Titanic. For harbors near the poles, being ice-free, ideally all year long, is an important advantage. Examples are Murmansk (Russia), Petsamo (Russia, formerly Finland), and Vardø (Norway). Harbors which are not ice-free are opened up using specialized vessels, called icebreakers. Icebreakers are also used to open routes through the sea ice for other vessels, as the only alternative is to find the openings called "polynyas" or "leads". 
Widespread production of icebreakers began during the 19th century. Earlier designs simply had reinforced bows in a spoon-like or diagonal shape to effectively crush the ice. Later designs attached a forward propeller underneath the protruding bow, as the typical rear propellers were incapable of effectively steering the ship through the ice. Air travel For aircraft, ice can cause a number of dangers. As an aircraft climbs, it passes through air layers of different temperature and humidity, some of which may be conducive to ice formation. If ice forms on the wings or control surfaces, this may adversely affect the flying qualities of the aircraft. In 1919, during the first non-stop flight across the Atlantic, the British aviators Captain John Alcock and Lieutenant Arthur Whitten Brown encountered such icing conditions – Brown left the cockpit and climbed onto the wing several times to remove ice which was covering the engine air intakes of the Vickers Vimy aircraft they were flying. One vulnerability to icing that is associated with reciprocating internal combustion engines is the carburetor. As air is sucked through the carburetor into the engine, the local air pressure is lowered, which causes adiabatic cooling. Thus, in humid near-freezing conditions, the carburetor will be colder and tend to ice up. This will block the supply of air to the engine, and cause it to fail. Between 1969 and 1975, 468 such instances were recorded, causing 75 aircraft losses, 44 fatalities and 202 serious injuries. In response, carburetor air intake heaters were developed. Further, reciprocating engines with fuel injection do not require carburetors in the first place. Jet engines do not experience carburetor icing, but they can be affected when moisture inherently present in jet fuel freezes and forms ice crystals, which can potentially clog the fuel intake to the engine. Fuel heaters and/or de-icing additives are used to address the issue. Recreation and sports Ice plays a central role in winter recreation and in many sports such as ice skating, tour skating, ice hockey, bandy, ice fishing, ice climbing, curling, broomball and sled racing on bobsled, luge and skeleton. Many of the different sports played on ice get international attention every four years during the Winter Olympic Games. Small boat-like craft can be mounted on blades and be driven across the ice by sails. This sport is known as ice yachting, and it has been practiced for centuries. Another vehicular sport is ice racing, where drivers must speed on lake ice, while also controlling the skid of their vehicle (similar in some ways to dirt track racing). The sport has even been modified for ice rinks. Other uses As thermal ballast Ice is still used to cool and preserve food in portable coolers. Ice cubes or crushed ice can be used to cool drinks. As the ice melts, it absorbs heat and keeps the drink near . Ice can be used as part of an air conditioning system, using battery- or solar-powered fans to blow hot air over the ice. This is especially useful during heat waves when power is out and standard (electrically powered) air conditioners do not work. Ice can be used (like other cold packs) to reduce swelling (by decreasing blood flow) and pain by pressing it against an area of the body. As structural material Engineers used the substantial strength of pack ice when they constructed Antarctica's first floating ice pier in 1973. Such ice piers are used during cargo operations to load and offload ships. 
Fleet operations personnel make the floating pier during the winter. They build upon naturally occurring frozen seawater in McMurdo Sound until the dock reaches a depth of about . Ice piers are inherently temporary structures, although some can last as long as 10 years. Once a pier is no longer usable, it is towed to sea with an icebreaker. Structures and ice sculptures are built out of large chunks of ice or by spraying water. The structures are mostly ornamental (as in the case of ice castles) and not practical for long-term habitation. Ice hotels exist on a seasonal basis in a few cold areas. Igloos are another example of a temporary structure, made primarily from snow. Engineers can also use ice destructively. In mining, drilling holes in rock structures and then pouring in water during cold weather is an accepted alternative to using dynamite, as the rock cracks when the water expands on freezing. During World War II, Project Habbakuk was an Allied programme which investigated the use of pykrete (wood fibers mixed with ice) as a possible material for warships, especially aircraft carriers, due to the ease with which a vessel immune to torpedoes, and with a large deck, could be constructed from ice. A small-scale prototype was built, but it soon turned out that the project would cost far more than a conventional aircraft carrier, while being many times slower and also vulnerable to melting. Ice has even been used as the material for a variety of musical instruments, for example by percussionist Terje Isungset. Impacts of climate change Historical Greenhouse gas emissions from human activities unbalance the Earth's energy budget and so cause an accumulation of heat. About 90% of that heat is added to ocean heat content, 1% is retained in the atmosphere and 3-4% goes to melt major parts of the cryosphere. Between 1994 and 2017, 28 trillion tonnes of ice were lost around the globe as a result. Arctic sea ice decline accounted for the single largest loss (7.6 trillion tonnes), followed by the melting of Antarctica's ice shelves (6.5 trillion tonnes), the retreat of mountain glaciers (6.1 trillion tonnes), the melting of the Greenland ice sheet (3.8 trillion tonnes) and finally the melting of the Antarctic ice sheet (2.5 trillion tonnes) and the limited losses of the sea ice in the Southern Ocean (0.9 trillion tonnes). Other than the sea ice losses (which do not raise sea level, because floating ice already displaces its own weight of water in accordance with Archimedes' principle), these losses are a major cause of sea level rise (SLR), and they are expected to intensify in the future. In particular, the melting of the West Antarctic ice sheet may accelerate substantially as the floating ice shelves are lost and can no longer buttress the glaciers. This would trigger poorly understood marine ice sheet instability processes, which could then increase the SLR expected for the end of the century (between and , depending on future warming) by tens of centimeters more. Ice loss in Greenland and Antarctica also produces large quantities of fresh meltwater, which disrupts the Atlantic meridional overturning circulation (AMOC) and the Southern Ocean overturning circulation, respectively. These two halves of the thermohaline circulation are very important for the global climate. A continuation of high meltwater flows may cause a severe disruption (up to the point of a "collapse") of either circulation, or even both of them. Either event would be considered an example of tipping points in the climate system, because it would be extremely difficult to reverse. 
AMOC is generally not expected to collapse during the 21st century, while there is only limited knowledge about the Southern Ocean circulation. Another example of an ice-related tipping point is permafrost thaw. While the organic content in the permafrost causes carbon dioxide and methane emissions once it thaws and begins to decompose, the melting of ice also liquefies the ground, causing anything built above the former permafrost to collapse. By 2050, the economic damages from such infrastructure loss are expected to cost tens of billions of dollars. Predictions In the future, the Arctic Ocean is likely to lose effectively all of its sea ice during at least some Septembers (the end of the ice melting season), although some of the ice would refreeze during the winter. For instance, an ice-free September is likely to occur once in every 40 years if global warming is at , but would occur once in every 8 years at and once in every 1.5 years at . This would affect the regional and global climate due to the ice-albedo feedback. Because ice is highly reflective of solar energy, persistent sea ice cover lowers local temperatures. Once that ice cover melts, the darker ocean waters begin to absorb more heat, which also helps to melt the remaining ice. Global losses of sea ice between 1992 and 2018, almost all of them in the Arctic, have already had the same impact as 10% of greenhouse gas emissions over the same period. If all the Arctic sea ice were gone every year between June and September (the polar day, when the Sun is constantly shining), temperatures in the Arctic would increase by over , while global temperatures would increase by around . By 2100, at least a quarter of mountain glaciers outside of Greenland and Antarctica would melt, and effectively all ice caps on non-polar mountains are likely to be lost around 200 years after global warming reaches . The West Antarctic ice sheet is highly vulnerable and will likely disappear even if the warming does not progress further, although it could take around 2,000 years before its loss is complete. The Greenland ice sheet will most likely be lost with sustained warming between and , although its total loss requires around 10,000 years. Finally, the East Antarctic ice sheet will take at least 10,000 years to melt entirely, which requires a warming of between and . If all the ice on Earth melted, it would result in about of sea level rise, with some coming from East Antarctica. Due to isostatic rebound, the ice-free land would eventually become higher in Greenland and in Antarctica, on average. Areas in the center of each landmass would become up to and higher, respectively. The impact on global temperatures from losing West Antarctica, mountain glaciers and the Greenland ice sheet is estimated at , and , respectively, while the lack of the East Antarctic ice sheet would increase the temperatures by . Non-water The solid phases of several other volatile substances are also referred to as ices; generally a volatile is classed as an ice if its melting or sublimation point lies above or around (assuming standard atmospheric pressure). The best known example is dry ice, the solid form of carbon dioxide. Its sublimation/deposition point occurs at . A "magnetic analogue" of ice is also realized in some insulating magnetic materials in which the magnetic moments mimic the position of protons in water ice and obey energetic constraints similar to the Bernal-Fowler ice rules arising from the geometrical frustration of the proton configuration in water ice. 
These materials are called spin ice.
Physical sciences
Inorganic compounds
null
14951
https://en.wikipedia.org/wiki/Ionic%20bonding
Ionic bonding
Ionic bonding is a type of chemical bonding that involves the electrostatic attraction between oppositely charged ions, or between two atoms with sharply different electronegativities, and is the primary interaction occurring in ionic compounds. It is one of the main types of bonding, along with covalent bonding and metallic bonding. Ions are atoms (or groups of atoms) with an electrostatic charge. Atoms that gain electrons make negatively charged ions (called anions). Atoms that lose electrons make positively charged ions (called cations). This transfer of electrons is known as electrovalence in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be more complex, e.g. molecular ions like or . In simpler words, an ionic bond results from the transfer of electrons from a metal to a non-metal to obtain a full valence shell for both atoms. Clean ionic bonding — in which one atom or molecule completely transfers an electron to another — cannot exist: all ionic compounds have some degree of covalent bonding or electron sharing. Thus, the term "ionic bonding" is given when the ionic character is greater than the covalent character – that is, a bond in which there is a large difference in electronegativity between the two atoms, causing the bonding to be more polar (ionic) than in covalent bonding where electrons are shared more equally. Bonds with partially ionic and partially covalent characters are called polar covalent bonds. Ionic compounds conduct electricity when molten or in solution, typically not when solid. Ionic compounds generally have a high melting point, depending on the charge of the ions they consist of. The higher the charges the stronger the cohesive forces and the higher the melting point. They also tend to be soluble in water; the stronger the cohesive forces, the lower the solubility. Overview Atoms that have an almost full or almost empty valence shell tend to be very reactive. Strongly electronegative atoms (such as halogens) often have only one or two empty electron states in their valence shell, and frequently bond with other atoms or gain electrons to form anions. Weakly electronegative atoms (such as alkali metals) have relatively few valence electrons, which can easily be lost to strongly electronegative atoms. As a result, weakly electronegative atoms tend to distort their electron cloud and form cations. Properties of ionic bonds They are considered to be among the strongest of all types of chemical bonds. This often causes ionic compounds to be very stable. Ionic bonds have high bond energy. Bond energy is the mean amount of energy required to break the bond in the gaseous state. Most ionic compounds exist in the form of a crystal structure, in which the ions occupy the corners of the crystal. Such a structure is called a crystal lattice. Ionic compounds lose their crystal lattice structure and break up into ions when dissolved in water or any other polar solvent. This process is called solvation. The presence of these free ions makes aqueous ionic compound solutions good conductors of electricity. The same occurs when the compounds are heated above their melting point in a process known as melting. Formation Ionic bonding can result from a redox reaction when atoms of an element (usually metal), whose ionization energy is low, give some of their electrons to achieve a stable electron configuration. In doing so, cations are formed. 
An atom of another element (usually nonmetal) with greater electron affinity accepts one or more electrons to attain a stable electron configuration, and after accepting electrons the atom becomes an anion. Typically, the stable electron configuration is one of the noble gases for elements in the s-block and the p-block, and particular stable electron configurations for d-block and f-block elements. The electrostatic attraction between the anions and cations leads to the formation of a solid with a crystallographic lattice in which the ions are stacked in an alternating fashion. In such a lattice, it is usually not possible to distinguish discrete molecular units, so that the compounds formed are not molecular. However, the ions themselves can be complex and form molecular ions like the acetate anion or the ammonium cation. For example, common table salt is sodium chloride. When sodium (Na) and chlorine (Cl) are combined, the sodium atoms each lose an electron, forming cations (Na+), and the chlorine atoms each gain an electron to form anions (Cl−). These ions are then attracted to each other in a 1:1 ratio to form sodium chloride (NaCl): Na + Cl → Na+ + Cl− → NaCl. However, to maintain charge neutrality, strict ratios between anions and cations are observed so that ionic compounds, in general, obey the rules of stoichiometry despite not being molecular compounds. For compounds that are transitional to the alloys and possess mixed ionic and metallic bonding, this may not be the case anymore. Many sulfides, e.g., do form non-stoichiometric compounds. Many ionic compounds are referred to as salts as they can also be formed by the neutralization reaction of an Arrhenius base like NaOH with an Arrhenius acid like HCl: NaOH + HCl → NaCl + H2O. The salt NaCl is then said to consist of the acid rest Cl− and the base rest Na+. The removal of electrons to form the cation is endothermic, raising the system's overall energy. There may also be energy changes associated with breaking of existing bonds or the addition of more than one electron to form anions. However, the action of the anion's accepting the cation's valence electrons and the subsequent attraction of the ions to each other releases (lattice) energy and, thus, lowers the overall energy of the system. Ionic bonding will occur only if the overall energy change for the reaction is favorable. In general, the reaction is exothermic, but, e.g., the formation of mercuric oxide (HgO) is endothermic. The charge of the resulting ions is a major factor in the strength of ionic bonding, e.g. a salt C+A− is held together by electrostatic forces roughly four times weaker than C2+A2− according to Coulomb's law, where C and A represent a generic cation and anion respectively. The sizes of the ions and the particular packing of the lattice are ignored in this rather simplistic argument. Structures Ionic compounds in the solid state form lattice structures. The two principal factors in determining the form of the lattice are the relative charges of the ions and their relative sizes. Some structures are adopted by a number of compounds; for example, the structure of the rock salt sodium chloride is also adopted by many alkali halides, and binary oxides such as magnesium oxide. Pauling's rules provide guidelines for predicting and rationalizing the crystal structures of ionic crystals. Strength of the bonding For a solid crystalline ionic compound the enthalpy change in forming the solid from gaseous ions is termed the lattice energy. 
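As a rough numerical illustration of the charge dependence implied by Coulomb's law and of the magnitude of lattice energies, the sketch below evaluates the Born–Landé expression (introduced in the next paragraph) for NaCl. The Madelung constant (about 1.748), interionic distance (about 282 pm) and Born exponent (about 8) used here are typical textbook values assumed for the illustration, not figures taken from this article.

import math

N_A = 6.02214e23         # Avogadro constant, mol^-1
E_CHARGE = 1.602177e-19  # elementary charge, C
EPS0 = 8.854188e-12      # vacuum permittivity, F m^-1

def born_lande(z_plus, z_minus, madelung, r0_m, born_n):
    """Born-Lande lattice energy in kJ/mol (negative = energy released on forming the solid)."""
    coulomb = N_A * madelung * z_plus * z_minus * E_CHARGE**2 / (4 * math.pi * EPS0 * r0_m)
    return -coulomb * (1 - 1 / born_n) / 1000.0

# NaCl with the assumed textbook parameters: about -750 kJ/mol, close to the
# -756 kJ/mol calculated value quoted later in the text.
print(born_lande(1, 1, 1.7476, 282e-12, 8))

# Doubling both charges at the same separation scales the electrostatic term by a
# factor of four, the Coulomb's-law comparison of C+A- with C2+A2- made above.
print(born_lande(2, 2, 1.7476, 282e-12, 8))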
The experimental value for the lattice energy can be determined using the Born–Haber cycle. It can also be calculated (predicted) using the Born–Landé equation as the sum of the electrostatic potential energy, calculated by summing interactions between cations and anions, and a short-range repulsive potential energy term. The electrostatic potential can be expressed in terms of the interionic separation and a constant (Madelung constant) that takes account of the geometry of the crystal. The further away from the nucleus the weaker the shield. The Born–Landé equation gives a reasonable fit to the lattice energy of, e.g., sodium chloride, where the calculated (predicted) value is −756 kJ/mol, which compares to −787 kJ/mol using the Born–Haber cycle. In aqueous solution the binding strength can be described by the Bjerrum or Fuoss equation as a function of the ion charges, largely independent of the nature of the ions, such as their polarizability or size. The strength of salt bridges is most often evaluated by measurements of equilibria between molecules containing cationic and anionic sites, usually in solution. Equilibrium constants in water indicate additive free energy contributions for each salt bridge. Another method for the identification of hydrogen bonds in complicated molecules is crystallography, and sometimes also NMR spectroscopy. The attractive forces defining the strength of ionic bonding can be modeled by Coulomb's law. Ionic bond strengths are typically (cited ranges vary) between 170 and 1500 kJ/mol. Polarization power effects Ions in crystal lattices of purely ionic compounds are spherical; however, if the positive ion is small and/or highly charged, it will distort the electron cloud of the negative ion, an effect summarised in Fajans' rules. This polarization of the negative ion leads to a build-up of extra charge density between the two nuclei, that is, to partial covalency. Larger negative ions are more easily polarized, but the effect is usually important only when positive ions with charges of 3+ (e.g., Al3+) are involved. However, 2+ ions (Be2+) or even 1+ (Li+) show some polarizing power because their sizes are so small (e.g., LiI is ionic but has some covalent bonding present). Note that this is not the ionic polarization effect that refers to the displacement of ions in the lattice due to the application of an electric field. Comparison with covalent bonding In ionic bonding, the atoms are bound by the attraction of oppositely charged ions, whereas, in covalent bonding, atoms are bound by sharing electrons to attain stable electron configurations. In covalent bonding, the molecular geometry around each atom is determined by valence shell electron pair repulsion (VSEPR) rules, whereas, in ionic materials, the geometry follows maximum packing rules. One could say that covalent bonding is more directional in the sense that the energy penalty for not adhering to the optimum bond angles is large, whereas ionic bonding has no such penalty. There are no shared electron pairs to repel each other; the ions should simply be packed as efficiently as possible. This often leads to much higher coordination numbers. In NaCl, each ion has 6 bonds and all bond angles are 90°. In CsCl the coordination number is 8. By comparison, carbon typically has a maximum of four bonds. Purely ionic bonding cannot exist, as the proximity of the entities involved in the bonding allows some degree of sharing electron density between them. Therefore, all ionic bonding has some covalent character. 
Thus, bonding is considered ionic where the ionic character is greater than the covalent character. The larger the difference in electronegativity between the two types of atoms involved in the bonding, the more ionic (polar) it is. Bonds with partially ionic and partially covalent character are called polar covalent bonds. For example, Na–Cl and Mg–O interactions have a few percent covalency, while Si–O bonds are usually ~50% ionic and ~50% covalent. Pauling estimated that an electronegativity difference of 1.7 (on the Pauling scale) corresponds to 50% ionic character, so that a difference greater than 1.7 corresponds to a bond which is predominantly ionic. Ionic character in covalent bonds can be directly measured for atoms having quadrupolar nuclei (2H, 14N, 81,79Br, 35,37Cl or 127I). These nuclei are generally the objects of nuclear quadrupole resonance (NQR) and nuclear magnetic resonance (NMR) studies. Interactions between the nuclear quadrupole moments Q and the electric field gradients (EFG) are characterized via the nuclear quadrupole coupling constant QCC = e²qzzQ/h, where the eqzz term corresponds to the principal component of the EFG tensor, e is the elementary charge, and h is the Planck constant. In turn, the electric field gradient opens the way to the description of bonding modes in molecules when the QCC values are accurately determined by NMR or NQR methods. In general, when ionic bonding occurs in the solid (or liquid) state, it is not possible to talk about a single "ionic bond" between two individual atoms, because the cohesive forces that keep the lattice together are of a more collective nature. This is quite different in the case of covalent bonding, where we can often speak of a distinct bond localized between two particular atoms. However, even if ionic bonding is combined with some covalency, the result is not necessarily discrete bonds of a localized character. In such cases, the resulting bonding often requires description in terms of a band structure consisting of gigantic molecular orbitals spanning the entire crystal. Thus, the bonding in the solid often retains its collective rather than localized nature. When the difference in electronegativity is decreased, the bonding may then lead to a semiconductor, a semimetal or eventually a metallic conductor with metallic bonding.
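Pauling's 50% figure can be illustrated with the empirical relation he proposed for estimating fractional ionic character from the electronegativity difference. The sketch below uses a common textbook form of that relation (an assumption, since the article itself quotes only the threshold), together with assumed Pauling electronegativities of roughly 1.90 for Si and 3.44 for O.

import math

def ionic_character(delta_chi):
    """Pauling's empirical estimate of fractional ionic character from the
    electronegativity difference (textbook form, assumed here)."""
    return 1.0 - math.exp(-0.25 * delta_chi**2)

# A difference of 1.7 on the Pauling scale gives roughly 51% ionic character,
# matching the ~50% threshold quoted above.
print(ionic_character(1.7))

# Si-O, with assumed electronegativities of about 1.90 (Si) and 3.44 (O), comes
# out near 45% ionic, consistent with the ~50/50 description in the text.
print(ionic_character(3.44 - 1.90))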
Physical sciences
Chemical bonds
null
14958
https://en.wikipedia.org/wiki/Immune%20system
Immune system
The immune system is a network of biological systems that protects an organism from diseases. It detects and responds to a wide variety of pathogens, from viruses to bacteria, as well as cancer cells, parasitic worms, and also objects such as wood splinters, distinguishing them from the organism's own healthy tissue. Many species have two major subsystems of the immune system. The innate immune system provides a preconfigured response to broad groups of situations and stimuli. The adaptive immune system provides a tailored response to each stimulus by learning to recognize molecules it has previously encountered. Both use molecules and cells to perform their functions. Nearly all organisms have some kind of immune system. Bacteria have a rudimentary immune system in the form of enzymes that protect against viral infections. Other basic immune mechanisms evolved in ancient plants and animals and remain in their modern descendants. These mechanisms include phagocytosis, antimicrobial peptides called defensins, and the complement system. Jawed vertebrates, including humans, have even more sophisticated defense mechanisms, including the ability to adapt to recognize pathogens more efficiently. Adaptive (or acquired) immunity creates an immunological memory leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination. Dysfunction of the immune system can cause autoimmune diseases, inflammatory diseases and cancer. Immunodeficiency occurs when the immune system is less active than normal, resulting in recurring and life-threatening infections. In humans, immunodeficiency can be the result of a genetic disease such as severe combined immunodeficiency, acquired conditions such as HIV/AIDS, or the use of immunosuppressive medication. Autoimmunity results from a hyperactive immune system attacking normal tissues as if they were foreign organisms. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Immunology covers the study of all aspects of the immune system. Layered defense The immune system protects its host from infection with layered defenses of increasing specificity. Physical barriers prevent pathogens such as bacteria and viruses from entering the organism. If a pathogen breaches these barriers, the innate immune system provides an immediate, but non-specific response. Innate immune systems are found in all animals. If pathogens successfully evade the innate response, vertebrates possess a second layer of protection, the adaptive immune system, which is activated by the innate response. Here, the immune system adapts its response during an infection to improve its recognition of the pathogen. This improved response is then retained after the pathogen has been eliminated, in the form of an immunological memory, and allows the adaptive immune system to mount faster and stronger attacks each time this pathogen is encountered. Both innate and adaptive immunity depend on the ability of the immune system to distinguish between self and non-self molecules. In immunology, self molecules are components of an organism's body that can be distinguished from foreign substances by the immune system. Conversely, non-self molecules are those recognized as foreign molecules. 
One class of non-self molecules are called antigens (originally named for being antibody generators) and are defined as substances that bind to specific immune receptors and elicit an immune response. Surface barriers Several barriers protect organisms from infection, including mechanical, chemical, and biological barriers. The waxy cuticle of most leaves, the exoskeleton of insects, the shells and membranes of externally deposited eggs, and skin are examples of mechanical barriers that are the first line of defense against infection. Organisms cannot be completely sealed from their environments, so systems act to protect body openings such as the lungs, intestines, and the genitourinary tract. In the lungs, coughing and sneezing mechanically eject pathogens and other irritants from the respiratory tract. The flushing action of tears and urine also mechanically expels pathogens, while mucus secreted by the respiratory and gastrointestinal tract serves to trap and entangle microorganisms. Chemical barriers also protect against infection. The skin and respiratory tract secrete antimicrobial peptides such as the β-defensins. Enzymes such as lysozyme and phospholipase A2 in saliva, tears, and breast milk are also antibacterials. Vaginal secretions serve as a chemical barrier following menarche, when they become slightly acidic, while semen contains defensins and zinc to kill pathogens. In the stomach, gastric acid serves as a chemical defense against ingested pathogens. Within the genitourinary and gastrointestinal tracts, commensal flora serve as biological barriers by competing with pathogenic bacteria for food and space and, in some cases, changing the conditions in their environment, such as pH or available iron. As a result, the probability that pathogens will reach sufficient numbers to cause illness is reduced. Innate immune system Microorganisms or toxins that successfully enter an organism encounter the cells and mechanisms of the innate immune system. The innate response is usually triggered when microbes are identified by pattern recognition receptors, which recognize components that are conserved among broad groups of microorganisms, or when damaged, injured or stressed cells send out alarm signals, many of which are recognized by the same receptors as those that recognize pathogens. Innate immune defenses are non-specific, meaning these systems respond to pathogens in a generic way. This system does not confer long-lasting immunity against a pathogen. The innate immune system is the dominant system of host defense in most organisms, and the only one in plants. Immune sensing Cells in the innate immune system use pattern recognition receptors to recognize molecular structures that are produced by pathogens. They are proteins expressed, mainly, by cells of the innate immune system, such as dendritic cells, macrophages, monocytes, neutrophils, and epithelial cells, to identify two classes of molecules: pathogen-associated molecular patterns (PAMPs), which are associated with microbial pathogens, and damage-associated molecular patterns (DAMPs), which are associated with components of host's cells that are released during cell damage or cell death. Recognition of extracellular or endosomal PAMPs is mediated by transmembrane proteins known as toll-like receptors (TLRs). TLRs share a typical structural motif, the leucine rich repeats (LRRs), which give them a curved shape. 
Toll-like receptors were first discovered in Drosophila and trigger the synthesis and secretion of cytokines and activation of other host defense programs that are necessary for both innate and adaptive immune responses. Ten toll-like receptors have been described in humans. Cells in the innate immune system also have pattern recognition receptors located inside the cell, which detect infection or cell damage. Three major classes of these "cytosolic" receptors are NOD–like receptors, RIG (retinoic acid-inducible gene)-like receptors, and cytosolic DNA sensors. Innate immune cells Some leukocytes (white blood cells) act like independent, single-celled organisms and are the second arm of the innate immune system. The innate leukocytes include the "professional" phagocytes (macrophages, neutrophils, and dendritic cells). These cells identify and eliminate pathogens, either by attacking larger pathogens through contact or by engulfing and then killing microorganisms. The other cells involved in the innate response include innate lymphoid cells, mast cells, eosinophils, basophils, and natural killer cells. Phagocytosis is an important feature of cellular innate immunity performed by cells called phagocytes that engulf pathogens or particles. Phagocytes generally patrol the body searching for pathogens, but can be called to specific locations by cytokines. Once a pathogen has been engulfed by a phagocyte, it becomes trapped in an intracellular vesicle called a phagosome, which subsequently fuses with another vesicle called a lysosome to form a phagolysosome. The pathogen is killed by the activity of digestive enzymes or following a respiratory burst that releases free radicals into the phagolysosome. Phagocytosis evolved as a means of acquiring nutrients, but this role was extended in phagocytes to include engulfment of pathogens as a defense mechanism. Phagocytosis probably represents the oldest form of host defense, as phagocytes have been identified in both vertebrate and invertebrate animals. Neutrophils and macrophages are phagocytes that travel throughout the body in pursuit of invading pathogens. Neutrophils are normally found in the bloodstream and are the most abundant type of phagocyte, representing 50% to 60% of total circulating leukocytes. During the acute phase of inflammation, neutrophils migrate toward the site of inflammation in a process called chemotaxis and are usually the first cells to arrive at the scene of infection. Macrophages are versatile cells that reside within tissues and produce an array of chemicals including enzymes, complement proteins, and cytokines. They can also act as scavengers that rid the body of worn-out cells and other debris and as antigen-presenting cells (APCs) that activate the adaptive immune system. Dendritic cells are phagocytes in tissues that are in contact with the external environment; therefore, they are located mainly in the skin, nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, as both have many spine-like projections. Dendritic cells serve as a link between the bodily tissues and the innate and adaptive immune systems, as they present antigens to T cells, one of the key cell types of the adaptive immune system. Granulocytes are leukocytes that have granules in their cytoplasm. In this category are neutrophils, mast cells, basophils, and eosinophils. Mast cells reside in connective tissues and mucous membranes and regulate the inflammatory response. They are most often associated with allergy and anaphylaxis. 
Basophils and eosinophils are related to neutrophils. They secrete chemical mediators that are involved in defending against parasites and play a role in allergic reactions, such as asthma. Innate lymphoid cells (ILCs) are a group of innate immune cells that are derived from common lymphoid progenitor and belong to the lymphoid lineage. These cells are defined by the absence of antigen-specific B- or T-cell receptor (TCR) because of the lack of recombination activating gene. ILCs do not express myeloid or dendritic cell markers. Natural killer cells (NK cells) are lymphocytes and a component of the innate immune system that does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self". This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex)—a situation that can arise in viral infections of host cells. Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors, which essentially put the brakes on NK cells. Inflammation Inflammation is one of the first responses of the immune system to infection. The symptoms of inflammation are redness, swelling, heat, and pain, which are caused by increased blood flow into tissue. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have antiviral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote the healing of any damaged tissue following the removal of pathogens. The pattern-recognition receptors called inflammasomes are multiprotein complexes (consisting of an NLR, the adaptor protein ASC, and the effector molecule pro-caspase-1) that form in response to cytosolic PAMPs and DAMPs, whose function is to generate active forms of the inflammatory cytokines IL-1β and IL-18. Humoral defenses The complement system is a biochemical cascade that attacks the surfaces of foreign cells. It contains over 20 different proteins and is named for its ability to "complement" the killing of pathogens by antibodies. Complement is the major humoral component of the innate immune response. Many species have complement systems, including non-mammals like plants, fish, and some invertebrates. In humans, this response is activated by complement binding to antibodies that have attached to these microbes or the binding of complement proteins to carbohydrates on the surfaces of microbes. This recognition signal triggers a rapid killing response. The speed of the response is a result of signal amplification that occurs after sequential proteolytic activation of complement molecules, which are also proteases. After complement proteins initially bind to the microbe, they activate their protease activity, which in turn activates other complement proteases, and so on. 
This produces a catalytic cascade that amplifies the initial signal by controlled positive feedback. The cascade results in the production of peptides that attract immune cells, increase vascular permeability, and opsonize (coat) the surface of a pathogen, marking it for destruction. This deposition of complement can also kill cells directly by disrupting their plasma membrane via the formation of a membrane attack complex. Adaptive immune system The adaptive immune system evolved in early vertebrates and allows for a stronger immune response as well as immunological memory, where each pathogen is "remembered" by a signature antigen. The adaptive immune response is antigen-specific and requires the recognition of specific "non-self" antigens during a process called antigen presentation. Antigen specificity allows for the generation of responses that are tailored to specific pathogens or pathogen-infected cells. The ability to mount these tailored responses is maintained in the body by "memory cells". Should a pathogen infect the body more than once, these specific memory cells are used to quickly eliminate it. Recognition of antigen The cells of the adaptive immune system are special types of leukocytes, called lymphocytes. B cells and T cells are the major types of lymphocytes and are derived from hematopoietic stem cells in the bone marrow. B cells are involved in the humoral immune response, whereas T cells are involved in cell-mediated immune response. Killer T cells only recognize antigens coupled to Class I MHC molecules, while helper T cells and regulatory T cells only recognize antigens coupled to Class II MHC molecules. These two mechanisms of antigen presentation reflect the different roles of the two types of T cell. A third, minor subtype are the γδ T cells that recognize intact antigens that are not bound to MHC receptors. The double-positive T cells are exposed to a wide variety of self-antigens in the thymus, in which iodine is necessary for its thymus development and activity. In contrast, the B cell antigen-specific receptor is an antibody molecule on the B cell surface and recognizes native (unprocessed) antigen without any need for antigen processing. Such antigens may be large molecules found on the surfaces of pathogens, but can also be small haptens (such as penicillin) attached to carrier molecule. Each lineage of B cell expresses a different antibody, so the complete set of B cell antigen receptors represent all the antibodies that the body can manufacture. When B or T cells encounter their related antigens they multiply and many "clones" of the cells are produced that target the same antigen. This is called clonal selection. Antigen presentation to T lymphocytes Both B cells and T cells carry receptor molecules that recognize specific targets. T cells recognize a "non-self" target, such as a pathogen, only after antigens (small fragments of the pathogen) have been processed and presented in combination with a "self" receptor called a major histocompatibility complex (MHC) molecule. Cell mediated immunity There are two major subtypes of T cells: the killer T cell and the helper T cell. In addition there are regulatory T cells which have a role in modulating immune response. Killer T cells Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional. As with B cells, each type of T cell recognizes a different antigen. 
Killer T cells are activated when their T-cell receptor binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of another toxin called granulysin (a protease) induces the target cell to undergo apoptosis. T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below). Helper T cells Helper T cells regulate both the innate and adaptive immune responses and help determine which immune responses the body makes to a particular pathogen. These cells have no cytotoxic activity and do not kill infected cells or clear pathogens directly. They instead control the immune response by directing other cells to perform these tasks. Helper T cells express T cell receptors that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (such as Lck) that are responsible for the T cell's activation. Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires longer duration of engagement with an antigen-presenting cell. The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells. In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells. Gamma delta T cells Gamma delta T cells (γδ T cells) possess an alternative T-cell receptor (TCR) as opposed to CD4+ and CD8+ (αβ) T cells and share the characteristics of helper T cells, cytotoxic T cells and NK cells. The conditions that produce responses from γδ T cells are not fully understood. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as CD1d-restricted natural killer T cells, γδ T cells straddle the border between innate and adaptive immunity. On one hand, γδ T cells are a component of adaptive immunity as they rearrange TCR genes to produce receptor diversity and can also develop a memory phenotype. On the other hand, the various subsets are also part of the innate immune system, as restricted TCR or NK receptors may be used as pattern recognition receptors. For example, large numbers of human Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted Vδ1+ T cells in epithelia respond to stressed epithelial cells. 
Humoral immune response A B cell identifies pathogens when antibodies on its surface bind to a specific foreign antigen. This antigen/antibody complex is taken up by the B cell and processed by proteolysis into peptides. The B cell then displays these antigenic peptides on its surface MHC class II molecules. This combination of MHC and antigen attracts a matching helper T cell, which releases lymphokines and activates the B cell. As the activated B cell then begins to divide, its offspring (plasma cells) secrete millions of copies of the antibody that recognizes this antigen. These antibodies circulate in blood plasma and lymph, bind to pathogens expressing the antigen and mark them for destruction by complement activation or for uptake and destruction by phagocytes. Antibodies can also neutralize challenges directly, by binding to bacterial toxins or by interfering with the receptors that viruses and bacteria use to infect cells. Newborn infants have no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. During pregnancy, a particular type of antibody, called IgG, is transported from mother to baby directly through the placenta, so human babies have high levels of antibodies even at birth, with the same range of antigen specificities as their mother. Breast milk or colostrum also contains antibodies that are transferred to the gut of the infant and protect against bacterial infections until the newborn can synthesize its own antibodies. This is passive immunity because the fetus does not actually make any memory cells or antibodies—it only borrows them. This passive immunity is usually short-term, lasting from a few days up to several months. In medicine, protective passive immunity can also be transferred artificially from one individual to another. Immunological memory When B cells and T cells are activated and begin to replicate, some of their offspring become long-lived memory cells. Throughout the lifetime of an animal, these memory cells remember each specific pathogen encountered and can mount a strong response if the pathogen is detected again. T-cells recognize pathogens by small protein-based infection signals, called antigens, that bind directly to T-cell surface receptors. B-cells use the protein immunoglobulin to recognize pathogens by their antigens. This is "adaptive" because it occurs during the lifetime of an individual as an adaptation to infection with that pathogen and prepares the immune system for future challenges. Immunological memory can be in the form of either passive short-term memory or active long-term memory. Physiological regulation The immune system is involved in many aspects of physiological regulation in the body. The immune system interacts intimately with other systems, such as the endocrine and the nervous systems. The immune system also plays a crucial role in embryogenesis (development of the embryo), as well as in tissue repair and regeneration. Hormones Hormones can act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. By contrast, male sex hormones such as testosterone seem to be immunosuppressive. 
Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D. Vitamin D Although cellular studies indicate that vitamin D has receptors and probable functions in the immune system, there is no clinical evidence to prove that vitamin D deficiency increases the risk for immune diseases or vitamin D supplementation lowers immune disease risk. A 2011 United States Institute of Medicine report stated that "outcomes related to ... immune functioning and autoimmune disorders, and infections ... could not be linked reliably with calcium or vitamin D intake and were often conflicting." Sleep and rest The immune system is affected by sleep and rest, and sleep deprivation is detrimental to immune function. Complex feedback loops involving cytokines, such as interleukin-1 and tumor necrosis factor-α produced in response to infection, appear to also play a role in the regulation of non-rapid eye movement (REM) sleep. Thus the immune response to infection may result in changes to the sleep cycle, including an increase in slow-wave sleep relative to REM sleep. In people with sleep deprivation, active immunizations may have a diminished effect and may result in lower antibody production, and a lower immune response, than would be noted in a well-rested individual. Additionally, proteins such as NFIL3, which have been shown to be closely intertwined with both T-cell differentiation and circadian rhythms, can be affected through the disturbance of natural light and dark cycles through instances of sleep deprivation. These disruptions can lead to an increase in chronic conditions such as heart disease, chronic pain, and asthma. In addition to the negative consequences of sleep deprivation, sleep and the intertwined circadian system have been shown to have strong regulatory effects on immunological functions affecting both innate and adaptive immunity. First, during the early slow-wave-sleep stage, a sudden drop in blood levels of cortisol, epinephrine, and norepinephrine causes increased blood levels of the hormones leptin, pituitary growth hormone, and prolactin. These signals induce a pro-inflammatory state through the production of the pro-inflammatory cytokines interleukin-1, interleukin-12, TNF-alpha and IFN-gamma. These cytokines then stimulate immune functions such as immune cell activation, proliferation, and differentiation. During this time of a slowly evolving adaptive immune response, there is a peak in undifferentiated or less differentiated cells, like naïve and central memory T cells. In addition to these effects, the milieu of hormones produced at this time (leptin, pituitary growth hormone, and prolactin) supports the interactions between APCs and T-cells, a shift of the Th1/Th2 cytokine balance towards one that supports Th1, an increase in overall Th cell proliferation, and naïve T cell migration to lymph nodes. This is also thought to support the formation of long-lasting immune memory through the initiation of Th1 immune responses. During wake periods, differentiated effector cells, such as cytotoxic natural killer cells and cytotoxic T lymphocytes, peak to elicit an effective response against any intruding pathogens. Anti-inflammatory molecules, such as cortisol and catecholamines, also peak during awake active times. Inflammation would cause serious cognitive and physical impairments if it were to occur during wake times, and inflammation may occur during sleep times due to the presence of melatonin. 
Inflammation causes a great deal of oxidative stress, and the presence of melatonin during sleep times could actively counteract free radical production during this time. Physical exercise Physical exercise has a positive effect on the immune system; depending on its frequency and intensity, it moderates the pathogenic effects of diseases caused by bacteria and viruses. Immediately after intense exercise there is a transient immunodepression, where the number of circulating lymphocytes decreases and antibody production declines. This may give rise to a window of opportunity for infection and reactivation of latent virus infections, but the evidence is inconclusive. Changes at the cellular level During exercise there is an increase in circulating white blood cells of all types. This is caused by the frictional force of blood flowing on the endothelial cell surface and catecholamines affecting β-adrenergic receptors (βARs). The number of neutrophils in the blood increases and remains raised for up to six hours, and immature forms are present. Although the increase in neutrophils ("neutrophilia") is similar to that seen during bacterial infections, after exercise the cell population returns to normal by around 24 hours. The number of circulating lymphocytes (mainly natural killer cells) decreases during intense exercise but returns to normal after 4 to 6 hours. Although up to 2% of the cells die, most migrate from the blood to the tissues, mainly the intestines and lungs, where pathogens are most likely to be encountered. Some monocytes leave the blood circulation and migrate to the muscles, where they differentiate and become macrophages. These cells differentiate into two types: proliferative macrophages, which are responsible for increasing the number of stem cells, and restorative macrophages, which are involved in their maturation into muscle cells. Repair and regeneration The immune system, particularly the innate component, plays a decisive role in tissue repair after an insult. Key actors include macrophages and neutrophils, but other cellular actors, including γδ T cells, innate lymphoid cells (ILCs), and regulatory T cells (Tregs), are also important. The plasticity of immune cells and the balance between pro-inflammatory and anti-inflammatory signals are crucial aspects of efficient tissue repair. Immune components and pathways are involved in regeneration as well, for example in amphibians, such as axolotl limb regeneration. According to one hypothesis, organisms that can regenerate (e.g., axolotls) could be less immunocompetent than organisms that cannot regenerate. Disorders of human immunity Failures of host defense fall into three broad categories: immunodeficiencies, autoimmunity, and hypersensitivities. Immunodeficiencies Immunodeficiencies occur when one or more of the components of the immune system are inactive. The ability of the immune system to respond to pathogens is diminished in both the young and the elderly, with immune responses beginning to decline at around 50 years of age due to immunosenescence. In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function, while malnutrition is the most common cause of immunodeficiency in developing countries. Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production. 
Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection. Immunodeficiencies can also be inherited or 'acquired'. Severe combined immunodeficiency is a rare genetic disorder characterized by the disturbed development of functional T cells and B cells caused by numerous genetic mutations. Chronic granulomatous disease, where phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency. Autoimmunity Overactive immune responses form the other end of immune dysfunction, particularly the autoimmune diseases. Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides. One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Hypersensitivity Hypersensitivity is an immune response that damages the body's own tissues. It is divided into four classes (Type I – IV) based on the mechanisms involved and the time course of the hypersensitive reaction. Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen. Type II hypersensitivity occurs when antibodies bind to antigens on the individual's own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies. Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions. Type IV hypersensitivity (also known as cell-mediated or delayed type hypersensitivity) usually takes between two and three days to develop. Type IV reactions are involved in many autoimmune and infectious diseases, but may also involve contact dermatitis. These reactions are mediated by T cells, monocytes, and macrophages. Idiopathic inflammation Inflammation is one of the first responses of the immune system to infection, but it can appear without known cause. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have anti-viral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote healing of any damaged tissue following the removal of pathogens. 
Manipulation in medicine The immune response can be manipulated to suppress unwanted responses resulting from autoimmunity, allergy, and transplant rejection, and to stimulate protective responses against pathogens that largely elude the immune system (see immunization) or cancer. Immunosuppression Immunosuppressive drugs are used to control autoimmune disorders or inflammation when excessive tissue damage occurs, and to prevent rejection after an organ transplant. Anti-inflammatory drugs are often used to control the effects of inflammation. Glucocorticoids are the most powerful of these drugs and can have many undesirable side effects, such as central obesity, hyperglycemia, and osteoporosis. Their use is tightly controlled. Lower doses of anti-inflammatory drugs are often used in conjunction with cytotoxic or immunosuppressive drugs such as methotrexate or azathioprine. Cytotoxic drugs inhibit the immune response by killing dividing cells such as activated T cells. This killing is indiscriminate and other constantly dividing cells and their organs are affected, which causes toxic side effects. Immunosuppressive drugs such as cyclosporin prevent T cells from responding to signals correctly by inhibiting signal transduction pathways. Immunostimulation Claims made by marketers of various products and alternative health providers, such as chiropractors, homeopaths, and acupuncturists to be able to stimulate or "boost" the immune system generally lack meaningful explanation and evidence of effectiveness. Vaccination Long-term active memory is acquired following infection by activation of B and T cells. Active immunity can also be generated artificially, through vaccination. The principle behind vaccination (also called immunization) is to introduce an antigen from a pathogen to stimulate the immune system and develop specific immunity against that particular pathogen without causing disease associated with that organism. This deliberate induction of an immune response is successful because it exploits the natural specificity of the immune system, as well as its inducibility. With infectious disease remaining one of the leading causes of death in the human population, vaccination represents the most effective manipulation of the immune system mankind has developed. Many vaccines are based on acellular components of micro-organisms, including harmless toxin components. Since many antigens derived from acellular vaccines do not strongly induce the adaptive response, most bacterial vaccines are provided with additional adjuvants that activate the antigen-presenting cells of the innate immune system and maximize immunogenicity. Tumor immunology Another important role of the immune system is to identify and eliminate tumors. This is called immune surveillance. The transformed cells of tumors express antigens that are not found on normal cells. To the immune system, these antigens appear foreign, and their presence causes immune cells to attack the transformed tumor cells. The antigens expressed by tumors have several sources; some are derived from oncogenic viruses like human papillomavirus, which causes cancer of the cervix, vulva, vagina, penis, anus, mouth, and throat, while others are the organism's own proteins that occur at low levels in normal cells but reach high levels in tumor cells. One example is an enzyme called tyrosinase that, when expressed at high levels, transforms certain skin cells (for example, melanocytes) into tumors called melanomas. 
A third possible source of tumor antigens is proteins normally important for regulating cell growth and survival, which commonly mutate into cancer-inducing molecules called oncogenes. The main response of the immune system to tumors is to destroy the abnormal cells using killer T cells, sometimes with the assistance of helper T cells. Tumor antigens are presented on MHC class I molecules in a similar way to viral antigens. This allows killer T cells to recognize the tumor cell as abnormal. NK cells also kill tumorous cells in a similar way, especially if the tumor cells have fewer MHC class I molecules on their surface than normal; this is a common phenomenon with tumors. Sometimes antibodies are generated against tumor cells, allowing for their destruction by the complement system. Some tumors evade the immune system and go on to become cancers. Tumor cells often have a reduced number of MHC class I molecules on their surface, thus avoiding detection by killer T cells. Some tumor cells also release products that inhibit the immune response; for example by secreting the cytokine TGF-β, which suppresses the activity of macrophages and lymphocytes. In addition, immunological tolerance may develop against tumor antigens, so the immune system no longer attacks the tumor cells. Paradoxically, macrophages can promote tumor growth when tumor cells send out cytokines that attract macrophages, which then generate cytokines and growth factors such as tumor-necrosis factor alpha that nurture tumor development or promote stem-cell-like plasticity. In addition, a combination of hypoxia in the tumor and a cytokine produced by macrophages induces tumor cells to decrease production of a protein that blocks metastasis and thereby assists spread of cancer cells. Anti-tumor M1 macrophages are recruited in the early phases of tumor development but progressively differentiate to M2 macrophages with a pro-tumor effect, an immunosuppressive switch. Hypoxia reduces the cytokine production needed for the anti-tumor response, and the macrophages progressively acquire pro-tumor M2 functions driven by the tumor microenvironment, including IL-4 and IL-10. Cancer immunotherapy covers the medical approaches that stimulate the immune system to attack tumors. Predicting immunogenicity Some drugs can cause a neutralizing immune response, meaning that the immune system produces neutralizing antibodies that counteract the action of the drugs, particularly if the drugs are administered repeatedly, or in larger doses. This limits the effectiveness of drugs based on larger peptides and proteins (which are typically larger than 6000 Da). In some cases, the drug itself is not immunogenic, but may be co-administered with an immunogenic compound, as is sometimes the case for Taxol. Computational methods have been developed to predict the immunogenicity of peptides and proteins, which are particularly useful in designing therapeutic antibodies, assessing the likely virulence of mutations in viral coat particles, and validating proposed peptide-based drug treatments. Early techniques relied mainly on the observation that hydrophilic amino acids are overrepresented in epitope regions relative to hydrophobic amino acids; however, more recent developments rely on machine learning techniques using databases of existing known epitopes, usually on well-studied virus proteins, as a training set. A publicly accessible database has been established for the cataloguing of epitopes from pathogens known to be recognizable by B cells. 
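As a rough illustration of the early hydrophilicity-based idea described above, the sketch below scores sliding windows of a made-up peptide and reports the most hydrophilic stretches as crude epitope candidates. It is not the method of any particular published tool; the Kyte–Doolittle hydropathy scale is negated here as a simple hydrophilicity proxy, and the demo sequence is invented.

```python
# Toy hydrophilicity-window scorer for candidate B-cell epitope regions.
# Illustrative only: negated Kyte-Doolittle hydropathy is used as a crude
# hydrophilicity proxy, and the demo sequence is made up.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
      "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
      "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def hydrophilicity_windows(seq: str, window: int = 7):
    """Mean hydrophilicity (= -hydropathy) for each sliding window of the sequence."""
    scores = [-KD[aa] for aa in seq]
    return [(i, seq[i:i + window], sum(scores[i:i + window]) / window)
            for i in range(len(seq) - window + 1)]

demo = "MKTLLVAGDKRENNSDEQRKLAAVILG"   # hypothetical peptide, for illustration only
top = sorted(hydrophilicity_windows(demo), key=lambda t: t[2], reverse=True)[:3]
for start, subseq, score in top:
    print(f"{start:3d}  {subseq}  {score:+.2f}")
```

Modern predictors replace this single physicochemical feature with models trained on databases of experimentally verified epitopes, but the windowed-scoring structure is similar.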
The emerging field of bioinformatics-based studies of immunogenicity is referred to as immunoinformatics. Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response. Evolution and other mechanisms Evolution of the immune system It is likely that a multicomponent, adaptive immune system arose with the first vertebrates, as invertebrates do not generate lymphocytes or an antibody-based humoral response. Immune systems evolved in deuterostomes as shown in the cladogram. Many species, however, use mechanisms that appear to be precursors of these aspects of vertebrate immunity. Immune systems appear even in the structurally simplest forms of life, with bacteria using a unique defense mechanism, called the restriction modification system to protect themselves from viral pathogens, called bacteriophages. Prokaryotes (bacteria and archea) also possess acquired immunity, through a system that uses CRISPR sequences to retain fragments of the genomes of phage that they have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. Prokaryotes also possess other defense mechanisms. Offensive elements of the immune systems are also present in unicellular eukaryotes, but studies of their roles in defense are few. Pattern recognition receptors are proteins used by nearly all organisms to identify molecules associated with pathogens. Antimicrobial peptides called defensins are an evolutionarily conserved component of the innate immune response found in all animals and plants, and represent the main form of invertebrate systemic immunity. The complement system and phagocytic cells are also used by most forms of invertebrate life. Ribonucleases and the RNA interference pathway are conserved across all eukaryotes, and are thought to play a role in the immune response to viruses. Unlike animals, plants lack phagocytic cells, but many plant immune responses involve systemic chemical signals that are sent through a plant. Individual plant cells respond to molecules associated with pathogens known as pathogen-associated molecular patterns or PAMPs. When a part of a plant becomes infected, the plant produces a localized hypersensitive response, whereby cells at the site of infection undergo rapid apoptosis to prevent the spread of the disease to other parts of the plant. Systemic acquired resistance is a type of defensive response used by plants that renders the entire plant resistant to a particular infectious agent. RNA silencing mechanisms are particularly important in this systemic response as they can block virus replication. Alternative adaptive immune system Evolution of the adaptive immune system occurred in an ancestor of the jawed vertebrates. Many of the classical molecules of the adaptive immune system (for example, immunoglobulins and T-cell receptors) exist only in jawed vertebrates. A distinct lymphocyte-derived molecule has been discovered in primitive jawless vertebrates, such as the lamprey and hagfish. These animals possess a large array of molecules called Variable lymphocyte receptors (VLRs) that, like the antigen receptors of jawed vertebrates, are produced from only a small number (one or two) of genes. These molecules are believed to bind pathogenic antigens in a similar way to antibodies, and with the same degree of specificity. Manipulation by pathogens The success of any pathogen depends on its ability to elude host immune responses. 
Therefore, pathogens evolved several methods that allow them to successfully infect a host, while evading detection or destruction by the immune system. Bacteria often overcome physical barriers by secreting enzymes that digest the barrier, for example, by using a type II secretion system. Alternatively, using a type III secretion system, they may insert a hollow tube into the host cell, providing a direct route for proteins to move from the pathogen to the host. These proteins are often used to shut down host defenses. An evasion strategy used by several pathogens to avoid the innate immune system is to hide within the cells of their host (also called intracellular pathogenesis). Here, a pathogen spends most of its life-cycle inside host cells, where it is shielded from direct contact with immune cells, antibodies and complement. Some examples of intracellular pathogens include viruses, the food poisoning bacterium Salmonella and the eukaryotic parasites that cause malaria (Plasmodium spp.) and leishmaniasis (Leishmania spp.). Other bacteria, such as Mycobacterium tuberculosis, live inside a protective capsule that prevents lysis by complement. Many pathogens secrete compounds that diminish or misdirect the host's immune response. Some bacteria form biofilms to protect themselves from the cells and proteins of the immune system. Such biofilms are present in many successful infections, such as the chronic Pseudomonas aeruginosa and Burkholderia cenocepacia infections characteristic of cystic fibrosis. Other bacteria generate surface proteins that bind to antibodies, rendering them ineffective; examples include Streptococcus (protein G), Staphylococcus aureus (protein A), and Peptostreptococcus magnus (protein L). The mechanisms used to evade the adaptive immune system are more complicated. The simplest approach is to rapidly change non-essential epitopes (amino acids and/or sugars) on the surface of the pathogen, while keeping essential epitopes concealed. This is called antigenic variation. An example is HIV, which mutates rapidly, so the proteins on its viral envelope that are essential for entry into its host target cell are constantly changing. These frequent changes in antigens may explain the failures of vaccines directed at this virus. The parasite Trypanosoma brucei uses a similar strategy, constantly switching one type of surface protein for another, allowing it to stay one step ahead of the antibody response. Masking antigens with host molecules is another common strategy for avoiding detection by the immune system. In HIV, the envelope that covers the virion is formed from the outermost membrane of the host cell; such "self-cloaked" viruses make it difficult for the immune system to identify them as "non-self" structures. History of immunology Immunology is a science that examines the structure and function of the immune system. It originates from medicine and early studies on the causes of immunity to disease. The earliest known reference to immunity was during the plague of Athens in 430 BC. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. In the 18th century, Pierre-Louis Moreau de Maupertuis experimented with scorpion venom and observed that certain dogs and mice were immune to this venom. 
In the 10th century, Persian physician al-Razi (also known as Rhazes) wrote the first recorded theory of acquired immunity, noting that a smallpox bout protected its survivors from future infections. Although he explained the immunity in terms of "excess moisture" being expelled from the blood—therefore preventing a second occurrence of the disease—this theory explained many observations about smallpox known during this time. These and other observations of acquired immunity were later exploited by Louis Pasteur in his development of vaccination and his proposed germ theory of disease. Pasteur's theory was in direct opposition to contemporary theories of disease, such as the miasma theory. It was not until Robert Koch's 1891 proofs, for which he was awarded a Nobel Prize in 1905, that microorganisms were confirmed as the cause of infectious disease. Viruses were confirmed as human pathogens in 1901, with the discovery of the yellow fever virus by Walter Reed. Immunology made a great advance towards the end of the 19th century, through rapid developments in the study of humoral immunity and cellular immunity. Particularly important was the work of Paul Ehrlich, who proposed the side-chain theory to explain the specificity of the antigen-antibody reaction; his contributions to the understanding of humoral immunity were recognized by the award of a joint Nobel Prize in 1908, along with the founder of cellular immunology, Elie Metchnikoff. In 1974, Niels Kaj Jerne developed the immune network theory; he shared a Nobel Prize in 1984 with Georges J. F. Köhler and César Milstein for theories related to the immune system.
Biology and health sciences
Biology
null
14959
https://en.wikipedia.org/wiki/Immunology
Immunology
Immunology is a branch of biology and medicine that covers the study of immune systems in all organisms. Immunology charts, measures, and contextualizes the physiological functioning of the immune system in states of both health and disease; malfunctions of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, immune deficiency, and transplant rejection); and the physical, chemical, and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo. Immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, rheumatology, virology, bacteriology, parasitology, psychiatry, and dermatology. The term was coined by Russian biologist Ilya Ilyich Mechnikov, who advanced studies on immunology and received the Nobel Prize for his work in 1908 with Paul Ehrlich "in recognition of their work on immunity". He pinned small thorns into starfish larvae and noticed unusual cells surrounding the thorns. This was the active response of the body trying to maintain its integrity. It was Mechnikov who first observed the phenomenon of phagocytosis, in which the body defends itself against a foreign body. Ehrlich accustomed mice to the poisons ricin and abrin. After feeding them small but increasing doses of ricin, he ascertained that they had become "ricin-proof". Ehrlich interpreted this as immunization and observed that it was abruptly initiated after a few days and was still in existence after several months. Prior to the designation of immunity, from the etymological root immunis, which is Latin for 'exempt', early physicians characterized organs that would later be proven as essential components of the immune system. The important lymphoid organs of the immune system are the thymus, bone marrow, and chief lymphatic tissues such as spleen, tonsils, lymph vessels, lymph nodes, adenoids, and liver. However, many components of the immune system are cellular in nature, and not associated with specific organs, but rather embedded or circulating in various tissues located throughout the body. Classical immunology Classical immunology ties in with the fields of epidemiology and medicine. It studies the relationship between the body systems, pathogens, and immunity. The earliest written mention of immunity can be traced back to the plague of Athens in 430 BCE. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. Many other ancient societies have references to this phenomenon, but it was not until the 19th and 20th centuries that the concept developed into scientific theory. The study of the molecular and cellular components that comprise the immune system, including their function and interaction, is the central science of immunology. The immune system has been divided into a more primitive innate immune system and, in vertebrates, an acquired or adaptive immune system. The latter is further divided into humoral (or antibody) and cell-mediated components. The immune system has the capability of self and non-self recognition. An antigen is a substance that triggers the immune response. The cells involved in recognizing the antigen are lymphocytes. Once they recognize an antigen, they secrete antibodies. Antibodies are proteins that neutralize the disease-causing microorganisms. 
Antibodies do not directly kill pathogens, but instead, identify antigens as targets for destruction by other immune cells such as phagocytes or NK cells. The (antibody) response is defined as the interaction between antibodies and antigens. Antibodies are specific proteins released from a certain class of immune cells known as B lymphocytes, while antigens are defined as anything that elicits the generation of antibodies (antibody generators). Immunology rests on an understanding of the properties of these two biological entities and the cellular response to both. It is now getting clear that the immune responses contribute to the development of many common disorders not traditionally viewed as immunologic, including metabolic, cardiovascular, cancer, and neurodegenerative conditions like Alzheimer's disease. Besides, there are direct implications of the immune system in the infectious diseases (tuberculosis, malaria, hepatitis, pneumonia, dysentery, and helminth infestations) as well. Hence, research in the field of immunology is of prime importance for the advancements in the fields of modern medicine, biomedical research, and biotechnology. Immunological research continues to become more specialized, pursuing non-classical models of immunity and functions of cells, organs and systems not previously associated with the immune system (Yemeserach 2010). Diagnostic immunology The specificity of the bond between antibody and antigen has made the antibody an excellent tool for the detection of substances by a variety of diagnostic techniques. Antibodies specific for a desired antigen can be conjugated with an isotopic (radio) or fluorescent label or with a color-forming enzyme in order to detect it. However, the similarity between some antigens can lead to false positives and other errors in such tests by antibodies cross-reacting with antigens that are not exact matches. Immunotherapy The use of immune system components or antigens to treat a disease or disorder is known as immunotherapy. Immunotherapy is most commonly used to treat allergies, autoimmune disorders such as Crohn's disease, Hashimoto's thyroiditis and rheumatoid arthritis, and certain cancers. Immunotherapy is also often used for patients who are immunosuppressed (such as those with HIV) and people with other immune deficiencies. This includes regulating factors such as IL-2, IL-10, GM-CSF B, IFN-α. Clinical immunology Clinical immunology is the study of diseases caused by disorders of the immune system (failure, aberrant action, and malignant growth of the cellular elements of the system). It also involves diseases of other systems, where immune reactions play a part in the pathology and clinical features. The diseases caused by disorders of the immune system fall into two broad categories: immunodeficiency, in which parts of the immune system fail to provide an adequate response (examples include chronic granulomatous disease and primary immune diseases); autoimmunity, in which the immune system attacks its own host's body (examples include systemic lupus erythematosus, rheumatoid arthritis, Hashimoto's disease and myasthenia gravis). Other immune system disorders include various hypersensitivities (such as in asthma and other allergies) that respond inappropriately to otherwise harmless compounds. 
The most well-known disease that affects the immune system itself is AIDS, an immunodeficiency characterized by the suppression of CD4+ ("helper") T cells, dendritic cells and macrophages by the human immunodeficiency virus (HIV). Clinical immunologists also study ways to prevent the immune system's attempts to destroy allografts (transplant rejection). Clinical immunology and allergy is usually a subspecialty of internal medicine or pediatrics. Fellows in Clinical Immunology are typically exposed to many of the different aspects of the specialty and treat allergic conditions, primary immunodeficiencies and systemic autoimmune and autoinflammatory conditions. As part of their training fellows may do additional rotations in rheumatology, pulmonology, otorhinolaryngology, dermatology and the immunologic lab. Clinical and pathology immunology When health conditions worsen to emergency status, portions of immune system organs, including the thymus, spleen, bone marrow, lymph nodes, and other lymphatic tissues, can be surgically excised for examination while patients are still alive. Theoretical immunology Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells – more precisely, phagocytes – that were responsible for immune responses. In contrast, the humoral theory of immunity, held by Robert Koch and Emil von Behring, among others, stated that the active immune agents were soluble components (molecules) found in the organism's "humors" rather than its cells. In the mid-1950s, Macfarlane Burnet, inspired by a suggestion made by Niels Jerne, formulated the clonal selection theory (CST) of immunity. On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (e.g., pathogens, an allograft) trigger a destructive immune response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized, but remain very influential. More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, "cognitive immune" views, the "danger model" (or "danger theory"), and the "discontinuity" theory. The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions. Developmental immunology The body's capability to react to antigens depends on a person's age, antigen type, maternal factors and the area where the antigen is presented. Neonates are said to be in a state of physiological immunodeficiency, because both their innate and adaptive immunological responses are greatly suppressed. Once born, a child's immune system responds favorably to protein antigens while not as well to glycoproteins and polysaccharides. In fact, many of the infections acquired by neonates are caused by low virulence organisms like Staphylococcus and Pseudomonas. 
In neonates, opsonic activity and the ability to activate the complement cascade are very limited. For example, the mean level of C3 in a newborn is approximately 65% of that found in the adult. Phagocytic activity is also greatly impaired in newborns. This is due to lower opsonic activity, as well as diminished up-regulation of integrin and selectin receptors, which limit the ability of neutrophils to interact with adhesion molecules in the endothelium. Their monocytes are slow and have a reduced ATP production, which also limits the newborn's phagocytic activity. Although the total number of lymphocytes is significantly higher than in adults, cellular and humoral immunity are also impaired. Antigen-presenting cells in newborns have a reduced capability to activate T cells. Also, T cells of a newborn proliferate poorly and produce very small amounts of cytokines like IL-2, IL-4, IL-5, IL-12, and IFN-γ, which limits their capacity to activate the humoral response as well as the phagocytic activity of macrophages. B cells develop early during gestation but are not fully active. Maternal factors also play a role in the body's immune response. At birth, most of the immunoglobulin present is maternal IgG. These antibodies are transferred across the placenta to the fetus using the FcRn (neonatal Fc receptor). Because IgM, IgD, IgE and IgA do not cross the placenta, they are almost undetectable at birth. Some IgA is provided by breast milk. These passively-acquired antibodies can protect the newborn for up to 18 months, but their response is usually short-lived and of low affinity. These antibodies can also produce a negative response. If a child is exposed to the antibody for a particular antigen before being exposed to the antigen itself, then the child will produce a dampened response. Passively acquired maternal antibodies can suppress the antibody response to active immunization. Similarly, the response of T-cells to vaccination differs in children compared to adults, and vaccines that induce Th1 responses in adults do not readily elicit these same responses in neonates. Between six and nine months after birth, a child's immune system begins to respond more strongly to glycoproteins, but there is usually no marked improvement in their response to polysaccharides until they are at least one year old. This may explain the distinct time frames found in vaccination schedules. During adolescence, the human body undergoes various physical, physiological and immunological changes triggered and mediated by hormones, of which the most significant in females is 17-β-estradiol (an estrogen) and, in males, is testosterone. Estradiol usually begins to act around the age of 10 and testosterone some months later. There is evidence that these steroids not only act directly on the primary and secondary sexual characteristics but also have an effect on the development and regulation of the immune system, including an increased risk of developing pubescent and post-pubescent autoimmunity. There is also some evidence that cell surface receptors on B cells and macrophages may detect sex hormones in the system. The female sex hormone 17-β-estradiol has been shown to regulate the level of immunological response, while some male androgens such as testosterone seem to suppress the stress response to infection. Other androgens, however, such as DHEA, increase immune response. 
As in females, the male sex hormones seem to have more control of the immune system during puberty and post-puberty than during the rest of a male's adult life. Physical changes during puberty such as thymic involution also affect immunological response. Ecoimmunology and behavioural immunity Ecoimmunology, or ecological immunology, explores the relationship between the immune system of an organism and its social, biotic and abiotic environment. More recent ecoimmunological research has focused on host defences against pathogens traditionally considered "non-immunological", such as pathogen avoidance, self-medication, symbiont-mediated defenses, and fecundity trade-offs. Behavioural immunity, a phrase coined by Mark Schaller, specifically refers to psychological drivers of pathogen avoidance, such as the disgust aroused by stimuli encountered around pathogen-infected individuals, for example the smell of vomit. More broadly, "behavioural" ecological immunity has been demonstrated in multiple species. For example, the Monarch butterfly often lays its eggs on certain toxic milkweed species when infected with parasites. These toxins reduce parasite growth in the offspring of the infected Monarch. However, when uninfected Monarch butterflies are forced to feed only on these toxic plants, they suffer a fitness cost in the form of a reduced lifespan relative to other uninfected Monarch butterflies. This indicates that laying eggs on toxic plants is a costly behaviour in Monarchs that has probably evolved to reduce the severity of parasite infection. Symbiont-mediated defenses are also heritable across host generations, even though the basis of transmission is direct and non-genetic. Aphids, for example, rely on several different symbionts for defense from key parasites, and can vertically transmit their symbionts from parent to offspring. Therefore, a symbiont that successfully confers protection from a parasite is more likely to be passed to the host offspring, allowing coevolution with parasites attacking the host in a way similar to traditional immunity. The preserved immune tissues of extinct species, such as the thylacine (Thylacinus cynocephalus), can also provide insights into their biology. Cancer immunology The study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer. This branch of immunology is concerned with the physiological reactions characteristic of the immune state in the presence of tumors. Inflammation is an immune response that has been observed in many types of cancers. Reproductive immunology This area of immunology is devoted to the study of immunological aspects of the reproductive process, including fetus acceptance. The term has also been used by fertility clinics to address fertility problems, recurrent miscarriages, premature deliveries and dangerous complications such as pre-eclampsia.
Biology and health sciences
Fields of medicine
null
14968
https://en.wikipedia.org/wiki/Regular%20icosahedron
Regular icosahedron
In geometry, the regular icosahedron (or simply icosahedron) is a convex polyhedron that can be constructed from a pentagonal antiprism by attaching two pentagonal pyramids with regular faces to each of its pentagonal faces, or by placing points onto the faces of a cube. The resulting polyhedron has 20 equilateral triangles as its faces, 30 edges, and 12 vertices. It is an example of a Platonic solid and of a deltahedron. The icosahedral graph represents the skeleton of a regular icosahedron. Many polyhedra are constructed from the regular icosahedron. For example, most of the Kepler–Poinsot polyhedra are constructed by faceting it. Some of the Johnson solids can be constructed by removing pentagonal pyramids. The regular icosahedron has many relations with the other Platonic solids; one of them is the regular dodecahedron, its dual polyhedron, with which it shares a historical background of comparative mensuration. It also has many relations with other polytopes. The regular icosahedron appears in nature, for example in viruses with icosahedral shells and in radiolarians. Other applications of the regular icosahedron include the use of its net in cartography and twenty-sided dice, which date back to ancient times and are used in role-playing games. Construction The regular icosahedron can be constructed like other gyroelongated bipyramids: starting from a pentagonal antiprism, two pentagonal pyramids with regular faces are attached to its two pentagonal faces. These pyramids cover the pentagonal faces, replacing each of them with five equilateral triangles, such that the resulting polyhedron has 20 equilateral triangles as its faces. This construction process is known as gyroelongation. Another way to construct it is by placing two points on each face of a cube: in each face, draw a line segment between the midpoints of two opposite edges and locate two points on it at the golden-ratio distance from each midpoint. These twelve vertices lie in three mutually perpendicular planes, with edges drawn between them. Because of the constructions above, the regular icosahedron is a Platonic solid, one of a family of polyhedra with regular faces. A polyhedron with only equilateral triangles as faces is called a deltahedron. There are only eight different convex deltahedra, one of which is the regular icosahedron. The regular icosahedron can also be constructed starting from a regular octahedron: each triangular face of the octahedron is broken apart, twisted through a certain angle, and the gaps are filled with further equilateral triangles. This process is known as a snub, and the regular icosahedron is also known as the snub octahedron. One possible system of Cartesian coordinates for the vertices of a regular icosahedron, giving edge length 2, consists of the cyclic permutations of (0, ±1, ±φ), where φ = (1 + √5)/2 denotes the golden ratio. Properties Mensuration The insphere of a convex polyhedron is a sphere inside the polyhedron, touching every face. The circumsphere of a convex polyhedron is a sphere that contains the polyhedron and touches every vertex. The midsphere of a convex polyhedron is a sphere tangent to every edge. Therefore, given edge length a of a regular icosahedron, the radius of the insphere (inradius) r, the radius of the circumsphere (circumradius) R, and the radius of the midsphere (midradius) ρ are, respectively: r = (√3/12)(3 + √5)a ≈ 0.7558a, R = (a/4)√(10 + 2√5) ≈ 0.9511a, and ρ = (a/4)(1 + √5) ≈ 0.8090a. The surface area of a polyhedron is the sum of the areas of its faces. Therefore, the surface area of a regular icosahedron is 20 times that of each of its equilateral triangle faces. 
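These coordinates and formulas are easy to check numerically. The following Python sketch (an illustration added here, not part of the source material) builds the twelve vertices from the cyclic permutations of (0, ±1, ±φ), confirms that the shortest vertex-to-vertex distance is the stated edge length 2, and compares the measured circumradius, midradius, inradius, and surface area with the closed forms given above.

```python
# Numerical check of the icosahedron coordinates and mensuration formulas above.
# Illustrative sketch only; uses plain Python and the 12 vertices (0, +/-1, +/-phi).
import itertools, math

phi = (1 + math.sqrt(5)) / 2                      # golden ratio

verts = []
for s1, s2 in itertools.product((1, -1), repeat=2):
    verts += [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]

a = min(math.dist(p, q) for p, q in itertools.combinations(verts, 2))
print(a)                                          # edge length: 2.0

R = math.hypot(*verts[0])                         # circumradius: all vertices are equidistant from the centre
rho = math.hypot(0, 0, phi)                       # midradius: midpoint of the edge (0,1,phi)-(0,-1,phi)
# inradius: distance from the centre to the centroid of any face (a triple of mutually adjacent vertices)
face = next(f for f in itertools.combinations(verts, 3)
            if all(math.isclose(math.dist(p, q), a) for p, q in itertools.combinations(f, 2)))
r = math.hypot(*(sum(c) / 3 for c in zip(*face)))

print(R / a, math.sqrt(10 + 2 * math.sqrt(5)) / 4)            # ~0.9511
print(rho / a, (1 + math.sqrt(5)) / 4)                        # ~0.8090
print(r / a, math.sqrt(3) / 12 * (3 + math.sqrt(5)))          # ~0.7558
print(20 * math.sqrt(3) / 4 * a**2, 5 * math.sqrt(3) * a**2)  # surface area, both ways
```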
The volume of a regular icosahedron can be obtained as 20 times that of a pyramid whose base is one of its faces and whose apex is the icosahedron's center, or as the sum of two uniform pentagonal pyramids and a pentagonal antiprism. The resulting expressions for the surface area A and the volume V are, respectively: A = 5√3 a² ≈ 8.660a² and V = (5/12)(3 + √5)a³ ≈ 2.182a³. A problem dating back to the ancient Greeks is determining which of two shapes has a larger volume, an icosahedron inscribed in a sphere, or a dodecahedron inscribed in the same sphere. The problem was solved by Hero, Pappus, and Fibonacci, among others. Apollonius of Perga discovered the curious result that the ratio of volumes of these two shapes is the same as the ratio of their surface areas. Both volumes have formulas involving the golden ratio, but taken to different powers. As it turns out, the icosahedron occupies less of the sphere's volume (60.54%) than the dodecahedron (66.49%). The dihedral angle of a regular icosahedron can be calculated by adding the angles of a pentagonal pyramid with regular faces and a pentagonal antiprism. The dihedral angle of a pentagonal antiprism and pentagonal pyramid between two adjacent triangular faces is approximately 38.2°. The dihedral angle of a pentagonal antiprism between a pentagon and a triangle is 100.8°, and the dihedral angle of a pentagonal pyramid between the same faces is 37.4°. Therefore, for the regular icosahedron, the dihedral angle between two adjacent triangles, on the edge where the pentagonal pyramid and pentagonal antiprism are attached, is 37.4° + 100.8° = 138.2°. Symmetry The rotational symmetry group of the regular icosahedron is isomorphic to the alternating group on five letters. This non-abelian simple group is the only non-trivial normal subgroup of the symmetric group on five letters. Since the Galois group of the general quintic equation is isomorphic to the symmetric group on five letters, and this normal subgroup is simple and non-abelian, the general quintic equation does not have a solution in radicals. The proof of the Abel–Ruffini theorem uses this simple fact, and Felix Klein wrote a book that made use of the theory of icosahedral symmetries to derive an analytical solution to the general quintic equation. The full symmetry group of the icosahedron (including reflections) is known as the full icosahedral group. It is isomorphic to the product of the rotational symmetry group and the group of size two, which is generated by the reflection through the center of the icosahedron. Icosahedral graph Every Platonic graph, including the icosahedral graph, is a polyhedral graph. This means that they are planar graphs, graphs that can be drawn in the plane without crossing their edges, and that they are 3-vertex-connected, meaning that the removal of any two of their vertices leaves a connected subgraph. According to Steinitz's theorem, the icosahedral graph, having these properties, represents the skeleton of a regular icosahedron. The icosahedral graph is Hamiltonian, meaning that it contains a Hamiltonian cycle, or a cycle that visits each vertex exactly once. Related polyhedra In other Platonic solids Aside from the mensuration comparison above, the regular icosahedron and the regular dodecahedron are dual to each other. An icosahedron can be inscribed in a dodecahedron by placing its vertices at the face centers of the dodecahedron, and vice versa. An icosahedron can be inscribed in an octahedron by placing its 12 vertices on the 12 edges of the octahedron such that they divide each edge into its two golden sections. 
Because the golden sections are unequal, there are five different ways to do this consistently, so five disjoint icosahedra can be inscribed in each octahedron. An icosahedron of edge length 1/φ ≈ 0.618 can be inscribed in a unit-edge-length cube by placing six of its edges (3 orthogonal opposite pairs) on the square faces of the cube, centered on the face centers and parallel or perpendicular to the square's edges. Because there are five times as many icosahedron edges as cube faces, there are five ways to do this consistently, so five disjoint icosahedra can be inscribed in each cube. The edge lengths of the cube and the inscribed icosahedron are in the golden ratio. Stellation The icosahedron has a large number of stellations; in total, 59 stellations have been identified for the regular icosahedron. The first form is the icosahedron itself. One is a regular Kepler–Poinsot polyhedron. Three are regular compound polyhedra. Facetings The small stellated dodecahedron, great dodecahedron, and great icosahedron are three facetings of the regular icosahedron. They share the same vertex arrangement. They all have 30 edges. The regular icosahedron and great dodecahedron share the same edge arrangement but differ in faces (triangles vs pentagons), as do the small stellated dodecahedron and great icosahedron (pentagrams vs triangles). Diminishment A Johnson solid is a polyhedron whose faces are all regular, but which is not uniform. This means the Johnson solids do not include the Archimedean solids, the Catalan solids, the prisms, or the antiprisms. Some of them are constructed by removing part of a regular icosahedron, a process known as diminishment. They are the gyroelongated pentagonal pyramid, the metabidiminished icosahedron, and the tridiminished icosahedron, which remove one, two, and three pentagonal pyramids from the icosahedron, respectively. The similar dissected regular icosahedron has 2 adjacent vertices diminished, leaving two trapezoidal faces, and a bifastigium has 2 opposite sets of vertices removed and 4 trapezoidal faces. Relations to the 600-cell and other 4-polytopes The icosahedron is the dimensional analogue of the 600-cell, a regular 4-dimensional polytope. The 600-cell has icosahedral cross sections of two sizes, and each of its 120 vertices is the apex of an icosahedral pyramid; the icosahedron is the vertex figure of the 600-cell. The unit-radius 600-cell has tetrahedral cells of edge length 1/φ, 20 of which meet at each vertex to form an icosahedral pyramid (a 4-pyramid with an icosahedron as its base). Thus the 600-cell contains 120 icosahedra of edge length 1/φ. The 600-cell also contains unit-edge-length cubes and unit-edge-length octahedra as interior features formed by its unit-length chords. In the unit-radius 120-cell (another regular 4-polytope which is both the dual of the 600-cell and a compound of 5 600-cells) we find all three kinds of inscribed icosahedra (in a dodecahedron, in an octahedron, and in a cube). A semiregular 4-polytope, the snub 24-cell, has icosahedral cells. 
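Returning to the Greek comparison problem described earlier, the sphere-occupancy figures and Apollonius's area–volume coincidence can be reproduced with a few lines of Python. This is an illustrative check (not part of the source material), assuming the standard closed-form area and volume formulas for both solids and an inscribing sphere of radius 1.

```python
# Icosahedron vs dodecahedron inscribed in the same unit sphere (illustrative check).
import math

# Edge lengths that give circumradius 1 for each solid.
a_ico = 4 / math.sqrt(10 + 2 * math.sqrt(5))
a_dod = 4 / (math.sqrt(3) * (1 + math.sqrt(5)))

V_ico = 5 * (3 + math.sqrt(5)) / 12 * a_ico**3
V_dod = (15 + 7 * math.sqrt(5)) / 4 * a_dod**3
A_ico = 5 * math.sqrt(3) * a_ico**2
A_dod = 3 * math.sqrt(25 + 10 * math.sqrt(5)) * a_dod**2

sphere = 4 * math.pi / 3
print(V_ico / sphere, V_dod / sphere)   # ~0.6054 and ~0.6649 of the sphere's volume
print(V_ico / V_dod, A_ico / A_dod)     # the two ratios agree, as Apollonius found
```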
Relations to other uniform polytopes As mentioned above, the regular icosahedron is unique among the Platonic solids in possessing a dihedral angle not less than 120°; its dihedral angle is approximately 138.19°. Thus, just as hexagons have angles not less than 120° and cannot be used as the faces of a convex regular polyhedron because such a construction would not meet the requirement that at least three faces meet at a vertex and leave a positive defect for folding in three dimensions, icosahedra cannot be used as the cells of a convex regular polychoron because, similarly, at least three cells must meet at an edge and leave a positive defect for folding in four dimensions (in general for a convex polytope in n dimensions, at least three facets must meet at a peak and leave a positive defect for folding in n-space). However, when combined with suitable cells having smaller dihedral angles, icosahedra can be used as cells in semi-regular polychora (for example the snub 24-cell), just as hexagons can be used as faces in semi-regular polyhedra (for example the truncated icosahedron). Finally, non-convex polytopes do not carry the same strict requirements as convex polytopes, and icosahedra are indeed the cells of the icosahedral 120-cell, one of the ten non-convex regular polychora. There are distortions of the icosahedron that, while no longer regular, are nevertheless vertex-uniform. These are invariant under the same rotations as the tetrahedron, and are somewhat analogous to the snub cube and snub dodecahedron, including some forms which are chiral and some with pyritohedral symmetry, i.e. with different planes of symmetry from the tetrahedron. Appearances Dice are the most common objects using different polyhedra, one of them being the regular icosahedron. Twenty-sided dice have been found dating back to ancient times. One example is a die from the Ptolemaic period of Egypt; Greek letters were later inscribed on the faces of such dice in the Greek and Roman periods. Another example, found in the treasure of Tipu Sultan, was made of gold with numbers written on each face. In several roleplaying games, such as Dungeons & Dragons, the twenty-sided die (labeled as d20) is commonly used in determining success or failure of an action. It may be numbered from "0" to "9" twice, in which form it usually serves as a ten-sided die (d10); most modern versions are labeled from "1" to "20". Scattergories is another board game, in which the player names category entries on a card within a given set time; each entry must begin with the letter rolled on a twenty-sided letter die. The regular icosahedron may also appear in many fields of science as follows: In virology, herpes viruses have icosahedral shells. The outer protein shell of HIV is enclosed in a regular icosahedron, as is the head of a typical myovirus. Ernst Haeckel described several species of radiolarians whose shells are shaped like various regular polyhedra; one of these is Circogonia icosahedra, whose skeleton is shaped like a regular icosahedron. In chemistry, the closo-carboranes are compounds with a shape resembling the regular icosahedron. Crystal twinning with icosahedral shapes also occurs in crystals, especially nanoparticles. Many borides and allotropes of boron, such as α- and β-rhombohedral boron, contain the boron B12 icosahedron as a basic structural unit. In cartography, R. 
Buckminster Fuller used the net of a regular icosahedron to create a map known as the Dymaxion map, by subdividing the net into triangles, calculating the grid on the Earth's surface, and transferring the results from the sphere to the polyhedron. This projection was created around the time that Fuller realized that Greenland is smaller than South America. In the Thomson problem, concerning the minimum-energy configuration of charged particles on a sphere, and for the Tammes problem of constructing a spherical code maximizing the smallest distance among the points, the minimum solution known for n = 12 points places the points at the vertices of a regular icosahedron inscribed in a sphere. This configuration is proven optimal for the Tammes problem, but a rigorous solution to this instance of the Thomson problem is unknown. As mentioned above, the regular icosahedron is one of the five Platonic solids. The regular polyhedra have been known since antiquity, but are named after Plato who, in his Timaeus dialogue, identified these with the five elements, whose elementary units were attributed these shapes: fire (tetrahedron), air (octahedron), water (icosahedron), earth (cube) and the shape of the universe as a whole (dodecahedron). Euclid's Elements defined the Platonic solids and solved the problem of finding the ratio of the circumscribed sphere's diameter to the edge length. Following their identification with the elements by Plato, Johannes Kepler in his Harmonices Mundi sketched each of them, in particular the regular icosahedron. In his Mysterium Cosmographicum, he also proposed a model of the Solar System based on the placement of Platonic solids in a concentric sequence of increasing radius, with the inscribed and circumscribed spheres of each solid giving the distances of the six known planets from the common center. The ordering of the solids, from innermost to outermost, consisted of: regular octahedron, regular icosahedron, regular dodecahedron, regular tetrahedron, and cube.
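Two of the numerical claims above lend themselves to a quick check: the dihedral-angle obstruction (three icosahedral cells meeting at an edge would exceed 360°) and the icosahedral solution of the Tammes and Thomson problems for twelve points. The sketch below is illustrative only; it assumes the standard exact value cos θ = −√5/3 for the icosahedron's dihedral angle, which is not quoted in the text above.

```python
# Illustrative checks: dihedral-angle defect and the 12-point Tammes/Thomson configuration.
import itertools, math

dihedral = math.degrees(math.acos(-math.sqrt(5) / 3))   # assumed exact cosine -sqrt(5)/3
print(dihedral)                      # ~138.19 degrees
print(3 * dihedral)                  # ~414.6 > 360, so no convex regular 4-polytope with icosahedral cells

phi = (1 + math.sqrt(5)) / 2
verts = []
for s1, s2 in itertools.product((1, -1), repeat=2):
    verts += [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]
R = math.hypot(*verts[0])
unit = [tuple(c / R for c in v) for v in verts]          # vertices projected onto the unit sphere

def angle_deg(u, v):
    d = max(-1.0, min(1.0, sum(p * q for p, q in zip(u, v))))
    return math.degrees(math.acos(d))

pairs = list(itertools.combinations(unit, 2))
print(min(angle_deg(u, v) for u, v in pairs))       # ~63.43 degrees: smallest angular separation (Tammes)
print(sum(1 / math.dist(u, v) for u, v in pairs))   # ~49.165: Coulomb energy of the configuration (Thomson)
```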
Mathematics
Three-dimensional space
null
14979
https://en.wikipedia.org/wiki/Interstellar%20cloud
Interstellar cloud
An interstellar cloud is generally an accumulation of gas, plasma, and dust in our own and other galaxies. Put differently, an interstellar cloud is a denser-than-average region of the interstellar medium, the matter and radiation that exists in the space between the star systems in a galaxy. Depending on the density, size, and temperature of a given cloud, its hydrogen can be neutral, making it an H I region; ionized (plasma), making it an H II region; or molecular, in which case the cloud is referred to simply as a molecular cloud, or sometimes a dense cloud. Neutral and ionized clouds are sometimes also called diffuse clouds. An interstellar cloud can be formed by the gas and dust particles shed from a red giant in its later life. Chemical compositions The chemical composition of interstellar clouds is determined by studying the electromagnetic radiation that they emit and we receive – from radio waves through visible light to gamma rays on the electromagnetic spectrum. Large radio telescopes scan the sky for the intensity of particular frequencies of electromagnetic radiation that are characteristic of certain molecules' spectra. Some interstellar clouds are cold and tend to give out electromagnetic radiation at long wavelengths. A map of the abundance of these molecules can be made, enabling an understanding of the varying composition of the clouds. In hot clouds, there are often ions of many elements, whose spectra can be seen in visible and ultraviolet light. Radio telescopes can also scan across the frequencies from one point in the map, recording the intensities of each type of molecule. A peak at a given frequency means that the corresponding molecule or atom is present in the cloud, and the height of the peak is proportional to its relative abundance. Unexpected chemicals detected in interstellar clouds Until recently, the rates of reactions in interstellar clouds were expected to be very slow, with minimal products being produced due to the low temperature and density of the clouds. However, organic molecules were observed in the spectra that scientists would not have expected to find under these conditions, such as formaldehyde, methanol, and vinyl alcohol. The reactions needed to create such substances are familiar to scientists only at the much higher temperatures and pressures of Earth and Earth-based laboratories. The fact that they were found indicates that these chemical reactions in interstellar clouds take place faster than suspected, likely in gas-phase reactions unfamiliar to organic chemistry as observed on Earth. These reactions are studied in the CRESU experiment. Interstellar clouds also provide a medium to study the presence and proportions of metals in space. The presence and ratios of these elements may help develop theories on the means of their production, especially when their proportions are inconsistent with those expected to arise from stars as a result of fusion and thereby suggest alternate means, such as cosmic ray spallation. High-velocity cloud These interstellar clouds possess a velocity higher than can be explained by the rotation of the Milky Way. By definition, these clouds must have a vlsr greater than 90 km s−1, where vlsr is the velocity in the local standard of rest. They are detected primarily in the 21 cm line of neutral hydrogen, and typically have a lower proportion of heavy elements than is normal for interstellar clouds in the Milky Way. 
Theories intended to explain these unusual clouds include materials left over from the formation of the galaxy, or tidally-displaced matter drawn away from other galaxies or members of the Local Group. An example of the latter is the Magellanic Stream. To narrow down the origin of these clouds, a better understanding of their distances and metallicity is needed. High-velocity clouds are identified with an HVC prefix, as with HVC 127-41-330.
Physical sciences
Basics_2
Astronomy
15022
https://en.wikipedia.org/wiki/Infrared
Infrared
Infrared (IR; sometimes called infrared light) is electromagnetic radiation (EMR) with wavelengths longer than that of visible light but shorter than microwaves. The infrared spectral band begins with waves that are just longer than those of red light (the longest waves in the visible spectrum), so IR is invisible to the human eye. IR is generally understood to include wavelengths from around 780 nm to 1 mm. IR is commonly divided between longer-wavelength thermal IR, emitted from terrestrial sources, and shorter-wavelength IR or near-IR, part of the solar spectrum. Longer IR wavelengths (30–100 μm) are sometimes included as part of the terahertz radiation band. Almost all black-body radiation from objects near room temperature is in the IR band. As a form of EMR, IR carries energy and momentum, exerts radiation pressure, and has properties corresponding to both those of a wave and of a particle, the photon. It was long known that fires emit invisible heat; in 1681 the pioneering experimenter Edme Mariotte showed that glass, though transparent to sunlight, obstructed radiant heat. In 1800 the astronomer Sir William Herschel discovered that infrared radiation is a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer. Slightly more than half of the energy from the Sun was eventually found, through Herschel's studies, to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has an important effect on Earth's climate. Infrared radiation is emitted or absorbed by molecules when they change their rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range. Infrared radiation is used in industrial, scientific, military, commercial, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, to detect objects such as planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, to assist firefighting, and to detect the overheating of electrical components. Military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm. Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting. Definition and relationship to the electromagnetic spectrum There is no universally accepted definition of the range of infrared radiation. Typically, it is taken to extend from the nominal red edge of the visible spectrum at 780 nm to 1 mm. This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Beyond infrared is the microwave portion of the electromagnetic spectrum. 
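The frequency bounds just quoted follow directly from the relation frequency = speed of light / wavelength; a minimal check is below (the 430 THz figure corresponds to a visible edge near 700 nm, while the 780 nm red edge works out to roughly 384 THz):

```c
#include <stdio.h>

int main(void) {
    const double c = 299792458.0;   /* speed of light, m/s */
    const double wavelengths_m[] = {700e-9, 780e-9, 1e-3};
    const char  *labels[] = {
        "700 nm (approx. visible limit)",
        "780 nm (nominal red edge)",
        "1 mm (microwave edge)"
    };
    for (int i = 0; i < 3; i++)
        printf("%-32s ->  %.3g Hz\n", labels[i], c / wavelengths_m[i]);  /* f = c / lambda */
    return 0;   /* ~4.3e14 Hz, ~3.8e14 Hz, and ~3.0e11 Hz respectively */
}
```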
Increasingly, terahertz radiation is counted as part of the microwave band, not infrared, moving the band edge of infrared to 0.1 mm (3 THz). Nature Sunlight, at an effective temperature of 5,780 K (5,510 °C, 9,940 °F), is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kW per square meter at sea level. Of this energy, 527 W is infrared radiation, 445 W is visible light, and 32 W is ultraviolet radiation. Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 μm. On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer than in sunlight. Black-body, or thermal, radiation is continuous: it radiates at all wavelengths. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy. Regions In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of the object, in accordance with Wien's displacement law. The infrared band is often subdivided into smaller sections, although how the IR spectrum is thereby divided varies between different areas in which IR is employed. Visible limit Infrared radiation is generally considered to begin with wavelengths longer than those visible to the human eye. There is no hard wavelength limit to what is visible, as the eye's sensitivity decreases rapidly but smoothly for wavelengths exceeding about 700 nm. Therefore, wavelengths just longer than that can be seen if they are sufficiently bright, though they may still be classified as infrared according to usual definitions. Light from a near-IR laser may thus appear dim red and can present a hazard since it may actually be quite bright. Even IR at wavelengths up to 1,050 nm from pulsed lasers can be seen by humans under certain conditions. Commonly used subdivision scheme A commonly used subdivision scheme is: NIR and SWIR together are sometimes called "reflected infrared", whereas MWIR and LWIR are sometimes referred to as "thermal infrared". CIE division scheme The International Commission on Illumination (CIE) recommended the division of infrared radiation into the following three bands: ISO 20473 scheme ISO 20473 specifies the following scheme: Astronomy division scheme Astronomers typically divide the infrared spectrum as follows: These divisions are not precise and can vary depending on the publication. The three regions are used for observation of different temperature ranges, and hence different environments in space. The most common photometric system used in astronomy allocates capital letters to different spectral regions according to filters used; I, J, H, and K cover the near-infrared wavelengths; L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers. 
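Wien's displacement law, invoked in the Regions paragraph above, can be made concrete with the standard displacement constant b ≈ 2.898×10⁻³ m·K (a textbook value, not quoted in this article). A minimal sketch for a roughly room-temperature object and for the Sun's effective temperature of 5,780 K mentioned earlier:

```c
#include <stdio.h>

int main(void) {
    const double b = 2.898e-3;                  /* Wien displacement constant, m*K */
    const double temps_K[] = {300.0, 5780.0};   /* ~room temperature, solar effective temperature */
    for (int i = 0; i < 2; i++) {
        double lambda_peak = b / temps_K[i];    /* peak emission wavelength is inversely proportional to T */
        printf("T = %6.0f K  ->  lambda_peak = %.2f um\n", temps_K[i], lambda_peak * 1e6);
    }
    return 0;   /* ~9.7 um at 300 K (thermal IR), ~0.5 um at 5,780 K (visible) */
}
```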
Sensor response division scheme A third scheme divides up the band based on the response of various detectors: Near-infrared: from 0.7 to 1.0 μm (from the approximate end of the response of the human eye to that of silicon). Short-wave infrared: 1.0 to 3 μm (from the cut-off of silicon to that of the MWIR atmospheric window). InGaAs covers to about 1.8 μm; the less sensitive lead salts cover this region. Cryogenically cooled MCT detectors can cover the region of 1.0–2.5 μm. Mid-wave infrared: 3 to 5 μm (defined by the atmospheric window and covered by indium antimonide, InSb and mercury cadmium telluride, HgCdTe, and partially by lead selenide, PbSe). Long-wave infrared: 8 to 12, or 7 to 14 μm (this is the atmospheric window covered by HgCdTe and microbolometers). Very-long wave infrared (VLWIR) (12 to about 30 μm, covered by doped silicon). Near-infrared is the region closest in wavelength to the radiation detectable by the human eye. Mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs. bands, water absorption) and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available. The onset of infrared is defined (according to different standards) at various values typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. Particularly intense near-IR light (e.g., from lasers, LEDs or bright daylight with the visible light filtered out) can be detected up to approximately 780 nm, and will be perceived as red light. Intense light sources providing wavelengths as long as 1,050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all visible light leaks from around an IR-filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect that consists of IR-glowing foliage. Telecommunication bands In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on availability of light sources, transmitting/absorbing materials (fibers), and detectors: The C-band is the dominant band for long-distance telecommunications networks. The S and L bands are based on less well-established technology, and are not as widely deployed. Heat Infrared radiation is popularly known as "heat radiation", but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49% of the heating of Earth, with the rest being caused by visible light that is absorbed then re-radiated at longer wavelengths. Visible light or ultraviolet-emitting lasers can char paper, and incandescently hot objects emit visible radiation. 
Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 μm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law). Heat is energy in transit that flows due to a temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths that are associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiation is associated with spectra far above the infrared, extending into visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth. The concept of emissivity is important in understanding the infrared emissions of objects. This is a property of a surface that describes how its thermal emissions deviate from the ideal of a black body. To further explain, two objects at the same physical temperature may not show the same infrared image if they have differing emissivity. For example, for any pre-set emissivity value, objects with higher emissivity will appear hotter, and those with a lower emissivity will appear cooler (assuming, as is often the case, that the surrounding environment is cooler than the objects being viewed). When an object has less than perfect emissivity, it obtains properties of reflectivity and/or transparency, and so the temperature of the surrounding environment is partially reflected by and/or transmitted through the object. If the object were in a hotter environment, then a lower emissivity object at the same temperature would likely appear to be hotter than a more emissive one. For that reason, incorrect selection of emissivity and not accounting for environmental temperatures will give inaccurate results when using infrared cameras and pyrometers. Applications Night vision Infrared is used in night vision equipment when there is insufficient visible light to see. Night vision devices operate through a process involving the conversion of ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted back into visible light. Infrared light sources can be used to augment the available ambient light for conversion by night vision devices, increasing in-the-dark visibility without actually using a visible light source. The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment. Thermography Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography, or in the case of very hot objects in the NIR or visible it is termed pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications but the technology is reaching the public market in the form of infrared cameras on cars due to greatly reduced production costs. 
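The emissivity discussion above is the heart of why radiometric temperature readings go wrong: an infrared camera or pyrometer measures total radiance, part of which is emitted by the object and part reflected from the surroundings. The sketch below is a minimal gray-body illustration (opaque surface, total Stefan–Boltzmann radiance rather than a per-wavelength camera model; the temperatures and emissivities are purely illustrative) of how an incorrectly assumed emissivity skews the reported temperature:

```c
#include <stdio.h>
#include <math.h>

#define SIGMA 5.670374419e-8   /* Stefan-Boltzmann constant, W m^-2 K^-4 */

/* Total radiance leaving an opaque gray surface: the emitted part plus
 * the part of the surroundings' radiation that it reflects. */
static double radiosity(double eps, double t_obj, double t_amb) {
    return eps * SIGMA * pow(t_obj, 4.0) + (1.0 - eps) * SIGMA * pow(t_amb, 4.0);
}

/* Temperature a pyrometer would report for a measured radiosity j,
 * given the emissivity value the operator assumed. */
static double inferred_temp(double j, double eps_assumed, double t_amb) {
    double emitted = j - (1.0 - eps_assumed) * SIGMA * pow(t_amb, 4.0);
    return pow(emitted / (eps_assumed * SIGMA), 0.25);
}

int main(void) {
    double t_obj = 330.0, t_amb = 295.0;        /* object and surroundings, K (illustrative) */
    double j = radiosity(0.60, t_obj, t_amb);   /* true emissivity of the surface: 0.60 */
    printf("reported with correct eps = 0.60: %.1f K\n", inferred_temp(j, 0.60, t_amb));
    printf("reported with wrong   eps = 0.95: %.1f K\n", inferred_temp(j, 0.95, t_amb));
    return 0;   /* the wrong emissivity setting yields a reading roughly 12 K too cool */
}
```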
Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nm or 9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature, therefore thermography allows one to see variations in temperature (hence the name). Hyperspectral imaging A hyperspectral image is a "picture" containing continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy particularly with NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements. Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the Sun or the Moon. Such cameras are typically applied for geological measurements, outdoor surveillance and UAV applications. Other imaging In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can view intense near-infrared, appearing as a bright purple-white color. This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy. Tracking Infrared tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles that use infrared seeking are often referred to as "heat-seekers" since infrared (IR) is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate and retain heat, and as such, are especially visible in the infrared wavelengths of light compared to objects in the background. Heating Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as to remove ice from the wings of aircraft (de-icing). Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating. Cooling A variety of technologies or proposed technologies take advantage of infrared emissions to cool buildings or other systems. 
The LWIR (8–15 μm) region is especially useful since some radiation at these wavelengths can escape into space through the atmosphere's infrared window. This is how passive daytime radiative cooling (PDRC) surfaces are able to achieve sub-ambient cooling temperatures under direct solar intensity, enhancing terrestrial heat flow to outer space with zero energy consumption or pollution. PDRC surfaces maximize shortwave solar reflectance to lessen heat gain while maintaining strong longwave infrared (LWIR) thermal radiation heat transfer. When imagined on a worldwide scale, this cooling method has been proposed as a way to slow and even reverse global warming, with some estimates proposing a global surface area coverage of 1-2% to balance global heat fluxes. Communications IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that may be concentrated by a lens into a beam that the user aims at the detector. The beam is modulated, i.e. switched on and off, according to a code which the receiver interprets. Usually very near-IR is used (below 800 nm) for practical reasons. This wavelength is efficiently detected by inexpensive silicon photodiodes, which the receiver uses to convert the detected radiation to an electric current. That electrical signal is passed through a high-pass filter which retains the rapid pulsations due to the IR transmitter but filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density. IR does not penetrate walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances. Infrared remote control protocols like RC-5, SIRC, are used to communicate with infrared. Free-space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared to the cost of burying fiber optic cable, except for the radiation damage. "Since the eye cannot detect IR, blinking or closing the eyes to help prevent or reduce damage may not happen." Infrared lasers are used to provide the light for optical fiber communications systems. Wavelengths around 1,330 nm (least dispersion) or 1,550 nm (best transmission) are the best choices for standard silica fibers. IR data transmission of audio versions of printed signs is being researched as an aid for visually impaired people through the Remote infrared audible signage project. Transmitting IR data from one device to another is sometimes referred to as beaming. IR is sometimes used for assistive audio as an alternative to an audio induction loop. Spectroscopy Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in dipole in the molecule then it will absorb a photon that has the same frequency. 
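To make the last point concrete, a chemical bond is often modelled as a harmonic oscillator whose vibrational frequency is set by the bond stiffness and the reduced mass of the two atoms. The sketch below uses an assumed, typical C–H force constant of about 500 N/m (a textbook value, not taken from this article); the resulting wavenumber, a unit defined in the following paragraph, comes out near 3,000 cm⁻¹, squarely in the mid-infrared:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double pi   = 3.14159265358979;
    const double c_cm = 2.99792458e10;   /* speed of light in cm/s */
    const double amu  = 1.66053907e-27;  /* atomic mass unit in kg */
    const double k    = 500.0;           /* assumed C-H stretching force constant, N/m */

    /* reduced mass of a C-H pair */
    double m_c = 12.0 * amu, m_h = 1.008 * amu;
    double mu  = (m_c * m_h) / (m_c + m_h);

    double nu = sqrt(k / mu) / (2.0 * pi);   /* vibrational frequency, Hz */
    double wavenumber = nu / c_cm;           /* spectroscopic wavenumber, cm^-1 */

    printf("C-H stretch: %.3g Hz  (~%.0f cm^-1)\n", nu, wavenumber);
    return 0;   /* ~9e13 Hz, i.e. roughly 3,000 cm^-1 */
}
```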
The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light radiation from the mid-infrared, 4,000–400 cm−1. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3200 cm−1). The unit for expressing radiation in this application, cm−1, is the spectroscopic wavenumber. It is the frequency divided by the speed of light in vacuum. Thin film metrology In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction Coefficient (k) can be determined via the Forouhi–Bloomer dispersion equations. The reflectance from the infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high aspect ratio trench structures. Meteorology Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface water temperatures, and to locate ocean surface features. The scanning is typically in the range 10.3–12.5 μm (IR4 and IR5 channels). Clouds with high and cold tops, such as cyclones or cumulonimbus clouds, are often displayed as red or black, lower warmer clouds such as stratus or stratocumulus are displayed as blue or grey, with intermediate clouds shaded accordingly. Hot land surfaces are shown as dark-grey or black. One disadvantage of infrared imagery is that low clouds such as stratus or fog can have a temperature similar to the surrounding land or sea surface and do not show up. However, using the difference in brightness of the IR4 channel (10.3–11.5 μm) and the near-infrared channel (1.58–1.64 μm), low clouds can be distinguished, producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied. These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the gray-shaded thermal images can be converted to color for easier identification of desired information. The main water vapour channel at 6.40 to 7.08 μm can be imaged by some weather satellites and shows the amount of moisture in the atmosphere. Climatology In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange between the Earth and the atmosphere. These trends provide information on long-term changes in Earth's climate. It is one of the primary parameters studied in research into global warming, together with solar radiation. A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. This is a broadband infrared radiometer with sensitivity for infrared radiation between approximately 4.5 μm and 50 μm. 
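For reference, the mid-infrared survey range quoted in the spectroscopy paragraph above converts directly between wavenumber, wavelength, and frequency, since the wavenumber is simply the reciprocal of the wavelength in centimetres; a minimal check:

```c
#include <stdio.h>

int main(void) {
    const double range_cm1[] = {4000.0, 400.0};   /* typical mid-IR range, cm^-1 */
    const double c = 2.99792458e10;               /* speed of light, cm/s */
    for (int i = 0; i < 2; i++) {
        double lambda_um = 1e4 / range_cm1[i];    /* wavelength in micrometres (10^4 um per cm) */
        double freq_hz   = range_cm1[i] * c;      /* frequency = wavenumber * c */
        printf("%6.0f cm^-1  ->  %5.1f um,  %.3g Hz\n", range_cm1[i], lambda_um, freq_hz);
    }
    return 0;   /* 4,000 cm^-1 is ~2.5 um and ~1.2e14 Hz; 400 cm^-1 is ~25 um and ~1.2e13 Hz */
}
```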
Astronomy Astronomers observe objects in the infrared portion of the electromagnetic spectrum using optical components, including mirrors, lenses and solid state digital detectors. For this reason it is classified as part of optical astronomy. To form an image, the components of an infrared telescope need to be carefully shielded from heat sources, and the detectors are chilled using liquid helium. The sensitivity of Earth-based infrared telescopes is significantly limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer from this handicap, and so outer space is considered the ideal location for infrared astronomy. The infrared portion of the spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy will glow with radiated heat as they are irradiated by imbedded stars. Infrared can also be used to detect protostars before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so nearby cool objects such as planets can be more readily detected. (In the visible light spectrum, the glare from the star will drown out the reflected light from a planet.) Infrared light is also useful for observing the cores of active galaxies, which are often cloaked in gas and dust. Distant galaxies with a high redshift will have the peak portion of their spectrum shifted toward longer wavelengths, so they are more readily observed in the infrared. Cleaning Infrared cleaning is a technique used by some motion picture film scanners, film scanners and flatbed scanners to reduce or remove the effect of dust and scratches upon the finished scan. It works by collecting an additional infrared channel from the scan at the same position and resolution as the three visible color channels (red, green, and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches and dust. Once located, those defects can be corrected by scaling or replaced by inpainting. Art conservation and analysis Infrared reflectography can be applied to paintings to reveal underlying layers in a non-destructive manner, in particular the artist's underdrawing or outline drawn as a guide. Art conservators use the technique to examine how the visible layers of paint differ from the underdrawing or layers in between (such alterations are called pentimenti when made by the original artist). This is very useful information in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered by over-enthusiastic restoration work. In general, the more pentimenti, the more likely a painting is to be the prime version. It also gives useful insights into working practices. Reflectography often reveals the artist's use of carbon black, which shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting. Recent progress in the design of infrared-sensitive cameras makes it possible to discover and depict not only underpaintings and pentimenti, but entire paintings that were later overpainted by the artist. 
Notable examples are Picasso's Woman Ironing and Blue Room, where in both cases a portrait of a man has been made visible under the painting as it is known today. Similar uses of infrared are made by conservators and scientists on various types of objects, especially very old written documents such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the Dunhuang Caves. Carbon black used in ink can show up extremely well. Biological systems The pit viper has a pair of infrared sensory pits on its head. There is uncertainty regarding the exact thermal sensitivity of this biological infrared detection system. Other organisms that have thermoreceptive organs are pythons (family Pythonidae), some boas (family Boidae), the Common Vampire Bat (Desmodus rotundus), a variety of jewel beetles (Melanophila acuminata), darkly pigmented butterflies (Pachliopta aristolochiae and Troides rhadamantus plateni), and possibly blood-sucking bugs (Triatoma infestans). By detecting the heat that their prey emits, crotaline and boid snakes identify and capture their prey using their IR-sensitive pit organs. Comparably, IR-sensitive pits on the Common Vampire Bat (Desmodus rotundus) aid in the identification of blood-rich regions on its warm-blooded victim. The jewel beetle Melanophila acuminata locates forest fires with its infrared pit organs and deposits its eggs on recently burnt trees. Thermoreceptors on the wings and antennae of butterflies with dark pigmentation, such as Pachliopta aristolochiae and Troides rhadamantus plateni, shield them from heat damage while they bask in the sun. Additionally, it is hypothesised that thermoreceptors let bloodsucking bugs (Triatoma infestans) locate their warm-blooded victims by sensing their body heat. Some fungi like Venturia inaequalis require near-infrared light for ejection. Although near-infrared vision (780–1,000 nm) has long been deemed impossible due to noise in visual pigments, sensation of near-infrared light was reported in the common carp and in three cichlid species. Fish use NIR to capture prey and for phototactic swimming orientation. NIR sensation in fish may be relevant under poor lighting conditions during twilight and in turbid surface waters. Photobiomodulation Near-infrared light, or photobiomodulation, is used for treatment of chemotherapy-induced oral ulceration as well as wound healing. There is some work relating to anti-herpes virus treatment. Research projects include work on central nervous system healing effects via cytochrome c oxidase upregulation and other possible mechanisms. Health hazards Strong infrared radiation in certain high-heat industrial settings may be hazardous to the eyes, resulting in damage or blindness to the user. Since the radiation is invisible, special IR-proof goggles must be worn in such places. Scientific history The discovery of infrared radiation is ascribed to William Herschel, the astronomer, in the early 19th century. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the Sun and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result and called these rays "Calorific Rays". The term "infrared" did not appear until the late 19th century. An earlier experiment in 1790 by Marc-Auguste Pictet demonstrated the reflection and focusing of radiant heat via mirrors in the absence of visible light. 
Other important dates include:
1830: Leopoldo Nobili made the first thermopile IR detector.
1840: John Herschel produces the first thermal image, called a thermogram.
1860: Gustav Kirchhoff formulated the blackbody theorem.
1873: Willoughby Smith discovered the photoconductivity of selenium.
1878: Samuel Pierpont Langley invents the first bolometer, a device which is able to measure small temperature fluctuations, and thus the power of far infrared sources.
1879: Stefan–Boltzmann law formulated empirically that the power radiated by a blackbody is proportional to T⁴.
1880s and 1890s: Lord Rayleigh and Wilhelm Wien solved part of the blackbody equation, but both solutions diverged in parts of the electromagnetic spectrum. This problem was called the "ultraviolet catastrophe and infrared catastrophe".
1892: Willem Henri Julius published infrared spectra of 20 organic compounds measured with a bolometer in units of angular displacement.
1901: Max Planck published the blackbody equation and theorem. He solved the problem by quantizing the allowable energy transitions.
1905: Albert Einstein developed the theory of the photoelectric effect.
1905–1908: William Coblentz published infrared spectra in units of wavelength (micrometers) for several chemical compounds in Investigations of Infra-Red Spectra.
1917: Theodore Case developed the thallous sulfide detector, which helped produce the first infrared search and track device able to detect aircraft at a range of one mile (1.6 km).
1935: Lead salts – early missile guidance in World War II.
1938: Yeou Ta predicted that the pyroelectric effect could be used to detect infrared radiation.
1945: The Zielgerät 1229 "Vampir" infrared weapon system was introduced as the first portable infrared device for military applications.
1952: Heinrich Welker grew synthetic InSb crystals.
1950s and 1960s: Nomenclature and radiometric units defined by Fred Nicodemenus, G. J. Zissis and R. Clark; Robert Clark Jones defined D*.
1958: W. D. Lawson (Royal Radar Establishment in Malvern) discovered IR detection properties of mercury cadmium telluride (HgCdTe).
1958: Falcon and Sidewinder missiles were developed using infrared technology.
1960s: Paul Kruse and his colleagues at Honeywell Research Center demonstrate the use of HgCdTe as an effective compound for infrared detection.
1962: J. Cooper demonstrated pyroelectric detection.
1964: W. G. Evans discovered infrared thermoreceptors in a pyrophile beetle.
1965: First IR handbook; first commercial imagers (Barnes, Agema (now part of FLIR Systems Inc.)); Richard Hudson's landmark text; F4 TRAM FLIR by Hughes; phenomenology pioneered by Fred Simmons and A. T. Stair; U.S. Army's night vision lab formed (now Night Vision and Electronic Sensors Directorate (NVESD)), and Rachets develops detection, recognition and identification modeling there.
1970: Willard Boyle and George E. Smith proposed CCD at Bell Labs for picture phone.
1973: Common module program started by NVESD.
1978: Infrared imaging astronomy came of age, observatories planned, IRTF on Mauna Kea opened; 32 × 32 and 64 × 64 arrays produced using InSb, HgCdTe and other materials.
2013: On 14 February, researchers developed a neural implant that gives rats the ability to sense infrared light, which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities.
Physical sciences
Electrodynamics
null
15029
https://en.wikipedia.org/wiki/Industry%20Standard%20Architecture
Industry Standard Architecture
Industry Standard Architecture (ISA) is the 16-bit internal bus of IBM PC/AT and similar computers based on the Intel 80286 and its immediate successors during the 1980s. The bus was (largely) backward compatible with the 8-bit bus of the 8088-based IBM PC, including the IBM PC/XT as well as IBM PC compatibles. Originally referred to as the PC bus (8-bit) or AT bus (16-bit), it was also termed I/O Channel by IBM. The ISA term was coined as a retronym by IBM PC clone manufacturers in the late 1980s or early 1990s as a reaction to IBM attempts to replace the AT bus with its new and incompatible Micro Channel architecture. The 16-bit ISA bus was also used with 32-bit processors for several years. An attempt to extend it to 32 bits, called Extended Industry Standard Architecture (EISA), was not very successful, however. Later buses such as VESA Local Bus and PCI were used instead, often along with ISA slots on the same mainboard. Derivatives of the AT bus structure were and still are used in ATA/IDE, the PCMCIA standard, CompactFlash, the PC/104 bus, and internally within Super I/O chips. Even though ISA disappeared from consumer desktops many years ago, it is still used in industrial PCs, where certain specialized expansion cards that never transitioned to PCI and PCI Express are used. History The original PC bus was developed by a team led by Mark Dean at IBM as part of the IBM PC project in 1981. It was an 8-bit bus based on the I/O bus of the IBM System/23 Datamaster system - it used the same physical connector, and a similar signal protocol and pinout. A 16-bit version, the IBM AT bus, was introduced with the release of the IBM PC/AT in 1984. The AT bus was a mostly backward-compatible extension of the PC bus—the AT bus connector was a superset of the PC bus connector. In 1988, the 32-bit EISA standard was proposed by the "Gang of Nine" group of PC-compatible manufacturers that included Compaq. Compaq created the term Industry Standard Architecture (ISA) to replace PC compatible. In the process, they retroactively renamed the AT bus to ISA to avoid infringing IBM's trademark on its PC and PC/AT systems (and to avoid giving their major competitor, IBM, free advertisement). IBM designed the 8-bit version as a buffered interface to the motherboard buses of the Intel 8088 (16/8 bit) CPU in the IBM PC and PC/XT, augmented with prioritized interrupts and DMA channels. The 16-bit version was an upgrade for the motherboard buses of the Intel 80286 CPU (and expanded interrupt and DMA facilities) used in the IBM AT, with improved support for bus mastering. The ISA bus was therefore synchronous with the CPU clock until sophisticated buffering methods were implemented by chipsets to interface ISA to much faster CPUs. ISA was designed to connect peripheral cards to the motherboard and allows for bus mastering. Only the first 16 MB of main memory is addressable. The original 8-bit bus ran from the 4.77 MHz clock of the 8088 CPU in the IBM PC and PC/XT. The original 16-bit bus ran from the CPU clock of the 80286 in IBM PC/AT computers, which was 6 MHz in the first models and 8 MHz in later models. The IBM RT PC also used the 16-bit bus. ISA was also used in some non-IBM compatible machines such as Motorola 68k-based Apollo (68020) and Amiga 3000 (68030) workstations, the short-lived AT&T Hobbit and the later PowerPC-based BeBox. Companies like Dell improved the AT bus's performance but in 1987, IBM replaced the AT bus with its proprietary Micro Channel Architecture (MCA). 
MCA overcame many of the limitations then apparent in ISA but was also an effort by IBM to regain control of the PC architecture and the PC market. MCA was far more advanced than ISA and had many features that would later appear in PCI. However, MCA was also a closed standard whereas IBM had released full specifications and circuit schematics for ISA. Computer manufacturers responded to MCA by developing the Extended Industry Standard Architecture (EISA) and the later VESA Local Bus (VLB). VLB used some electronic parts originally intended for MCA because component manufacturers were already equipped to manufacture them. Both EISA and VLB were backward-compatible expansions of the AT (ISA) bus. Users of ISA-based machines had to know special information about the hardware they were adding to the system. While a handful of devices were essentially plug-n-play, this was rare. Users frequently had to configure parameters when adding a new device, such as the IRQ line, I/O address, or DMA channel. MCA had done away with this complication and PCI actually incorporated many of the ideas first explored with MCA, though it was more directly descended from EISA. This trouble with configuration eventually led to the creation of ISA PnP, a plug-n-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. In reality, ISA PnP could be troublesome and did not become well-supported until the architecture was in its final days. A PnP ISA, EISA or VLB device may have a 5-byte EISA ID (3-byte manufacturer ID + 2-byte hex number) to identify the device. For example, CTL0044 corresponds to Creative Sound Blaster 16 / 32 PnP. PCI slots were the first physically incompatible expansion ports to directly squeeze ISA off the motherboard. At first, motherboards were largely ISA, including a few PCI slots. By the mid-1990s, the two slot types were roughly balanced, and ISA slots soon were in the minority of consumer systems. Microsoft's PC-99 specification recommended that ISA slots be removed entirely, though the system architecture still required ISA to be present in some vestigial way internally to handle the floppy drive, serial ports, etc., which was why the software compatible LPC bus was created. ISA slots remained for a few more years and towards the turn of the century it was common to see systems with an Accelerated Graphics Port (AGP) sitting near the central processing unit, an array of PCI slots, and one or two ISA slots near the end. In late 2008, even floppy disk drives and serial ports were disappearing, and the extinction of vestigial ISA (by then the LPC bus) from chipsets was on the horizon. PCI slots are rotated compared to their ISA counterparts—PCI cards were essentially inserted upside-down, allowing ISA and PCI connectors to squeeze together on the motherboard. Only one of the two connectors can be used in each slot at a time, but this allowed for greater flexibility. The AT Attachment (ATA) hard disk interface is directly descended from the 16-bit ISA of the PC/AT. ATA has its origins in the IBM Personal Computer Fixed Disk and Diskette Adapter, the standard dual-function floppy disk controller and hard disk controller card for the IBM PC AT; the fixed disk controller on this card implemented the register set and the basic command set which became the basis of the ATA interface (and which differed greatly from the interface of IBM's fixed disk controller card for the PC XT). 
Direct precursors to ATA were third-party ISA hardcards that integrated a hard disk drive (HDD) and a hard disk controller (HDC) onto one card. This was at best awkward and at worst damaging to the motherboard, as ISA slots were not designed to support such heavy devices as HDDs. The next generation of Integrated Drive Electronics drives moved both the drive and controller to a drive bay and used a ribbon cable and a very simple interface board to connect it to an ISA slot. ATA is basically a standardization of this arrangement plus a uniform command structure for software to interface with the HDC within the drive. ATA has since been separated from the ISA bus and connected directly to the local bus, usually by integration into the chipset, for much higher clock rates and data throughput than ISA could support. ATA has clear characteristics of 16-bit ISA, such as a 16-bit transfer size, signal timing in the PIO modes and the interrupt and DMA mechanisms. ISA bus architecture The PC/XT-bus is an eight-bit ISA bus used by Intel 8086 and Intel 8088 systems in the IBM PC and IBM PC XT in the 1980s. Among its 62 pins were demultiplexed and electrically buffered versions of the 8 data and 20 address lines of the 8088 processor, along with power lines, clocks, read/write strobes, interrupt lines, etc. Power lines included −5 V and ±12 V in order to directly support pMOS and enhancement mode nMOS circuits such as dynamic RAMs among other things. The XT bus architecture uses a single Intel 8259 PIC, giving eight vectorized and prioritized interrupt lines. It has four DMA channels originally provided by the Intel 8237. Three of the DMA channels are brought out to the XT bus expansion slots; of these, 2 are normally already allocated to machine functions (diskette drive and hard disk controller): The PC/AT-bus, a 16-bit (or 80286-) version of the PC/XT bus, was introduced with the IBM PC/AT. This bus was officially termed I/O Channel by IBM. It extends the XT-bus by adding a second shorter edge connector in-line with the eight-bit XT-bus connector, which is unchanged, retaining compatibility with most 8-bit cards. The second connector adds four additional address lines for a total of 24, and 8 additional data lines for a total of 16. It also adds new interrupt lines connected to a second 8259 PIC (connected to one of the lines of the first) and 4 × 16-bit DMA channels, as well as control lines to select 8- or 16-bit transfers. The 16-bit AT bus slot originally used two standard edge connector sockets in early IBM PC/AT machines. However, with the popularity of the AT architecture and the 16-bit ISA bus, manufacturers introduced specialized 98-pin connectors that integrated the two sockets into one unit. These can be found in almost every AT-class PC manufactured after the mid-1980s. The ISA slot connector is typically black (distinguishing it from the brown EISA connectors and white PCI connectors). Number of devices Motherboard devices have dedicated IRQs (not present in the slots). 16-bit devices can use either PC-bus or PC/AT-bus IRQs. It is therefore possible to connect up to 6 devices that use one 8-bit IRQ each and up to 5 devices that use one 16-bit IRQ each. At the same time, up to 4 devices may use one 8-bit DMA channel each, while up to 3 devices can use one 16-bit DMA channel each. 
Varying bus speeds Originally, the bus clock was synchronous with the CPU clock, resulting in varying bus clock frequencies among the many different IBM clones on the market (sometimes as high as 16 or 20 MHz), leading to software or electrical timing problems for certain ISA cards at bus speeds they were not designed for. Later motherboards or integrated chipsets used a separate clock generator, or a clock divider which either fixed the ISA bus frequency at 4, 6, or 8 MHz or allowed the user to adjust the frequency via the BIOS setup. When used at a higher bus frequency, some ISA cards (certain Hercules-compatible video cards, for instance), could show significant performance improvements. 8/16-bit incompatibilities Memory address decoding for the selection of 8 or 16-bit transfer mode was limited to 128 KB sections, leading to problems when mixing 8- and 16-bit cards as they could not co-exist in the same 128 KB area. This is because the MEMCS16 line is required to be set based on the value of LA17-23 only. Past and current use ISA is still used today for specialized industrial purposes. In 2008, IEI Technologies released a modern motherboard for Intel Core 2 Duo processors which, in addition to other special I/O features, is equipped with two ISA slots. It was marketed to industrial and military users who had invested in expensive specialized ISA bus adaptors, which were not available in PCI bus versions. Similarly, ADEK Industrial Computers released a modern motherboard in early 2013 for Intel Core i3/i5/i7 processors, which contains one (non-DMA) ISA slot. Also, MSI released a modern motherboard with one ISA slot in 2020. The PC/104 bus, used in industrial and embedded applications, is a derivative of the ISA bus, utilizing the same signal lines with different connectors. The LPC bus has replaced the ISA bus as the connection to the legacy I/O devices on current motherboards; while physically quite different, LPC looks just like ISA to software, so the peculiarities of ISA such as the 16 MiB DMA limit (which corresponds to the full address space of the Intel 80286 CPU used in the original IBM AT) are likely to stick around for a while. ATA As explained in the History section, ISA was the basis for development of the ATA interface, used for ATA (a.k.a. IDE) hard disks. Physically, ATA is essentially a simple subset of ISA, with 16 data bits, support for exactly one IRQ and one DMA channel, and 3 address bits. To this ISA subset, ATA adds two IDE address select ("chip select") lines (i.e. address decodes, effectively equivalent to address bits) and a few unique signal lines specific to ATA/IDE hard disks (such as the Cable Select/Spindle Sync. line.) In addition to the physical interface channel, ATA goes beyond and far outside the scope of ISA by also specifying a set of physical device registers to be implemented on every ATA (IDE) drive and a full set of protocols and device commands for controlling fixed disk drives using these registers. The ATA device registers are accessed using the address bits and address select signals in the ATA physical interface channel, and all operations of ATA hard disks are performed using the ATA-specified protocols through the ATA command set. 
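To illustrate the register-level model just described, the sketch below issues the IDENTIFY DEVICE command through the legacy primary-channel register block that PC-compatible systems conventionally decode at I/O ports 0x1F0–0x1F7. This is a hedged, minimal user-space example for Linux on x86: it needs root privileges, the ioperm call, and hardware that still exposes the legacy port block (increasingly rare); timeouts, drive-selection delays, packet devices, and most error handling are omitted.

```c
/* Minimal sketch: read the 256-word IDENTIFY DEVICE block from the primary
 * ATA channel's legacy I/O registers. Assumes Linux on x86 with root rights
 * and hardware that still decodes the legacy port block at 0x1F0-0x1F7. */
#include <stdio.h>
#include <stdint.h>
#include <sys/io.h>

#define ATA_DATA    0x1F0   /* 16-bit data register */
#define ATA_DRVHEAD 0x1F6   /* drive/head select register */
#define ATA_CMDSTAT 0x1F7   /* command register (write) / status register (read) */
#define STAT_BSY    0x80
#define STAT_DRQ    0x08

int main(void) {
    uint16_t id[256];

    if (ioperm(ATA_DATA, 8, 1) != 0) {       /* request access to ports 0x1F0-0x1F7 */
        perror("ioperm");
        return 1;
    }

    outb(0xA0, ATA_DRVHEAD);                 /* select the master device */
    outb(0xEC, ATA_CMDSTAT);                 /* IDENTIFY DEVICE command */

    int tries = 1000000;                     /* bounded wait for the drive to clear BSY */
    while ((inb(ATA_CMDSTAT) & STAT_BSY) && --tries)
        ;
    if (tries == 0 || !(inb(ATA_CMDSTAT) & STAT_DRQ)) {
        fprintf(stderr, "no response on the legacy primary channel\n");
        return 1;
    }

    for (int i = 0; i < 256; i++)            /* the data register yields 256 16-bit words */
        id[i] = inw(ATA_DATA);

    printf("word 0 of IDENTIFY data: 0x%04x\n", id[0]);
    return 0;
}
```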
The earliest versions of the ATA standard featured a few simple protocols and a basic command set comparable to the command sets of MFM and RLL controllers (which preceded ATA controllers), but the latest ATA standards have much more complex protocols and instruction sets that include optional commands and protocols providing such advanced optional-use features as sizable hidden system storage areas, password security locking, and programmable geometry translation. In the mid-1990s, the ATA host controller (usually integrated into the chipset) was moved to PCI form. A further deviation between ISA and ATA is that while the ISA bus remained locked into a single standard clock rate (for backward hardware compatibility), the ATA interface offered many different speed modes, could select among them to match the maximum speed supported by the attached drives, and kept adding faster speeds with later versions of the ATA standard (up to for ATA-6, the latest.) In most forms, ATA ran much faster than ISA, provided it was connected directly to a local bus (e.g. southbridge-integrated IDE interfaces) faster than the ISA bus. XT-IDE Before the 16-bit ATA/IDE interface, there was an 8-bit XT-IDE (also known as XTA) interface for hard disks. It was not nearly as popular as ATA has become, and XT-IDE hardware is now fairly hard to find. Some XT-IDE adapters were available as 8-bit ISA cards, and XTA sockets were also present on the motherboards of Amstrad's later XT clones as well as a short-lived line of Philips units. The XTA pinout was very similar to ATA, but only eight data lines and two address lines were used, and the physical device registers had completely different meanings. A few hard drives (such as the Seagate ST351A/X) could support either type of interface, selected with a jumper. Many later AT (and AT successor) motherboards had no integrated hard drive interface but relied on a separate hard drive interface plugged into an ISA/EISA/VLB slot. There were even a few 80486-based units shipped with MFM/RLL interfaces and drives instead of the increasingly common AT-IDE. Commodore built the XT-IDE-based peripheral hard drive and memory expansion unit A590 for their Amiga 500 and 500+ computers that also supported a SCSI drive. Later models – the A600, A1200, and the Amiga 4000 series – use AT-IDE drives. PCMCIA The PCMCIA specification can be seen as a superset of ATA. The standard for PCMCIA hard disk interfaces, which included PCMCIA flash drives, allows for the mutual configuration of the port and the drive in an ATA mode. As a de facto extension, most PCMCIA flash drives additionally allow for a simple ATA mode that is enabled by pulling a single pin low, so that PCMCIA hardware and firmware are unnecessary to use them as an ATA drive connected to an ATA port. PCMCIA flash drive to ATA adapters are thus simple and inexpensive but are not guaranteed to work with any and every standard PCMCIA flash drive. Further, such adapters cannot be used as generic PCMCIA ports, as the PCMCIA interface is much more complex than ATA. Emulation by embedded chips Although most modern computers do not have physical ISA buses, almost all PCs — IA-32, and x86-64 — have ISA buses allocated in physical address space. Some Southbridges and some CPUs themselves provide services such as temperature monitoring and voltage readings through ISA buses as ISA devices. Standardization IEEE started a standardization of the ISA bus in 1985, called the P996 specification. 
However, despite books being published on the P996 specification, it never officially progressed past draft status. Modern ISA cards There is still a user base with old computers, so some ISA cards are still manufactured, e.g. cards with USB ports, or complete single-board computers based on modern processors with USB 3.0 and SATA.
Technology
Computer hardware
null
15032
https://en.wikipedia.org/wiki/IBM%20Personal%20Computer
IBM Personal Computer
The IBM Personal Computer (model 5150, commonly known as the IBM PC) is the first microcomputer released in the IBM PC model line and the basis for the IBM PC compatible de facto standard. Released on August 12, 1981, it was created by a team of engineers and designers at International Business Machines (IBM), directed by William C. Lowe and Philip Don Estridge in Boca Raton, Florida. Powered by an x86-architecture Intel 8088 processor, the machine was based on open architecture and third-party peripherals. Over time, expansion cards and software technology increased to support it. The PC had a substantial influence on the personal computer market; the specifications of the IBM PC became one of the most popular computer design standards in the world. The only significant competition it faced from a non-compatible platform throughout the 1980s was from Apple's Macintosh product line, as well as consumer-grade platforms created by companies like Commodore and Atari. Most present-day personal computers share architectural features in common with the original IBM PC, including the Intel-based Mac computers manufactured from 2006 to 2022. History Prior to the 1980s, IBM had largely been known as a provider of business computer systems. As the 1980s opened, their market share in the growing minicomputer market failed to keep up with competitors, while other manufacturers were beginning to see impressive profits in the microcomputer space. The market for personal computers was dominated at the time by Tandy, Commodore, and Apple, whose machines sold for several hundred dollars each and had become very popular. The microcomputer market was large enough for IBM's attention, with $15 billion in sales by 1979 and projected annual growth of more than 40% during the early 1980s. Other large technology companies had entered it, such as Hewlett-Packard, Texas Instruments and Data General, and some large IBM customers were buying Apples. As early as 1980 there were rumors of IBM developing a personal computer, possibly a miniaturized version of the IBM System/370, and Matsushita acknowledged publicly that it had discussed with IBM the possibility of manufacturing a personal computer in partnership, although this project was abandoned. The public responded to these rumors with skepticism, owing to IBM's tendency towards slow-moving, bureaucratic business practices tailored towards the production of large, sophisticated and expensive business systems. As with other large computer companies, its new products typically required about four to five years for development, and a well publicized quote from an industry analyst was, "IBM bringing out a personal computer would be like teaching an elephant to tap dance." IBM had previously produced microcomputers, such as 1975's IBM 5100, but targeted them towards businesses; the 5100 had a price tag as high as $20,000. Their entry into the home computer market needed to be competitively priced. In the summer of 1979, Ron Mion, IBM’s Senior Business Trends Advisor for entry-level systems, proposed a plan for IBM to enter the emerging microcomputer market. At that time, the likes of Apple and Tandy were starting to encroach on the small-business marketplace that IBM intended to dominate. Mion believed that that market would grow significantly and that IBM should aggressively pursue it. However, he felt that they wouldn’t be successful unless IBM departed from its long-standing business model. 
Mion’s plan called for three major departures from how IBM traditionally did business. Mion felt that, if IBM wanted to compete in the microcomputer market, it would need to:
a) Greatly reduce manufacturing costs by using standard, off-the-shelf components (e.g., disk drives, CRTs, power supplies, keyboards) in order to produce a competitively priced microcomputer.
b) Use a low-cost, third-party operating system. Mion felt that this was imperative in order to foster a cottage industry that could develop a broad array of applications that would help small businesses justify the purchase of a computer. Mion recommended Digital Research’s CP/M and a new O/S called MS-DOS from a little-known company named Microsoft.
c) Allow its microcomputers to be sold and serviced by a distribution channel consisting of independent resellers. (At that time, IBM had been experimenting with a chain of IBM Business Systems Center storefronts but their least-expensive computer cost $14,000.)
That plan made its way up the chain of command but was ultimately rejected in the fall. The top IBM executives reaffirmed that all “IBM” computers, and their major components, must be developed, manufactured, sold, and serviced by IBM. In January 1980, Tandy released their Annual Report and, as was predicted in Mion's plan, it confirmed that their 1979 shipments had exceeded 100,000 TRS-80s (about $50 million worth). IBM quickly dusted off Mion’s marketing plan. In 1980, IBM president John Opel, recognizing the value of entering this growing market, assigned William C. Lowe and Philip Don Estridge as heads of the new Entry Level Systems unit in Boca Raton, Florida. Market research found that computer dealers were very interested in selling an IBM product, but they insisted the company use a design based on standard parts, not IBM-designed ones, so that stores could perform their own repairs rather than requiring customers to send machines back to IBM for service. Another source cites time pressure as the reason for the decision to use third-party components. Atari proposed to IBM in 1980 that it act as original equipment manufacturer for an IBM microcomputer, a potential solution to IBM's known inability to move quickly to meet a rapidly changing market. The idea of acquiring Atari was considered but rejected in favor of a proposal by Lowe that by forming an independent internal working group and abandoning all traditional IBM methods, a design could be delivered within a year and a prototype within 30 days. The prototype worked poorly but was presented with a detailed business plan which proposed that the new computer have an open architecture, use non-proprietary components and software, and be sold through retail stores, all contrary to IBM practice. It also estimated sales of 220,000 computers over three years, more than IBM's entire installed base. This swayed the Corporate Management Committee, which converted the group into a business unit named "Project Chess", and provided the necessary funding and authority to do whatever was needed to develop the computer in the given timeframe. The team received permission to expand to 150 people by the end of 1980, and in one day more than 500 IBM employees called in asking to join. Design process The design process was kept under a policy of strict secrecy, with all other IBM divisions kept in the dark about the project. Several CPUs were considered, including the Texas Instruments TMS9900, Motorola 68000 and Intel 8088. 
The 68000 was considered the best choice, but unlike the others it was not production-ready. The IBM 801 RISC processor was also considered, since it was considerably more powerful than the other options, but rejected due to the design constraint to use off-the-shelf parts. The TMS9900 was rejected as it was inferior to the Intel 8088. IBM chose the 8088 over the similar but superior 8086 because Intel offered a better price for the former and could provide more units, and the 8088's 8-bit bus reduced the cost of the rest of the computer. The 8088 had the further advantage of familiarity: IBM's engineers already knew the related Intel 8085 from designing the IBM System/23 Datamaster. The 62-pin expansion bus slots were also designed to be similar to the Datamaster slots, and the Datamaster's keyboard design and layout became the Model F keyboard shipped with the PC, but otherwise the PC design differed in many ways. The 8088 motherboard was designed in 40 days, with a working prototype created in four months, demonstrated in January 1981. The design was essentially complete by April 1981, when it was handed off to the manufacturing team. PCs were assembled in an IBM plant in Boca Raton, with components made at various IBM and third-party factories. The monitor was an existing design from IBM Japan; the printer was manufactured by Epson. Because none of the functional components were designed by IBM, the company obtained only a handful of patents on the PC, covering such features as the bytecoding for color monitors, DMA access operation, and the keyboard interface. These patents were never enforced. Many of the designers were computer hobbyists who owned their own computers, including many Apple II owners, which influenced the decisions to design the computer with an open architecture and to publish technical information so others could create compatible software and expansion slot peripherals. During the design process IBM avoided vertical integration as much as possible, for example choosing to license Microsoft BASIC rather than utilizing the in-house version of BASIC used for mainframes, due to the better existing public familiarity with the Microsoft version. Debut The IBM PC debuted on August 12, 1981, after twelve months of development. Pricing started at $1,565 for a configuration with 16 KB RAM, Color Graphics Adapter, keyboard, and no disk drives. The price was designed to compete with comparable machines in the market. For comparison, the Datamaster, announced two weeks earlier as IBM's least expensive computer, cost $10,000. IBM's marketing campaign licensed the likeness of Charlie Chaplin's character "The Little Tramp" for a series of advertisements based on Chaplin's movies, played by Billy Scudder. The PC was IBM's first attempt to sell a computer through retail channels rather than directly to customers. Because IBM did not have retail experience, it partnered with the retail chains ComputerLand and Sears, which provided important knowledge of the marketplace and became the main outlets for the PC. More than 190 ComputerLand stores already existed, while Sears was in the process of creating a handful of in-store computer centers for sale of the new product. Reception was overwhelmingly positive, with analysts estimating sales volume in the billions of dollars in the first few years after release. After release, IBM's PC immediately became the talk of the entire computing industry. Dealers were overwhelmed with orders, including customers offering pre-payment for machines with no guaranteed delivery date.
By the time the machine began shipping, the term "PC" was becoming a household name. Success Sales exceeded IBM's expectations by as much as 800% (9x), with the company at one point shipping as many as 40,000 PCs per month. IBM estimated that home users made up 50 to 70% of purchases from retail stores. In 1983, IBM sold more than 750,000 machines, while Digital Equipment Corporation, one of the companies whose success had spurred IBM to enter the market, sold only 69,000. Software support from the industry grew rapidly, with the IBM PC nearly instantly becoming the primary target for most microcomputer software development. One publication counted 753 software packages available a year after the PC's release, four times as many as were available for the Macintosh a year after its launch. Hardware support also grew rapidly, with 30–40 companies competing to sell memory expansion cards within a year. By 1984, IBM's revenue from the PC market was $4 billion, more than twice that of Apple. A 1983 study of corporate customers found that two-thirds of large customers standardizing on one computer chose the PC, while only 9% chose Apple. A 1985 Fortune survey found that 56% of American companies with personal computers used PCs, while 16% used Apple. Almost as soon as the PC reached the market, rumors of clones began, and the first legal PC-compatible clone—the MPC 1600 by Columbia Data Products—was released in June 1982, less than a year after the PC's debut. Eventually, IBM sold its PC business to Lenovo in a deal announced in 2004 and completed in 2005. Hardware For low cost and a quick design turnaround time, the hardware design of the IBM PC used entirely "off-the-shelf" parts from third-party manufacturers, rather than unique hardware designed by IBM. The PC is housed in a wide, short steel chassis intended to support the weight of a CRT monitor. The front panel is made of plastic, with an opening where one or two disk drives can be installed. The back panel houses a power inlet and switch, a keyboard connector, a cassette connector and a series of tall vertical slots with blank metal panels which can be removed in order to install expansion cards. Internally, the chassis is dominated by a motherboard which houses the CPU, built-in RAM, expansion RAM sockets, and slots for expansion cards. The IBM PC was highly expandable and upgradeable; the major subsystems of the base factory configuration are described below. Motherboard The PC is built around a single large circuit board called a motherboard which carries the processor, built-in RAM, expansion slots, keyboard and cassette ports, and the various peripheral integrated circuits that connected and controlled the components of the machine. The peripheral chips included an Intel 8259 PIC, an Intel 8237 DMA controller, and an Intel 8253 PIT. The PIT provides clock "ticks" and dynamic memory refresh timing. CPU and RAM The CPU is an Intel 8088, a cost-reduced form of the Intel 8086 which largely retains the 8086's internal 16-bit logic, but exposes only an 8-bit bus. The CPU is clocked at 4.77 MHz, which would eventually become an issue when clones and later PC models offered higher CPU speeds that broke compatibility with software developed for the original PC. The single base clock frequency for the system was 14.31818 MHz, which, when divided by 3, yielded the 4.77 MHz for the CPU (considered close enough to the then 5 MHz limit of the 8088), and, when divided by 4, yielded the required 3.579545 MHz for the NTSC color carrier frequency.
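All of the machine's otherwise odd-looking frequencies are simple integer divisions of that single 14.31818 MHz crystal. A quick arithmetic check, as a Python sketch (the 8253 PIT input clock of 14.31818 MHz divided by 12, and the resulting roughly 18.2 Hz default timer tick, are well-documented PC facts included here for context):

# Derivation of the IBM PC's clock frequencies from the single
# 14.31818 MHz master crystal (4x the NTSC color-burst frequency).
MASTER_HZ = 14_318_180

cpu_hz = MASTER_HZ / 3    # 8088 CPU clock
ntsc_hz = MASTER_HZ / 4   # NTSC color carrier for CGA composite video
pit_hz = MASTER_HZ / 12   # input clock of the 8253 timer (PIT)

print(f"CPU clock:  {cpu_hz / 1e6:.6f} MHz")   # ~4.772727 MHz
print(f"NTSC color: {ntsc_hz / 1e6:.6f} MHz")  # 3.579545 MHz
print(f"PIT input:  {pit_hz / 1e6:.6f} MHz")   # ~1.193182 MHz

# With its largest divisor (65536), PIT channel 0 produces the familiar
# DOS timer tick of about 18.2 interrupts per second.
print(f"Timer tick: {pit_hz / 65536:.4f} Hz")  # ~18.2065 Hz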
The PC motherboard included a second, empty socket, described by IBM simply as an "auxiliary processor socket", although the most obvious use was the addition of an Intel 8087 math coprocessor, which improved floating-point math performance. PC mainboards were manufactured with the first memory bank of initially Mostek 4116-compatible, later 4164-compatible, DIP DRAMs soldered to the board, for a minimum configuration of 16 KB of RAM at first, or 64 KB on later boards. Memory upgrades were provided by IBM and third parties, both for socketed installation in three further onboard banks and as ISA expansion cards. The early 16 KB mainboards could be upgraded to a maximum of 64 KB onboard, and the more common 64 KB revision to a maximum of 256 KB on the motherboard. RAM cards could upgrade either variant further, for a total of 640 KB of conventional memory, and possibly several megabytes of expanded memory beyond that. On PC/XT-class machines, however, expanded memory was an expensive third-party hardware option that only became available later in the IBM 5150's lifecycle, and it was usable only with dedicated software support (i.e., accessible only via a RAM window in the Upper Memory Area). It was therefore rarely installed on the original IBM PC, much less fully populated, so the machine's maximum RAM configuration as commonly understood was 640 KB. ROM BIOS The BIOS is the firmware of the IBM PC, occupying one 8 KB chip on the motherboard. It provides bootstrap code and a library of common functions that all software can use for many purposes, such as video output, keyboard input, disk access, interrupt handling, testing memory, and other functions. IBM shipped three versions of the BIOS throughout the PC's lifespan. Display While most home computers had built-in video output hardware, IBM took the unusual approach of offering two different graphics options, the MDA and CGA cards. The former provided high-resolution monochrome text, but could not display anything except text, while the latter provided medium- and low-resolution color graphics and text. CGA used the same scan rate as NTSC television, allowing it to provide a composite video output which could be used with any compatible television or composite monitor, as well as a direct-drive TTL output suitable for use with any RGBI monitor using an NTSC scan rate. IBM also sold the 5153 color monitor for this purpose, but it was not available at the PC's launch and was not released until March 1983. MDA scanned at a higher frequency and required a proprietary monitor, the IBM 5151. The card also included a built-in printer port. Both cards could also be installed simultaneously for mixed graphics and text applications. For instance, AutoCAD, Lotus 1-2-3 and other software allowed use of a CGA monitor for graphics and a separate monochrome monitor for text menus. Third parties went on to provide an enormous variety of aftermarket graphics adapters, such as the Hercules Graphics Card. The software and hardware of the PC, at release, were designed around a single 8-bit adaptation of the ASCII character set, now known as code page 437. Storage The two bays in the front of the machine could be populated with one or two 5.25″ floppy disk drives, storing 160 KB per disk side for a total of 320 KB of storage on one disk. The floppy drives require a controller card inserted in an expansion slot, and connect with a single ribbon cable with two edge connectors.
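Those capacities follow directly from the disk geometry PC DOS used: 40 tracks per side, 512-byte sectors, and either 8 or 9 sectors per track. A small sketch of the arithmetic (the 9-sector variants correspond to the formats discussed under Software below):

# Capacity of IBM PC floppy formats:
# tracks x sectors/track x 512-byte sectors x sides used.
def floppy_kb(tracks: int, sectors: int, sides: int) -> float:
    return tracks * sectors * sides * 512 / 1024

print(floppy_kb(40, 8, 1))  # 160.0 KB - PC DOS 1.00, single-sided
print(floppy_kb(40, 8, 2))  # 320.0 KB - PC DOS 1.1, double-sided
print(floppy_kb(40, 9, 1))  # 180.0 KB - 9-sector format, March 1983
print(floppy_kb(40, 9, 2))  # 360.0 KB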
The IBM floppy controller card provides an external 37-pin D-sub connector for attachment of an external disk drive, although IBM did not offer one for purchase until 1986. As was common for home computers of the era, the IBM PC offered a port for connecting a cassette data recorder. Unlike on the typical home computer, however, this was never a major avenue for software distribution, probably because very few PCs were sold without floppy drives. The port was removed on the very next PC model, the XT. At release, IBM did not offer any hard disk drive option, and adding one was difficult: the PC's stock power supply had inadequate power to run a hard drive, the motherboard did not support the BIOS expansion ROMs needed for a hard drive controller, and neither PC DOS nor the BIOS supported hard disks. After the XT was released, IBM altered the design of the 5150 to add most of these capabilities, except for the upgraded power supply. At this point adding a hard drive was possible, but required the purchase of the IBM 5161 Expansion Unit, which contained a dedicated power supply and included a hard drive. Although official hard drive support did not exist, the third-party market did provide early hard drives that connected to the floppy disk controller, but these required a patched version of PC DOS to support the larger disk sizes. Human interface The only option for human interface provided in the base PC was the built-in keyboard port, meant to connect to the included Model F keyboard. The Model F was initially developed for the IBM Datamaster, and was substantially better than the keyboards provided with virtually all home computers on the market at that time in many regards: number of keys, reliability and ergonomics. While some home computers of the time utilized chiclet keyboards or inexpensive mechanical designs, the IBM keyboard provided good ergonomics, reliable and positive tactile key mechanisms and flip-up feet to adjust its angle. Public reception of the keyboard was extremely positive, with some sources describing it as a major selling point of the PC and even as "the best keyboard available on any microcomputer." At release, IBM provided a Game Control Adapter which offered a 15-pin port intended for the connection of up to two joysticks, each having two analog axes and two buttons. (The early PCs predated the advent of the "Windows, Icons, Mouse, Pointer" concept and so did not have a mouse.) Communications Connectivity to other computers and peripherals was initially provided through serial and parallel ports. IBM provided a serial card based on an 8250 UART. The BIOS supports up to two serial ports. IBM provided two different options for connecting Centronics-compatible parallel printers. One was the IBM Printer Adapter, and the other was integrated into the MDA as the IBM Monochrome Display and Printer Adapter. Expansion The expansion capability of the IBM PC was very significant to its success in the market. Some publications highlighted IBM's uncharacteristic decision to publish complete, thorough specifications of the system bus and memory map immediately on release, with the intention of fostering a market of compatible third-party hardware and software. The motherboard includes five 62-pin card edge connectors which are connected to the CPU's I/O lines. IBM referred to these as "I/O slots", but after the expansion of the PC clone industry they became retroactively known as the ISA bus.
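As an illustration of how software drove the serial card mentioned above: the 8250 UART derives its baud rate from a 1.8432 MHz input clock, divided by 16 and then by a programmable 16-bit divisor written to the chip's divisor-latch registers. A hedged sketch of that calculation (the clock value and divisor scheme are standard 8250 behavior per its datasheet; this is illustrative arithmetic, not IBM's own code):

# Baud-rate divisors for the 8250 UART on the IBM serial card.
# Output rate = clock / (16 * divisor), so divisor = clock / (16 * baud).
UART_CLOCK_HZ = 1_843_200

def divisor_for(baud: int) -> int:
    return UART_CLOCK_HZ // (16 * baud)

for baud in (110, 300, 1200, 2400, 9600):
    d = divisor_for(baud)
    print(f"{baud:>5} baud -> divisor {d} (0x{d:04X})")
# 9600 baud, for example, needs a divisor of 12.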
At the back of the machine is a metal panel, integrated into the steel chassis of the system unit, with a series of vertical slots lined up with each card slot. Most expansion cards have a matching metal bracket which slots into one of these openings, serving two purposes. First, a screw inserted through a tab on the bracket into the chassis fastens the card securely in place, preventing the card from wiggling out of place. Second, any ports the card provides for external attachment are bolted to the bracket, keeping them secured in place as well. The PC expansion slots can accept an enormous variety of expansion hardware, adding capabilities such as:
Graphics
Sound
Mouse support
Expanded memory
Joystick port
Additional serial or parallel ports
Networking
Connection to proprietary industrial or scientific equipment
The market reacted as IBM had intended, and within a year or two of the PC's release the available options for expansion hardware were immense. 5161 Expansion Unit The expandability of the PC was important, but had significant limitations. One major limitation was the inability to install a hard drive, as described above. Another was that there were only five expansion slots, which tended to be filled up by essential hardware: a PC with a graphics card, memory expansion, parallel card and serial card was left with only one open slot, for instance. IBM rectified these problems in the later XT, which included more slots and support for an internal hard drive, but at the same time released the 5161 Expansion Unit, which could be used with either the XT or the original PC. The 5161 connected to the PC system unit using a cable and a card plugged into an expansion slot, and provided a second system chassis with more expansion slots and a hard drive. Software IBM initially announced intent to support multiple operating systems: CP/M-86, UCSD p-System, and an in-house product called IBM PC DOS, based on 86-DOS from Seattle Computer Products and provided by Microsoft. In practice, IBM's expectation and intent was for the market to primarily use PC DOS. CP/M-86 was not available for six months after the PC's release and received extremely few orders once it was, and p-System was also not available at release. PC DOS rapidly established itself as the standard OS for the PC and remained the standard for over a decade, with a variant being sold by Microsoft itself as MS-DOS. The PC included BASIC in ROM (four 8 KB chips), a common feature of 1980s home computers. Its ROM BASIC supported the cassette tape interface, but PC DOS did not, limiting use of that interface to BASIC only. PC DOS version 1.00 supported only 160 KB SSDD floppies, but version 1.1, which was released nine months after the PC's introduction, supported 160 KB SSDD and 320 KB DSDD floppies. Support for the slightly larger nine-sector-per-track 180 KB and 360 KB formats was added in March 1983. Third-party software support grew extremely quickly, and within a year the PC platform was supplied with a vast array of titles for any conceivable purpose. Reception Reception of the IBM PC was extremely positive. Even before its release, reviewers were impressed by the advertised specifications of the machine, and upon its release reviews praised virtually every aspect of its design, both in comparison to contemporary machines and with regard to new and unexpected features.
Praise was directed at the build quality of the PC, in particular its keyboard, IBM's decision to use open specifications to encourage third-party software and hardware development, its speed in delivering documentation and the quality of that documentation, the quality of the video display, and the use of commodity components from established suppliers in the electronics industry. The price was considered extremely competitive compared to the value per dollar of competing machines. Two years after its release, Byte magazine retrospectively concluded that the PC had succeeded both because of its features – an 80-column screen, open architecture, and high-quality keyboard – and because of the failure of other computer manufacturers to achieve these features first. Creative Computing that year named the PC the best desktop computer between $2,000 and $4,000, praising its vast hardware and software selection, manufacturer support, and resale value. Many IBM PCs remained in service long after their technology became largely obsolete. For instance, as of June 2006 (23–25 years after release) IBM PC and XT models were still in use at the majority of U.S. National Weather Service upper-air observing sites, processing data returned from radiosondes attached to weather balloons. Due to its status as the first entry in the extremely influential PC industry, the original IBM PC remains valuable as a collector's item. , the system had a market value of $50–$500. Model line IBM sold a number of computers under the "Personal Computer" or "PC" name throughout the 1980s. The name was not used for several years before being reused for the IBM PC Series in the 1990s and early 2000s. As with all PC-derived systems, all IBM PC models are nominally software-compatible, although some timing-sensitive software will not run correctly on models with faster CPUs. Clones Because the IBM PC was based on commodity hardware rather than unique IBM components, and because its operation was extensively documented by IBM, creating machines that were fully compatible with the PC offered few challenges other than the creation of a compatible BIOS ROM. Simple duplication of the IBM PC BIOS was a direct violation of copyright law, but soon into the PC's life the BIOS was reverse-engineered by companies like Compaq, Phoenix Software Associates, American Megatrends and Award, who either built their own computers that could run the same software and use the same expansion hardware as the PC, or sold their BIOS code to other manufacturers who wished to build their own machines. These machines became known as IBM compatibles or "clones", and software was widely marketed as compatible with "IBM PC or 100% compatible". Shortly thereafter, clone manufacturers began to make improvements and extensions to the hardware, such as by using faster processors like the NEC V20, which executed the same software as the 8088 at higher speeds, up to 10 MHz. The clone market eventually became so large that it lost its associations with the original PC and became a set of de facto standards established by various hardware manufacturers.
International Space Station
The International Space Station (ISS) is a large space station that was assembled and is maintained in low Earth orbit by a collaboration of five space agencies and their contractors: NASA (United States), Roscosmos (Russia), ESA (Europe), JAXA (Japan), and CSA (Canada). As the largest space station ever constructed, it primarily serves as a platform for conducting scientific experiments in microgravity and studying the space environment. The station is divided into two main sections: the Russian Orbital Segment (ROS), developed by Roscosmos, and the US Orbital Segment (USOS), built by NASA, ESA, JAXA, and CSA. A striking feature of the ISS is the Integrated Truss Structure, which connects the station's vast system of solar panels and radiators to its pressurised modules. These modules support diverse functions, including scientific research, crew habitation, storage, spacecraft control, and airlock operations. The ISS has eight docking and berthing ports for visiting spacecraft. The station orbits the Earth at an average altitude of and circles the Earth in roughly 93 minutes, completing orbits per day. The ISS programme combines two previously planned crewed Earth-orbiting stations: the United States' Space Station Freedom and the Soviet Union's Mir-2. The first ISS module was launched in 1998, with major components delivered by Proton and Soyuz rockets and the Space Shuttle. Long-term occupancy began on 2 November 2000, with the arrival of the Expedition 1 crew. Since then, the ISS has remained continuously inhabited for , the longest continuous human presence in space. , 279 individuals from 22 countries had visited the station. Future plans for the ISS include the addition of at least one module, Axiom Space's Payload Power Thermal Module. The station is expected to remain operational until the end of 2030, after which it will be de-orbited using a dedicated NASA spacecraft. Conception Purpose The ISS was originally intended to be a laboratory, observatory, and factory while providing transportation, maintenance, and a low Earth orbit staging base for possible future missions to the Moon, Mars, and asteroids. However, not all of the uses envisioned in the initial memorandum of understanding between NASA and Roscosmos have been realised. In the 2010 United States National Space Policy, the ISS was given additional roles of serving commercial, diplomatic, and educational purposes. Scientific research The ISS provides a platform to conduct scientific research, with power, data, cooling, and crew available to support experiments. Small uncrewed spacecraft can also provide platforms for experiments, especially those involving zero gravity and exposure to space, but space stations offer a long-term environment where studies can be performed potentially for decades, combined with ready access by human researchers. The ISS simplifies individual experiments by allowing groups of experiments to share the same launches and crew time. Research is conducted in a wide variety of fields, including astrobiology, astronomy, physical sciences, materials science, space weather, meteorology, and human research including space medicine and the life sciences. Scientists on Earth have timely access to the data and can suggest experimental modifications to the crew. If follow-on experiments are necessary, the routinely scheduled launches of resupply craft allow new hardware to be launched with relative ease.
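The orbital figures quoted in the opening paragraph can be sanity-checked with basic orbital mechanics. The altitude value did not survive in this text, so the sketch below assumes the commonly cited average of roughly 400 km; with that assumption, Kepler's third law reproduces the roughly 93-minute period, about 15.5 orbits per day, and the approximately 90% of surface gravity mentioned in the Freefall section further on:

import math

# Assumed value: ~400 km average altitude (the article's own figure was
# lost in extraction). Earth constants are standard reference values.
MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m
ALTITUDE = 400_000.0     # assumed ISS altitude, m

a = R_EARTH + ALTITUDE   # radius of a circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / MU)  # Kepler's third law
print(f"Orbital period: {period_s / 60:.1f} min")  # ~92.4 min
print(f"Orbits per day: {86400 / period_s:.1f}")   # ~15.6

# Gravity is still strong at this altitude; weightlessness is freefall,
# not absence of gravity (inverse-square law):
print(f"Gravity vs. surface: {(R_EARTH / a) ** 2:.0%}")  # ~89%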
Crews fly expeditions of several months' duration, providing approximately 160 man-hours per week of labour with a crew of six. However, a considerable amount of crew time is taken up by station maintenance. Perhaps the most notable ISS experiment is the Alpha Magnetic Spectrometer (AMS), which is intended to detect dark matter and answer other fundamental questions about our universe. According to NASA, the AMS is as important as the Hubble Space Telescope. Currently docked on the station, it could not have been easily accommodated on a free-flying satellite platform because of its power and bandwidth needs. On 3 April 2013, scientists reported that hints of dark matter may have been detected by the AMS. According to the scientists, "The first results from the space-borne Alpha Magnetic Spectrometer confirm an unexplained excess of high-energy positrons in Earth-bound cosmic rays". The space environment is hostile to life. Unprotected presence in space is characterised by an intense radiation field (consisting primarily of protons and other subatomic charged particles from the solar wind, in addition to cosmic rays), high vacuum, extreme temperatures, and microgravity. Some simple forms of life called extremophiles, as well as small invertebrates called tardigrades, can survive in this environment in an extremely dry state through desiccation. Medical research improves knowledge about the effects of long-term space exposure on the human body, including muscle atrophy, bone loss, and fluid shift. These data will be used to determine whether long-duration human spaceflight and space colonisation are feasible. In 2006, data on bone loss and muscular atrophy suggested that there would be a significant risk of fractures and movement problems if astronauts landed on a planet after a lengthy interplanetary cruise, such as the six-month interval required to travel to Mars. Medical studies are conducted aboard the ISS on behalf of the National Space Biomedical Research Institute (NSBRI). Prominent among these is the Advanced Diagnostic Ultrasound in Microgravity study, in which astronauts perform ultrasound scans under the guidance of remote experts. The study considers the diagnosis and treatment of medical conditions in space. Usually, there is no physician on board the ISS and diagnosis of medical conditions is a challenge. It is anticipated that remotely guided ultrasound scans will have application on Earth in emergency and rural care situations where access to a trained physician is difficult. In August 2020, scientists reported that bacteria from Earth, particularly the highly hazard-resistant Deinococcus radiodurans, were found to survive for three years in outer space, based on studies conducted on the International Space Station. These findings supported the notion of panspermia, the hypothesis that life exists throughout the Universe, distributed in various ways, including space dust, meteoroids, asteroids, comets, planetoids or contaminated spacecraft. Remote sensing of the Earth, astronomy, and deep space research on the ISS have significantly increased during the 2010s after the completion of the US Orbital Segment in 2011. Throughout the more than 20 years of the ISS program, researchers aboard the ISS and on the ground have examined aerosols, ozone, lightning, and oxides in Earth's atmosphere, as well as the Sun, cosmic rays, cosmic dust, antimatter, and dark matter in the universe.
Examples of Earth-viewing remote sensing experiments that have flown on the ISS are the Orbiting Carbon Observatory 3, ISS-RapidScat, ECOSTRESS, the Global Ecosystem Dynamics Investigation, and the Cloud Aerosol Transport System. ISS-based astronomy telescopes and experiments include SOLAR, the Neutron Star Interior Composition Explorer, the Calorimetric Electron Telescope, the Monitor of All-sky X-ray Image (MAXI), and the Alpha Magnetic Spectrometer. Freefall Gravity at the altitude of the ISS is approximately 90% as strong as at Earth's surface, but objects in orbit are in a continuous state of freefall, resulting in an apparent state of weightlessness. This perceived weightlessness is disturbed by five effects:
Drag from the residual atmosphere.
Vibration from the movements of mechanical systems and the crew.
Actuation of the on-board attitude control moment gyroscopes.
Thruster firings for attitude or orbital changes.
Gravity-gradient effects, also known as tidal effects. Items at different locations within the ISS would, if not attached to the station, follow slightly different orbits. Being mechanically connected, these items experience small forces that keep the station moving as a rigid body.
Researchers are investigating the effect of the station's near-weightless environment on the evolution, development, growth and internal processes of plants and animals. In response to some of the data, NASA wants to investigate microgravity's effects on the growth of three-dimensional, human-like tissues and the unusual protein crystals that can be formed in space. Investigating the physics of fluids in microgravity will provide better models of the behaviour of fluids. Because fluids can be almost completely combined in microgravity, physicists investigate fluids that do not mix well on Earth. Examining reactions that are slowed by low gravity and low temperatures will improve our understanding of superconductivity. The study of materials science is an important ISS research activity, with the objective of reaping economic benefits through the improvement of techniques used on Earth. Other areas of interest include the effect of low gravity on combustion, through the study of the efficiency of burning and control of emissions and pollutants. These findings may improve knowledge about energy production and lead to economic and environmental benefits. Exploration The ISS provides a location in the relative safety of low Earth orbit to test spacecraft systems that will be required for long-duration missions to the Moon and Mars. This provides experience in operations, maintenance, and repair and replacement activities on-orbit. This will help develop essential skills in operating spacecraft farther from Earth, reduce mission risks, and advance the capabilities of interplanetary spacecraft. Referring to the MARS-500 experiment, a crew isolation experiment conducted on Earth, ESA states, "Whereas the ISS is essential for answering questions concerning the possible impact of weightlessness, radiation and other space-specific factors, aspects such as the effect of long-term isolation and confinement can be more appropriately addressed via ground-based simulations". Sergey Krasnov, the head of human space flight programmes for Russia's space agency, Roscosmos, in 2011 suggested a "shorter version" of MARS-500 may be carried out on the ISS.
In 2009, noting the value of the partnership framework itself, Sergey Krasnov wrote, "When compared with partners acting separately, partners developing complementary abilities and resources could give us much more assurance of the success and safety of space exploration. The ISS is helping further advance near-Earth space exploration and realisation of prospective programmes of research and exploration of the Solar system, including the Moon and Mars." A crewed mission to Mars may be a multinational effort involving space agencies and countries outside the current ISS partnership. In 2010, ESA Director-General Jean-Jacques Dordain stated his agency was ready to propose to the other four partners that China, India, and South Korea be invited to join the ISS partnership. NASA chief Charles Bolden stated in February 2011, "Any mission to Mars is likely to be a global effort." Currently, US federal legislation prevents NASA co-operation with China on space projects without approval by the FBI and Congress. Education and cultural outreach The ISS crew provides opportunities for students on Earth by running student-developed experiments, making educational demonstrations, allowing for student participation in classroom versions of ISS experiments, and directly engaging students using radio and email. ESA offers a wide range of free teaching materials that can be downloaded for use in classrooms. In one lesson, students can navigate a 3D model of the interior and exterior of the ISS, and face spontaneous challenges to solve in real time. The Japanese Aerospace Exploration Agency (JAXA) aims to inspire children to "pursue craftsmanship" and to heighten their "awareness of the importance of life and their responsibilities in society". Through a series of education guides, students develop a deeper understanding of the past and near-term future of crewed space flight, as well as that of Earth and life. In the JAXA "Seeds in Space" experiments, the mutation effects of spaceflight on plant seeds aboard the ISS are explored by growing sunflower seeds that have flown on the ISS for about nine months. In the first phase of Kibō utilisation from 2008 to mid-2010, researchers from more than a dozen Japanese universities conducted experiments in diverse fields. Cultural activities are another major objective of the ISS programme. Tetsuo Tanaka, the director of JAXA's Space Environment and Utilization Center, has said: "There is something about space that touches even people who are not interested in science." Amateur Radio on the ISS (ARISS) is a volunteer programme that encourages students worldwide to pursue careers in science, technology, engineering, and mathematics through amateur radio communications opportunities with the ISS crew. ARISS is an international working group, consisting of delegations from nine countries including several in Europe, as well as Japan, Russia, Canada, and the United States. In areas where radio equipment cannot be used, speakerphones connect students to ground stations which then connect the calls to the space station. First Orbit is a 2011 feature-length documentary film about Vostok 1, the first crewed space flight around the Earth. By matching the orbit of the ISS to that of Vostok 1 as closely as possible, in terms of ground path and time of day, documentary filmmaker Christopher Riley and ESA astronaut Paolo Nespoli were able to film the view that Yuri Gagarin saw on his pioneering orbital space flight.
This new footage was cut together with the original Vostok 1 mission audio recordings sourced from the Russian State Archive. Nespoli is credited as the director of photography for this documentary film, as he recorded the majority of the footage himself during Expedition 26/27. The film was streamed in a global YouTube premiere in 2011 under a free licence through the website firstorbit.org. In May 2013, commander Chris Hadfield shot a music video of David Bowie's "Space Oddity" on board the station, which was released on YouTube. It was the first music video filmed in space. In November 2017, while participating in Expedition 52/53 on the ISS, Paolo Nespoli made two recordings of his spoken voice (one in English and the other in his native Italian), for use on Wikipedia articles. These were the first content made in space specifically for Wikipedia. In November 2021, a virtual reality exhibit called The Infinite featuring life aboard the ISS was announced. Construction Manufacturing The International Space Station is a product of global collaboration, with its components manufactured across the world. The modules of the Russian Orbital Segment, including Zarya and Zvezda, were produced at the Khrunichev State Research and Production Space Center in Moscow. Zvezda was initially manufactured in 1985 as a component for the Mir-2 space station, which was never launched. Much of the US Orbital Segment, including the Destiny and Unity modules, the Integrated Truss Structure, and solar arrays, was built at NASA's Marshall Space Flight Center in Huntsville, Alabama and the Michoud Assembly Facility in New Orleans. These components underwent final assembly and processing for launch at the Operations and Checkout Building and the Space Station Processing Facility (SSPF) at the Kennedy Space Center in Florida. The US Orbital Segment also hosts the Columbus module contributed by the European Space Agency and built in Germany, the Kibō module contributed by Japan and built at the Tsukuba Space Center and the Institute of Space and Astronautical Science, along with Canadarm2 and Dextre, a joint Canadian-U.S. endeavour. All of these components were shipped to the SSPF for launch processing. Assembly The assembly of the International Space Station, a major endeavour in space architecture, began in November 1998. Modules in the Russian segment launched and docked autonomously, with the exception of Rassvet. Other modules and components were delivered by the Space Shuttle and then had to be installed by astronauts, either remotely using robotic arms or during spacewalks, more formally known as extra-vehicular activities (EVAs). By 5 June 2011 astronauts had made over 159 EVAs to add components to the station, totaling more than 1,000 hours in space. The foundation for the ISS was laid with the launch of the Russian-built Zarya module atop a Proton rocket on 20 November 1998. Zarya provided propulsion, attitude control, communications, and electrical power. Two weeks later, on 4 December 1998, the American-made Unity was ferried aboard Space Shuttle Endeavour on STS-88 and joined with Zarya. Unity provided the connection between the Russian and US segments of the station and would provide ports to connect future modules and visiting spacecraft. While the connection of two modules built on different continents by nations that were once bitter rivals was a significant milestone, these two initial modules lacked life support systems and the ISS remained uncrewed for the next two years.
At the time, the Russian station Mir was still inhabited. The turning point arrived in July 2000 with the launch of the Zvezda module. Equipped with living quarters and life-support systems, Zvezda enabled continuous human presence aboard the station. The first crew, Expedition 1, arrived that November aboard Soyuz TM-31. The ISS grew steadily over the following years, with modules delivered by both Russian rockets and the Space Shuttle. Expedition 1 arrived midway between the Space Shuttle flights of missions STS-92 and STS-97. These two flights each added segments of the station's Integrated Truss Structure, which provided the station with Ku-band communications, additional attitude control needed for the additional mass of the USOS, and additional solar arrays. Over the next two years, the station continued to expand. A Soyuz-U rocket delivered the Pirs docking compartment. The Space Shuttles Discovery, Atlantis, and Endeavour delivered the American Destiny laboratory and Quest airlock, in addition to the station's main robot arm, the Canadarm2, and several more segments of the Integrated Truss Structure. Tragedy struck in 2003 with the loss of the Space Shuttle Columbia, which grounded the rest of the Shuttle fleet and halted construction of the ISS. Assembly resumed in 2006 with the arrival of STS-115, flown by Atlantis, which delivered the station's second set of solar arrays. Several more truss segments and a third set of arrays were delivered on STS-116, STS-117, and STS-118. As a result of the major expansion of the station's power-generating capabilities, more modules could be accommodated, and the US Harmony module and Columbus European laboratory were added. These were soon followed by the first two components of the Japanese Kibō laboratory. In March 2009, STS-119 completed the Integrated Truss Structure with the installation of the fourth and final set of solar arrays. The final section of Kibō was delivered in July 2009 on STS-127, followed by the Russian Poisk module. The US Tranquility module was delivered in February 2010 during STS-130, alongside the Cupola, followed by the penultimate Russian module, Rassvet, in May 2010. Rassvet was delivered by Space Shuttle Atlantis on STS-132 in exchange for the Russian Proton delivery of the US-funded Zarya module in 1998. The last pressurised module of the USOS, Leonardo, was brought to the station in February 2011 on the final flight of Discovery, STS-133. Russia's new primary research module Nauka docked in July 2021, along with the European Robotic Arm, which can relocate itself to different parts of the Russian modules of the station. Russia's latest addition, the Prichal module, docked in November 2021. As of November 2021, the station consists of 18 pressurised modules (including airlocks) and the Integrated Truss Structure. Structure The ISS functions as a modular space station, enabling the addition or removal of modules from its structure for increased adaptability. Below is a diagram of major station components. The Unity node joins directly to the Destiny laboratory; for clarity, they are shown apart. Similar cases are also seen in other parts of the structure.
Key to box background colors:
Pressurised component, accessible by the crew without using spacesuits
Docking/berthing port, pressurised when a visiting spacecraft is present
Airlock, used to move people or material between the pressurised and unpressurised environments
Unpressurised station superstructure
Unpressurised component
Temporarily defunct or non-commissioned component
Former, no longer installed component
Future, not yet installed component
Pressurised modules Zarya Zarya (), also known as the Functional Cargo Block (), was the inaugural component of the ISS. Launched in 1998, it initially served as the ISS's power source, storage, propulsion, and guidance system. As the station has grown, Zarya's role has transitioned primarily to storage, both internally and in its external fuel tanks. A descendant of the TKS spacecraft used in the Salyut programme, Zarya was built in Russia but is owned by the United States. Its name symbolizes the beginning of a new era of international space cooperation. Unity Unity, also known as Node 1, is the inaugural U.S.-built component of the ISS. Serving as the connection between the Russian and U.S. segments, this cylindrical module features six Common Berthing Mechanism locations (forward, aft, port, starboard, zenith, and nadir) for attaching additional modules. Measuring in diameter and in length, Unity was constructed of steel by Boeing for NASA at the Marshall Space Flight Center in Huntsville, Alabama. It was the first of three connecting nodes – Unity, Harmony, and Tranquility – that form the structural backbone of the U.S. segment of the ISS. Zvezda Zvezda (), launched in July 2000, is the core of the Russian Orbital Segment of the ISS. Initially providing essential living quarters and life support systems, it enabled the first continuous human presence aboard the station. While additional modules have expanded the ISS's capabilities, Zvezda remains the command and control center for the Russian segment and is where crews gather during emergencies. A descendant of the Salyut programme's DOS spacecraft, Zvezda was built by RKK Energia and launched atop a Proton rocket. Destiny The Destiny laboratory is the primary research facility for U.S. experiments on the ISS. NASA's first permanent orbital research station since Skylab, the module was built by Boeing and launched aboard during STS-98. Attached to Unity over a period of five days in February 2001, Destiny has been a hub for scientific research ever since. Within Destiny, astronauts conduct experiments in fields such as medicine, engineering, biotechnology, physics, materials science, and Earth science. Researchers worldwide benefit from these studies. The module also houses life support systems, including the Oxygen Generating System. Quest Joint Airlock The Quest Joint Airlock enables extravehicular activities (EVAs) using either the U.S. Extravehicular Mobility Unit (EMU) or the Russian Orlan space suit. Before its installation, conducting EVAs from the ISS was challenging due to a variety of system and design differences. Only the Orlan suit could be used from the Transfer Chamber on the Zvezda module (which was not a purpose-built airlock), and the EMU could only be used from the airlock on a visiting Space Shuttle, which could not accommodate the Orlan. Launched aboard during STS-104 in July 2001 and attached to the Unity module, Quest is a , structure built by Boeing.
It houses a crew airlock for astronaut egress and an equipment airlock for suit storage, and has facilities to accommodate astronauts during their overnight pre-breathe procedures to prevent decompression sickness. The crew airlock, derived from the Space Shuttle, features essential equipment like lighting, handrails, and an Umbilical Interface Assembly (UIA) that provides life support and communication systems for up to two spacesuits simultaneously. These can be either two EMUs, two Orlan suits, or one of each design. Poisk Poisk (), also known as the Mini-Research Module 2 (), serves as a secondary airlock on the Russian segment of the ISS, supports docking for Soyuz and Progress spacecraft, and facilitates propellant transfers from the latter. It was launched on 10 November 2009 attached to a modified Progress spacecraft called Progress M-MIM2. Poisk provides facilities to maintain Orlan spacesuits and is equipped with two inward-opening hatches, a design change from Mir, which encountered a dangerous situation caused by an outward-opening hatch that opened too quickly because of a small amount of air pressure remaining in the airlock. Since the departure of Pirs in 2021, Poisk has been the sole airlock on the Russian segment. Harmony Harmony, or Node 2, is the central connecting hub of the US segment of the ISS, linking the U.S., European, and Japanese laboratory modules. It has also been called the "utility hub" of the ISS, as it provides essential power, data, and life support systems. The module also houses sleeping quarters for four crew members. Launched on 23 October 2007 aboard on STS-120, Harmony was initially attached to Unity before being relocated to its permanent position at the front of the Destiny laboratory on 14 November 2007. This expansion added significant living space to the ISS, marking a key milestone in the construction of the U.S. segment. Tranquility Tranquility, also known as Node 3, is a module of the ISS. It contains environmental control systems, life support systems, a toilet, exercise equipment, and an observation cupola. The European Space Agency and the Italian Space Agency had Tranquility manufactured by Thales Alenia Space. A ceremony on 20 November 2009 transferred ownership of the module to NASA. On 8 February 2010, NASA launched the module on the Space Shuttle's STS-130 mission. Columbus Columbus is a science laboratory that is part of the ISS and is the largest single contribution to the station made by the European Space Agency. Like the Harmony and Tranquility modules, the Columbus laboratory was constructed in Turin, Italy by Thales Alenia Space. The functional equipment and software of the lab were designed by EADS in Bremen, Germany. It was also integrated in Bremen before being flown to the Kennedy Space Center in Florida in an Airbus Beluga jet. It was launched aboard Space Shuttle Atlantis on 7 February 2008, on flight STS-122. It is designed for ten years of operation. The module is controlled by the Columbus Control Centre, located at the German Space Operations Center, part of the German Aerospace Center in Oberpfaffenhofen near Munich, Germany. The European Space Agency has spent €1.4 billion (about US$1.6 billion) on building Columbus, including the experiments it carries and the ground control infrastructure necessary to operate them. Kibō Kibō, also known as the Japanese Experiment Module, is Japan's research facility on the ISS.
It is the largest single module on the ISS, consisting of a pressurised lab, an exposed facility for conducting experiments in the space environment, two storage compartments, and a robotic arm. Attached to the Harmony module, Kibō was assembled in space over three Space Shuttle missions: STS-123, STS-124 and STS-127. Cupola The Cupola is an ESA-built observatory module of the ISS. Its name derives from the Italian word cupola, which means "dome". Its seven windows are used to conduct experiments, dockings and observations of Earth. It was launched aboard Space Shuttle mission STS-130 on 8 February 2010 and attached to the Tranquility (Node 3) module. With the Cupola attached, ISS assembly reached 85 per cent completion. The Cupola's central window has a diameter of . Rassvet Rassvet (), also known as the Mini-Research Module 1 () and formerly known as the Docking Cargo Module, is primarily used for cargo storage and as a docking port for visiting spacecraft on the Russian segment of the ISS. Rassvet replaced the cancelled Docking and Storage Module and used a design largely based on the Mir Docking Module built in 1995. Rassvet was delivered on 14 May 2010 on STS-132 in exchange for the Russian Proton delivery of the US-funded Zarya module in 1998, and was attached to Zarya shortly thereafter. Leonardo The Leonardo Permanent Multipurpose Module (PMM) is a module of the International Space Station. It was flown into space aboard the Space Shuttle on STS-133 on 24 February 2011 and installed on 1 March. Leonardo is primarily used for storage of spares, supplies and waste on the ISS, which until then had been stored in many different places within the space station. It is also the personal hygiene area for the astronauts who live in the US Orbital Segment. The Leonardo PMM was a Multi-Purpose Logistics Module (MPLM) before 2011, but was modified into its current configuration. It was formerly one of two MPLMs used for bringing cargo to and from the ISS with the Space Shuttle. The module was named for the Italian polymath Leonardo da Vinci. Bigelow Expandable Activity Module The Bigelow Expandable Activity Module (BEAM) is an experimental expandable space station module developed by Bigelow Aerospace, under contract to NASA, for testing as a temporary module on the International Space Station (ISS) from 2016 to at least 2020. It arrived at the ISS on 10 April 2016, was berthed to the station on 16 April at Tranquility Node 3, and was expanded and pressurised on 28 May 2016. In December 2021, Bigelow Aerospace conveyed ownership of the module to NASA as a result of Bigelow's cessation of activity. International Docking Adapters The International Docking Adapter (IDA) is a spacecraft docking system adapter developed to convert APAS-95 to the NASA Docking System (NDS). An IDA is placed on each of the ISS's two open Pressurized Mating Adapters (PMAs), both of which are connected to the Harmony module. Two International Docking Adapters are currently installed aboard the station. Originally, IDA-1 was planned to be installed on PMA-2, located at Harmony's forward port, and IDA-2 would be installed on PMA-3 at Harmony's zenith. After IDA-1 was destroyed in a launch incident, IDA-2 was installed on PMA-2 on 19 August 2016, while IDA-3 was later installed on PMA-3 on 21 August 2019. Bishop Airlock Module The NanoRacks Bishop Airlock Module is a commercially funded airlock module launched to the ISS on SpaceX CRS-21 on 6 December 2020. The module was built by NanoRacks, Thales Alenia Space, and Boeing.
It will be used to deploy CubeSats, small satellites, and other external payloads for NASA, CASIS, and other commercial and governmental customers. Nauka Nauka (), also known as the Multipurpose Laboratory Module, Upgrade (), is a Roscosmos-funded component of the ISS that was launched on 21 July 2021, 14:58 UTC. In the original ISS plans, Nauka was to use the location of the Docking and Stowage Module (DSM), but the DSM was later replaced by the Rassvet module and moved to Zarya's nadir port. Nauka was successfully docked to Zvezda's nadir port on 29 July 2021, 13:29 UTC, replacing the Pirs module. Nauka carried a temporary docking adapter on its nadir port for crewed and uncrewed missions until the arrival of Prichal; the adapter was removed by a departing Progress spacecraft just before Prichal arrived. Prichal Prichal () is a spherical module that serves as a docking hub for the Russian segment of the ISS. Launched in November 2021, Prichal provides additional docking ports for Soyuz and Progress spacecraft, as well as potential future modules. Prichal features six docking ports: forward, aft, port, starboard, zenith, and nadir. One of these ports, equipped with an active hybrid docking system, enabled it to dock with the Nauka module. The remaining five ports are passive hybrids, allowing for docking of Soyuz, Progress, and heavier modules, as well as future spacecraft with modified docking systems. As of 2024, the forward, aft, port and starboard docking ports remain covered. Prichal was initially intended to be an element of the now-cancelled Orbital Piloted Assembly and Experiment Complex. Unpressurised elements The ISS has a large number of external components that do not require pressurisation. The largest of these is the Integrated Truss Structure (ITS), to which the station's main solar arrays and thermal radiators are mounted. The ITS consists of ten separate segments forming a structure long. The station was intended to have several smaller external components, such as six robotic arms, three External Stowage Platforms (ESPs) and four ExPRESS Logistics Carriers (ELCs). While these platforms allow experiments (including MISSE, the STP-H3 and the Robotic Refueling Mission) to be deployed and conducted in the vacuum of space by providing electricity and processing experimental data locally, their primary function is to store spare Orbital Replacement Units (ORUs). ORUs are parts that can be replaced when they fail or pass their design life, including pumps, storage tanks, antennas, and battery units. Such units are replaced either by astronauts during EVA or by robotic arms. Several shuttle missions were dedicated to the delivery of ORUs, including STS-129, STS-133 and STS-134. , only one other mode of transportation of ORUs had been used: the Japanese cargo vessel HTV-2, which delivered an FHRC and CTC-2 via its Exposed Pallet (EP). There are also smaller exposure facilities mounted directly to laboratory modules; the Kibō Exposed Facility serves as an external "porch" for the Kibō complex, and a facility on the European Columbus laboratory provides power and data connections for experiments such as the European Technology Exposure Facility and the Atomic Clock Ensemble in Space. A remote sensing instrument, SAGE III-ISS, was delivered to the station in February 2017 aboard CRS-10, and the NICER experiment was delivered aboard CRS-11 in June 2017.
The largest scientific payload externally mounted to the ISS is the Alpha Magnetic Spectrometer (AMS), a particle physics experiment launched on STS-134 in May 2011 and mounted externally on the ITS. The AMS measures cosmic rays to look for evidence of dark matter and antimatter. The commercial Bartolomeo External Payload Hosting Platform, manufactured by Airbus, was launched on 6 March 2020 aboard CRS-20 and attached to the European Columbus module. It will provide an additional 12 external payload slots, supplementing the eight on the ExPRESS Logistics Carriers, ten on Kibō, and four on Columbus. The system is designed to be robotically serviced and will require no astronaut intervention. It is named after Christopher Columbus's younger brother. MLM outfittings In May 2010, equipment for Nauka was launched on STS-132 (as part of an agreement with NASA) and delivered by Space Shuttle Atlantis. Weighing 1.4 metric tons, the equipment was attached to the outside of Rassvet (MRM-1). It included a spare elbow joint for the European Robotic Arm (ERA) (which was launched with Nauka) and an ERA-portable workpost used during EVAs, as well as an RTOd add-on heat radiator and internal hardware, alongside a pressurised experiment airlock. The RTOd radiator adds cooling capability to Nauka, enabling the module to host more scientific experiments. The ERA was used to remove the RTOd radiator from Rassvet and transfer it to Nauka during the VKD-56 spacewalk; the radiator was later activated and fully deployed during the VKD-58 spacewalk. This process took several months. A portable work platform was also transferred over in August 2023 during the VKD-60 spacewalk; it can attach to the end of the ERA to allow cosmonauts to "ride" on the end of the arm during spacewalks. However, several months after the outfitting EVAs and heat radiator installation, and before Nauka's experiments could make active use of it, the RTOd radiator malfunctioned (the purpose of the RTOd installation is to radiate heat from Nauka's experiments). The malfunction, a leak, rendered the RTOd radiator unusable for Nauka. This was the third ISS radiator leak, after the Soyuz MS-22 and Progress MS-21 radiator leaks. If a spare RTOd is not available, Nauka's experiments will have to rely on the module's main launch radiator, and the module may never be used to its full capacity. Another piece of MLM outfitting is a four-segment external payload interface called the means of attachment of large payloads (Sredstva Krepleniya Krupnogabaritnykh Obyektov, SKKO). It was delivered to Nauka in two parts, by Progress MS-18 (the LCCS part) and Progress MS-21 (the SCCCS part), as part of the module activation outfitting process. It was taken outside and installed on the ERA's aft-facing base point on Nauka during the VKD-55 spacewalk. Robotic arms and cargo cranes The Integrated Truss Structure (ITS) serves as a base for the station's primary remote manipulator system, the Mobile Servicing System (MSS), which is composed of three main components:
Canadarm2, the largest robotic arm on the ISS, has a mass of and is used to dock and manipulate spacecraft and modules on the USOS, hold crew members and equipment in place during EVAs, and move Dextre to perform tasks.
Dextre is a robotic manipulator that has two arms and a rotating torso, with power tools, lights, and video for replacing orbital replacement units (ORUs) and performing other tasks requiring fine control.
The Mobile Base System (MBS) is a platform that rides on rails along the length of the station's main truss, which serves as a mobile base for Canadarm2 and Dextre, allowing the robotic arms to reach all parts of the USOS. A grapple fixture was added to Zarya on STS-134 to enable Canadarm2 to inchworm itself onto the ROS. Also installed during STS-134 was the Orbiter Boom Sensor System (OBSS), which had been used to inspect heat shield tiles on Space Shuttle missions and which can be used on the station to increase the reach of the MSS. Staff on Earth or the ISS can operate the MSS components using remote control, performing work outside the station without the need for space walks. Japan's Remote Manipulator System, which services the Kibō Exposed Facility, was launched on STS-124 and is attached to the Kibō Pressurised Module. The arm is similar to the Space Shuttle arm as it is permanently attached at one end and has a latching end effector for standard grapple fixtures at the other. The European Robotic Arm, which will service the ROS, was launched alongside the Nauka module. The ROS does not require spacecraft or modules to be manipulated, as all spacecraft and modules dock automatically and may be discarded the same way. Crew use the two Strela () cargo cranes during EVAs for moving crew and equipment around the ROS. Each Strela crane has a mass of . Former module Pirs Pirs (Russian: Пирс, lit. 'Pier') was launched on 14 September 2001, as ISS Assembly Mission 4R, on a Russian Soyuz-U rocket, using a modified Progress spacecraft, Progress M-SO1, as an upper stage. Pirs was undocked by Progress MS-16 on 26 July 2021, 10:56 UTC, and deorbited on the same day at 14:51 UTC to make room for Nauka module to be attached to the space station. Prior to its departure, Pirs served as the primary Russian airlock on the station, being used to store and refurbish the Russian Orlan spacesuits. Planned components Axiom segment In January 2020, NASA awarded Axiom Space a contract to build a commercial module for the ISS. The contract is under the NextSTEP2 program. NASA negotiated with Axiom on a firm fixed-price contract basis to build and deliver the module, which will attach to the forward port of the space station's Harmony (Node 2) module. Although NASA only commissioned one module, Axiom planned to build an entire segment consisting of five modules, including a node module, an orbital research and manufacturing facility, a crew habitat, and a "large-windowed Earth observatory". The Axiom segment was expected to greatly increase the capabilities and value of the space station, allowing for larger crews and private spaceflight by other organisations. Axiom planned to convert the segment into a stand-alone space station once the ISS is decommissioned, with the intention that this would act as a successor to the ISS. Canadarm2 is planned to continue its operations on Axiom Station after the retirement of ISS in 2030. In December 2024, Axiom Space revised their station assembly plans to require only one module to dock with the ISS before assembling Axiom Station in an independent orbit. , Axiom Space expects to launch one module, the Payload Power Thermal Module (PPTM), to the ISS no earlier than 2027. PPTM is expected to remain at the ISS until the launch of Axiom's Habitat One (Hab-1) module about one year later, after which it will detach from the ISS to join with Hab-1. 
US Deorbit Vehicle The US Deorbit Vehicle (USDV) is a NASA-provided spacecraft intended to perform a controlled de-orbit and demise of the station after the end of its operational life in 2030. In June 2024, NASA awarded SpaceX a contract to build the Deorbit Vehicle. NASA plans to de-orbit the ISS as soon as they have the "minimum capability" in orbit: "the USDV and at least one commercial station." Cancelled components Several modules developed or planned for the station were cancelled over the course of the ISS programme. Reasons include budgetary constraints, the modules becoming unnecessary, and station redesigns after the 2003 Columbia disaster. The US Centrifuge Accommodations Module would have hosted science experiments in varying levels of artificial gravity. The US Habitation Module would have served as the station's living quarters. Instead, the living quarters are now spread throughout the station. The US Interim Control Module and ISS Propulsion Module would have replaced the functions of Zvezda in case of a launch failure. Two Russian Research Modules were planned for scientific research. They would have docked to a Russian Universal Docking Module. The Russian Science Power Platform would have supplied power to the Russian Orbital Segment independent of the ITS solar arrays. Science Power Modules 1 and 2 (repurposed components) Science Power Module 1 (SPM-1, also known as NEM-1) and Science Power Module 2 (SPM-2, also known as NEM-2) are modules that were originally planned to arrive at the ISS no earlier than 2024 and dock to the Prichal module, which is docked to the Nauka module. In April 2021, Roscosmos announced that NEM-1 would be repurposed to function as the core module of the proposed Russian Orbital Service Station (ROSS), launching no earlier than 2027 and docking to the free-flying Nauka module. NEM-2 may be converted into another core "base" module, which would be launched in 2028. Xbase Xbase was designed by Bigelow Aerospace. In August 2016, Bigelow negotiated an agreement with NASA to develop a full-size ground prototype Deep Space Habitation based on the B330 under the second phase of Next Space Technologies for Exploration Partnerships. The module was called the Expandable Bigelow Advanced Station Enhancement (XBASE), as Bigelow hoped to test the module by attaching it to the International Space Station. However, in March 2020, Bigelow laid off all 88 of its employees, and the company remains dormant and is considered defunct, making it appear unlikely that the XBASE module will ever be launched. Nautilus-X Centrifuge Demonstration A proposal was put forward in 2011 for a first in-space demonstration of a sufficiently scaled centrifuge for artificial partial-g gravity effects. It was designed to become a sleep module for the ISS crew. The project was cancelled in favour of other projects due to budget constraints. Onboard systems Life support The critical systems are the atmosphere control system, the water supply system, the food supply facilities, the sanitation and hygiene equipment, and fire detection and suppression equipment. The Russian Orbital Segment's life support systems are contained in the Zvezda service module. Some of these systems are supplemented by equipment in the USOS. The Nauka laboratory has a complete set of life support systems. Atmospheric control systems The atmosphere on board the ISS is similar to that of Earth. Normal air pressure on the ISS is 101.3 kPa (14.7 psi), the same as at sea level on Earth.
An Earth-like atmosphere offers benefits for crew comfort, and is much safer than a pure oxygen atmosphere, which carries an increased risk of fire such as the one responsible for the deaths of the Apollo 1 crew. Earth-like atmospheric conditions have been maintained on all Russian and Soviet spacecraft. The Elektron system aboard Zvezda and a similar system in Destiny generate oxygen aboard the station. The crew has a backup option in the form of bottled oxygen and Solid Fuel Oxygen Generation (SFOG) canisters, a chemical oxygen generator system. Carbon dioxide is removed from the air by the Vozdukh system in Zvezda. Other by-products of human metabolism, such as methane from the intestines and ammonia from sweat, are removed by activated charcoal filters. Part of the ROS atmosphere control system is the oxygen supply. Triple redundancy is provided by the Elektron unit, solid fuel generators, and stored oxygen. The primary supply of oxygen is the Elektron unit, which produces oxygen and hydrogen by the electrolysis of water and vents the hydrogen overboard. The system uses approximately one litre of water per crew member per day. This water is either brought from Earth or recycled from other systems. Mir was the first spacecraft to use recycled water for oxygen production. The secondary oxygen supply is provided by burning oxygen-producing Vika cartridges (see also ISS ECLSS). Each 'candle' takes 5–20 minutes to decompose, producing oxygen. This unit is manually operated. The US Orbital Segment (USOS) has redundant supplies of oxygen, from a pressurised storage tank on the Quest airlock module delivered in 2001, supplemented ten years later by the ESA-built Advanced Closed-Loop System (ACLS) in the Tranquility module (Node 3), which produces oxygen by electrolysis. The hydrogen produced is combined with carbon dioxide from the cabin atmosphere and converted to water and methane. Power and thermal control Double-sided solar arrays provide electrical power to the ISS. These bifacial cells collect direct sunlight on one side and light reflected off the Earth on the other, and are more efficient and operate at a lower temperature than single-sided cells commonly used on Earth. The Russian segment of the station, like most spacecraft, uses 28 V low-voltage DC from two rotating solar arrays mounted on Zvezda. The USOS uses 130–180 V DC from the USOS PV array. Power is stabilised and distributed at 160 V DC and converted to the user-required 124 V DC. The higher distribution voltage allows smaller, lighter conductors, at the expense of crew safety. The two station segments share power with converters. The USOS solar arrays are arranged as four wing pairs, for a total production of 75 to 90 kilowatts. These arrays normally track the Sun to maximise power generation. Each array is about in area and long. In the complete configuration, the solar arrays track the Sun by rotating the alpha gimbal once per orbit; the beta gimbal follows slower changes in the angle of the Sun to the orbital plane. The Night Glider mode aligns the solar arrays parallel to the ground at night to reduce the significant aerodynamic drag at the station's relatively low orbital altitude. The station originally used rechargeable nickel–hydrogen batteries for continuous power during the 45 minutes of every 90-minute orbit that it is eclipsed by the Earth. The batteries are recharged on the day side of the orbit. They had a 6.5-year lifetime (over 37,000 charge/discharge cycles) and were regularly replaced over the anticipated 20-year life of the station.
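The battery lifetime figures quoted above are internally consistent, as a rough check shows. The sketch below is illustrative arithmetic only, assuming one charge/discharge cycle per 90-minute orbit:

```python
# Rough consistency check of the nickel-hydrogen battery figures quoted above.
# Assumes one eclipse, and therefore one charge/discharge cycle, per 90-minute orbit.

MINUTES_PER_DAY = 24 * 60
orbit_minutes = 90                                   # approximate orbital period
cycles_per_day = MINUTES_PER_DAY / orbit_minutes     # ~16 cycles per day

lifetime_years = 6.5
cycles_over_lifetime = cycles_per_day * 365.25 * lifetime_years

print(f"Charge/discharge cycles per day: {cycles_per_day:.0f}")
print(f"Cycles over a {lifetime_years}-year life: {cycles_over_lifetime:,.0f}")
# ~16 cycles/day over 6.5 years gives roughly 38,000 cycles, in line with the
# 'over 37,000 charge/discharge cycles' figure above.
```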
Starting in 2016, the nickel–hydrogen batteries were replaced by lithium-ion batteries, which are expected to last until the end of the ISS program. The station's large solar panels generate a high potential voltage difference between the station and the ionosphere. This could cause arcing through insulating surfaces and sputtering of conductive surfaces as ions are accelerated by the spacecraft plasma sheath. To mitigate this, plasma contactor units create current paths between the station and the ambient space plasma. The station's systems and experiments consume a large amount of electrical power, almost all of which is converted to heat. To keep the internal temperature within workable limits, a passive thermal control system (PTCS) is made of external surface materials, insulation such as MLI, and heat pipes. If the PTCS cannot keep up with the heat load, an External Active Thermal Control System (EATCS) maintains the temperature. The EATCS consists of an internal, non-toxic, water coolant loop used to cool and dehumidify the atmosphere, which transfers collected heat into an external liquid ammonia loop. From the heat exchangers, ammonia is pumped into external radiators that emit heat as infrared radiation, then the ammonia is cycled back to the station. The EATCS provides cooling for all the US pressurised modules, including Kibō and Columbus, as well as the main power distribution electronics of the S0, S1 and P1 trusses. It can reject up to 70 kW. This is much more than the 14 kW of the Early External Active Thermal Control System (EEATCS) via the Early Ammonia Servicer (EAS), which was launched on STS-105 and installed onto the P6 Truss. Communications and computers The ISS relies on various radio communication systems to provide telemetry and scientific data links between the station and mission control centres. Radio links are also used during rendezvous and docking procedures and for audio and video communication between crew members, flight controllers and family members. As a result, the ISS is equipped with internal and external communication systems used for different purposes. The Russian Orbital Segment primarily uses the Lira antenna mounted on Zvezda for direct ground communication. It also had the capability to utilize the Luch data relay satellite system, which was in a state of disrepair when the station was built, but was restored to operational status in 2011 and 2012 with the launch of Luch-5A and Luch-5B. Additionally, the Voskhod-M system provides internal telephone communications and VHF radio links to ground control. The US Orbital Segment (USOS) makes use of two separate radio links: S band (audio, telemetry, commanding – located on the P1/S1 truss) and Ku band (audio, video and data – located on the Z1 truss) systems. These transmissions are routed via the United States Tracking and Data Relay Satellite System (TDRSS) in geostationary orbit, allowing for almost continuous real-time communications with Christopher C. Kraft Jr. Mission Control Center (MCC-H) in Houston, Texas. Data channels for the Canadarm2, European Columbus laboratory and Japanese Kibō modules were originally also routed via the S band and Ku band systems, with the European Data Relay System and a similar Japanese system intended to eventually complement the TDRSS in this role. UHF radio is used by astronauts and cosmonauts conducting EVAs and other spacecraft that dock to or undock from the station. 
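The relay path through geostationary TDRS satellites described above implies a noticeable light-travel delay. The following sketch is a rough estimate only, with the slant ranges assumed as round numbers rather than taken from the article:

```python
# Rough estimate of the one-way signal delay when traffic is relayed through a
# geostationary TDRS satellite. Both path lengths are assumed round numbers for
# a near-overhead geometry, not published figures.

C = 299_792_458.0            # speed of light, m/s

iss_to_tdrs_m = 36_000e3     # assumed ISS-to-relay slant range, m
tdrs_to_ground_m = 36_000e3  # assumed relay-to-ground-station slant range, m

one_way_s = (iss_to_tdrs_m + tdrs_to_ground_m) / C
print(f"One-way relay delay: {one_way_s * 1000:.0f} ms")
# i.e. roughly a quarter of a second each way, or about half a second round
# trip, for links routed via a geostationary relay.
```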
Automated spacecraft are fitted with their own communications equipment; the ATV used a laser attached to the spacecraft and the Proximity Communications Equipment attached to Zvezda to accurately dock with the station. The US Orbital Segment of the ISS is equipped with approximately 100 commercial off-the-shelf laptops running Windows or Linux. These devices are modified to use the station's 28V DC power system and with additional ventilation since heat generated by the devices can stagnate in the weightless environment. NASA prefers to keep a high commonality between laptops and spare parts are kept on the station so astronauts can repair laptops when needed. The laptops are divided into two groups: the Portable Computer System (PCS) and Station Support Computers (SSC). PCS laptops run Linux and are used for connecting to the station's primary Command & Control computer (C&C MDM), which runs on Debian Linux, a switch made from Windows in 2013 for reliability and flexibility. The primary computer supervises the critical systems that keep the station in orbit and supporting life. Since the primary computer has no display or keyboards, astronauts use a PCS laptop to connect as remote terminals via a USB to 1553 adapter. The primary computer experienced failures in 2001, 2007, and 2017. The 2017 failure required a spacewalk to replace external components. SSC laptops are used for everything else on the station, including reviewing procedures, managing scientific experiments, communicating over e-mail or video chat, and for entertainment during downtime. SSC laptops connect to the station's wireless LAN via Wi-Fi, which connects to the ground via the Ku band. While originally this provided speeds of 10 Mbit/s download and 3 Mbit/s upload from the station, NASA upgraded the system in 2019 and increased the speeds to 600 Mbit/s. ISS crew members have access to the internet. Operations Expeditions Each permanent crew is given an expedition number. Expeditions run up to six months, from launch until undocking, an 'increment' covers the same time period, but includes cargo spacecraft and all activities. Expeditions 1 to 6 consisted of three-person crews. After the destruction of NASA's Space Shuttle Columbia, Expeditions 7 to 12 were reduced to two-person "caretaker" crews who could maintain the station, because a larger crew could not be fully resupplied by the small Russian Progress cargo spacecraft. After the Shuttle fleet returned to flight, three person crews also returned to the ISS beginning with Expedition 13. As the Shuttle flights expanded the station, crew sizes also expanded, eventually reaching six around 2010. With the arrival of crew on larger US commercial spacecraft beginning in 2020, crew size has been increased to seven, the number for which ISS was originally designed. Oleg Kononenko of Roscosmos holds the record for the longest time spent in space and at the ISS, accumulating nearly 1,111 days in space over the course of five long-duration missions on the ISS (Expedition 17, 30/31, 44/45, 57/58/59 and 69/70/71). He also served as commander three times (Expedition 31, 58/59 and 70/71). Peggy Whitson of NASA and Axiom Space has spent the most time in space of any American, accumulating over 675 days in space during her time on Expeditions 5, 16, and 50/51/52 and Axiom Mission 2. 
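The Ku-band data rates quoted above translate into very different turnaround times for science data. A minimal sketch, using an arbitrary 1 GB file as an example (the file size is not a figure from the text):

```python
# Illustrative comparison of the pre- and post-2019 Ku-band rates quoted above.
# The 1 GB file size is an arbitrary example.

def transfer_minutes(size_bytes: float, rate_mbit_per_s: float) -> float:
    """Time to move size_bytes at rate_mbit_per_s, ignoring protocol overhead."""
    return (size_bytes * 8) / (rate_mbit_per_s * 1e6) / 60

size = 1e9  # 1 GB of experiment data (hypothetical)
for rate in (10, 600):  # Mbit/s: the original rate vs the 2019 upgrade
    print(f"{rate:>4} Mbit/s: {transfer_minutes(size, rate):5.1f} minutes")
```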
Private flights Travellers who pay for their own passage into space are termed spaceflight participants by Roscosmos and NASA, and are sometimes referred to as "space tourists", a term they generally dislike. , thirteen space tourists have visited the ISS; nine were transported to the ISS on Russian Soyuz spacecraft, and four were transported on American SpaceX Dragon 2 spacecraft. For one-tourist missions, when professional crews change over in numbers not divisible by the three seats in a Soyuz, and a short-stay crewmember is not sent, the spare seat is sold by MirCorp through Space Adventures. Space tourism was halted in 2011 when the Space Shuttle was retired and the station's crew size was reduced to six, as the partners relied on Russian transport seats for access to the station. Soyuz flight schedules increased after 2013, allowing five Soyuz flights (15 seats) with only two expeditions (12 seats) required. The remaining seats were to be sold for around US$40 million each to members of the public who could pass a medical exam. ESA and NASA criticised private spaceflight at the beginning of the ISS, and NASA initially resisted training Dennis Tito, the first person to pay for his own passage to the ISS. Anousheh Ansari became the first self-funded woman to fly to the ISS as well as the first Iranian in space. Officials reported that her education and experience made her much more than a tourist, and her performance in training had been "excellent." She did Russian and European studies involving medicine and microbiology during her 10-day stay. The 2009 documentary Space Tourists follows her journey to the station, where she fulfilled "an age-old dream of man: to leave our planet as a 'normal person' and travel into outer space." In 2008, spaceflight participant Richard Garriott placed a geocache aboard the ISS during his flight. This is currently the only non-terrestrial geocache in existence. At the same time, the Immortality Drive, an electronic record of eight digitised human DNA sequences, was placed aboard the ISS. After a 12-year hiatus, the first two wholly space tourism-dedicated private spaceflights to the ISS were undertaken. Soyuz MS-20 launched in December 2021, carrying visiting Roscosmos cosmonaut Alexander Misurkin and two Japanese space tourists under the aegis of the private company Space Adventures; in April 2022, the company Axiom Space chartered a SpaceX Dragon 2 spacecraft and sent its own employee astronaut Michael Lopez-Alegria and three space tourists to the ISS for Axiom Mission 1, followed in May 2023 by one more tourist, John Shoffner, alongside employee astronaut Peggy Whitson and two Saudi astronauts for the Axiom Mission 2. Fleet operations Various crewed and uncrewed spacecraft have supported the station's activities. Flights to the ISS include 37 Space Shuttle, 90 Progress, 71 Soyuz, 5 ATV, 9 HTV, 2 Boeing Starliner, 45 SpaceX Dragon and 20 Cygnus missions. There are currently eight docking ports for visiting spacecraft, with four additional ports installed but not yet put into service: Harmony forward (with PMA 2 & IDA 2) Harmony zenith (with PMA 3 & IDA 3) Harmony nadir (CBM port) Unity nadir (CBM port) Prichal aft Prichal forward Prichal nadir Prichal port Prichal starboard Poisk zenith Rassvet nadir Zvezda aft Forward ports are at the front of the station according to its normal direction of travel and orientation (attitude). Aft is at the rear of the station. Nadir is Earth facing, zenith faced away from Earth. 
Port is to the left if pointing one's feet towards the Earth and looking in the direction of travel, and starboard is to the right. Cargo spacecraft that will perform an orbital re-boost of the station will typically dock at an aft, forward or nadir-facing port. Crewed A total of 281 people representing 23 countries have visited the space station, many of them multiple times. The United States has sent 167 people, Russia has sent 61, Japan has sent 11, Canada has sent nine, Italy has sent six, France and Germany have each sent four, Saudi Arabia, Sweden and the United Arab Emirates have each sent two, and there has been one person from Belarus, Belgium, Brazil, Denmark, Israel, Kazakhstan, Malaysia, Netherlands, South Africa, South Korea, Spain, Turkey and the United Kingdom. Uncrewed Uncrewed spaceflights are made primarily to deliver cargo; however, several Russian modules have also docked to the outpost following uncrewed launches. Resupply missions typically use the Russian Progress spacecraft, former European ATVs, Japanese Kounotori vehicles, and the American Dragon and Cygnus spacecraft. Currently docked/berthed All dates are UTC. Departure dates are the earliest possible and may change. Scheduled missions All dates are UTC. Launch dates are the earliest possible and may change. Docking and berthing of spacecraft The Russian Soyuz and Progress spacecraft can autonomously rendezvous and dock with the station without human intervention. Once within range, the spacecraft begins receiving radio signals from the Kurs docking navigation system on the station. As the spacecraft nears the station, laser-based optical equipment precisely aligns the craft with the docking port and controls the final approach. While the crew on the ISS and spacecraft monitor the procedure, their role is primarily supervisory, with intervention limited to issuing abort commands in emergencies. Although initial development costs were substantial, the system's reliability and standardised components have yielded significant cost reductions for subsequent missions. The American SpaceX Dragon 2 cargo and crewed spacecraft can autonomously rendezvous and dock with the station without human intervention. However, on crewed Dragon missions, the astronauts have the capability to intervene and fly the vehicle manually. Other automated cargo spacecraft typically use a semi-automated process when arriving at and departing from the station. These spacecraft are instructed to approach and park near the station. Once the crew on board the station is ready, the spacecraft is commanded to come close to the station, so that it can be grappled by an astronaut using the Mobile Servicing System robotic arm. The final mating of the spacecraft to the station is achieved using the robotic arm (a process known as berthing). Spacecraft using this semi-automated process include the American Cygnus and the Japanese HTV-X. The now-retired American SpaceX Dragon 1, European ATV and Japanese HTV also used this process. Launch and docking windows Prior to a spacecraft's docking to the ISS, navigation and attitude control (GNC) is handed over to the ground control of the spacecraft's country of origin. GNC is set to allow the station to drift in space, rather than fire its thrusters or turn using gyroscopes. The solar panels of the station are turned edge-on to the incoming spacecraft, so residue from its thrusters does not damage the cells.
Before its retirement, Shuttle launches were often given priority over Soyuz, with occasional priority given to Soyuz arrivals carrying crew and time-critical cargoes, such as biological experiment materials. Repairs Orbital Replacement Units (ORUs) are spare parts that can be readily replaced when a unit either passes its design life or fails. Examples of ORUs are pumps, storage tanks, controller boxes, antennas, and battery units. Some units can be replaced using robotic arms. Most are stored outside the station, either on small pallets called ExPRESS Logistics Carriers (ELCs) or share larger platforms called External Stowage Platforms (ESPs) which also hold science experiments. Both kinds of pallets provide electricity for many parts that could be damaged by the cold of space and require heating. The larger logistics carriers also have local area network (LAN) connections for telemetry to connect experiments. A heavy emphasis on stocking the USOS with ORU's occurred around 2011, before the end of the NASA shuttle programme, as its commercial replacements, Cygnus and Dragon, carry one tenth to one quarter the payload. Unexpected problems and failures have impacted the station's assembly time-line and work schedules leading to periods of reduced capabilities and, in some cases, could have forced abandonment of the station for safety reasons. Serious problems include an air leak from the USOS in 2004, the venting of fumes from an Elektron oxygen generator in 2006, and the failure of the computers in the ROS in 2007 during STS-117 that left the station without thruster, Elektron, Vozdukh and other environmental control system operations. In the latter case, the root cause was found to be condensation inside electrical connectors leading to a short circuit. During STS-120 in 2007 and following the relocation of the P6 truss and solar arrays, it was noted during unfurling that the solar array had torn and was not deploying properly. An EVA was carried out by Scott Parazynski, assisted by Douglas Wheelock. Extra precautions were taken to reduce the risk of electric shock, as the repairs were carried out with the solar array exposed to sunlight. The issues with the array were followed in the same year by problems with the starboard Solar Alpha Rotary Joint (SARJ), which rotates the arrays on the starboard side of the station. Excessive vibration and high-current spikes in the array drive motor were noted, resulting in a decision to substantially curtail motion of the starboard SARJ until the cause was understood. Inspections during EVAs on STS-120 and STS-123 showed extensive contamination from metallic shavings and debris in the large drive gear and confirmed damage to the large metallic bearing surfaces, so the joint was locked to prevent further damage. Repairs to the joints were carried out during STS-126 with lubrication and the replacement of 11 out of 12 trundle bearings on the joint. In September 2008, damage to the S1 radiator was first noticed in Soyuz imagery. The problem was initially not thought to be serious. The imagery showed that the surface of one sub-panel had peeled back from the underlying central structure, possibly because of micro-meteoroid or debris impact. On 15 May 2009, the damaged radiator panel's ammonia tubing was mechanically shut off from the rest of the cooling system by the computer-controlled closure of a valve. The same valve was then used to vent the ammonia from the damaged panel, eliminating the possibility of an ammonia leak. 
It is also known that a Service Module thruster cover struck the S1 radiator after being jettisoned during an EVA in 2008, but its effect, if any, has not been determined. In the early hours of 1 August 2010, a failure in cooling Loop A (starboard side), one of two external cooling loops, left the station with only half of its normal cooling capacity and zero redundancy in some systems. The problem appeared to be in the ammonia pump module that circulates the ammonia cooling fluid. Several subsystems, including two of the four CMGs, were shut down. Planned operations on the ISS were interrupted through a series of EVAs to address the cooling system issue. A first EVA on 7 August 2010, to replace the failed pump module, was not fully completed because of an ammonia leak in one of four quick-disconnects. A second EVA on 11 August removed the failed pump module. A third EVA was required to restore Loop A to normal functionality. The USOS's cooling system is largely built by the US company Boeing, which is also the manufacturer of the failed pump. The four Main Bus Switching Units (MBSUs, located in the S0 truss), control the routing of power from the four solar array wings to the rest of the ISS. Each MBSU has two power channels that feed 160V DC from the arrays to two DC-to-DC power converters (DDCUs) that supply the 124V power used in the station. In late 2011, MBSU-1 ceased responding to commands or sending data confirming its health. While still routing power correctly, it was scheduled to be swapped out at the next available EVA. A spare MBSU was already on board, but a 30 August 2012 EVA failed to be completed when a bolt being tightened to finish installation of the spare unit jammed before the electrical connection was secured. The loss of MBSU-1 limited the station to 75% of its normal power capacity, requiring minor limitations in normal operations until the problem could be addressed. On 5 September 2012, in a second six-hour EVA, astronauts Sunita Williams and Akihiko Hoshide successfully replaced MBSU-1 and restored the ISS to 100% power. On 24 December 2013, astronauts installed a new ammonia pump for the station's cooling system. The faulty cooling system had failed earlier in the month, halting many of the station's science experiments. Astronauts had to brave a "mini blizzard" of ammonia while installing the new pump. It was only the second Christmas Eve spacewalk in NASA history. Mission control centres The components of the ISS are operated and monitored by their respective space agencies at mission control centres across the globe, primarily the Christopher C. Kraft Jr. Mission Control Center in Houston and the RKA Mission Control Center (TsUP) in Moscow, with support from Tsukuba Space Center in Japan, Payload Operations and Integration Center in Huntsville, Alabama, U.S., Columbus Control Center in Munich, Germany and Mobile Servicing System Control at the Canadian Space Agency's headquarters in Saint-Hubert, Quebec. Life aboard Living quarters The living and working space aboard the International Space Station (ISS) is larger than a six-bedroom house, equipped with seven private sleeping quarters, three bathrooms, two dining rooms, a gym, and a panoramic 360-degree-view bay window. The station provides dedicated crew quarters for long-term crew members. Two "sleep stations" are located in the Zvezda module, one in Nauka, and four in Harmony. 
These soundproof, person-sized booths offer privacy, ventilation, and basic amenities such as a sleeping bag, a reading lamp, a desktop, a shelf, and storage for personal items. The quarters in Zvezda include a small window but have less ventilation and soundproofing. Visiting crew members use tethered sleeping bags attached to available wall space. While it is possible to sleep floating freely, this is generally avoided to prevent collisions with sensitive equipment. Proper ventilation is critical, as astronauts risk oxygen deprivation if exhaled carbon dioxide accumulates in a bubble around their heads. The station’s lighting system is adjustable, allowing for dimming, switching off, and colour temperature changes to support crew activities and rest. Crew activities The ISS operates on Coordinated Universal Time (UTC). A typical day aboard the ISS begins at 06:00 with wake-up, post-sleep routines, and a morning inspection of the station. After breakfast, the crew holds a daily planning conference with Mission Control, starting work around 08:10. Morning tasks include scheduled exercise, scientific experiments, maintenance, or operational duties. Following a one-hour lunch break at 13:05, the crew resumes their afternoon schedule of work and exercise. Pre-sleep activities, including dinner and a crew conference, begin at 19:30, with the scheduled sleep period starting at 21:30. The crew works approximately 10 hours on weekdays and 5 hours on Saturdays, with the remaining time allocated for relaxation or catching up on tasks. Free time often involves enjoying personal hobbies, communicating with family, or gazing out at Earth through the station’s windows. When the Space Shuttle was operating, the ISS crew aligned with the shuttle crew's Mission Elapsed Time, a flexible schedule based on the shuttle's launch. To simulate night conditions, the station’s windows are covered during designated sleep periods, as the ISS experiences 16 sunrises and sunsets daily due to its orbital speed. Reflection and material culture Reflection of individual and crew characteristics are found particularly in the decoration of the station and expressions in general, such as religion. The latter has produced a certain material economy between the station and Russia in particular. The micro-society of the station, as well as wider society, and possibly the emergence of distinct station cultures, is being studied by analyzing many aspects, from art to dust accumulation, as well as archaeologically how material of the ISS has been discarded. Food and personal hygiene On the USOS, most of the food aboard is vacuum sealed in plastic bags; cans are rare because they are heavy and expensive to transport. Preserved food is not highly regarded by the crew and taste is reduced in microgravity, so efforts are taken to make the food more palatable, including using more spices than in regular cooking. The crew looks forward to the arrival of any spacecraft from Earth as they bring fresh fruit and vegetables. Care is taken that foods do not create crumbs, and liquid condiments are preferred over solid to avoid contaminating station equipment. Each crew member has individual food packages and cooks them in the galley, which has two food warmers, a refrigerator (added in November 2008), and a water dispenser that provides heated and unheated water. Drinks are provided as dehydrated powder that is mixed with water before consumption. 
Drinks and soups are sipped from plastic bags with straws, while solid food is eaten with a knife and fork attached to a tray with magnets to prevent them from floating away. Any food that floats away, including crumbs, must be collected to prevent it from clogging the station's air filters and other equipment. Showers on space stations were introduced in the early 1970s on Skylab and Salyut 3. By Salyut 6, in the early 1980s, the crew complained of the complexity of showering in space, which was a monthly activity. The ISS does not feature a shower; instead, crewmembers wash using a water jet and wet wipes, with soap dispensed from a toothpaste tube-like container. Crews are also provided with rinseless shampoo and edible toothpaste to save water. There are two space toilets on the ISS, both of Russian design, located in Zvezda and Tranquility. These Waste and Hygiene Compartments use a fan-driven suction system similar to the Space Shuttle Waste Collection System. Astronauts first fasten themselves to the toilet seat, which is equipped with spring-loaded restraining bars to ensure a good seal. A lever operates a powerful fan and a suction hole slides open: the air stream carries the waste away. Solid waste is collected in individual bags which are stored in an aluminium container. Full containers are transferred to Progress spacecraft for disposal. Liquid waste is evacuated by a hose connected to the front of the toilet, with anatomically correct "urine funnel adapters" attached to the tube so that men and women can use the same toilet. The diverted urine is collected and transferred to the Water Recovery System, where it is recycled into drinking water. In 2021, the arrival of the Nauka module also brought a third toilet to the ISS. Crew health and safety Overall On 12 April 2019, NASA reported medical results from the Astronaut Twin Study. Astronaut Scott Kelly spent a year in space on the ISS, while his identical twin spent the year on Earth. Several long-lasting changes were observed, including those related to alterations in DNA and cognition, when one twin was compared with the other. In November 2019, researchers reported that astronauts experienced serious blood flow and clot problems while on board the ISS, based on a six-month study of 11 healthy astronauts. The results may influence long-term spaceflight, including a mission to the planet Mars, according to the researchers. Radiation The ISS is partially protected from the space environment by Earth's magnetic field. From an average distance of about from the Earth's surface, depending on Solar activity, the magnetosphere begins to deflect solar wind around Earth and the space station. Solar flares are still a hazard to the crew, who may receive only a few minutes warning. In 2005, during the initial "proton storm" of an X-3 class solar flare, the crew of Expedition 10 took shelter in a more heavily shielded part of the ROS designed for this purpose. Subatomic charged particles, primarily protons from cosmic rays and solar wind, are normally absorbed by Earth's atmosphere. When they interact in sufficient quantity, their effect is visible to the naked eye in a phenomenon called an aurora. Outside Earth's atmosphere, ISS crews are exposed to approximately one millisievert each day (about a year's worth of natural exposure on Earth), resulting in a higher risk of cancer. 
Radiation can penetrate living tissue and damage the DNA and chromosomes of lymphocytes; being central to the immune system, any damage to these cells could contribute to the lower immunity experienced by astronauts. Radiation has also been linked to a higher incidence of cataracts in astronauts. Protective shielding and medications may lower the risks to an acceptable level. Radiation levels on the ISS are between 12 and 28.8 milli rads per day, about five times greater than those experienced by airline passengers and crew, as Earth's electromagnetic field provides almost the same level of protection against solar and other types of radiation in low Earth orbit as in the stratosphere. For example, on a 12-hour flight, an airline passenger would experience 0.1 millisieverts of radiation, or a rate of 0.2 millisieverts per day; this is one fifth the rate experienced by an astronaut in LEO. Additionally, airline passengers experience this level of radiation for a few hours of flight, while the ISS crew are exposed for their whole stay on board the station. Stress There is considerable evidence that psychosocial stressors are among the most important impediments to optimal crew morale and performance. Cosmonaut Valery Ryumin wrote in his journal during a particularly difficult period on board the Salyut 6 space station: "All the conditions necessary for murder are met if you shut two men in a cabin measuring 18 feet by 20 [5.5 m × 6 m] and leave them together for two months." NASA's interest in psychological stress caused by space travel, initially studied when their crewed missions began, was rekindled when astronauts joined cosmonauts on the Russian space station Mir. Common sources of stress in early US missions included maintaining high performance under public scrutiny and isolation from peers and family. The latter is still often a cause of stress on the ISS, such as when the mother of NASA astronaut Daniel Tani died in a car accident, and when Michael Fincke was forced to miss the birth of his second child. A study of the longest spaceflight concluded that the first three weeks are a critical period where attention is adversely affected because of the demand to adjust to the extreme change of environment. ISS crew flights typically last about five to six months. The ISS working environment includes further stress caused by living and working in cramped conditions with people from very different cultures who speak a different language. First-generation space stations had crews who spoke a single language; second- and third-generation stations have crew from many cultures who speak many languages. Astronauts must speak English and Russian, and knowing additional languages is even better. Due to the lack of gravity, confusion often occurs. Even though there is no up and down in space, some crew members feel like they are oriented upside down. They may also have difficulty measuring distances. This can cause problems like getting lost inside the space station, pulling switches in the wrong direction or misjudging the speed of an approaching vehicle during docking. Medical The physiological effects of long-term weightlessness include muscle atrophy, deterioration of the skeleton (osteopenia), fluid redistribution, a slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass, and puffiness of the face. 
Sleep is regularly disturbed on the ISS because of mission demands, such as incoming or departing spacecraft. Sound levels in the station are unavoidably high. The atmosphere is unable to thermosiphon naturally, so fans are required at all times to process the air, which would otherwise stagnate in the freefall (zero-G) environment. To prevent some of the adverse effects on the body, the station is equipped with: two treadmills (including the COLBERT); the ARED (Advanced Resistive Exercise Device), which enables various weightlifting exercises that add muscle without raising (or compensating for) the astronauts' reduced bone density; and a stationary bicycle. Each astronaut spends at least two hours per day exercising on the equipment. Astronauts use bungee cords to strap themselves to the treadmill. Microbiological environmental hazards Hazardous molds that can foul air and water filters may develop aboard space stations. They can produce acids that degrade metal, glass, and rubber. They can also be harmful to the crew's health. Microbiological hazards have led to the development of the LOCAD-PTS (a portable test system), which identifies common bacteria and molds faster than standard methods of culturing, which may require a sample to be sent back to Earth. Researchers in 2018 reported, after detecting the presence of five Enterobacter bugandensis bacterial strains on the ISS (none of which are pathogenic to humans), that microorganisms on the ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts. Contamination on space stations can be prevented by reduced humidity, and by using paint that contains mold-killing chemicals, as well as the use of antiseptic solutions. All materials used in the ISS are tested for resistance against fungi. Since 2016, a series of ESA-sponsored experiments has been conducted to test the anti-bacterial properties of various materials, with the goal of developing "smart surfaces" that mitigate bacterial growth in multiple ways, using the best method for a particular circumstance. Dubbed "Microbial Aerosol Tethering on Innovative Surfaces" (MATISS), the programme involves the deployment of small plaques containing an array of glass squares covered with different test coatings. They remain on the station for six months before being returned to Earth for analysis. The most recent and final experiment of the series was launched on 5 June 2023 aboard the SpaceX CRS-28 cargo mission to the ISS, comprising four plaques. Whereas previous experiments in the series were limited to analysis by light microscopy, this experiment uses quartz glass made of pure silica, which will allow spectrographic analysis. Two of the plaques were returned after eight months and the remaining two after 16 months. In April 2019, NASA reported that a comprehensive study had been conducted into the microorganisms and fungi present on the ISS. The experiment was performed over a period of 14 months on three different flight missions, and involved taking samples from eight predefined locations inside the station, then returning them to Earth for analysis. In prior experiments, analysis was limited to culture-based methods, thus overlooking microbes which cannot be grown in culture. The present study used molecular-based methods in addition to culturing, resulting in a more complete catalog.
The results may be useful in improving the health and safety conditions for astronauts, as well as better understanding other closed-in environments on Earth such as clean rooms used by the pharmaceutical and medical industries. Noise Space flight is not inherently quiet, with noise levels exceeding acoustic standards as far back as the Apollo missions. For this reason, NASA and the International Space Station international partners have developed noise control and hearing loss prevention goals as part of the health program for crew members. Specifically, these goals have been the primary focus of the ISS Multilateral Medical Operations Panel (MMOP) Acoustics Subgroup since the first days of ISS assembly and operations. The effort includes contributions from acoustical engineers, audiologists, industrial hygienists, and physicians who comprise the subgroup's membership from NASA, Roscosmos, the European Space Agency (ESA), the Japanese Aerospace Exploration Agency (JAXA), and the Canadian Space Agency (CSA). When compared to terrestrial environments, the noise levels incurred by astronauts and cosmonauts on the ISS may seem insignificant and typically occur at levels that would not be of major concern to the Occupational Safety and Health Administration – rarely reaching 85 dBA. But crew members are exposed to these levels 24 hours a day, seven days a week, with current missions averaging six months in duration. These levels of noise also impose risks to crew health and performance in the form of sleep interference and communication, as well as reduced alarm audibility. Over the 19 plus year history of the ISS, significant efforts have been put forth to limit and reduce noise levels on the ISS. During design and pre-flight activities, members of the Acoustic Subgroup have written acoustic limits and verification requirements, consulted to design and choose the quietest available payloads, and then conducted acoustic verification tests prior to launch. During spaceflights, the Acoustics Subgroup has assessed each ISS module's in flight sound levels, produced by a large number of vehicle and science experiment noise sources, to assure compliance with strict acoustic standards. The acoustic environment on ISS changed when additional modules were added during its construction, and as additional spacecraft arrive at the ISS. The Acoustics Subgroup has responded to this dynamic operations schedule by successfully designing and employing acoustic covers, absorptive materials, noise barriers, and vibration isolators to reduce noise levels. Moreover, when pumps, fans, and ventilation systems age and show increased noise levels, this Acoustics Subgroup has guided ISS managers to replace the older, noisier instruments with quiet fan and pump technologies, significantly reducing ambient noise levels. NASA has adopted most-conservative damage risk criteria (based on recommendations from the National Institute for Occupational Safety and Health and the World Health Organization), in order to protect all crew members. The MMOP Acoustics Subgroup has adjusted its approach to managing noise risks in this unique environment by applying, or modifying, terrestrial approaches for hearing loss prevention to set these conservative limits. One innovative approach has been NASA's Noise Exposure Estimation Tool (NEET), in which noise exposures are calculated in a task-based approach to determine the need for hearing protection devices (HPDs). 
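The task-based approach described above can be illustrated with the standard equal-energy combination of sound levels and durations. The sketch below is a generic equivalent-level (Leq) calculation with an entirely hypothetical task list; it is not NASA's actual NEET tool:

```python
import math

# Generic task-based noise-exposure sketch: combine A-weighted levels and
# durations into a 24-hour equivalent level (Leq). The task list is hypothetical.

tasks = [
    # (level in dBA, duration in hours)
    (72.0, 8.0),    # e.g. working near lab equipment
    (65.0, 6.0),    # e.g. quieter modules and exercise
    (60.0, 10.0),   # e.g. sleep station
]

total_hours = sum(hours for _, hours in tasks)
energy = sum(hours * 10 ** (level / 10) for level, hours in tasks)
leq = 10 * math.log10(energy / total_hours)

print(f"24-hour equivalent level: {leq:.1f} dBA over {total_hours:.0f} h")
# A result approaching the applicable limit would flag the noisier tasks as
# candidates for hearing protection.
```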
Guidance for use of HPDs, either mandatory use or recommended, is then documented in the Noise Hazard Inventory and posted for crew reference during their missions. The Acoustics Subgroup also tracks spacecraft noise exceedances, applies engineering controls, and recommends hearing protective devices to reduce crew noise exposures. Finally, hearing thresholds are monitored on-orbit, during missions. There have been no persistent mission-related hearing threshold shifts among US Orbital Segment crewmembers (JAXA, CSA, ESA, NASA) during what is approaching 20 years of ISS mission operations, or nearly 175,000 work hours. In 2020, the MMOP Acoustics Subgroup received the Safe-In-Sound Award for Innovation for their combined efforts to mitigate any health effects of noise. Fire and toxic gases An onboard fire and a toxic gas leak are other potential hazards. Ammonia is used in the external radiators of the station and could potentially leak into the pressurised modules. Orbit, environment, debris and visibility Altitude and orbital inclination The ISS is currently maintained in a nearly circular orbit with a minimum mean altitude of and a maximum of , in the centre of the thermosphere, at an inclination of 51.6 degrees to Earth's equator and with an eccentricity of 0.007. This orbit was selected because it is the lowest inclination that can be directly reached by Russian Soyuz and Progress spacecraft launched from Baikonur Cosmodrome at 46° N latitude without overflying China or dropping spent rocket stages in inhabited areas. The station travels at an average speed of about 7.66 km/s and completes roughly 15.5 orbits per day (93 minutes per orbit). The station's altitude was allowed to fall around the time of each NASA shuttle flight to permit heavier loads to be transferred to the station. After the retirement of the shuttle, the nominal orbit of the space station was raised in altitude (from about 350 km to about 400 km). Other, more frequent supply spacecraft do not require this adjustment as they are substantially higher-performance vehicles. Atmospheric drag reduces the altitude by about 2 km a month on average. Orbital boosting can be performed by the station's two main engines on the Zvezda service module, or by Russian or European spacecraft docked to Zvezda's aft port. The Automated Transfer Vehicle was constructed with the possibility of adding a second docking port to its aft end, allowing other craft to dock and boost the station. It takes approximately two orbits (three hours) for a boost to a higher altitude to be completed. Maintaining ISS altitude uses about 7.5 tonnes of chemical fuel per annum, at an annual cost of about $210 million. The Russian Orbital Segment contains the Data Management System, which handles Guidance, Navigation and Control (ROS GNC) for the entire station. Initially, Zarya, the first module of the station, controlled the station until a short time after the Russian service module Zvezda docked, at which point control was transferred to Zvezda. Zvezda contains the ESA-built DMS-R Data Management System. Using two fault-tolerant computers (FTC), Zvezda computes the station's position and orbital trajectory using redundant Earth horizon sensors, solar horizon sensors, and Sun and star trackers. The FTCs each contain three identical processing units working in parallel and provide advanced fault-masking by majority voting. Orientation Zvezda uses gyroscopes (reaction wheels) and thrusters to turn itself.
Gyroscopes do not require propellant; instead they use electricity to 'store' momentum in flywheels by turning in the opposite direction to the station's movement. The USOS has its own computer-controlled gyroscopes to handle its extra mass. When gyroscopes 'saturate', thrusters are used to cancel out the stored momentum. In February 2005, during Expedition 10, an incorrect command was sent to the station's computer, causing it to use about 14 kilograms of propellant before the fault was noticed and fixed. When attitude control computers in the ROS and USOS fail to communicate properly, this can result in a rare 'force fight' in which the ROS GNC computer must ignore its USOS counterpart, which itself has no thrusters. Docked spacecraft can also be used to maintain station attitude, such as for troubleshooting or during the installation of the S3/S4 truss, which provides electrical power and data interfaces for the station's electronics. Orbital debris threats The low altitudes at which the ISS orbits are also home to a variety of space debris, including spent rocket stages, defunct satellites, explosion fragments (including materials from anti-satellite weapon tests), paint flakes, slag from solid rocket motors, and coolant released by US-A nuclear-powered satellites. These objects, in addition to natural micrometeoroids, are a significant threat. Objects large enough to destroy the station can be tracked, and are therefore not as dangerous as smaller debris. Objects too small to be detected by optical and radar instruments, from approximately 1 cm down to microscopic size, number in the trillions. Despite their small size, some of these objects are a threat because of their kinetic energy and direction in relation to the station. Spacewalking crew in spacesuits are also at risk of suit damage and consequent exposure to vacuum. Ballistic panels, also called micrometeorite shielding, are incorporated into the station to protect pressurised sections and critical systems. The type and thickness of these panels depend on their predicted exposure to damage. The station's shields and structure have different designs on the ROS and the USOS. On the USOS, Whipple shields are used. The US segment modules consist of an inner layer made from aluminium, intermediate layers of Kevlar and Nextel (a ceramic fabric), and an outer layer of stainless steel, which causes objects to shatter into a cloud before hitting the hull, thereby spreading the energy of impact. On the ROS, a carbon-fibre-reinforced polymer honeycomb screen is spaced from the hull, an aluminium honeycomb screen is spaced from that, with a screen-vacuum thermal insulation covering, and glass cloth over the top. Space debris is tracked remotely from the ground, and the station crew can be notified. If necessary, thrusters on the Russian Orbital Segment can alter the station's orbital altitude, avoiding the debris. These Debris Avoidance Manoeuvres (DAMs) are not uncommon, taking place if computational models show the debris will approach within a certain threat distance. Ten DAMs had been performed by the end of 2009. Usually, an increase in orbital velocity of the order of 1 m/s is used to raise the orbit by one or two kilometres. If necessary, the altitude can also be lowered, although such a manoeuvre wastes propellant.
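Two of the figures above, the roughly 93-minute orbital period and the rule of thumb that a ~1 m/s posigrade burn raises the orbit by one to two kilometres, can be checked with basic orbital mechanics. A minimal sketch, assuming a 400 km mean altitude:

```python
import math

# Rough checks of the ~93-minute period and of the rule of thumb that a ~1 m/s
# posigrade burn raises a near-circular orbit by one to two kilometres.
# A 400 km mean altitude is assumed.

MU = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0     # mean Earth radius, m
altitude = 400_000.0      # assumed mean altitude, m

a = R_EARTH + altitude                          # semi-major axis, circular orbit
period = 2 * math.pi * math.sqrt(a**3 / MU)     # Kepler's third law
v = math.sqrt(MU / a)                           # circular orbital speed

# First-order result for a small tangential burn dv on a near-circular orbit:
# the semi-major axis (mean altitude) changes by about 2*a*dv/v.
dv = 1.0                                        # m/s
delta_a = 2 * a * dv / v

print(f"Orbital period: {period / 60:.1f} min")
print(f"Orbital speed:  {v / 1000:.2f} km/s")
print(f"Altitude gain from a {dv:.0f} m/s burn: {delta_a / 1000:.1f} km")
```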
If a threat from orbital debris is identified too late for a DAM to be safely conducted, the station crew close all the hatches aboard the station and retreat into their spacecraft in order to be able to evacuate in the event the station was seriously damaged by the debris. Partial station evacuations have occurred on 13 March 2009, 28 June 2011, 24 March 2012, 16 June 2015, November 2021, and 27 June 2024. The November 2021 evacuation was caused by a Russian anti-satellite weapon test. NASA administrator Bill Nelson said it was unthinkable that Russia would endanger the lives of everyone on ISS, including their own cosmonauts. Visibility from Earth The ISS is visible in the sky to the naked eye as a visibly moving, bright white dot, when crossing the sky and being illuminated by the Sun, during twilight, the hours after sunset and before sunrise, when the station remains sunlit, outside of Earth's shadow, but the ground and sky are dark. It crosses the skies at latitudes between the polar regions. Depending on the path it takes across the sky, the time it takes the station to move across the horizon or from one to the other may be short or up to 10 minutes, while likely being only visible part of that time because of it moving into or out of Earth's shadow. It then returns around every 90 minutes, with the time of the day that it crosses the sky shifting over the course of some weeks, and therefore before returning to twilight and visible illumination. Because of the size of its reflective surface area, the ISS is the brightest artificial object in the sky (excluding other satellite flares), with an approximate maximum magnitude of −4 when in sunlight and overhead (similar to Venus), and a maximum angular size of 63 arcseconds. Tools are provided by a number of websites such as Heavens-Above (see Live viewing below) as well as smartphone applications that use orbital data and the observer's longitude and latitude to indicate when the ISS will be visible (weather permitting), where the station will appear to rise, the altitude above the horizon it will reach and the duration of the pass before the station disappears either by setting below the horizon or entering into Earth's shadow. In November 2012 NASA launched its "Spot the Station" service, which sends people text and email alerts when the station is due to fly above their town. The station is visible from 95% of the inhabited land on Earth, but is not visible from extreme northern or southern latitudes. Under specific conditions, the ISS can be observed at night on five consecutive orbits. Those conditions are 1) a mid-latitude observer location, 2) near the time of the solstice with 3) the ISS passing in the direction of the pole from the observer near midnight local time. The three photos show the first, middle and last of the five passes on 5–6 June 2014. Astrophotography Using a telescope-mounted camera to photograph the station is a popular hobby for astronomers, while using a mounted camera to photograph the Earth and stars is a popular hobby for crew. The use of a telescope or binoculars allows viewing of the ISS during daylight hours. Transits of the ISS in front of the Sun, particularly during an eclipse (and so the Earth, Sun, Moon, and ISS are all positioned approximately in a single line) are of particular interest for amateur astronomers. 
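Pass-prediction services such as those mentioned above work from the station's published orbital elements and the observer's coordinates. A minimal sketch of the same calculation is shown below using the third-party Python library Skyfield; the library choice, data source and observer location are assumptions for illustration, not tools named in the article:

```python
from skyfield.api import load, wgs84

ts = load.timescale()

# Fetch current two-line elements for space stations from CelesTrak and pick the ISS.
stations_url = "https://celestrak.org/NORAD/elements/gp.php?GROUP=stations&FORMAT=tle"
satellites = load.tle_file(stations_url)
iss = {sat.name: sat for sat in satellites}["ISS (ZARYA)"]

# Observer location (example coordinates; substitute your own).
observer = wgs84.latlon(48.2082, 16.3738)

t0 = ts.now()
t1 = ts.tt_jd(t0.tt + 1.0)          # search the next 24 hours

# Times when the station rises above, culminates, and sets below 10 degrees
# altitude as seen from the observer.
times, events = iss.find_events(observer, t0, t1, altitude_degrees=10.0)

eph = load("de421.bsp")             # planetary ephemeris for the sunlit test
for t, event in zip(times, events):
    name = ("rise", "culminate", "set")[event]
    sunlit = iss.at(t).is_sunlit(eph)
    print(t.utc_strftime("%Y-%m-%d %H:%M"), name, "sunlit" if sunlit else "in shadow")
# A pass is only visible to the naked eye when the station is sunlit while the
# observer's sky is dark, so a full predictor would also check local twilight.
```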
International co-operation Involving five space programs and fifteen countries, the International Space Station is the most politically and legally complex space exploration programme in history. The 1998 Space Station Intergovernmental Agreement sets forth the primary framework for international cooperation among the parties. A series of subsequent agreements govern other aspects of the station, ranging from jurisdictional issues to a code of conduct among visiting astronauts. Brazil was also invited to participate in the programme, the only developing country to receive such an invitation. Under the agreement framework, Brazil was to provide six pieces of hardware, and in exchange, would receive ISS utilization rights. However, Brazil was unable to deliver any of the elements due to a lack of funding and political priority within the country. Brazil officially dropped out of the ISS programme in 2007. Following the 2022 Russian invasion of Ukraine, continued cooperation between Russia and other countries on the International Space Station has been put into question. Roscosmos Director General Dmitry Rogozin insinuated that Russian withdrawal could cause the International Space Station to de-orbit due to lack of reboost capabilities, writing in a series of tweets, "If you block cooperation with us, who will save the ISS from an unguided de-orbit to impact on the territory of the US or Europe? There's also the chance of impact of the 500-ton construction in India or China. Do you want to threaten them with such a prospect? The ISS doesn't fly over Russia, so all the risk is yours. Are you ready for it?" (This latter claim is untrue: the ISS flies over all parts of the Earth between 51.6 degrees latitude north and south, approximately the latitude of Saratov.) Rogozin later tweeted that normal relations between ISS partners could only be restored once sanctions have been lifted, and indicated that Roscosmos would submit proposals to the Russian government on ending cooperation. NASA stated that, if necessary, US corporation Northrop Grumman has offered a reboost capability that would keep the ISS in orbit. On 26 July 2022, Yury Borisov, Rogozin's successor as head of Roscosmos, submitted to Russian President Putin plans for withdrawal from the programme after 2024. However, Robyn Gatens, the NASA official in charge of the space station, responded that NASA had not received any formal notices from Roscosmos concerning withdrawal plans. Participating countries European Space Agency End of mission Originally the ISS was planned to be a 15-year mission. Therefore, an end of mission had been worked on, but was several times postponed due to the success and support for the operation of the station. As a result, the oldest modules of the ISS have been in orbit for more than 20 years, with their reliability having decreased. It has been proposed to use funds elsewhere instead, for example for a return to the Moon. According to the Outer Space Treaty, the parties are legally responsible for all spacecraft or modules they launch. An unmaintained station would pose an orbital and re-entry hazard. Russia has stated that it plans to pull out of the ISS program after 2025. However, Russian modules will provide orbital station-keeping until 2028. The US planned in 2009 to deorbit the ISS in 2016. But on 30 September 2015, Boeing's contract with NASA as prime contractor for the ISS was extended to 30 September 2020. 
Part of Boeing's services under the contract related to extending the station's primary structural hardware past 2020 to the end of 2028. In July 2018, the Space Frontier Act of 2018 was intended to extend operations of the ISS to 2030. This bill was unanimously approved in the Senate, but failed to pass in the U.S. House. In September 2018, the Leading Human Spaceflight Act was introduced with the intent to extend operations of the ISS to 2030, and was confirmed in December 2018. Congress later passed similar provisions in its CHIPS and Science Act, signed into law by U.S. President Joe Biden on 9 August 2022. If until 2031 Commercial LEO Destinations providers are not sufficient to accommodate NASA's projects, NASA is suggesting to extend ISS operations beyond 2031. NASA's disposal plans NASA considered originally several possible disposal options: natural orbital decay with random reentry (as with Skylab), boosting the station to a higher altitude (which would delay reentry), and a controlled de-orbit targeting a remote ocean area. NASA determined that random reentry carried an unacceptable risk of producing hazardous space debris that could hit people or property and re-boosting the station would be costly and could also create hazards. Prior to 2010, plans had contemplated using a slightly modified Progress spacecraft to de-orbit the ISS. However, NASA concluded Progress would not be adequate for the job, and decided on a spacecraft specifically designed for the job. In January 2022, NASA announced a planned date of January 2031 to de-orbit the ISS using the "U.S. Deorbit Vehicle" and direct any remnants into a remote area of the South Pacific Ocean that has come to be known as the spacecraft cemetery. NASA plans to launch the deorbit vehicle in 2030, docking at the Harmony forward port. The deorbit vehicle will remain attached, dormant, for about a year as the station's orbit naturally decays to . The spacecraft would then conduct one or more orientation burns to lower the perigee to , followed by a final deorbiting burn. NASA began planning for the deorbit vehicle after becoming wary of Russia pulling out of the ISS abruptly, leaving the other partners with few good options for a controlled reentry. In June 2024, NASA selected SpaceX to develop the U.S. Deorbit Vehicle, a contract potentially worth $843 million. The vehicle will consist of an existing Cargo Dragon spacecraft which will be paired with a significantly lengthened trunk module which will be equipped with 46 Draco thrusters (instead of the normal 16) and will carry of propellant, nearly six times the normal load. NASA is still working to secure all the necessary funding to build, launch and operate the deorbit vehicle. Post mission proposals and plans The follow-up to NASA's program/strategy is the Commercial LEO Destinations Program, meant to allow private industry to build and maintain their own stations, and NASA procuring access as a customer, starting in 2028. Similarly, the ESA has been seeking new private space stations to provide orbital services, as well as retrieve materials, from the ISS. Axiom Station is planned to begin as a single module temporarily hosted at the ISS in 2027. Additionally, there have been suggestions in the commercial space industry that the ISS could be converted to commercial operations after it is retired by government entities, including turning it into a space hotel. 
Russia previously has planned to use its orbital segment for the construction of its OPSEK station after the ISS is decommissioned. The modules under consideration for removal from the current ISS included the Multipurpose Laboratory Module (Nauka; MLM), launched in July 2021, and the other new Russian modules that are proposed to be attached to Nauka. These newly launched modules would still be well within their useful lives in 2024. At the end of 2011, the Exploration Gateway Platform concept also proposed using leftover USOS hardware and Zvezda 2 as a refuelling depot and service station located at one of the Earth–Moon Lagrange points. However, the entire USOS was not designed for disassembly and will be discarded. Western space industry has suggested in 2022 using the ISS as a platform to develop orbital salvage capacities, by companies such as CisLunar Industries working on using space debris as fuel, instead of plunging it into the ocean. NASA has stated that by July 2024 it has not seen any viable proposals for reuse of the ISS or parts of it. Cost The ISS has been described as the most expensive single item ever constructed. As of 2010, the total cost was US$150 billion. This includes NASA's budget of $58.7 billion ($89.73 billion in 2021 dollars) for the station from 1985 to 2015, Russia's $12 billion, Europe's $5 billion, Japan's $5 billion, Canada's $2 billion, and the cost of 36 shuttle flights to build the station, estimated at $1.4 billion each, or $50.4 billion in total. Assuming 20,000 man-days of use from 2000 to 2015 by two- to six-person crews, each man-day would cost $7.5 million, less than half the inflation-adjusted $19.6 million ($5.5 million before inflation) per man-day of Skylab. In culture The ISS has become an international symbol of human capabilities, particularly human cooperation and science, defining a cooperative international approach and period, instead of a looming commercialized and militarized interplanetary world. In film Beside numerous documentaries such as the IMAX documentaries Space Station 3D from 2002, or A Beautiful Planet from 2016, and films like Apogee of Fear (2012) and Yolki 5 (2016) the ISS is the subject of feature films such as The Day After Tomorrow (2004), Love (2011), together with the Chinese station Tiangong 1 in Gravity (2013), Life (2017), and I.S.S. (2023). In 2022, the movie The Challenge (Doctor's House Call) was filmed aboard the ISS, and was notable for being the first feature film in which both professional actors and director worked together in space.
Technology
Crewed spacecraft
null
624160
https://en.wikipedia.org/wiki/Blister%20agent
Blister agent
A blister agent (or vesicant) is a chemical compound that causes severe skin, eye and mucosal pain and irritation. Blister agents are named for their ability to cause severe chemical burns, resulting in painful water blisters on the bodies of those affected. Although the term is often used in connection with large-scale burns caused by chemical spills or chemical warfare agents, some naturally occurring substances such as cantharidin are also blister-producing agents (vesicants). Furanocoumarin, another naturally occurring substance, causes vesicant-like effects indirectly, for example by greatly increasing skin photosensitivity. Vesicants have medical uses, including wart removal, but can be dangerous if even small amounts are ingested. Blister agents used in warfare Most blister agents fall into one of four groups: Sulfur mustards – A family of sulfur-based agents, including mustard gas. Nitrogen mustards – A family of agents similar to the sulfur mustards, but based on nitrogen instead of sulfur. Lewisite – An early blister agent that was developed, but not used, during World War I. It was effectively rendered obsolete with the development of British anti-Lewisite in the 1940s. Phosgene oxime – Occasionally included among the blister agents, although it is more properly termed a nettle agent (urticant). Effects Exposure to a weaponized blister agent can cause a number of life-threatening symptoms, including: Severe skin, eye and mucosal pain and irritation Skin erythema with large fluid blisters that heal slowly and may become infected Tearing, conjunctivitis, corneal damage Mild respiratory distress to marked airway damage All blister agents currently known are denser than air, and are readily absorbed through the eyes, lungs, and skin. Effects of the two mustard agents are typically delayed: exposure to vapors becomes evident in 4 to 6 hours, and skin exposure in 2 to 48 hours. The effects of Lewisite are immediate.
Technology
Weapon of mass destruction
null
624166
https://en.wikipedia.org/wiki/Asian%20Highway%20Network
Asian Highway Network
The Asian Highway Network (AH), also known as the Great Asian Highway, is a cooperative project among countries in Asia and the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP) to improve their connectivity via highway systems. It is one of the three pillars of the Asian Land Transport Infrastructure Development (ALTID) project, endorsed by the ESCAP commission at its 48th session in 1992, comprising the Asian Highway, the Trans-Asian Railway (TAR) and the facilitation of land transport projects. Agreements have been signed by 32 countries to allow the highway to cross the continent and also reach Europe. Some of the countries taking part in the highway project are India (Act East policy), Sri Lanka, Pakistan, China, Iran, Japan, South Korea, Nepal and Bangladesh. Most of the funding comes from the larger, more advanced Asian nations such as China, South Korea and Singapore, as well as international agencies such as the Asian Development Bank (ADB) and the Asian Infrastructure Investment Bank (AIIB). The project aims to make maximum use of the continent's existing highways to avoid the construction of new ones, except in cases where missing routes necessitate their construction. Project Monitor, an Asian infrastructure news website, has commented that "early beneficiaries of the Asian Highway project are the planners within the national land transport department of the participating countries [since] it assists them in planning the most cost-effective and efficient routes to promote domestic and international trade. Non-coastal areas, which are often neglected, are the other beneficiaries." However, in the mid-2000s some transportation experts were skeptical about the viability of the project given the economic and political climate in both South and Southeast Asia. History The AH project was initiated by the United Nations in 1959 with the aim of promoting the development of international road transport in the region. During the first phase of the project (1960–1970) considerable progress was achieved; however, progress slowed after financial assistance was suspended in 1975. ESCAP has conducted several projects step by step in cooperation with AH member countries since the endorsement of ALTID in 1992. An initial network of some 37,000 kilometres was adopted on 28 February 1997 by the Intergovernmental Meeting, and the Intergovernmental Agreement on the Asian Highway Network (IGA) was adopted on 18 November 2003 by the Intergovernmental Meeting; the IGA includes Annex I, which identifies 55 AH routes among 32 member countries totalling approximately 140,000 km (87,500 miles), and Annex II, "Classification and Design Standards". During the 60th session of the ESCAP Commission at Shanghai, China, in April 2004, the IGA treaty was signed by 23 countries. By 2013, 29 countries had ratified the agreement. In 2007, British drivers Richard Meredith and Phil Colley completed the first full east-to-west journey of the entire highway in an Aston Martin Vantage, which was later sold to raise money for UNICEF. The drive was a marketing stunt promoted by the car manufacturer. Implications The advanced highway network would provide for greater trade and social interactions between Asian countries, including personal contacts, project capitalizations, connections of major container terminals with transportation points, and promotion of tourism via the new roadways. 
Infrastructure consultant Om Prakash noted that, "It's an excellent step taken by ESCAP to gather all the Asian countries under one crown but the problem with this project is political disputes between some countries, notably Pakistan and Myanmar, which is delaying the project." Future development plans Route AH1 is proposed to extend from Tokyo to the border with Bulgaria (EU) west of Istanbul and Edirne, passing through both Koreas, China and other countries in Southeast, Central and South Asia. The corridor is expected to improve trade links between East Asian countries, India and Russia. To complete the route, existing roads will be upgraded and new roads constructed to link the network. has been spent or committed with additional US$18 billion needed for upgrades and improvements to of highway. Numbering and signage The project new highway route numbers begin with "AH", standing for "Asian Highway", followed by one, two or three digits. Single-digit route numbers from 1 to 9 are assigned to major Asian Highway routes which cross more than one subregion. Two- and three-digit route numbers are assigned to indicate the routes within subregions, including those connecting to neighbouring subregions, and self-contained highway routes within the participating countries. Route numbers are printed in the Latin script and Hindu-Arabic numerals and may simply be added to existing signage, like the E-road network. The actual design of the signs has not been standardized, only that the letters and digits are in white or black, but the color, shape and size of the sign being completely flexible. Most examples feature a blue rectangular shield with a white inscription (similar to German Autobahn signage) with further examples of white on green and black on white rectangular shields. Routes AH1 to AH9: Continent-Wide Routes East-West, from S to N: 2, 1 intermixed, 5, 9, 6. North-South, from E to W: 1 (along East China), 3, 4, 7, 8. – : Tokyo, Japan – Bulgarian border, Turkey Border of Bulgaria – Kapıkule – Istanbul – Gerede – Ankara – Sivas – Refahiye – Aşkale – Doğubayazıt – Gürbulak – Bazargan – Ivughli – Tabriz – Qazvin – Tehran – Semnan – Damghan – Sabzevar – Neishabour-Mashhad – Dowqarun – Islam Qala – Herat – Delaram – Kandahar – Kabul – Torkham – Peshawar – Hassan Abdal – Rawalpindi (– Islamabad) – Lahore – Wagah – Attari – New Delhi – Agra – Kanpur – Varanasi – Sasaram – Kolkata – Petrapole – Benapole – Jashore – Dhaka – Kachpur – Sylhet – Tamabil – Dawki – Shillong – Jorabat (– Guwahati) – Nagaon – Dimapur – Chümoukedima – Kohima – Viswema – Imphal – Moreh – Tamu – Mandalay – Meiktila – Payagyi (– Yangon) – Myawaddy – Mae Sot – Tak – Nakhon Sawan – Bang Pa-in (– Bangkok) – Hin Kong – Kabin Buri – Aranyaprathet – Poipet – Phnom Penh – Bavet – Mộc Bài – Ho Chi Minh City – Biên Hòa (– Vũng Tàu) – Nha Trang – Hội An – Da Nang – Huế – Đông Hà – Vinh – Hanoi – Đồng Đăng – Hữu Nghị – Youyiguan – Nanning – Guangzhou (– Shenzhen – Hong Kong) – Xiangtan – Changsha – Wuhan – Xinyang – Zhengzhou – Shijiazhuang – Beijing – Shenyang – Dandong – Sinuiju – Pyongyang – Kaesong – Munsan – Seoul – Daejeon – Daegu – Gyeongju – Busan ... 
Fukuoka – Tokyo – : Denpasar, Indonesia – Khosravi, Iran Khosravi – Hamadan – Saveh – Salafchegan (– Tehran) – Yazd – Anar – Kerman – Zahedan – Mirjaveh – Taftan – Quetta – Rohri – Multan – Lahore – Wagah – Attari – New Delhi – Rampur – Banbasa – Bramhadev Mandi – Mahendranagar – Kohalpur – Narayangarh – Pathlaiya – Kakarbhitta – Siliguri – Banglabandha– Rangpur– Hatikumrul – Dhaka – Kachpur – Sylhet – Tamabil – Dawki – Shillong – Jorabat (– Guwahati) – Nagaon – Dimapur – Chümoukedima – Kohima – Viswema – Imphal – Moreh – Tamu – Mandalay – Meiktila – Kengtung – Tachilek – Mae Sai – Chiang Rai – Tak – Nakhon Sawan – Bang Pa-in – Bangkok – Hat Yai – Sadao – Bukit Kayu Hitam – Butterworth – Kuala Lumpur – Seremban – Johor Bahru – Singapore – Sengkang Jakarta (– Merak) – Cikampek (– Bandung) – Semarang – Surakarta – Surabaya – Denpasar – : Northern section: Ulan-Ude, Russia – Tanggu, China Ulan-Ude – Kyakhta – Altanbulag – Darkhan – Ulaanbaatar – Nalaikh – Choir – Sainshand – Zamyn-Üüd – Erenhot – Beijing – Tanggu Southern section: Shanghai, China – Chiang Rai, Thailand Shanghai – Hangzhou – Nanchang – Xiangtan – Guiyang – Kunming – Jinghong (– Daluo – Mong La – Keng Tung) – Mohan, Yunnan – Boten – Nateuy – Houayxay – Chiang Khong – Chiang Rai – : Novosibirsk, Russia – Karachi, Pakistan Novosibirsk – Barnaul – Tashanta – Ulaanbaishint – Khovd – Yarantai Ürümqi – Kashgar – Honqiraf – Khunjerab – Hassanabdal – Rawalpindi – Islamabad – Lahore – Multan – Rohri – Hyderabad – Karachi – : Shanghai, China – Bulgarian border, Turkey Border of Bulgaria – Kapikule – Istanbul – Gerede – Merzifon – Samsun – Trabzon – Sarp – Batumi – Poti – Senaki – (Port of Anaklia – Zugdidi bypass road – Samtredia) Khashuri – Mtskheta – Tbilisi – Red Bridge – Qazax – Ganja – Gazi Mammed – Alat – Baku ... Turkmenbashi – Serdar – Ashgabat – Tejen – Mary – Turkmenabat – Farap – Ələt – Bukhara – Navoi – Samarkand – Syrdaria – Tashkent – Chernyavka – Chernyaevka – Shymkent – Merki – Chaldovar – Kara Balta – Bishkek – Kordai – Kaskelen – Almaty – Khorgas – Jinghe – Kuytun – Ürümqi – Turpan – Lanzhou – Xi'an – Xinyang – Nanjing – Shanghai – : Busan, South Korea – Belarusian border, Russia Border of Belarus – Krasnoye – Moscow – Samara – Ufa – Chelyabinsk – Petukhovo – Chistoe – Petropavl – Karakoga – Isilkul – Omsk – Novosibirsk – Krasnoyarsk – Irkutsk – Ulan-Ude – Chita – Zabaykalsk – Manzhouli – Qiqihar – Harbin – Suifenhe – Pogranichny – Ussuriysk – Razdolnoye (– Vladivostok – Nahodka) – Khasan – Sonbong – Chongjin – Wonsan (– Pyongyang) – Goseong – Ganseong – Gangneung – Gyeongju – Busan – : Yekaterinburg, Russia – Karachi, Pakistan Yekaterinburg – Chelyabinsk – Troisk – Kaerak – Kostanai – Astana – Karaganda – Burubaital – Merke – Chaldovar – Kara-Balta – Osh – Andijon – Tashkent – Syrdaria – Khavast – Khujand – Dushanbe – Nizhniy Panj – Shirkhan – Pol-e Khomri – Jabal Saraj – Kabul – Kandahar – Spin Boldak – Chaman – Quetta – Kalat – Karachi – : Finnish border, Russia – Bandar Emam, Iran Border of Finland – Torfyanovka – Vyborg – St. Petersburg – Moscow – Tambov – Borisoglebsk – Volgograd – Astrakhan – Khasavyurt – Mahachkala – Kazmalyarskiy – Samur – Sumgayit – Baku – Alat – Bilasuvar – Astara – Rasht – Qazvin – Tehran – Saveh – Ahvaz – Bandar-e Emam Khomeyni – 9,222 km (5,730 mi): St. Petersburg, Russia – Lianyungang, China St. 
Petersburg – Moscow – Ulyanovsk – Toliatti – Samara – Orenburg – Sagarchin – Zhaisan – Aktobe – Kyzylorda – Shymkent – Taraz – Almaty – Khorgas – Urumqi – Lianyungang AH10 to AH29: Southeast Asia Routes – : Vientiane, Laos – Sihanoukville, Cambodia Vientiane – Ban Lao – Thakhek – Seno – Pakse – Veunkham – Tranpeangkreal – Stung Treng – Kratie – Phnom Penh – Sihanoukville – : Nateuy, Laos – Hin Kong, Thailand Nateuy – Oudomxai – Pakmong – Louang Phrabang – Vientiane – Thanaleng – Nong Khai – Udon Thani – Khon Kaen – Nakhon Ratchasima – Hin Kong – : Hanoi, Vietnam – Nakhon Sawen, Thailand Hanoi – Hoa Binh – Son La – Dien Bien – Tây Trang – Pang Hok – Muang Khoua – Oudomxai – Muang Ngeun – Huai Kon – Uttaradit – Phitsanulok – Nakhon Sawan – : Hai Phong, Vietnam – Mandalay, Myanmar Hai Phong – Hanoi – Viet Tri – Lao Cai – Hekou – Kunming – Ruili – Muse – Lashio – Mandalay – : Vinh, Vietnam – Udon Thani, Thailand Vinh – Cau Treo – Keoneau – Ban Lao – Thakhek – Nakhon Phanom – Udon Thani – : Đông Hà, Vietnam – Tak, Thailand Đông Hà – Lao Bao – Densavanh – Seno – Savannakhet – Mukdahan – Khon Kaen – Phitsanulok – Tak – : Đà Nẵng, Vietnam – Vũng Tàu, Vietnam Đà Nẵng – Kon Tum – Pleiku – Ho Chi Minh – Vũng Tàu – : Hat Yai, Thailand – Johor Bahru Causeway, Malaysia Hat Yai – Sungai Kolok – Rantau Panjang – Kota Bahru – Kuantan – Johor Bahru – Johor Bahru Causeway – : Nakhon Ratchasima, Thailand – Bangkok, Thailand Nakhon Ratchasima – Kabin Buri – Laem Chabang – Chonburi – Bangkok AH21 – length unknown: Qui Nhơn, Vietnam – Serei Saophoan, Cambodia Quy Nhon port – Pleiku – Le Thanh – O Yadav – Banlung – Stung Treng – Preah Vihear – Siem Reap – Serei Saophoan Trans-Sumatran Highway (Eastern Route) – : Banda Aceh, Indonesia – Merak, Indonesia Banda Aceh – Medan – Tebingtinggi – Dumai – Pekanbaru – Jambi – Palembang – Tanjung Karang – Bakauheni ... Merak Pan-Philippine Highway – : Laoag, Philippines – Zamboanga, Philippines Laoag – Tuguegarao – Guiguinto – Quezon City (– Manila – Makati) – Makati – Calamba – Legazpi – Matnog ... Allen – Tacloban (– Ormoc City ... Cebu City) – Liloan ... Surigao – Butuan – Davao (– Cagayan de Oro) – General Santos – Cotabato City – Zamboanga AH30 to AH39: East Asia and Northeast Asia Routes AH40 to AH59: South Asian Routes AH60 to AH89: North Asia, Central Asia and Southwest Asia Routes AH100 to AH299: ASEAN Southeast Asia Routes These routes were set up by the Association of Southeast Asian Nations as part of an extension to the Asian Highway Network, known as the ASEAN Highway Network. Distance by country or region The planned network runs a total of .
Technology
Ground transportation networks
null
624231
https://en.wikipedia.org/wiki/Voltage%20regulator
Voltage regulator
A voltage regulator is a system designed to automatically maintain a constant voltage. It may use a simple feed-forward design or may include negative feedback. It may use an electromechanical mechanism, or electronic components. Depending on the design, it may be used to regulate one or more AC or DC voltages. Electronic voltage regulators are found in devices such as computer power supplies where they stabilize the DC voltages used by the processor and other elements. In automobile alternators and central power station generator plants, voltage regulators control the output of the plant. In an electric power distribution system, voltage regulators may be installed at a substation or along distribution lines so that all customers receive steady voltage independent of how much power is drawn from the line. Electronic voltage regulators A simple voltage/current regulator can be made from a resistor in series with a diode (or series of diodes). Due to the logarithmic shape of diode V-I curves, the voltage across the diode changes only slightly due to changes in current drawn or changes in the input. When precise voltage control and efficiency are not important, this design may be fine. Since the forward voltage of a diode is small, this kind of voltage regulator is only suitable for low voltage regulated output. When higher voltage output is needed, a zener diode or series of zener diodes may be employed. Zener diode regulators make use of the zener diode's fixed reverse voltage, which can be quite large. Feedback voltage regulators operate by comparing the actual output voltage to some fixed reference voltage. Any difference is amplified and used to control the regulation element in such a way as to reduce the voltage error. This forms a negative feedback control loop; increasing the open-loop gain tends to increase regulation accuracy but reduce stability. (Stability is the avoidance of oscillation, or ringing, during step changes.) There will also be a trade-off between stability and the speed of the response to changes. If the output voltage is too low (perhaps due to input voltage reducing or load current increasing), the regulation element is commanded, up to a point, to produce a higher output voltage–by dropping less of the input voltage (for linear series regulators and buck switching regulators), or to draw input current for longer periods (boost-type switching regulators); if the output voltage is too high, the regulation element will normally be commanded to produce a lower voltage. However, many regulators have over-current protection, so that they will entirely stop sourcing current (or limit the current in some way) if the output current is too high, and some regulators may also shut down if the input voltage is outside a given range (see also: crowbar circuits). Electromechanical regulators In electromechanical regulators, voltage regulation is easily accomplished by coiling the sensing wire to make an electromagnet. The magnetic field produced by the current attracts a moving ferrous core held back under spring tension or gravitational pull. As voltage increases, so does the current, strengthening the magnetic field produced by the coil and pulling the core towards the field. The magnet is physically connected to a mechanical power switch, which opens as the magnet moves into the field. As voltage decreases, so does the current, releasing spring tension or the weight of the core and causing it to retract. This closes the switch and allows the power to flow once more. 
If the mechanical regulator design is sensitive to small voltage fluctuations, the motion of the solenoid core can be used to move a selector switch across a range of resistances or transformer windings to gradually step the output voltage up or down, or to rotate the position of a moving-coil AC regulator. Early automobile generators and alternators had a mechanical voltage regulator using one, two, or three relays and various resistors to stabilize the generator's output at slightly more than 6.7 or 13.4 V to maintain the battery as independently of the engine's rpm or the varying load on the vehicle's electrical system as possible. The relay(s) modulated the width of a current pulse to regulate the voltage output of the generator by controlling the average field current in the rotating machine which determines strength of the magnetic field produced which determines the unloaded output voltage per rpm. Capacitors are not used to smooth the pulsed voltage as described earlier. The large inductance of the field coil stores the energy delivered to the magnetic field in an iron core so the pulsed field current does not result in as strongly pulsed a field. Both types of rotating machine produce a rotating magnetic field that induces an alternating current in the coils in the stator. A generator uses a mechanical commutator, graphite brushes running on copper segments, to convert the AC produced into DC by switching the external connections at the shaft angle when the voltage would reverse. An alternator accomplishes the same goal using rectifiers that do not wear down and require replacement. Modern designs now use solid state technology (transistors) to perform the same function that the relays perform in electromechanical regulators. Electromechanical regulators are used for mains voltage stabilisation—see AC voltage stabilizers below. Automatic voltage regulator Generators, as used in power stations, ship electrical power production, or standby power systems, will have automatic voltage regulators (AVR) to stabilize their voltages as the load on the generators changes. The first AVRs for generators were electromechanical systems, but a modern AVR uses solid-state devices. An AVR is a feedback control system that measures the output voltage of the generator, compares that output to a set point, and generates an error signal that is used to adjust the excitation of the generator. As the excitation current in the field winding of the generator increases, its terminal voltage will increase. The AVR will control current by using power electronic devices; generally a small part of the generator's output is used to provide current for the field winding. Where a generator is connected in parallel with other sources such as an electrical transmission grid, changing the excitation has more of an effect on the reactive power produced by the generator than on its terminal voltage, which is mostly set by the connected power system. Where multiple generators are connected in parallel, the AVR system will have circuits to ensure all generators operate at the same power factor. AVRs on grid-connected power station generators may have additional control features to help stabilize the electrical grid against upsets due to sudden load loss or faults. 
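The AVR behaviour described above (measure the terminal voltage, compare it with a set point, and adjust the excitation) can be sketched as a small feedback loop. This is a toy model that assumes terminal voltage rises linearly with field current; the gains, names and numbers are invented for illustration rather than taken from any real exciter.

```python
# Toy AVR loop: adjust the excitation so the terminal voltage tracks a set point.
# Assumes terminal voltage is proportional to field current; all gains are invented.

def run_avr(setpoint=230.0, volts_per_field_amp=46.0, load_disturbance=-10.0,
            kp=0.02, ki=0.5, dt=0.01, steps=500):
    field_current = 4.8          # initial excitation, in amperes
    integral = 0.0               # accumulated error for the integral term
    for step in range(steps):
        # A step load change is applied halfway through the run.
        disturbance = load_disturbance if step > steps // 2 else 0.0
        terminal_v = volts_per_field_amp * field_current + disturbance
        error = setpoint - terminal_v
        integral += error * dt
        # Proportional-integral correction applied to the field current.
        field_current += kp * error + ki * integral * dt
    return terminal_v

print(f"terminal voltage settles near {run_avr():.1f} V")
```

Even with the step load change, the integral term returns the simulated terminal voltage to the set point, which is the behaviour the AVR's error signal is meant to produce.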
AC voltage stabilizers Coil-rotation AC voltage regulator This is an older type of regulator used in the 1920s that uses the principle of a fixed-position field coil and a second field coil that can be rotated on an axis in parallel with the fixed coil, similar to a variocoupler. When the movable coil is positioned perpendicular to the fixed coil, the magnetic forces acting on the movable coil balance each other out and voltage output is unchanged. Rotating the coil in one direction or the other away from the center position will increase or decrease voltage in the secondary movable coil. This type of regulator can be automated via a servo control mechanism to advance the movable coil position in order to provide voltage increase or decrease. A braking mechanism or high-ratio gearing is used to hold the rotating coil in place against the powerful magnetic forces acting on the moving coil. Electromechanical Electromechanical regulators, called voltage stabilizers or tap-changers, have also been used to regulate the voltage on AC power distribution lines. These regulators operate by using a servomechanism to select the appropriate tap on an autotransformer with multiple taps, or by moving the wiper on a continuously variable autotransformer. If the output voltage is not in the acceptable range, the servomechanism switches the tap, changing the turns ratio of the transformer, to move the secondary voltage into the acceptable region. The controls provide a dead band wherein the controller will not act, preventing the controller from constantly adjusting the voltage ("hunting") as it varies by an acceptably small amount. Constant-voltage transformer The ferroresonant transformer, ferroresonant regulator or constant-voltage transformer is a type of saturating transformer used as a voltage regulator. These transformers use a tank circuit composed of a high-voltage resonant winding and a capacitor to produce a nearly constant average output voltage with a varying input current or varying load. The circuit has a primary on one side of a magnetic shunt and the tuned circuit coil and secondary on the other side. The regulation is due to magnetic saturation in the section around the secondary. The ferroresonant approach is attractive due to its lack of active components, relying on the square loop saturation characteristics of the tank circuit to absorb variations in average input voltage. Saturating transformers provide a simple rugged method to stabilize an AC power supply. Older designs of ferroresonant transformers had an output with high harmonic content, leading to a distorted output waveform. Modern devices are used to construct a perfect sine wave. The ferroresonant action is a flux limiter rather than a voltage regulator, but with a fixed supply frequency it can maintain an almost constant average output voltage even as the input voltage varies widely. The ferroresonant transformers, which are also known as constant-voltage transformers (CVTs) or "ferros", are also good surge suppressors, as they provide high isolation and inherent short-circuit protection. A ferroresonant transformer can operate with an input voltage range ±40% or more of the nominal voltage. Output power factor remains in the range of 0.96 or higher from half to full load. Because it regenerates an output voltage waveform, output distortion, which is typically less than 4%, is independent of any input voltage distortion, including notching. Efficiency at full load is typically in the range of 89% to 93%. 
However, at low loads, efficiency can drop below 60%. The current-limiting capability also becomes a handicap when a CVT is used in an application with moderate to high inrush current, like motors, transformers or magnets. In this case, the CVT has to be sized to accommodate the peak current, thus forcing it to run at low loads and poor efficiency. Minimum maintenance is required, as transformers and capacitors can be very reliable. Some units have included redundant capacitors to allow several capacitors to fail between inspections without any noticeable effect on the device's performance. Output voltage varies about 1.2% for every 1% change in supply frequency. For example, a 2 Hz change in generator frequency, which is very large, results in an output voltage change of only 4%, which has little effect for most loads. It accepts 100% single-phase switch-mode power-supply loading without any requirement for derating, including all neutral components. Input current distortion remains less than 8% THD even when supplying nonlinear loads with more than 100% current THD. Drawbacks of CVTs are their larger size, audible humming sound, and the high heat generation caused by saturation. Power distribution Voltage regulators or stabilizers are used to compensate for voltage fluctuations in mains power. Large regulators may be permanently installed on distribution lines. Small portable regulators may be plugged in between sensitive equipment and a wall outlet. Automatic voltage regulators on generator sets to maintain a constant voltage for changes in load. The voltage regulator compensates for the change in load. Power distribution voltage regulators normally operate on a range of voltages, for example 150–240 V or 90–280 V. DC voltage stabilizers Many simple DC power supplies regulate the voltage using either series or shunt regulators, but most apply a voltage reference using a shunt regulator such as a Zener diode, avalanche breakdown diode, or voltage regulator tube. Each of these devices begins conducting at a specified voltage and will conduct as much current as required to hold its terminal voltage to that specified voltage by diverting excess current from a non-ideal power source to ground, often through a relatively low-value resistor to dissipate the excess energy. The power supply is designed to only supply a maximum amount of current that is within the safe operating capability of the shunt regulating device. If the stabilizer must provide more power, the shunt output is only used to provide the standard voltage reference for the electronic device, known as the voltage stabilizer. The voltage stabilizer is the electronic device, able to deliver much larger currents on demand. Active regulators Active regulators employ at least one active (amplifying) component such as a transistor or operational amplifier. Shunt regulators are often (but not always) passive and simple, but always inefficient because they (essentially) dump the excess current which is not available to the load. When more power must be supplied, more sophisticated circuits are used. In general, these active regulators can be divided into several classes: Linear series regulators Switching regulators SCR regulators Linear regulators Linear regulators are based on devices that operate in their linear region (in contrast, a switching regulator is based on a device forced to act as an on/off switch). 
Linear regulators are also classified into two types: series regulators and shunt regulators. In the past, one or more vacuum tubes were commonly used as the variable resistance. Modern designs use one or more transistors instead, perhaps within an integrated circuit. Linear designs have the advantage of very "clean" output with little noise introduced into their DC output, but are most often much less efficient and unable to step-up or invert the input voltage like switched supplies. All linear regulators require an input voltage higher than the output voltage. If the input voltage approaches the desired output voltage, the regulator will "drop out". The input to output voltage differential at which this occurs is known as the regulator's drop-out voltage. Low-dropout regulators (LDOs) allow an input voltage that can be much lower (i.e., they waste less energy than conventional linear regulators). Entire linear regulators are available as integrated circuits. These chips come in either fixed or adjustable voltage types. Examples of some integrated circuits are the 723 general-purpose regulator and the 78xx/79xx series. Switching regulators Switching regulators rapidly switch a series device on and off. The duty cycle of the switch sets how much charge is transferred to the load. This is controlled by a similar feedback mechanism as in a linear regulator. Because the series element is either fully conducting or switched off, it dissipates almost no power; this is what gives the switching design its efficiency. Switching regulators are also able to generate output voltages which are higher than the input, or of opposite polarity—something not possible with a linear design. In switched regulators, the pass transistor is used as a "controlled switch" and is operated in either the cutoff or the saturated state. Hence the power transmitted across the pass device is in discrete pulses rather than a steady current flow. Greater efficiency is achieved since the pass device is operated as a low-impedance switch. When the pass device is at cutoff, there is no current and it dissipates no power. Again, when the pass device is in saturation, a negligible voltage drop appears across it and it thus dissipates only a small amount of average power, providing maximum current to the load. In either case, the power wasted in the pass device is very little and almost all the power is transmitted to the load. Thus the efficiency of a switched-mode power supply is remarkably high, in the range of 70–90%. Switched-mode regulators rely on pulse-width modulation to control the average value of the output voltage. The average value of a repetitive-pulse waveform depends on the area under the waveform. When the duty cycle is varied, the average voltage changes proportionally. Like linear regulators, nearly complete switching regulators are also available as integrated circuits. Unlike linear regulators, these usually require an inductor that acts as the energy storage element. The IC regulators combine the reference voltage source, error op-amp, and pass transistor with short-circuit current limiting and thermal-overload protection. Switching regulators are more prone to output noise and instability than linear regulators. However, they provide much better power efficiency than linear regulators. SCR regulators Regulators powered from AC power circuits can use silicon controlled rectifiers (SCRs) as the series device. 
Whenever the output voltage is below the desired value, the SCR is triggered, allowing electricity to flow into the load until the AC mains voltage passes through zero (ending the half cycle). SCR regulators have the advantages of being both very efficient and very simple, but because they cannot terminate an ongoing half cycle of conduction, they are not capable of very accurate voltage regulation in response to rapidly changing loads. An alternative is the SCR shunt regulator, which uses the regulator output as a trigger. Both series and shunt designs are noisy, but powerful, as the device has a low on-resistance. Combination or hybrid regulators Many power supplies use more than one regulating method in series. For example, the output from a switching regulator can be further regulated by a linear regulator. The switching regulator accepts a wide range of input voltages and efficiently generates a (somewhat noisy) voltage slightly above the ultimately desired output. That is followed by a linear regulator that generates exactly the desired voltage and eliminates nearly all the noise generated by the switching regulator. Other designs may use an SCR regulator as the "pre-regulator", followed by another type of regulator. An efficient way of creating a variable-voltage, accurate output power supply is to combine a multi-tapped transformer with an adjustable linear post-regulator. Example of linear regulators Transistor regulator In the simplest case, a common collector amplifier (emitter follower) is used, with the base of the regulating transistor connected directly to the voltage reference. A simple transistor regulator will provide a relatively constant output voltage Uout for changes in the voltage Uin of the power source and for changes in load RL, provided that Uin exceeds Uout by a sufficient margin and that the power handling capacity of the transistor is not exceeded. The output voltage of the stabilizer is equal to the Zener diode voltage minus the base–emitter voltage of the transistor, UZ − UBE, where UBE is usually about 0.7 V for a silicon transistor, depending on the load current. If the output voltage drops for any external reason, such as an increase in the current drawn by the load (causing an increase in the collector–emitter voltage to observe KVL), the transistor's base–emitter voltage (UBE) increases, turning the transistor on further and delivering more current to increase the load voltage again. Rv provides a bias current for both the Zener diode and the transistor. The current in the diode is minimal when the load current is maximal. The circuit designer must choose a minimum voltage that can be tolerated across Rv, bearing in mind that the higher this voltage requirement is, the higher the required input voltage Uin, and hence the lower the efficiency of the regulator. On the other hand, lower values of Rv lead to higher power dissipation in the diode and to inferior regulator characteristics. Rv is given by Rv = min VR / (min ID + max IL / hFE), where min VR is the minimum voltage to be maintained across Rv, min ID is the minimum current to be maintained through the Zener diode, max IL is the maximum design load current, and hFE is the forward current gain of the transistor (IC/IB). 
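The two relationships just given, the output voltage UZ − UBE and the expression for Rv, can be evaluated numerically. The sketch below is only a worked example; the Zener voltage, currents and gain are hypothetical values chosen for illustration.

```python
# Worked example for the simple series-pass (emitter follower) regulator above.
# Component values are hypothetical; the formulas follow the text.

def output_voltage(u_z: float, u_be: float = 0.7) -> float:
    """Uout = UZ - UBE for the emitter-follower regulator."""
    return u_z - u_be

def bias_resistor(min_v_r: float, min_i_d: float, max_i_l: float, h_fe: float) -> float:
    """Rv = min VR / (min ID + max IL / hFE)."""
    return min_v_r / (min_i_d + max_i_l / h_fe)

# A 5.6 V Zener gives roughly 4.9 V out; with 2 V across Rv, 5 mA minimum Zener
# current, 0.5 A maximum load and hFE = 100, Rv works out to about 200 ohms.
print(f"Uout ~ {output_voltage(5.6):.1f} V")
print(f"Rv   ~ {bias_resistor(2.0, 0.005, 0.5, 100):.0f} ohm")
```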
Regulator with a differential amplifier The stability of the output voltage can be significantly increased by using a differential amplifier, possibly implemented as an operational amplifier. In this case, the operational amplifier drives the transistor with more current if the voltage at its inverting input drops below the output of the voltage reference at the non-inverting input. Using the voltage divider (R1, R2 and R3) allows choice of an arbitrary output voltage between UZ and Uin. Regulator specification The output voltage can only be held constant within specified limits. The regulation is specified by two measurements: Load regulation is the change in output voltage for a given change in load current (for example, "typically 15 mV, maximum 100 mV for load currents between 5 mA and 1.4 A, at some specified temperature and input voltage"). Line regulation or input regulation is the degree to which output voltage changes with input (supply) voltage changes—as a ratio of output to input change (for example, "typically 13 mV/V"), or the output voltage change over the entire specified input voltage range (for example, "plus or minus 2% for input voltages between 90 V and 260 V, 50–60 Hz"). Other important parameters are: Temperature coefficient of the output voltage is the change with temperature (perhaps averaged over a given temperature range). Initial accuracy of a voltage regulator (or simply "the voltage accuracy") reflects the error in output voltage for a fixed regulator without taking into account temperature or aging effects on output accuracy. Dropout voltage is the minimum difference between input voltage and output voltage for which the regulator can still supply the specified current. The input-output differential at which the voltage regulator will no longer maintain regulation is the dropout voltage. Further reduction in input voltage will result in reduced output voltage. This value is dependent on load current and junction temperature. Inrush current or input surge current or switch-on surge is the maximum, instantaneous input current drawn by an electrical device when first turned on. Inrush current usually lasts for half a second, or a few milliseconds, but it is often very high, which makes it dangerous because it can degrade and burn components gradually (over months or years), especially if there is no inrush current protection. Alternating current transformers or electric motors in automatic voltage regulators may draw and output several times their normal full-load current for a few cycles of the input waveform when first energized or switched on. Power converters also often have inrush currents much higher than their steady state currents, due to the charging current of the input capacitance. Absolute maximum ratings are defined for regulator components, specifying the continuous and peak output currents that may be used (sometimes internally limited), the maximum input voltage, maximum power dissipation at a given temperature, etc. Output noise (thermal white noise) and output dynamic impedance may be specified as graphs versus frequency, while output ripple noise (mains "hum" or switch-mode "hash" noise) may be given as peak-to-peak or RMS voltages, or in terms of their spectra. 
Quiescent current in a regulator circuit is the current drawn internally, not available to the load, normally measured as the input current while no load is connected and hence a source of inefficiency (some linear regulators are, surprisingly, more efficient at very low current loads than switch-mode designs because of this). Transient response is the reaction of a regulator when a (sudden) change of the load current (called the load transient) or input voltage (called the line transient) occurs. Some regulators will tend to oscillate or have a slow response time which in some cases might lead to undesired results. This value is different from the regulation parameters, as that is the stable situation definition. The transient response shows the behaviour of the regulator on a change. This data is usually provided in the technical documentation of a regulator and is also dependent on output capacitance. Mirror-image insertion protection means that a regulator is designed for use when a voltage, usually not higher than the maximum input voltage of the regulator, is applied to its output pin while its input terminal is at a low voltage, volt-free or grounded. Some regulators can continuously withstand this situation. Others might only manage it for a limited time such as 60 seconds (usually specified in the data sheet). For instance, this situation can occur when a three terminal regulator is incorrectly mounted on a PCB, with the output terminal connected to the unregulated DC input and the input connected to the load. Mirror-image insertion protection is also important when a regulator circuit is used in battery charging circuits, when external power fails or is not turned on and the output terminal remains at battery voltage.
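To illustrate why quiescent current matters, the sketch below compares the efficiency of a hypothetical linear regulator at several load currents. It uses a deliberately simplified model (fixed quiescent draw, no other losses), and all voltage and current figures are assumptions rather than datasheet values.

```python
# How quiescent current erodes light-load efficiency in a linear regulator.
# Simplified model; all figures are illustrative assumptions, not datasheet values.

def linear_efficiency(v_in: float, v_out: float, i_load: float, i_q: float) -> float:
    """Output power divided by input power, including the quiescent draw."""
    p_out = v_out * i_load
    p_in = v_in * (i_load + i_q)
    return p_out / p_in

for i_load in (0.0001, 0.001, 0.1):      # 0.1 mA, 1 mA and 100 mA loads
    eff = linear_efficiency(v_in=5.0, v_out=3.3, i_load=i_load, i_q=0.0005)
    print(f"load {i_load * 1000:5.1f} mA -> efficiency {eff:.0%}")
```

At a 0.1 mA load the assumed 0.5 mA quiescent draw dominates and efficiency falls to roughly 11%, while at 100 mA it approaches the Vout/Vin limit of about 66%, which is why quiescent current is quoted so prominently for regulators intended for battery-powered standby duty.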
Technology
Functional circuits
null
624361
https://en.wikipedia.org/wiki/Autophagy
Autophagy
Autophagy (or autophagocytosis; from the Greek for "self-devouring" and "hollow") is the natural, conserved degradation of the cell that removes unnecessary or dysfunctional components through a lysosome-dependent regulated mechanism. It allows the orderly degradation and recycling of cellular components. Although initially characterized as a primordial degradation pathway induced to protect against starvation, it has become increasingly clear that autophagy also plays a major role in the homeostasis of non-starved cells. Defects in autophagy have been linked to various human diseases, including neurodegeneration and cancer, and interest in modulating autophagy as a potential treatment for these diseases has grown rapidly. Four forms of autophagy have been identified: macroautophagy, microautophagy, chaperone-mediated autophagy (CMA), and crinophagy. In macroautophagy (the most thoroughly researched form of autophagy), cytoplasmic components (like mitochondria) are targeted and isolated from the rest of the cell within a double-membrane vesicle known as an autophagosome, which, in time, fuses with an available lysosome, bringing its specialty process of waste management and disposal; eventually the contents of the vesicle (now called an autolysosome) are degraded and recycled. In crinophagy (the least well-known and researched form of autophagy), unnecessary secretory granules are degraded and recycled. In disease, autophagy has been seen as an adaptive response to stress, promoting survival of the cell; but in other cases, it appears to promote cell death and morbidity. In the extreme case of starvation, the breakdown of cellular components promotes cellular survival by maintaining cellular energy levels. The word "autophagy" was in existence and frequently used from the middle of the 19th century. In its present usage, the term autophagy was coined by Belgian biochemist Christian de Duve in 1963, based on his discovery of the functions of the lysosome. The identification of autophagy-related genes in yeast in the 1990s allowed researchers to deduce the mechanisms of autophagy, which eventually led to the award of the 2016 Nobel Prize in Physiology or Medicine to Japanese researcher Yoshinori Ohsumi. History Autophagy was first observed by Keith R. Porter and his student Thomas Ashford at the Rockefeller Institute. In January 1962 they reported an increased number of lysosomes in rat liver cells after the addition of glucagon, and that some displaced lysosomes towards the centre of the cell contained other cell organelles such as mitochondria. They called this autolysis, after Christian de Duve and Alex B. Novikoff. However, Porter and Ashford wrongly interpreted their data as lysosome formation (ignoring the pre-existing organelles): they suggested that lysosomes could not be cell organelles but were instead parts of the cytoplasm, such as mitochondria, and that hydrolytic enzymes were produced by microbodies. In 1963 Hruban, Spargo and colleagues published a detailed ultrastructural description of "focal cytoplasmic degradation", which referenced a 1955 German study of injury-induced sequestration. Hruban, Spargo and colleagues recognized three continuous stages of maturation of the sequestered cytoplasm to lysosomes, and found that the process was not limited to injury states but also functioned under physiological conditions for the "reutilization of cellular materials" and the "disposal of organelles" during differentiation. Inspired by this discovery, de Duve christened the phenomenon "autophagy". 
Unlike Porter and Ashford, de Duve conceived the term as a part of lysosomal function while describing the role of glucagon as a major inducer of cell degradation in the liver. With his student Russell Deter, he established that lysosomes are responsible for glucagon-induced autophagy. This established for the first time that lysosomes are the sites of intracellular autophagy. In the 1990s several groups of scientists independently discovered autophagy-related genes using the budding yeast. Notably, Yoshinori Ohsumi and Michael Thumm examined starvation-induced non-selective autophagy; in the meantime, Daniel J. Klionsky discovered the cytoplasm-to-vacuole targeting (CVT) pathway, which is a form of selective autophagy. They soon found that they were in fact looking at essentially the same pathway, just from different angles. Initially, the genes discovered by these and other yeast groups were given different names (APG, AUT, CVT, GSA, PAG, PAZ, and PDD). A unified nomenclature was advocated in 2003 by the yeast researchers to use ATG to denote autophagy genes. The 2016 Nobel Prize in Physiology or Medicine was awarded to Yoshinori Ohsumi, although some have pointed out that the award could have been more inclusive. The field of autophagy research experienced accelerated growth at the turn of the 21st century. Knowledge of ATG genes provided scientists more convenient tools to dissect functions of autophagy in human health and disease. In 1999, a landmark discovery connecting autophagy with cancer was published by Beth Levine's group. To this date, the relationship between cancer and autophagy continues to be a main theme of autophagy research. The roles of autophagy in neurodegeneration and immune defense also received considerable attention. In 2003, the first Gordon Research Conference on autophagy was held at Waterville. In 2005, Daniel J Klionsky launched Autophagy, a scientific journal dedicated to this field. The first Keystone Symposia on autophagy was held in 2007 at Monterey. In 2008, Carol A Mercer created a BHMT fusion protein (GST-BHMT), which showed starvation-induced site-specific fragmentation in cell lines. The degradation of betaine homocysteine methyltransferase (BHMT), a metabolic enzyme, could be used to assess autophagy flux in mammalian cells. Macroautophagy, microautophagy, and chaperone-mediated autophagy are mediated by autophagy-related genes and their associated enzymes. Macroautophagy is further divided into bulk and selective autophagy. Selective autophagy includes the autophagy of specific organelles: mitophagy, lipophagy, pexophagy, chlorophagy, ribophagy and others. Macroautophagy is the main pathway, used primarily to eradicate damaged cell organelles or unused proteins. First the phagophore engulfs the material that needs to be degraded, forming a double membrane, known as an autophagosome, around the organelle marked for destruction. The autophagosome then travels through the cytoplasm of the cell to a lysosome in mammals, or vacuoles in yeast and plants, and the two organelles fuse. Within the lysosome/vacuole, the contents of the autophagosome are degraded via acidic lysosomal hydrolases. Microautophagy, on the other hand, involves the direct engulfment of cytoplasmic material into the lysosome. This occurs by invagination, meaning the inward folding of the lysosomal membrane, or by cellular protrusion. Chaperone-mediated autophagy, or CMA, is a very complex and specific pathway, which involves recognition by the hsc70-containing complex. 
This means that a protein must contain the recognition site for this hsc70 complex, which allows it to bind to the chaperone, forming the CMA-substrate/chaperone complex. This complex then moves to the lysosomal membrane, where the membrane-bound CMA receptor recognises and binds it. Upon recognition, the substrate protein gets unfolded and is translocated across the lysosome membrane with the assistance of the lysosomal hsc70 chaperone. CMA is significantly different from other types of autophagy because it translocates protein material in a one-by-one manner, and it is extremely selective about what material crosses the lysosomal barrier. Mitophagy is the selective degradation of mitochondria by autophagy. It often occurs in defective mitochondria following damage or stress. Mitophagy promotes the turnover of mitochondria and prevents the accumulation of dysfunctional mitochondria which can lead to cellular degeneration. It is mediated by Atg32 (in yeast) and NIX and its regulator BNIP3 in mammals. Mitophagy is regulated by PINK1 and parkin proteins. The occurrence of mitophagy is not limited to damaged mitochondria but also involves undamaged ones. Lipophagy is the degradation of lipids by autophagy, a function which has been shown to exist in both animal and fungal cells. The role of lipophagy in plant cells, however, remains elusive. In lipophagy the targets are lipid structures called lipid droplets (LDs), spherical "organelles" with a core of mainly triacylglycerols (TAGs) and a monolayer of phospholipids and membrane proteins. In animal cells the main lipophagic pathway is via the engulfment of LDs by the phagophore, i.e. macroautophagy. In fungal cells, on the other hand, microlipophagy constitutes the main pathway and is especially well studied in the budding yeast Saccharomyces cerevisiae. Lipophagy was first discovered in mice and published in 2009. Targeted interplay between bacterial pathogens and host autophagy Autophagy targets genus-specific proteins, so orthologous proteins which share sequence homology with each other are recognized as substrates by a particular autophagy targeting protein. There exists a complementarity of autophagy targeting proteins which potentially increases infection risk upon mutation. The lack of overlap among the targets of the 3 autophagy proteins and the large overlap in terms of the genera show that autophagy could target different sets of bacterial proteins from the same pathogen. On one hand, the redundancy in targeting the same genera is beneficial for robust pathogen recognition. But, on the other hand, the complementarity in the specific bacterial proteins could make the host more susceptible to chronic disorders and infections if the gene encoding one of the autophagy targeting proteins becomes mutated, and the autophagy system is overloaded or suffers other malfunctions. Moreover, autophagy targets virulence factors: virulence factors responsible for more general functions such as nutrient acquisition and motility are recognized by multiple autophagy targeting proteins, whereas specialized virulence factors such as autolysins and iron-sequestering proteins are potentially recognized uniquely by a single autophagy targeting protein. The autophagy proteins CALCOCO2/NDP52 and MAP1LC3/LC3 may have evolved specifically to target pathogens or pathogenic proteins for autophagic degradation, while SQSTM1/p62 targets more generic bacterial proteins containing a target motif but not related to virulence.
On the other hand, bacterial proteins from various pathogenic genera are also able to modulate autophagy. There are genus-specific patterns in the phases of autophagy that are potentially regulated by a given pathogen group. Some autophagy phases can only be modulated by particular pathogens, while some phases are modulated by multiple pathogen genera. Some of the interplay-related bacterial proteins have proteolytic and post-translational activity such as phosphorylation and ubiquitination and can interfere with the activity of autophagy proteins. Molecular biology ATG is short for "AuTophaGy"-related, which is applied to both genes and proteins related to the biological process of autophagy. There are about 16–20 conserved ATG genes coding for many core ATG proteins conserved from yeast to humans. ATG may be part of the protein name (such as ATG7) or part of the gene name (such as ATG7), although not all ATG proteins and genes follow this pattern (such as ULK1). To give specific examples, the ULK1 enzyme (kinase complex) induces autophagosome biogenesis, and ATG13 (Autophagy-related protein 13) is required for autophagosome formation. Autophagy is executed by ATG genes. Prior to 2003, ten or more names were used, but after this point a unified nomenclature was devised by fungal autophagy researchers. The first autophagy genes were identified by genetic screens conducted in Saccharomyces cerevisiae. Following their identification those genes were functionally characterized and their orthologs in a variety of different organisms were identified and studied. Today, thirty-six Atg proteins have been classified as especially important for autophagy, of which 18 belong to the core machinery. In mammals, amino acid sensing and additional signals such as growth factors and reactive oxygen species regulate the activity of the protein kinases mTOR and AMPK. These two kinases regulate autophagy through inhibitory phosphorylation of the Unc-51-like kinases ULK1 and ULK2 (mammalian homologues of Atg1). Induction of autophagy results in the dephosphorylation and activation of the ULK kinases. ULK is part of a protein complex containing Atg13, Atg101 and FIP200. ULK phosphorylates and activates Beclin-1 (mammalian homologue of Atg6), which is also part of a protein complex. The autophagy-inducible Beclin-1 complex contains the proteins PIK3R4 (p150), Atg14L and the class III phosphatidylinositol 3-kinase (PI3K) Vps34. The active ULK and Beclin-1 complexes re-localize to the site of autophagosome initiation, the phagophore, where they both contribute to the activation of downstream autophagy components. Once active, VPS34 phosphorylates the lipid phosphatidylinositol to generate phosphatidylinositol 3-phosphate (PtdIns(3)P) on the surface of the phagophore. The generated PtdIns(3)P is used as a docking point for proteins harboring a PtdIns(3)P binding motif. WIPI2, a PtdIns(3)P binding protein of the WIPI (WD-repeat protein interacting with phosphoinositides) protein family, was recently shown to physically bind ATG16L1. Atg16L1 is a member of an E3-like protein complex involved in one of two ubiquitin-like conjugation systems essential for autophagosome formation. The FIP200 cis-Golgi-derived membranes fuse with ATG16L1-positive endosomal membranes to form the prophagophore termed HyPAS (hybrid pre-autophagosomal structure). ATG16L1 binding to WIPI2 mediates ATG16L1's activity.
This leads to downstream conversion of prophagophore into ATG8-positive phagophore via a ubiquitin-like conjugation system. The first of the two ubiquitin-like conjugation systems involved in autophagy covalently binds the ubiquitin-like protein Atg12 to Atg5. The resulting conjugate protein then binds ATG16L1 to form an E3-like complex which functions as part of the second ubiquitin-like conjugation system. This complex binds and activates Atg3, which covalently attaches mammalian homologues of the ubiquitin-like yeast protein ATG8 (LC3A-C, GATE16, and GABARAPL1-3), the most studied being LC3 proteins, to the lipid phosphatidylethanolamine (PE) on the surface of autophagosomes. Lipidated LC3 contributes to the closure of autophagosomes, and enables the docking of specific cargos and adaptor proteins such as Sequestosome-1/p62. The completed autophagosome then fuses with a lysosome through the actions of multiple proteins, including SNAREs and UVRAG. Following the fusion LC3 is retained on the vesicle's inner side and degraded along with the cargo, while the LC3 molecules attached to the outer side are cleaved off by Atg4 and recycled. The contents of the autolysosome are subsequently degraded and their building blocks are released from the vesicle through the action of permeases. Sirtuin 1 (SIRT1) stimulates autophagy by preventing acetylation of proteins (via deacetylation) required for autophagy as demonstrated in cultured cells and embryonic and neonatal tissues. This function provides a link between sirtuin expression and the cellular response to limited nutrients due to caloric restriction. Functions Nutrient starvation Autophagy has roles in various cellular functions. One particular example is in yeasts, where the nutrient starvation induces a high level of autophagy. This allows unneeded proteins to be degraded and the amino acids recycled for the synthesis of proteins that are essential for survival. In higher eukaryotes, autophagy is induced in response to the nutrient depletion that occurs in animals at birth after severing off the trans-placental food supply, as well as that of nutrient starved cultured cells and tissues. Mutant yeast cells that have a reduced autophagic capability rapidly perish in nutrition-deficient conditions. Studies on the apg mutants suggest that autophagy via autophagic bodies is indispensable for protein degradation in the vacuoles under starvation conditions, and that at least 15 APG genes are involved in autophagy in yeast. A gene known as ATG7 has been implicated in nutrient-mediated autophagy, as mice studies have shown that starvation-induced autophagy was impaired in atg7-deficient mice. Infection Vesicular stomatitis virus is believed to be taken up by the autophagosome from the cytosol and translocated to the endosomes where detection takes place by a pattern recognition receptor called toll-like receptor 7, detecting single stranded RNA. Following activation of the toll-like receptor, intracellular signaling cascades are initiated, leading to induction of interferon and other antiviral cytokines. A subset of viruses and bacteria subvert the autophagic pathway to promote their own replication. Galectin-8 has recently been identified as an intracellular "danger receptor", able to initiate autophagy against intracellular pathogens. When galectin-8 binds to a damaged vacuole, it recruits an autophagy adaptor such as NDP52 leading to the formation of an autophagosome and bacterial degradation. 
Repair mechanism Autophagy degrades damaged organelles, cell membranes and proteins, and insufficient autophagy is thought to be one of the main reasons for the accumulation of damaged cells and aging. Autophagy and autophagy regulators are involved in response to lysosomal damage, often directed by galectins such as galectin-3 and galectin-8. Repair of damaged DNA involves the recruitment of enzymes to the damaged site, but these enzymes must be removed upon completion of the repair process. Topoisomerase I cleavage complex is employed in the processing of DNA damages (e.g. DNA-protein crosslinks) in vertebrates, and this complex is selectively degraded by autophagy, presumably after it is no longer needed. Programmed cell death One of the mechanisms of programmed cell death (PCD) is associated with the appearance of autophagosomes and depends on autophagy proteins. This form of cell death most likely corresponds to a process that has been morphologically defined as autophagic PCD. One question that constantly arises, however, is whether autophagic activity in dying cells is the cause of death or is actually an attempt to prevent it. Morphological and histochemical studies have not so far proved a causative relationship between the autophagic process and cell death. In fact, there have recently been strong arguments that autophagic activity in dying cells might actually be a survival mechanism. Studies of the metamorphosis of insects have shown cells undergoing a form of PCD that appears distinct from other forms; these have been proposed as examples of autophagic cell death. Recent pharmacological and biochemical studies have proposed that survival and lethal autophagy can be distinguished by the type and degree of regulatory signaling during stress particularly after viral infection. Although promising, these findings have not been examined in non-viral systems. Meiosis Mammalian fetal oocytes face several challenges to survival throughout the stages of meiotic prophase I prior to primordial follicle assembly. Each primordial follicle contains an immature primary oocyte. Before oocytes are enclosed into a primordial follicle, deficiencies of nutrients or growth factors might activate protective autophagy, but this can turn into death of the oocytes if starvation is prolonged. Exercise Autophagy is essential for basal homeostasis; it is also extremely important in maintaining muscle homeostasis during physical exercise. Autophagy at the molecular level is only partially understood. A study of mice shows that autophagy is important for the ever-changing demands of their nutritional and energy needs, particularly through the metabolic pathways of protein catabolism. In a 2012 study conducted by the University of Texas Southwestern Medical Center in Dallas, mutant mice (with a knock-in mutation of BCL2 phosphorylation sites to produce progeny that showed normal levels of basal autophagy yet were deficient in stress-induced autophagy) were tested to challenge this theory. Results showed that when compared to a control group, these mice illustrated a decrease in endurance and an altered glucose metabolism during acute exercise. Another study demonstrated that skeletal muscle fibers of collagen VI in knockout mice showed signs of degeneration due to an insufficiency of autophagy which led to an accumulation of damaged mitochondria and excessive cell death. 
In these knockout mice, exercise failed to induce autophagy; however, when autophagy was induced artificially post-exercise, the accumulation of damaged organelles in collagen VI deficient muscle fibres was prevented and cellular homeostasis was maintained. Both studies demonstrate that autophagy induction may contribute to the beneficial metabolic effects of exercise and that it is essential for maintaining muscle homeostasis during exercise, particularly in collagen VI fibers. Work at the Institute for Cell Biology, University of Bonn, showed that a certain type of autophagy, i.e. chaperone-assisted selective autophagy (CASA), is induced in contracting muscles and is required for maintaining the muscle sarcomere under mechanical tension. The CASA chaperone complex recognizes mechanically damaged cytoskeleton components and directs these components through a ubiquitin-dependent autophagic sorting pathway to lysosomes for disposal. This is necessary for maintaining muscle activity. Osteoarthritis Because autophagy decreases with age and age is a major risk factor for osteoarthritis, a role for autophagy in the development of this disease has been suggested. Proteins involved in autophagy are reduced with age in both human and mouse articular cartilage. Mechanical injury to cartilage explants in culture also reduced autophagy proteins. Autophagy is constantly activated in normal cartilage but it is compromised with age and precedes cartilage cell death and structural damage. Thus autophagy is involved in a normal protective process (chondroprotection) in the joint. Cancer Cancer often occurs when several different pathways that regulate cell differentiation are disturbed. Autophagy plays an important role in cancer – both in protecting against cancer as well as potentially contributing to the growth of cancer. Autophagy can contribute to cancer by promoting survival of tumor cells that have been starved, or that degrade apoptotic mediators through autophagy: in such cases, use of inhibitors of the late stages of autophagy (such as chloroquine), on the cells that use autophagy to survive, increases the number of cancer cells killed by antineoplastic drugs. The role of autophagy in cancer is one that has been highly researched and reviewed. There is evidence that emphasizes the role of autophagy as both a tumor suppressor and a factor in tumor cell survival. Recent research has shown, however, that autophagy is more likely to be used as a tumor suppressor according to several models. Tumor suppressor Several experiments have been done with mice and varying levels of Beclin1, a protein that regulates autophagy. When the Beclin1 gene was altered to be heterozygous (Beclin 1+/-), the mice were found to be tumor-prone. However, when Beclin1 was overexpressed, tumor development was inhibited. Care should be exercised when interpreting phenotypes of beclin mutants and attributing the observations to a defect in autophagy, however: Beclin1 is generally required for phosphatidylinositol 3-phosphate production and as such it affects numerous lysosomal and endosomal functions, including endocytosis and endocytic degradation of activated growth factor receptors.
In support of the possibility that Beclin1 affects cancer development through an autophagy-independent pathway is the fact that core autophagy factors which are not known to affect other cellular processes and are definitely not known to affect cell proliferation and cell death, such as Atg7 or Atg5, show a much different phenotype when the respective gene is knocked out, which does not include tumor formation. In addition, full knockout of Beclin1 is embryonic lethal whereas knockout of Atg7 or Atg5 is not. Necrosis and chronic inflammation also has been shown to be limited through autophagy which helps protect against the formation of tumor cells. Colorectal cancer Colorectal cancer incidence is associated with a high-fat diet, and such a diet is linked to elevated levels of bile acids in the colon, particularly deoxycholic acid. Deoxycholic acid induces autophagy in non-cancer colon epithelial cells and this induction of autophagy contributes to cell survival when cells are stressed. Also autophagy is a survival pathway that is constitutively present in apoptosis-resistant colon cancer cells. The constitutive activation of autophagy in colon cancer cells, is thus a colon cancer cell survival strategy that needs to be overcome in colon cancer therapy. Mechanism of cell death Cells that undergo an extreme amount of stress experience cell death either through apoptosis or necrosis. Prolonged autophagy activation leads to a high turnover rate of proteins and organelles. A high rate above the survival threshold may kill cancer cells with a high apoptotic threshold. This technique can be utilized as a therapeutic cancer treatment. Tumor cell survival Alternatively, autophagy has also been shown to play a large role in tumor cell survival. In cancerous cells, autophagy is used as a way to deal with stress on the cell. Induction of autophagy by miRNA-4673, for example, is a pro-survival mechanism that improves the resistance of cancer cells to radiation. Once these autophagy related genes were inhibited, cell death was potentiated. The increase in metabolic energy is offset by autophagy functions. These metabolic stresses include hypoxia, nutrient deprivation, and an increase in proliferation. These stresses activate autophagy in order to recycle ATP and maintain survival of the cancerous cells. Autophagy has been shown to enable continued growth of tumor cells by maintaining cellular energy production. By inhibiting autophagy genes in these tumors cells, regression of the tumor and extended survival of the organs affected by the tumors were found. Furthermore, inhibition of autophagy has also been shown to enhance the effectiveness of anticancer therapies. Therapeutic target New developments in research have found that targeted autophagy may be a viable therapeutic solution in fighting cancer. As discussed above, autophagy plays both a role in tumor suppression and tumor cell survival. Thus, the qualities of autophagy can be used as a strategy for cancer prevention. The first strategy is to induce autophagy and enhance its tumor suppression attributes. The second strategy is to inhibit autophagy and thus induce apoptosis. The first strategy has been tested by looking at dose-response anti-tumor effects during autophagy-induced therapies. These therapies have shown that autophagy increases in a dose-dependent manner. This is directly related to the growth of cancer cells in a dose-dependent manner as well. These data support the development of therapies that will encourage autophagy. 
Secondly, inhibiting the protein pathways directly known to induce autophagy may also serve as an anticancer therapy. The second strategy is based on the idea that autophagy is a protein degradation system used to maintain homeostasis and the findings that inhibition of autophagy often leads to apoptosis. Inhibition of autophagy is riskier as it may lead to cell survival instead of the desired cell death. Negative regulators of autophagy Negative regulators of autophagy, such as mTOR, cFLIP, EGFR, (GAPR-1), and Rubicon are orchestrated to function within different stages of the autophagy cascade. The end-products of autophagic digestion may also serve as a negative-feedback regulatory mechanism to stop prolonged activity. The interface between inflammation and autophagy Regulators of autophagy control regulators of inflammation, and vice versa. Cells of vertebrate organisms normally activate inflammation to enhance the capacity of the immune system to clear infections and to initiate the processes that restore tissue structure and function. Therefore, it is critical to couple regulation of mechanisms for removal of cellular and bacterial debris to the principal factors that regulate inflammation: The degradation of cellular components by the lysosome during autophagy serves to recycle vital molecules and generate a pool of building blocks to help the cell respond to a changing microenvironment. Proteins that control inflammation and autophagy form a network that is critical for tissue functions, which is dysregulated in cancer: In cancer cells, aberrantly expressed and mutant proteins increase the dependence of cell survival on the "rewired" network of proteolytic systems that protects malignant cells from apoptotic proteins and from recognition by the immune system. This renders cancer cells vulnerable to intervention on regulators of autophagy. Type 2 diabetes Excessive activity of the crinophagy form of autophagy in the insulin-producing beta cells of the pancreas could reduce the quantity of insulin available for secretion, leading to type 2 diabetes.
Biology and health sciences
Cell processes
Biology
624714
https://en.wikipedia.org/wiki/Thomson%20scattering
Thomson scattering
Thomson scattering is the elastic scattering of electromagnetic radiation by a free charged particle, as described by classical electromagnetism. It is the low-energy limit of Compton scattering: the particle's kinetic energy and photon frequency do not change as a result of the scattering. This limit is valid as long as the photon energy is much smaller than the mass energy of the particle: , or equivalently, if the wavelength of the light is much greater than the Compton wavelength of the particle (e.g., for electrons, longer wavelengths than hard x-rays). Description of the phenomenon Thomson scattering is a model for the effect of electromagnetic fields on electrons when the field energy is much less than the rest mass of the electron . In the model the electric field of the incident wave accelerates the charged particle, causing it, in turn, to emit radiation at the same frequency as the incident wave, and thus the wave is scattered. Thomson scattering is an important phenomenon in plasma physics and was first explained by the physicist J. J. Thomson. As long as the motion of the particle is non-relativistic (i.e. its speed is much less than the speed of light), the main cause of the acceleration of the particle will be due to the electric field component of the incident wave. In a first approximation, the influence of the magnetic field can be neglected. The particle will move in the direction of the oscillating electric field, resulting in electromagnetic dipole radiation. The moving particle radiates most strongly in a direction perpendicular to its acceleration and that radiation will be polarized along the direction of its motion. Therefore, depending on where an observer is located, the light scattered from a small volume element may appear to be more or less polarized. The electric fields of the incoming and observed wave (i.e. the outgoing wave) can be divided up into those components lying in the plane of observation (formed by the incoming and observed waves) and those components perpendicular to that plane. Those components lying in the plane are referred to as "radial" and those perpendicular to the plane are "tangential". (It is difficult to make these terms seem natural, but it is standard terminology.) The diagram on the right depicts the plane of observation. It shows the radial component of the incident electric field, which causes the charged particles at the scattering point to exhibit a radial component of acceleration (i.e., a component tangent to the plane of observation). It can be shown that the amplitude of the observed wave will be proportional to the cosine of χ, the angle between the incident and observed waves. The intensity, which is the square of the amplitude, will then be diminished by a factor of cos2(χ). It can be seen that the tangential components (perpendicular to the plane of the diagram) will not be affected in this way. The scattering is best described by an emission coefficient which is defined as ε where ε dt dV dΩ dλ is the energy scattered by a volume element in time dt into solid angle dΩ between wavelengths λ and λ+dλ. From the point of view of an observer, there are two emission coefficients, εr corresponding to radially polarized light and εt corresponding to tangentially polarized light. For unpolarized incident light, these are given by: where is the density of charged particles at the scattering point, is incident flux (i.e. 
energy/time/area/wavelength), χ is the angle between the incident and scattered photons (see figure above) and σ_T is the Thomson cross section for the charged particle, defined below. The total energy radiated by a volume element in time dt between wavelengths λ and λ+dλ is found by integrating the sum of the emission coefficients over all directions (solid angle). The Thomson differential cross section, related to the sum of the emissivity coefficients, is given by dσ_T/dΩ = (q²/(4πε₀ m c²))² (1 + cos²χ)/2, expressed in SI units; q is the charge per particle, m the mass of the particle, and ε₀ a constant, the permittivity of free space. (To obtain an expression in cgs units, drop the factor of 4πε₀.) Integrating over the solid angle, we obtain the Thomson cross section σ_T = (8π/3) (q²/(4πε₀ m c²))² in SI units. The important feature is that the cross section is independent of light frequency. The cross section is proportional by a simple numerical factor to the square of the classical radius of a point particle of mass m and charge q, namely r_q = q²/(4πε₀ m c²), so that σ_T = (8π/3) r_q². Alternatively, this can be expressed in terms of λ̄_C, the reduced Compton wavelength, and the fine structure constant α: σ_T = (8π/3) α² λ̄_C². For an electron, the Thomson cross-section is numerically given by σ_T ≈ 6.652 × 10⁻²⁹ m² ≈ 0.6652 barn. Examples of Thomson scattering The cosmic microwave background contains a small linearly-polarized component attributed to Thomson scattering. That polarized component mapping out the so-called E-modes was first detected by DASI in 2002. The solar K-corona is the result of the Thomson scattering of solar radiation from solar coronal electrons. The ESA and NASA SOHO mission and the NASA STEREO mission generate three-dimensional images of the electron density around the Sun by measuring this K-corona from three separate satellites. In tokamaks, corona of ICF targets and other experimental fusion devices, the electron temperatures and densities in the plasma can be measured with high accuracy by detecting the effect of Thomson scattering of a high-intensity laser beam. An upgraded Thomson scattering system in the Wendelstein 7-X stellarator uses Nd:YAG lasers to emit multiple pulses in quick succession. The intervals within each burst can range from 2 ms to 33.3 ms, permitting up to twelve consecutive measurements. Synchronization with plasma events is made possible by a newly added trigger system that facilitates real-time analysis of transient plasma events. In the Sunyaev–Zeldovich effect, where the photon energy is much less than the electron rest mass, the inverse-Compton scattering can be approximated as Thomson scattering in the rest frame of the electron. Models for X-ray crystallography are based on Thomson scattering.
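The numerical value quoted above can be checked directly from the defining constants. The short Python sketch below is not part of the original article; it simply evaluates the classical electron radius and the Thomson cross section from standard CODATA constant values and confirms the ≈6.652 × 10⁻²⁹ m² figure for the electron.

import math

# Physical constants (SI, CODATA 2018 values)
e = 1.602176634e-19        # elementary charge, C
m_e = 9.1093837015e-31     # electron mass, kg
c = 299792458.0            # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

# Classical electron radius r_e = e^2 / (4 pi eps0 m_e c^2)
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)

# Thomson cross section sigma_T = (8 pi / 3) r_e^2
sigma_T = (8 * math.pi / 3) * r_e**2

print(f"classical electron radius: {r_e:.4e} m")        # about 2.818e-15 m
print(f"Thomson cross section:     {sigma_T:.4e} m^2")  # about 6.652e-29 m^2
print(f"                         = {sigma_T / 1e-28:.4f} barn")  # about 0.665 barn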
Physical sciences
Particle physics: General
Physics
624981
https://en.wikipedia.org/wiki/Substitution%20reaction
Substitution reaction
A substitution reaction (also known as single displacement reaction or single substitution reaction) is a chemical reaction during which one functional group in a chemical compound is replaced by another functional group. Substitution reactions are of prime importance in organic chemistry. Substitution reactions in organic chemistry are classified either as electrophilic or nucleophilic depending upon the reagent involved, whether a reactive intermediate involved in the reaction is a carbocation, a carbanion or a free radical, and whether the substrate is aliphatic or aromatic. Detailed understanding of a reaction type helps to predict the product outcome in a reaction. It also is helpful for optimizing a reaction with regard to variables such as temperature and choice of solvent. A good example of a substitution reaction is halogenation. When chlorine gas (Cl2) is irradiated, some of the molecules are split into two chlorine radicals (Cl•), whose free electrons are strongly nucleophilic. One of them breaks a C–H covalent bond in CH4 and grabs the hydrogen atom to form the electrically neutral HCl. The other radical reforms a covalent bond with the CH3• to form CH3Cl (methyl chloride). Nucleophilic substitution In organic (and inorganic) chemistry, nucleophilic substitution is a fundamental class of reactions in which a nucleophile selectively bonds with or attacks the positive or partially positive charge on an atom or a group of atoms. As it does so, it replaces a weaker nucleophile, which then becomes a leaving group; the remaining positive or partially positive atom becomes an electrophile. The whole molecular entity of which the electrophile and the leaving group are part is usually called the substrate. The most general form for the reaction may be given as Nuc:− + R−LG → R−Nuc + LG:−, where R−LG indicates the substrate. The electron pair (:) from the nucleophile (Nuc:) attacks the substrate (R−LG), forming a new covalent bond Nuc−R. The prior state of charge is restored when the leaving group (LG) departs with an electron pair. The principal product in this case is R−Nuc. In such reactions, the nucleophile is usually electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged. An example of nucleophilic substitution is the hydrolysis of an alkyl bromide, R−Br, under basic conditions, where the attacking nucleophile is the base OH− and the leaving group is Br−: R−Br + OH− → R−OH + Br−. Nucleophilic substitution reactions are commonplace in organic chemistry, and they can be broadly categorized as taking place at a carbon atom of a saturated aliphatic compound or (less often) at an aromatic or other unsaturated carbon center. Mechanisms Nucleophilic substitutions can proceed by two different mechanisms, unimolecular nucleophilic substitution (SN1) and bimolecular nucleophilic substitution (SN2). The two reactions are named according to their rate law, with SN1 having a first-order rate law and SN2 having a second-order rate law. The SN1 mechanism has two steps. In the first step, the leaving group departs, forming a carbocation (C+). In the second step, the nucleophilic reagent (Nuc:) attaches to the carbocation and forms a covalent sigma bond. If the substrate has a chiral carbon, this mechanism can result in either inversion of the stereochemistry or retention of configuration. Usually, both occur without preference. The result is racemization. The stability of a carbocation (C+) depends on how many other carbon atoms are bonded to it.
This results in SN1 reactions usually occurring on atoms with at least two carbons bonded to them. A more detailed explanation of this can be found in the main SN1 reaction page. The SN2 mechanism has just one step. The attack of the reagent and the expulsion of the leaving group happen simultaneously. This mechanism always results in inversion of configuration. If the substrate that is under nucleophilic attack is chiral, the reaction will therefore lead to an inversion of its stereochemistry, called a Walden inversion. SN2 attack may occur if the backside route of attack is not sterically hindered by substituents on the substrate. Therefore, this mechanism usually occurs at an unhindered primary carbon center. If there is steric crowding on the substrate near the leaving group, such as at a tertiary carbon center, the substitution will involve an SN1 rather than an SN2. Other types of nucleophilic substitution include, nucleophilic acyl substitution, and nucleophilic aromatic substitution. Acyl substitution occurs when a nucleophile attacks a carbon that is doubly bonded to one oxygen and singly bonded to another oxygen (can be N or S or a halogen), called an acyl group. The nucleophile attacks the carbon causing the double bond to break into a single bond. The double can then reform, kicking off the leaving group in the process. Aromatic substitution occurs on compounds with systems of double bonds connected in rings. See aromatic compounds for more. Electrophilic substitution Electrophiles are involved in electrophilic substitution reactions, particularly in electrophilic aromatic substitutions. In this example, the benzene ring's electron resonance structure is attacked by an electrophile E+. The resonating bond is broken and a carbocation resonating structure results. Finally a proton is kicked out and a new aromatic compound is formed. Electrophilic reactions to other unsaturated compounds than arenes generally lead to electrophilic addition rather than substitution. Radical substitution A radical substitution reaction involves radicals. An example is the Hunsdiecker reaction. Organometallic substitution Coupling reactions are a class of metal-catalyzed reactions involving an organometallic compound RM and an organic halide R′X that together react to form a compound of the type R-R′ with formation of a new carbon–carbon bond. Examples include the Heck reaction, Ullmann reaction, and Wurtz–Fittig reaction. Many variations exist. Substituted compounds Substituted compounds are compounds where one or more hydrogen atoms have been replaced with something else such as an alkyl, hydroxy, or halogen. More can be found on the substituted compounds page. Inorganic and organometallic chemistry While it is common to discuss substitution reactions in the context of organic chemistry, the reaction is generic and applies to a wide range of compounds. Ligands in coordination complexes are susceptible to substitution. Both associative and dissociative mechanisms have been observed. Associative substitution, for example, is typically applied to organometallic and coordination complexes, but resembles the Sn2 mechanism in organic chemistry. The opposite pathway is dissociative substitution, being analogous to the Sn1 pathway. Examples of associative mechanisms are commonly found in the chemistry of 16e square planar metal complexes, e.g. Vaska's complex and tetrachloroplatinate. The rate law is governed by the Eigen–Wilkins Mechanism. 
Dissociative substitution resembles the SN1 mechanism in organic chemistry. This pathway can be well described by the cis effect, or the labilization of CO ligands in the cis position. Complexes that undergo dissociative substitution are often coordinatively saturated and often have octahedral molecular geometry. The entropy of activation is characteristically positive for these reactions, which indicates that the disorder of the reacting system increases in the rate-determining step. Dissociative pathways are characterized by a rate-determining step that involves release of a ligand from the coordination sphere of the metal undergoing substitution. The concentration of the substituting nucleophile has no influence on this rate, and an intermediate of reduced coordination number can be detected. The reaction can be described with k1, k−1 and k2, which are the rate constants of the corresponding intermediate reaction steps: LnM−L ⇌ LnM−□ + L (loss of L with rate constant k1, re-binding of L with rate constant k−1), followed by LnM−□ + L′ → LnM−L′ (rate constant k2), where □ denotes the vacant coordination site. Normally the rate-determining step is the dissociation of L from the complex, and [L′] does not affect the rate of reaction, leading to the simple rate equation: Rate = k1[LnM−L].
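Because the dissociative (SN1-like) and associative (SN2-like) pathways described above are distinguished experimentally by their rate laws, a small numerical sketch can make the difference concrete. The Python fragment below is an illustration only; the rate constant and concentrations are arbitrary made-up values. It shows that doubling the concentration of the incoming nucleophile or ligand doubles the rate under a second-order (associative) law but leaves a first-order (dissociative) law unchanged.

# Toy comparison of first-order (dissociative / SN1-like) and
# second-order (associative / SN2-like) rate laws.
# All numbers are arbitrary illustrative values, not measured data.

def rate_first_order(k, substrate):
    # Rate depends only on the substrate, e.g. Rate = k1[LnM-L]
    return k * substrate

def rate_second_order(k, substrate, nucleophile):
    # Rate depends on both partners, e.g. Rate = k[R-LG][Nuc]
    return k * substrate * nucleophile

k = 1.0e-3          # hypothetical rate constant
substrate = 0.10    # mol/L
for nu in (0.10, 0.20):   # double the nucleophile concentration
    print(f"[Nu] = {nu:.2f} M | "
          f"first-order rate = {rate_first_order(k, substrate):.2e} | "
          f"second-order rate = {rate_second_order(k, substrate, nu):.2e}")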
Physical sciences
Chemical reactions
null
625220
https://en.wikipedia.org/wiki/Proglacial%20lake
Proglacial lake
In geology, a proglacial lake is a lake formed either by the damming action of a moraine during the retreat of a melting glacier, by a glacial ice dam, or by meltwater trapped against an ice sheet due to isostatic depression of the crust around the ice. At the end of the last ice age about 10,000 years ago, large proglacial lakes were a widespread feature in the northern hemisphere. Moraine-dammed The receding glaciers of the tropical Andes have formed a number of proglacial lakes, especially in the Cordillera Blanca of Peru, where 70% of all tropical glaciers are located. Several such lakes have formed rapidly during the 20th century. These lakes may burst, creating a hazard for zones below. Many natural dams (usually moraines) containing the lake water have been reinforced with safety dams. Some 34 such dams have been built in the Cordillera Blanca to contain proglacial lakes. Several proglacial lakes have also formed in recent decades at the end of glaciers on the eastern side of New Zealand's Southern Alps. The most accessible, Lake Tasman, hosts boat trips for tourists. On a smaller scale, a mountain glacier may excavate a depression forming a cirque, which may contain a mountain lake, called a tarn, upon the melting of the glacial ice. Ice-dammed A glacier may flow down a valley to a confluence where the other branch carries an unfrozen river. The glacier blocks the river, which backs up into a proglacial lake that eventually overflows or undermines the ice dam, suddenly releasing the impounded water in a glacial lake outburst flood, also known by its Icelandic name, jökulhlaup. Some of the largest glacial floods in North American history were from Lake Agassiz. In modern times, the Hubbard Glacier regularly blocks the mouth of Russell Fjord at 60° north on the coast of Alaska. A similar event takes place at irregular intervals at the Perito Moreno Glacier, located in Patagonia. Roughly every four years the glacier forms an ice dam against the rocky coast, causing the waters of Lago Argentino to rise. When the water pressure becomes too high, the giant ice bridge collapses in what has become a major tourist attraction. This sequence occurred last on 4 March 2012, the previous collapse having taken place four years before, in July 2008. About 13,000 years ago in North America, the Cordilleran Ice Sheet crept southward into the Idaho Panhandle, forming a large ice dam that blocked the mouth of the Clark Fork River and created a massive, deep lake impounding an enormous volume of water. Finally this Glacial Lake Missoula burst through the ice dam and exploded downstream, flowing at a rate 10 times the combined flow of all the rivers of the world. Because such ice dams can re-form, these Missoula Floods happened at least 59 times, carving Dry Falls below Grand Coulee. In some cases, such lakes gradually evaporated during the warming period after the Quaternary ice age. In other cases, such as Glacial Lake Missoula and Glacial Lake Wisconsin in the United States, the sudden rupturing of the supporting dam caused glacial lake outburst floods, the rapid and catastrophic release of dammed water resulting in the formation of gorges and other structures downstream from the former lake. Good examples of these structures can be found in the Channeled Scablands of eastern Washington, an area heavily eroded by the Missoula Floods.
Retreating ice sheet The retreating glaciers of the last ice age, both depressed the terrain with their mass and provided a source of meltwater that was confined against the ice mass. Lake Algonquin is an example of a proglacial lake that existed in east-central North America at the time of the last ice age. Parts of the former lake are now Lake Huron, Georgian Bay, Lake Superior, Lake Michigan and inland portions of northern Michigan. Examples in Great Britain include Lake Lapworth, Lake Harrison and Lake Pickering. Ironbridge Gorge in Shropshire and Hubbard's Hills in Lincolnshire are examples of a glacial overspill channel created when the water of a proglacial lake rose high enough to breach the lowest point in the containing watershed.
Physical sciences
Hydrology
Earth science
625226
https://en.wikipedia.org/wiki/Reversible%20reaction
Reversible reaction
A reversible reaction is a reaction in which the conversion of reactants to products and the conversion of products to reactants occur simultaneously. aA + bB ⇌ cC + dD A and B can react to form C and D or, in the reverse reaction, C and D can react to form A and B. This is distinct from a reversible process in thermodynamics. Weak acids and bases undergo reversible reactions. For example, carbonic acid: H2CO3 (l) + H2O(l) ⇌ HCO3−(aq) + H3O+(aq). The concentrations of reactants and products in an equilibrium mixture are determined by the analytical concentrations of the reagents (A and B or C and D) and the equilibrium constant, K. The magnitude of the equilibrium constant depends on the Gibbs free energy change for the reaction. So, when the free energy change is large (more than about 30 kJ mol−1), the equilibrium constant is large (log K > 3) and the concentrations of the reactants at equilibrium are very small. Such a reaction is sometimes considered to be an irreversible reaction, although small amounts of the reactants are still expected to be present in the reacting system. A truly irreversible chemical reaction is usually achieved when one of the products exits the reacting system, for example, as does carbon dioxide (volatile) in the reaction CaCO3 + 2HCl → CaCl2 + H2O + CO2↑ History The concept of a reversible reaction was introduced by Claude Louis Berthollet in 1803, after he had observed the formation of sodium carbonate crystals at the edge of a salt lake (one of the natron lakes in Egypt, in limestone): 2NaCl + CaCO3 → Na2CO3 + CaCl2 He recognized this as the reverse of the familiar reaction Na2CO3 + CaCl2 → 2NaCl + CaCO3 Until then, chemical reactions were thought to always proceed in one direction. Berthollet reasoned that the excess of salt in the lake helped push the "reverse" reaction towards the formation of sodium carbonate. In 1864, Peter Waage and Cato Maximilian Guldberg formulated their law of mass action which quantified Berthollet's observation. Between 1884 and 1888, Le Chatelier and Braun formulated Le Chatelier's principle, which extended the same idea to a more general statement on the effects of factors other than concentration on the position of the equilibrium. Reaction kinetics For the reversible reaction A ⇌ B, the forward step A → B has a rate constant k₁ and the backwards step B → A has a rate constant k₋₁. The concentration of A obeys the differential equation d[A]/dt = −k₁[A] + k₋₁[B]. If we consider that the concentration of product B at any time t is equal to the concentration of reactant at time zero minus the concentration of reactant at time t, we can set up the equation [B] = [A]₀ − [A]. Combining the two expressions, we can write d[A]/dt = −(k₁ + k₋₁)[A] + k₋₁[A]₀. Separation of variables is possible and, using the initial value [A](0) = [A]₀ and some algebra, we arrive at the final kinetic expression [A] = ([A]₀/(k₁ + k₋₁))(k₋₁ + k₁ e^(−(k₁ + k₋₁)t)). The concentrations of A and B at infinite time behave as follows: [A]∞ = k₋₁[A]₀/(k₁ + k₋₁) and [B]∞ = k₁[A]₀/(k₁ + k₋₁). Thus, the formula can be linearized in order to determine the sum k₁ + k₋₁: ln([A] − [A]∞) = ln([A]₀ − [A]∞) − (k₁ + k₋₁)t. To find the individual constants k₁ and k₋₁, the equilibrium ratio K = k₁/k₋₁ = [B]∞/[A]∞ is also required.
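As a sanity check on the kinetic expressions above, the short Python sketch below (illustrative only; the rate constants and initial concentration are arbitrary values, not data from the article) evaluates the closed-form solution for [A](t) and confirms that it relaxes to the equilibrium values k₋₁[A]₀/(k₁ + k₋₁) and k₁[A]₀/(k₁ + k₋₁), with equilibrium constant K = k₁/k₋₁ = [B]∞/[A]∞.

import math

# Arbitrary illustrative values for A <=> B kinetics
k1, km1 = 2.0, 0.5        # forward and backward rate constants, 1/s
A0 = 1.0                  # initial concentration of A, mol/L

def A_of_t(t):
    # Closed-form solution: [A](t) = A0/(k1+km1) * (km1 + k1*exp(-(k1+km1)*t))
    return A0 / (k1 + km1) * (km1 + k1 * math.exp(-(k1 + km1) * t))

A_inf = km1 * A0 / (k1 + km1)   # expected [A] at equilibrium
B_inf = k1 * A0 / (k1 + km1)    # expected [B] at equilibrium

for t in (0.0, 0.5, 1.0, 5.0):
    A = A_of_t(t)
    print(f"t = {t:4.1f} s  [A] = {A:.4f}  [B] = {A0 - A:.4f}")

print(f"limits: [A]inf = {A_inf:.4f}, [B]inf = {B_inf:.4f}, "
      f"K = k1/km1 = {k1/km1:.1f} = [B]inf/[A]inf = {B_inf/A_inf:.1f}")

# Aside on the thermodynamic rule of thumb in the text: ln K = -dG/(R*T),
# so for |dG| = 30 kJ/mol at 298 K, log10(K) = 30000/(math.log(10)*8.314*298),
# which is about 5.3, i.e. log K > 3 as stated.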
Physical sciences
Basics_3
Chemistry
625229
https://en.wikipedia.org/wiki/Adipocyte
Adipocyte
Adipocytes, also known as lipocytes and fat cells, are the cells that primarily compose adipose tissue, specialized in storing energy as fat. Adipocytes are derived from mesenchymal stem cells which give rise to adipocytes through adipogenesis. In cell culture, adipocyte progenitors can also form osteoblasts, myocytes and other cell types. There are two types of adipose tissue, white adipose tissue (WAT) and brown adipose tissue (BAT), which are also known as white and brown fat, respectively, and comprise two types of fat cells. Structure White fat cells White fat cells contain a single large lipid droplet surrounded by a layer of cytoplasm, and are known as unilocular. The nucleus is flattened and pushed to the periphery. A typical fat cell is 0.1 mm in diameter with some being twice that size, and others half that size. However, these numerical estimates of fat cell size depend largely on the measurement method and the location of the adipose tissue. The fat stored is in a semi-liquid state, and is composed primarily of triglycerides, and cholesteryl ester. White fat cells secrete many proteins acting as adipokines such as resistin, adiponectin, leptin and apelin. An average human adult has 30 billion fat cells with a weight of 30 lbs or 13.5 kg. If a child or adolescent gains sufficient excess weight, fat cells may increase in absolute number until age twenty-four. If an adult (who never was obese as a child or adolescent) gains excess weight, fat cells generally increase in size, not number, though there is some inconclusive evidence suggesting that the number of fat cells might also increase if the existing fat cells become large enough (as in particularly severe levels of obesity). The number of fat cells is difficult to decrease through dietary intervention, though some evidence suggests that the number of fat cells can decrease if weight loss is maintained for a sufficiently long period of time (>1 year; though it is extremely difficult for people with larger and more numerous fat cells to maintain weight loss for that long a time). A large meta-analysis has shown that white adipose tissue cell size is dependent on measurement methods, adipose tissue depots, age, and body mass index; for the same degree of obesity, increases in fat cell size were also associated with the dysregulations in glucose and lipid metabolism. Brown fat cells Brown fat cells are polyhedral in shape. Brown fat is derived from dermatomyocyte cells. Unlike white fat cells, these cells have considerable cytoplasm, with several lipid droplets scattered throughout, and are known as multilocular cells. The nucleus is round and, although eccentrically located, it is not in the periphery of the cell. The brown color comes from the large quantity of mitochondria. Brown fat, also known as "baby fat," is used to generate heat. Marrow fat cells Marrow adipocytes are unilocular like white fat cells. The marrow adipose tissue depot is poorly understood in terms of its physiologic function and relevance to bone health. Marrow adipose tissue expands in states of low bone density but additionally expands in the setting of obesity. Marrow adipose tissue response to exercise approximates that of white adipose tissue. Exercise reduces both adipocyte size as well as marrow adipose tissue volume, as quantified by MRI or μCT imaging of bone stained with the lipid binder osmium. Development Pre-adipocytes are undifferentiated fibroblasts that can be stimulated to form adipocytes. 
Studies have shed light into potential molecular mechanisms in the fate determination of pre-adipocytes although the exact lineage of adipocyte is still unclear. The variation of body fat distribution resulting from normal growth is influenced by nutritional and hormonal status dependent on intrinsic differences in cells found in each adipose depot. Mesenchymal stem cells can differentiate into adipocytes, connective tissue, muscle or bone. The precursor of the adult cell is termed a lipoblast, and a tumor of this cell type is known as a lipoblastoma. Function Cell turnover Fat cells in some mice have been shown to drop in count due to fasting and other properties were observed when exposed to cold. If the adipocytes in the body reach their maximum capacity of fat, they may replicate to allow additional fat storage. According to some reports and textbooks, the number of adipocytes can increase in childhood and adolescence, though the amount is usually constant in adults. Individuals who become obese as adults, rather than as adolescents, have no more adipocytes than they had before. Body fat cells have regional responses to the overfeeding that was studied in adult subjects. In the upper body, an increase of adipocyte size correlated with upper-body fat gain; however, the number of fat cells was not significantly changed. In contrast to the upper body fat cell response, the number of lower-body adipocytes did significantly increase during the course of experiment. Notably, there was no change in the size of the lower-body adipocytes. Approximately 10% of fat cells are renewed annually at all adult ages and levels of body mass index without a significant increase in the overall number of adipocytes in adulthood. Adaptation Obesity is characterized by the expansion of fat mass, through adipocyte size increase (hypertrophy) and, to a lesser extent, cell proliferation (hyperplasia). In the fatty tissue of obese individuals, there is increased production of metabolism modulators, such as glycerol, hormones, macrophage-stimulating chemokines, and pro-inflammatory cytokines, leading to the development of insulin resistance. Production of these modulators and the resulting pathogenesis of insulin resistance are probably caused by adipocytes as well as immune system macrophages that infiltrate the tissue. Fat production in adipocytes is strongly stimulated by insulin. By controlling the activity of the pyruvate dehydrogenase and the acetyl-CoA carboxylase enzymes, insulin promotes unsaturated fatty acid synthesis. It also promotes glucose uptake and induces SREBF1, which activates the transcription of genes that stimulate lipogenesis. SREBF1 (sterol regulatory element-binding transcription factor 1) is a transcription factor synthesized as an inactive precursor protein inserted into the endoplasmic reticulum (ER) membrane by two membrane-spanning helices. Also anchored in the ER membrane is SCAP (SREBF-cleavage activating protein), which binds SREBF1. The SREBF1-SCAP complex is retained in the ER membrane by INSIG1 (insulin-induced gene 1 protein). When sterol levels are depleted, INSIG1 releases SCAP and the SREBF1-SCAP complex can be sorted into transport vesicles coated by the coatomer COPII that are exported to the Golgi apparatus. In the Golgi apparatus, SREBF1 is cleaved and released as a transcriptionally active mature protein. It is then free to translocate to the nucleus and activate the expression of its target genes. 
Clinical studies have repeatedly shown that even though insulin resistance is usually associated with obesity, the membrane phospholipids of the adipocytes of obese patients generally still show an increased degree of fatty acid unsaturation. This seems to point to an adaptive mechanism that allows the adipocyte to maintain its functionality, despite the increased storage demands associated with obesity and insulin resistance. A study conducted in 2013 found that, while INSIG1 and SREBF1 mRNA expression was decreased in the adipose tissue of obese mice and humans, the amount of active SREBF1 was increased in comparison with normal mice and non-obese patients. This downregulation of INSIG1 expression combined with the increase of mature SREBF1 was also correlated with the maintenance of SREBF1-target gene expression. Hence, it appears that, by downregulating INSIG1, there is a resetting of the INSIG1/SREBF1 loop, allowing for the maintenance of active SREBF1 levels. This seems to help compensate for the anti-lipogenic effects of insulin resistance and thus preserve adipocyte fat storage abilities and availability of appropriate levels of fatty acid unsaturation in face of the nutritional pressures of obesity. Endocrine role Adipocytes can synthesize estrogens from androgens, potentially being the reason why being underweight or overweight are risk factors for infertility. Additionally, adipocytes are responsible for the production of the hormone leptin. Leptin is important in regulation of appetite and acts as a satiety factor.
Biology and health sciences
Tissues
Biology
625232
https://en.wikipedia.org/wiki/Skeletal%20formula
Skeletal formula
The skeletal formula, line-angle formula, bond-line formula or shorthand formula of an organic compound is a type of molecular structural formula that serves as a shorthand representation of a molecule's bonding and some details of its molecular geometry. A skeletal formula shows the skeletal structure or skeleton of a molecule, which is composed of the skeletal atoms that make up the molecule. It is represented in two dimensions, as on a piece of paper. It employs certain conventions to represent carbon and hydrogen atoms, which are the most common in organic chemistry. An early form of this representation was first developed by organic chemist August Kekulé, while the modern form is closely related to and influenced by the Lewis structure of molecules and their valence electrons. Hence they are sometimes termed Kekulé structures or Lewis–Kekulé structures. Skeletal formulae have become ubiquitous in organic chemistry, partly because they are relatively quick and simple to draw, and also because the curved arrow notation used for discussions of reaction mechanisms and electron delocalization can be readily superimposed. Several other ways of depicting chemical structures are also commonly used in organic chemistry (though less frequently than skeletal formulae). For example, conformational structures look similar to skeletal formulae and are used to depict the approximate positions of atoms in 3D space, as a perspective drawing. Other types of representation, such as Newman projection, Haworth projection or Fischer projection, also look somewhat similar to skeletal formulae. However, there are slight differences in the conventions used, and the reader needs to be aware of them in order to understand the structural details encoded in the depiction. While skeletal and conformational structures are also used in organometallic and inorganic chemistry, the conventions employed also differ somewhat. The skeleton Terminology The skeletal structure of an organic compound is the series of atoms bonded together that form the essential structure of the compound. The skeleton can consist of chains, branches and/or rings of bonded atoms. Skeletal atoms other than carbon or hydrogen are called heteroatoms. The skeleton has hydrogen and/or various substituents bonded to its atoms. Hydrogen is the most common non-carbon atom that is bonded to carbon and, for simplicity, is not explicitly drawn. In addition, carbon atoms are not generally labelled as such directly (i.e. with "C"), whereas heteroatoms are always explicitly noted as such ("N" for nitrogen, "O" for oxygen, etc.) Heteroatoms and other groups of atoms that give rise to relatively high rates of chemical reactivity, or introduce specific and interesting characteristics in the spectra of compounds are called functional groups, as they give the molecule a function. Heteroatoms and functional groups are collectively called "substituents", as they are considered to be a substitute for the hydrogen atom that would be present in the parent hydrocarbon of the organic compound. Basic structure As in Lewis structures, covalent bonds are indicated by line segments, with a doubled or tripled line segment indicating double or triple bonding, respectively. Likewise, skeletal formulae indicate formal charges associated with each atom (although lone pairs are usually optional, see below). 
In fact, skeletal formulae can be thought of as abbreviated Lewis structures that observe the following simplifications:

Carbon atoms are represented by the vertices (intersections or termini) of line segments. For clarity, methyl groups are often explicitly written out as Me or CH3, while (hetero)cumulene carbons are frequently represented by a heavy center dot.

Hydrogen atoms attached to carbon are implied. An unlabeled vertex is understood to represent a carbon attached to the number of hydrogens required to satisfy the octet rule, while a vertex labeled with a formal charge and/or nonbonding electron(s) is understood to have the number of hydrogen atoms required to give the carbon atom those indicated properties. Optionally, acetylenic and formyl hydrogens can be shown explicitly for the sake of clarity.

Hydrogen atoms attached to a heteroatom are shown explicitly. The heteroatom and the hydrogen atoms attached to it are usually shown as a single group (e.g., OH, NH2) without explicitly showing the hydrogen–heteroatom bond. Heteroatoms with simple alkyl or aryl substituents, like methoxy (OMe) or dimethylamino (NMe2), are sometimes shown in the same way, by analogy.

Lone pairs on carbene carbons must be indicated explicitly, while lone pairs in other cases are optional and are shown only for emphasis. In contrast, formal charges and unpaired electrons on main-group elements are always explicitly shown.

In the standard depiction of a molecule, the canonical form (resonance structure) with the greatest contribution is drawn. However, the skeletal formula is understood to represent the "real molecule", that is, the weighted average of all contributing canonical forms. Thus, in cases where two or more canonical forms contribute with equal weight (e.g., in benzene or a carboxylate anion) and one of the canonical forms is selected arbitrarily, the skeletal formula is understood to depict the true structure, containing equivalent bonds of fractional order, even though the delocalized bonds are depicted as nonequivalent single and double bonds.

Contemporary graphical conventions Since skeletal structures were introduced in the latter half of the 19th century, their appearance has undergone considerable evolution. The graphical conventions in use today date to the 1980s. Thanks to the adoption of the ChemDraw software package as a de facto industry standard (by American Chemical Society, Royal Society of Chemistry, and Gesellschaft Deutscher Chemiker publications, for instance), these conventions have been nearly universal in the chemical literature since the late 1990s. A few minor conventional variations, especially with respect to the use of stereobonds, continue to exist as a result of differing US, UK and European practice, or as a matter of personal preference. As another minor variation between authors, formal charges can be shown with the plus or minus sign in a circle (⊕, ⊖) or without the circle. The set of conventions that are followed by most authors is given below, along with illustrative examples. Implicit carbon and hydrogen atoms For example, consider the skeletal formula of hexane. The carbon atom labeled C1 appears to have only one bond, so there must also be three hydrogens bonded to it, in order to make its total number of bonds four. The carbon atom labeled C3 has two bonds to other carbons and is therefore bonded to two hydrogen atoms as well.
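The implicit-hydrogen counting described above can be reproduced with a cheminformatics toolkit. The following is a minimal sketch, assuming the open-source RDKit library is available; the library and the SMILES input strings are illustrative assumptions, not part of the skeletal-formula conventions themselves. It parses hexane and reports how many hydrogens each unlabeled carbon vertex implies, and does the same for ethanol to anticipate the heteroatom convention discussed in the next section.

```python
# Minimal sketch, assuming the RDKit toolkit is installed (pip install rdkit).
# Each unlabeled carbon vertex in a skeletal formula carries enough implicit
# hydrogens to bring carbon to four bonds; RDKit applies the same valence
# rule when parsing SMILES.
from rdkit import Chem

hexane = Chem.MolFromSmiles("CCCCCC")  # six unlabeled carbon vertices
for atom in hexane.GetAtoms():
    print(f"C{atom.GetIdx() + 1}: {atom.GetDegree()} C-C bond(s), "
          f"{atom.GetTotalNumHs()} implied hydrogen(s)")
# C1 and C6 each report 3 implied hydrogens; C2-C5 each report 2,
# matching the reasoning for C1 and C3 in the text above.

# For ethanol, the single hydrogen on oxygen is the one a skeletal formula
# writes out explicitly (as -OH), while the hydrogens on carbon stay implicit.
ethanol = Chem.MolFromSmiles("CCO")
for atom in ethanol.GetAtoms():
    print(atom.GetSymbol(), atom.GetTotalNumHs())  # C 3, C 2, O 1
```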
A Lewis structure and a ball-and-stick model of the actual molecular structure of hexane, as determined by X-ray crystallography, can be drawn alongside for comparison. It does not matter which end of the chain one starts numbering from, as long as consistency is maintained when drawing diagrams. The condensed formula or the IUPAC name will confirm the orientation. Some molecules will become familiar regardless of the orientation. Explicit heteroatoms and hydrogen atoms All atoms that are not carbon or hydrogen are signified by their chemical symbol, for instance Cl for chlorine, O for oxygen, Na for sodium, and so forth. In the context of organic chemistry, these atoms are commonly known as heteroatoms (the prefix hetero- comes from Greek ἕτερος héteros, meaning "other"). Any hydrogen atoms bonded to heteroatoms are drawn explicitly. In ethanol, C2H5OH, for instance, the hydrogen atom bonded to oxygen is denoted by the symbol H, whereas the hydrogen atoms bonded to carbon atoms are not shown directly. Lines representing heteroatom–hydrogen bonds are usually omitted for clarity and compactness, so a functional group like the hydroxyl group is most often written −OH instead of −O−H. These bonds are sometimes drawn out in full in order to accentuate their presence when they participate in reaction mechanisms. The skeletal formula of ethanol can likewise be compared with its Lewis structure and with a ball-and-stick model of the actual 3D structure of the molecule in the gas phase, as determined by microwave spectroscopy. Pseudoelement symbols There are also symbols that appear to be chemical element symbols, but represent certain very common substituents or indicate an unspecified member of a group of elements. These are called pseudoelement symbols or organic elements and are treated like univalent "elements" in skeletal formulae. A list of common pseudoelement symbols:

General symbols
X for any (pseudo)halogen atom (in the related MLXZ notation, X represents a one-electron donor ligand)
L or Ln for a ligand or ligands (in the related MLXZ notation, L represents a two-electron donor ligand)
M or Met for any metal atom ([M] is used to indicate a ligated metal, MLn, when the identities of the ligands are unknown or irrelevant)
E or El for any electrophile (in some contexts, E is also used to indicate any p-block element)
Nu for any nucleophile
Z for conjugating electron-withdrawing groups (in the related MLXZ notation, Z represents a zero-electron donor ligand; in unrelated usage, Z is also an abbreviation for the carboxybenzyl group)
D for deuterium (2H)
T for tritium (3H)

Alkyl groups
R for any alkyl group or even any organyl group (Alk can be used to unambiguously indicate an alkyl group)
Me for the methyl group
Et for the ethyl group
Pr, n-Pr, or nPr for the (normal) propyl group (Pr is also the symbol for the element praseodymium. However, since the propyl group is monovalent, while praseodymium is nearly always trivalent, ambiguity rarely, if ever, arises in practice.)
i-Pr or iPr for the isopropyl group
All for the allyl group (uncommon)
Bu, n-Bu or nBu for the (normal) butyl group
i-Bu or iBu (i often italicized) for the isobutyl group
s-Bu or sBu for the secondary butyl group
t-Bu or tBu for the tertiary butyl group
Pn for the pentyl group (or Am for the synonymous amyl group, although Am is also the symbol for americium)
Np or Neo for the neopentyl group (Warning: organometallic chemists often use Np for the related neophyl group, PhMe2C–. Np is also the symbol for the element neptunium.)
Cy or Chx for the cyclohexyl group
Ad for the 1-adamantyl group
Tr or Trt for the trityl group

Aromatic and unsaturated substituents
Ar for any aromatic substituent (Ar is also the symbol for the element argon. However, argon is inert under all usual conditions encountered in organic chemistry, so the use of Ar to represent an aryl substituent never causes confusion.)
Het for any heteroaromatic substituent
Bn or Bzl for the benzyl group (not to be confused with Bz for the benzoyl group; however, old literature may use Bz for the benzyl group)
Dipp for the 2,6-diisopropylphenyl group
Mes for the mesityl group
Ph, Φ, or φ for the phenyl group (the use of phi for phenyl has been in decline)
Tol for the tolyl group, usually the para isomer
Is or Tipp for the 2,4,6-triisopropylphenyl group (the former symbol is derived from the synonym isityl)
An for the anisyl group, usually the para isomer (An is also the symbol for a generic actinoid element. However, since the anisyl group is monovalent, while the actinides are usually divalent, trivalent, or of even higher valency, ambiguity rarely, if ever, arises in practice.)
Cp for the cyclopentadienyl group (Cp was the symbol for cassiopeium, a former name for lutetium)
Cp* for the pentamethylcyclopentadienyl group
Vi for the vinyl group (uncommon)

Functional groups
Ac for the acetyl group (Ac is also the symbol for the element actinium. However, actinium is almost never encountered in organic chemistry, so the use of Ac to represent the acetyl group never causes confusion.)
Bz for the benzoyl group; OBz is the benzoate group
Piv for the pivalyl (t-butylcarbonyl) group; OPiv is the pivalate group
Bt for the 1-benzotriazolyl group
Im for the 1-imidazolyl group
NPhth for the phthalimide-1-yl group

Sulfonyl/sulfonate groups
Sulfonate esters are often leaving groups in nucleophilic substitution reactions. See the articles on sulfonyl and sulfonate groups for further information.
Bs for the brosyl (p-bromobenzenesulfonyl) group; OBs is the brosylate group
Ms for the mesyl (methanesulfonyl) group; OMs is the mesylate group
Ns for the nosyl (p-nitrobenzenesulfonyl) group (Ns was the chemical symbol for nielsbohrium, but that was renamed bohrium, Bh); ONs is the nosylate group
Tf for the triflyl (trifluoromethanesulfonyl) group; OTf is the triflate group
Nf for the nonaflyl (nonafluorobutanesulfonyl) group; ONf is the nonaflate group
Ts for the tosyl (p-toluenesulfonyl) group (Ts is also the symbol for the element tennessine. However, tennessine is too unstable to ever be encountered in organic chemistry, so the use of Ts to represent tosyl never causes confusion); OTs is the tosylate group

Protecting groups
A protecting group or protective group is introduced into a molecule by chemical modification of a functional group to obtain chemoselectivity in a subsequent chemical reaction, facilitating multistep organic synthesis.
Boc for the t-butoxycarbonyl group
Cbz or Z for the carboxybenzyl group
Fmoc for the fluorenylmethoxycarbonyl group
Alloc for the allyloxycarbonyl group
Troc for the trichloroethoxycarbonyl group
TMS, TBDMS, TES, TBDPS, TIPS, ... for various silyl ether groups
PMB for the 4-methoxybenzyl group
MOM for the methoxymethyl group
THP for the 2-tetrahydropyranyl group

Multiple bonds Two atoms can be bonded by sharing more than one pair of electrons. The common bonds to carbon are single, double and triple bonds.
Single bonds are most common and are represented by a single, solid line between two atoms in a skeletal formula. Double bonds are denoted by two parallel lines, and triple bonds are shown by three parallel lines. In more advanced theories of bonding, non-integer values of bond order exist. In these cases, a combination of solid and dashed lines indicates the integer and non-integer parts of the bond order, respectively. Benzene rings In recent years, benzene has generally been depicted as a hexagon with alternating single and double bonds, much like the structure Kekulé originally proposed in 1872. As mentioned above, the alternating single and double bonds of "1,3,5-cyclohexatriene" are understood to be a drawing of one of the two equivalent canonical forms of benzene (the one explicitly shown and the one with the opposite pattern of formal single and double bonds), in which all carbon–carbon bonds are of equivalent length and have a bond order of exactly 1.5. For aryl rings in general, the two analogous canonical forms are almost always the primary contributors to the structure, but they are nonequivalent, so one structure may make a slightly greater contribution than the other, and bond orders may differ somewhat from 1.5. An alternate representation that emphasizes this delocalization uses a circle, drawn inside the hexagon of single bonds, to represent the delocalized pi orbital. This style, based on one proposed by Johannes Thiele, used to be very common in introductory organic chemistry textbooks and is still frequently used in informal settings. However, because this depiction does not keep track of electron pairs and is unable to show the precise movement of electrons, it has largely been superseded by the Kekuléan depiction in pedagogical and formal academic contexts; a short sketch at the end of this entry contrasts the bond orders implied by the delocalized and Kekuléan views. Stereochemistry Stereochemistry is conveniently denoted in skeletal formulae. The relevant chemical bonds can be depicted in several ways:

Solid lines represent bonds in the plane of the paper or screen.
Solid wedges represent bonds that point out of the plane of the paper or screen, towards the observer.
Hashed wedges or dashed lines (thick or thin) represent bonds that point into the plane of the paper or screen, away from the observer.
Wavy lines represent either unknown stereochemistry or a mixture of the two possible stereoisomers at that point.

An obsolescent depiction of hydrogen stereochemistry that used to be common in steroid chemistry is the use of a filled circle centered on a vertex for an upward-pointing hydrogen atom, and two hash marks next to the vertex or a hollow circle for a downward-pointing hydrogen atom (these conventions are sometimes called H-dot, H-dash, and H-circle, respectively). An early use of this notation can be traced back to Richard Kuhn, who in 1932 used solid thick lines and dotted lines in a publication. The modern solid and hashed wedges were introduced in the 1940s by Giulio Natta to represent the structure of high polymers, and extensively popularised in the 1959 textbook Organic Chemistry by Donald J. Cram and George S. Hammond. Skeletal formulae can depict cis and trans isomers of alkenes. Wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double bond has sometimes been used; it is no longer considered an acceptable style for general use but may still be required by computer software. Hydrogen bonds Hydrogen bonds are generally denoted by dotted or dashed lines.
In other contexts, dashed lines may also represent partially formed or broken bonds in a transition state.
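As noted in the Benzene rings discussion above, the alternating single and double bonds of a Kekulé drawing stand in for a delocalized structure whose carbon–carbon bonds all have a bond order of about 1.5. The sketch below, again assuming the RDKit toolkit (an illustrative choice, not part of the drawing conventions), shows how the same benzene molecule can be perceived either with six equivalent aromatic bonds or, after kekulization, with the alternating single/double pattern of a conventional skeletal formula.

```python
# Minimal sketch, assuming the RDKit toolkit is installed (pip install rdkit).
# Benzene parsed from an aromatic SMILES is perceived with six equivalent
# aromatic bonds (order 1.5); kekulizing the same molecule yields the
# alternating single/double pattern used in a Kekulé-style drawing.
from rdkit import Chem

benzene = Chem.MolFromSmiles("c1ccccc1")
print([bond.GetBondTypeAsDouble() for bond in benzene.GetBonds()])
# [1.5, 1.5, 1.5, 1.5, 1.5, 1.5]

Chem.Kekulize(benzene, clearAromaticFlags=True)
print([bond.GetBondTypeAsDouble() for bond in benzene.GetBonds()])
# Alternating 2.0 and 1.0 values; which bonds come out as double is an
# arbitrary choice between the two equivalent canonical forms.
```

Either output describes the same molecule; the difference lies only in how the bonding is drawn or stored, mirroring the equivalence of the circle-in-hexagon and Kekuléan depictions described above.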
Physical sciences
Concepts_2
Chemistry